Creation of a cartoon shader of water for the web. Part 2

In the first part, we covered setting up the environment and the water surface. In this part, we will add buoyancy to objects, add water lines to the surface, and create foam lines around the boundaries of objects intersecting the surface, using the depth buffer.
 
 
To make the scene look a little nicer, I made a few small changes to it. You can customize your scene however you like; here is what I did:
 
 
 
Added models of the lighthouse and the octopus.

Added a ground model with the color #FFA457.
Set the sky color on the camera to #6CC8FF.
Added an ambient color of #FFC480 to the scene (these settings can be found in the scene settings).

 
My original scene now looks like this.
 
 
Below is the texture we will be using, which you are free to use. Any texture will do, as long as it can be tiled without visible seams.
 
 
Once you've picked a texture you like, drag it into the Assets window of your project. We need to reference this texture from the Water.js script, so create an attribute for it:
 
 
    Water.attributes.add('surfaceTexture', {
        type: 'asset',
        assetType: 'texture',
        title: 'Surface Texture'
    });

 
And then assign it in the editor:
 
 

 
Now we need to pass it to the shader. Go to Water.js and set the new parameter inside the CreateWaterMaterial function:
 
 
    material.setParameter('uSurfaceTexture', this.surfaceTexture.resource);

 
Now go back to Water.frag and declare a new uniform:
 
 
    uniform sampler2D uSurfaceTexture;    

 
We're almost done. To render the texture onto the plane, we need to know where each pixel is on the mesh. That means we need to pass data from the vertex shader to the fragment shader.
 
 

Varying variables


 
Varying variables allow you to pass data from the vertex shader to the fragment shader. This is the third type of special variable you can use in a shader (the first two being uniform and attribute). A value is set for each vertex, and every pixel can access it. Since there are many more pixels than vertices, the value is interpolated between vertices (hence the name "varying": it varies from the values you pass in).
 
 
To see this in action, declare a new varying variable in Water.vert:
 
 
    varying vec3 ScreenPosition;

 
Then assign it the value of gl_Position after it has been computed:
 
 
    ScreenPosition = gl_Position.xyz;    

 
Now go back to Water.frag and declare the same varying variable there. There is no way to print debug output from a shader, but we can use color to debug visually. Here's one way to do it:
 
 
    uniform sampler2D uSurfaceTexture;
    varying vec3 ScreenPosition;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 1.0, 0.5);

        // Visualize our new varying variable
        color = vec4(vec3(ScreenPosition.x), 1.0);

        gl_FragColor = color;
    }

 
The plane should now look black and white, with the line separating the two colors passing through where ScreenPosition.x = 0. Color values only range from 0 to 1, but the values in ScreenPosition can fall outside that range. They are clamped automatically, so where you see black, the value may be 0 or negative.
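For example, either of the debug outputs below displays exactly like plain black or plain white, because the output color is clamped to the 0 to 1 range when it is written out:


    // Either of these is clamped on output:
    gl_FragColor = vec4(vec3(-0.3), 1.0);    // negative value, displays as pure black (0.0)
    // gl_FragColor = vec4(vec3(1.7), 1.0);  // value above 1, displays as pure white (1.0)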
 
 
What we've done is pass the screen position of each vertex to each pixel. You can see that the line dividing the black and white halves always passes through the center of the screen, regardless of where the surface actually is in the world.
 
 
Task 1: create a new varying variable to pass the world position instead of the screen position, and visualize it in the same way. If the color no longer changes as the camera moves, you've done it correctly.
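If you get stuck, here is one minimal sketch of a solution. It reuses the WorldPosition varying name that appears later in this article; multiplying by matrix_model is only needed if the water plane has its own transform (the vertex shader shown later in the article simply uses the displaced position directly), so treat that part as an illustrative assumption:


    // In Water.vert
    varying vec3 WorldPosition;

    // Inside main(), after the displaced position 'pos' from Part 1 has been computed:
    WorldPosition = (matrix_model * vec4(pos, 1.0)).xyz;

    // In Water.frag
    varying vec3 WorldPosition;

    // Inside main(), visualize it the same way as before:
    color = vec4(vec3(WorldPosition.x), 1.0);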

 

Using UVs


 
UVs are the 2D coordinates of each vertex on the mesh, normalized from 0 to 1. They are needed to sample the texture onto the plane correctly, and we already set them up in the previous part.
 
 
Declare the new attribute in Water.vert (the name comes from the shader definition in Water.js):
 
 
    attribute vec2 aUv0;    

 
Now we just need to pass it to the fragment shader, so create a varying and assign it the attribute's value:
 
 
    // In Water.vert

    // Declare the new varying at the top, with the other variables
    varying vec2 vUv0;

    // Inside the main function, store the attribute's value
    // in the varying so the fragment shader has access to it
    vUv0 = aUv0;

 
Now declare the same varying variable in the fragment shader. To make sure everything works, we can visualize it for debugging as before, so that Water.frag looks like this:
 
 
    uniform sampler2D uSurfaceTexture;
    varying vec2 vUv0;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 1.0, 0.5);

        // Visualize the UVs
        color = vec4(vec3(vUv0.x), 1.0);

        gl_FragColor = color;
    }

 
You should see a gradient confirming that the value is 0 at one end and 1 at the other. Now, to actually sample the texture, all we need to do is this:
 
 
    color = texture2D(uSurfaceTexture, vUv0);

 
After that we will see the texture on the surface:
 
 

 

Styling the texture


 
Instead of simply setting the texture as a new color, let's combine it with the existing blue:
 
 
    uniform sampler2D uSurfaceTexture;
    varying vec2 vUv0;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 1.0, 0.5);

        vec4 WaterLines = texture2D(uSurfaceTexture, vUv0);
        color.rgba += WaterLines.r;

        gl_FragColor = color;
    }

 
This works because the texture is black (0) everywhere except along the water lines. Adding it doesn't change the original blue color, except in the places with lines, where it becomes lighter.
 
 
However, this is not the only way to combine colors.
 
 
Task 2: can you combine the colors so that you get the subtler effect shown below?
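One possible approach is to scale the texture's contribution down before adding it; the finished shader at the end of this article uses a factor of 0.1:


    // Scale the line contribution down for a subtler effect
    color.rgba += WaterLines.r * 0.1;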

 

 

Moving the texture


 
As a final touch, we want the lines to move across the surface so it doesn't look so static. To do this, we use the fact that any value outside the 0 to 1 range passed to texture2D wraps around (for example, 1.5 and 2.5 both become 0.5). So we can offset the position by the time uniform we have already defined, and multiply the position to increase or decrease the density of lines on the surface, which gives us the final fragment shader:
 
 
    uniform sampler2D uSurfaceTexture;
    uniform float uTime;
    varying vec2 vUv0;

    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 1.0, 0.5);

        vec2 pos = vUv0;
        // Multiplying by a value greater than 1 makes
        // the texture repeat more often
        pos *= 2.0;
        // Shift the whole texture over time so it moves across the surface
        pos.y += uTime * 0.02;

        vec4 WaterLines = texture2D(uSurfaceTexture, pos);
        color.rgba += WaterLines.r;

        gl_FragColor = color;
    }

 

Foam lines and the depth buffer


 
Rendering foam lines around objects in the water makes it much easier to see how objects are submerged and where they cut the surface, and it also makes the water much more believable. To implement the foam lines, we somehow need to find out where the boundary of each object is, and to do so efficiently.
 
 

The trick


 
We need a way to tell whether a given pixel on the water surface is close to an object. If it is, we can color it as foam. There's no straightforward way to do this (that I know of), so I'll use a handy problem-solving technique: take an example whose answer we know and see whether we can generalize it.
 
 
Look at the image below.
 
 

 
Which pixels should be part of the foam? We know that it should look something like this:
 
 

 
So let's look at two specific pixels. I've marked them with asterisks below. The black one should be foam; the red one should not. How do we tell them apart inside a shader?
 
 

 
We know that even though these two pixels are close together in screen space (both are rendered right over the lighthouse body), they are actually far apart in world space. We can see this by looking at the same scene from a different angle.
 
 

 
Notice that the red asterisk is not actually on the lighthouse body, as it appeared, while the black one is. We can tell them apart using the distance to the camera, commonly called the "depth", where a depth of 1 means the point is very close to the camera and a depth of 0 means it is very far away. But it's not just a matter of absolute distance or depth; what matters is the depth relative to the pixel behind it.
 
 
Look at the first view again. Say the lighthouse body has a depth value of 0.5. The black asterisk's depth will be very close to 0.5; it and the pixel behind it have very similar depths. The red asterisk, on the other hand, has a much greater depth because it is closer to the camera, say 0.7. And even though the pixel behind it is still on the lighthouse, that pixel has a depth value of 0.5, so there is a difference there.
 
 
That's the trick. When the depth of a pixel on the water surface is close enough to the depth of the pixel it is being drawn on top of, we are near the boundary of some object and can render that pixel as foam.
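In GLSL, the idea looks roughly like the sketch below. The names worldDepth, screenDepth and threshold are placeholders for now; the real helper functions are built later in this article:


    // Sketch only: foam appears where the water pixel's depth is close
    // to the depth of the pixel it is being drawn over.
    // From the example above: black asterisk |0.5 - 0.5| = 0.0 -> foam,
    // red asterisk |0.7 - 0.5| = 0.2 -> no foam (with a threshold of, say, 0.1).
    bool isFoam(float worldDepth, float screenDepth, float threshold) {
        return abs(screenDepth - worldDepth) < threshold;
    }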
 
 
This means we need more information than any single pixel has on its own: we somehow need to know the depth of the pixel it is about to be drawn over. This is where the depth buffer comes in.
 
 

The depth buffer


 
You can think of a buffer, or framebuffer, as an off-screen render target or texture. Rendering off-screen like this is useful whenever you need to read the rendered data back; the same technique is used in this smoke effect.
 
 
The depth buffer is a special render target that holds information about the depth value of each pixel. Remember that the gl_Position value computed in the vertex shader is a screen-space value, but it also has a third coordinate, a Z value. This Z value is used to compute the depth that gets written into the depth buffer.
 
 
The purpose of the depth buffer is to render the scene correctly without having to sort objects back to front. Every pixel about to be drawn first checks against the depth buffer. If its depth value is greater than the value in the buffer, it is drawn and its own value overwrites the one in the buffer. Otherwise it is discarded (because that means another object is in front of it).
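As a conceptual sketch (this test is performed by the GPU, not something we write ourselves), and using this article's convention that a larger depth value means closer to the camera, it looks like this:


    // Conceptual sketch only, not actual engine code.
    // Returns true if a fragment at 'pixelDepth' should be drawn over
    // what is already stored in the depth buffer at that position.
    bool passesDepthTest(float pixelDepth, float storedDepth) {
        return pixelDepth > storedDepth;
    }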
 
 
In fact, you can turn off depth testing to see what things would look like without it. Let's try it in Water.js:
 
 
    material.depthTest = false;    

 
You will notice that the water is now always drawn on top, even when it is behind opaque objects.
 
 

Visualizing the depth buffer


 
Let's add a way to visualize the depth buffer for debugging purposes. Create a new script called DepthVisualize.js and attach it to the camera.
 
 
To access the depth buffer in PlayCanvas, we just need to write the following:
 
 
    this.entity.camera.camera.requestDepthMap();

 
This automatically injects a uniform variable into all of our shaders, which we can use by declaring it like this:
 
 
    uniform sampler2D uDepthMap;    

 
Below is the full script, which requests the depth map and renders it on top of the scene. It is set up to support hot reloading.
 
 
    var DepthVisualize = pc.createScript('depthVisualize');

    // initialize code called once per entity
    DepthVisualize.prototype.initialize = function () {
        this.entity.camera.camera.requestDepthMap();
        this.antiCacheCount = 0; // Prevent the engine from caching the shader so we can update it in real time

        this.SetupDepthViz();
    };

    DepthVisualize.prototype.SetupDepthViz = function () {
        var device = this.app.graphicsDevice;
        var chunks = pc.shaderChunks;

        this.fs = '';
        this.fs += 'varying vec2 vUv0;';
        this.fs += 'uniform sampler2D uDepthMap;';
        this.fs += '';
        this.fs += 'float unpackFloat(vec4 rgbaDepth) {';
        this.fs += '    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);';
        this.fs += '    float depth = dot(rgbaDepth, bitShift);';
        this.fs += '    return depth;';
        this.fs += '}';
        this.fs += '';
        this.fs += 'void main(void) {';
        this.fs += '    float depth = unpackFloat(texture2D(uDepthMap, vUv0)) * 30.0;';
        this.fs += '    gl_FragColor = vec4(vec3(depth), 1.0);';
        this.fs += '}';

        this.shader = chunks.createShaderFromCode(device, chunks.fullscreenQuadVS, this.fs, "renderDepth" + this.antiCacheCount);
        this.antiCacheCount++;

        // Manually create a draw call to render the depth map on top of everything else
        this.command = new pc.Command(pc.LAYER_FX, pc.BLEND_NONE, function () {
            pc.drawQuadWithShader(device, null, this.shader);
        }.bind(this));
        this.command.isDepthViz = true; // Tag it so we can remove it later

        this.app.scene.drawCalls.push(this.command);
    };

    // update code called every frame
    DepthVisualize.prototype.update = function (dt) {
    };

    // swap method called for script hot-reloading
    // inherit your script state here
    DepthVisualize.prototype.swap = function (old) {
        this.antiCacheCount = old.antiCacheCount;

        // Remove the old depth-rendering draw call (the one we tagged earlier)
        for (var i = 0; i < this.app.scene.drawCalls.length; i++) {
            if (this.app.scene.drawCalls[i].isDepthViz) {
                this.app.scene.drawCalls.splice(i, 1);
                break;
            }
        }

        // Re-create it with the updated code
        this.SetupDepthViz();
    };
 
Try copying this code and commenting/uncommenting the line this.app.scene.drawCalls.push(this.command); to toggle the depth rendering on and off. It should look like the image below.
 
 

 
Task 3: the water surface is not drawn into the depth buffer. The PlayCanvas engine does this on purpose. Can you figure out why? What is special about water? In other words, given our depth-testing rules, what would happen if water pixels were written to the depth buffer?

 
Tip: there is one line in Water.js that you can change to make the water write to the depth buffer.
 
 
Note also that the depth value is multiplied by 30 in the shader code assembled in SetupDepthViz. This is just so we can see it clearly; otherwise the range of values would be too small to show up as distinct shades.
 
 

Implementation of the trick


 
The PlayCanvas engine includes some helper functions for working with depth values, but at the time of writing they hadn't been released into production, so we'll set them up ourselves.
 
 
Define the following uniform variables in Water.frag:
 
 
    // These uniforms are all automatically injected by the PlayCanvas engine
    uniform sampler2D uDepthMap;
    uniform vec4 uScreenSize;
    uniform mat4 matrix_view;

    // This one we need to set up ourselves
    uniform vec4 camera_params;

 
Now define these helper functions above the main function:
 
 
    #ifdef GL2
    float linearizeDepth(float z) {
        z = z * 2.0 - 1.0;
        return 1.0 / (camera_params.z * z + camera_params.w);
    }
    #else
    #ifndef UNPACKFLOAT
    #define UNPACKFLOAT
    float unpackFloat(vec4 rgbaDepth) {
        const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
        return dot(rgbaDepth, bitShift);
    }
    #endif
    #endif

    float getLinearScreenDepth(vec2 uv) {
        #ifdef GL2
        return linearizeDepth(texture2D(uDepthMap, uv).r) * camera_params.y;
        #else
        return unpackFloat(texture2D(uDepthMap, uv)) * camera_params.y;
        #endif
    }

    float getLinearDepth(vec3 pos) {
        return -(matrix_view * vec4(pos, 1.0)).z;
    }

    float getLinearScreenDepth() {
        vec2 uv = gl_FragCoord.xy * uScreenSize.zw;
        return getLinearScreenDepth(uv);
    }

 
Now let's give the shader the camera information it needs from Water.js. Paste this where you pass in the other uniforms, such as uTime:
 
 
    if (!this.camera) {
        this.camera = this.app.root.findByName("Camera").camera;
    }
    var camera = this.camera;
    var n = camera.nearClip;
    var f = camera.farClip;
    var camera_params = [
        1 / f,
        f,
        (1 - f / n) / 2,
        (1 + f / n) / 2
    ];

    material.setParameter('camera_params', camera_params);

 
Finally, our fragment shader needs the world position of each pixel, which we have to get from the vertex shader. So declare a varying variable in Water.frag:
 
 
    varying vec3 WorldPosition;    

 
Define the same varying variable in Water.vert, and assign it the displaced position computed in the vertex shader, so the complete code looks like this:
 
 
    attribute vec3 aPosition;
    attribute vec2 aUv0;

    varying vec2 vUv0;
    varying vec3 WorldPosition;

    uniform mat4 matrix_model;
    uniform mat4 matrix_viewProjection;
    uniform float uTime;

    void main(void)
    {
        vUv0 = aUv0;

        vec3 pos = aPosition;
        pos.y += cos(pos.z * 5.0 + uTime) * 0.1 * sin(pos.x * 5.0 + uTime);

        gl_Position = matrix_viewProjection * matrix_model * vec4(pos, 1.0);

        WorldPosition = pos;
    }

 

Implementing the trick for real


 
Now we're finally ready to implement the technique described at the start of this section. We want to compare the depth of the pixel we're in with the depth of the pixel behind it. The pixel we're in is derived from the world position, and the pixel behind it from the screen position. So grab both depths:
 
 
    float worldDepth = getLinearDepth(WorldPosition);
    float screenDepth = getLinearScreenDepth();

 
Task 4: one of these values ​​will never be greater than the other (assuming depthTest = true). Can you determine which?

 
We know the foam should appear where the distance between these two values is small, so let's render that difference at every pixel. Paste this at the end of the shader (and make sure you've disabled the depth visualization script from the previous section):
 
 
    color = vec4(vec3(screenDepth - worldDepth), 1.0);
    gl_FragColor = color;

 
And it should look something like this:
 
 

 
In other words, we're correctly picking out the boundary of any object submerged in the water, in real time! You can of course scale this difference to make the foam look thicker or thinner.
 
 
There are now many ways we can combine this output with the water surface to get nice foam lines. You could keep it as a gradient, use it to sample from another texture, or give it a specific color wherever the difference is less than or equal to some threshold.
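For example, here is a sketch of the gradient and scaling variants; uFoamScale is a hypothetical uniform used only for this illustration, not part of the article's shader:


    // Sketch of two of the options above (uFoamScale is a hypothetical uniform)
    float difference = clamp((screenDepth - worldDepth) * uFoamScale, 0.0, 1.0);

    // Keep it as a soft gradient, brightest right at the boundary:
    color.rgb += (1.0 - difference) * 0.2;

    // Or use a hard threshold, as the finished shader below does:
    // if (difference < 0.7) { color.rgba += 0.2; }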
 
 
My favorite is to give it a color similar to the static water lines, so my finished main function looks like this:
 
 
    void main(void)
    {
        vec4 color = vec4(0.0, 0.0, 1.0, 0.5);

        vec2 pos = vUv0 * 2.0;
        pos.y += uTime * 0.02;
        vec4 WaterLines = texture2D(uSurfaceTexture, pos);
        color.rgba += WaterLines.r * 0.1;

        float worldDepth = getLinearDepth(WorldPosition);
        float screenDepth = getLinearScreenDepth();
        float foamLine = clamp((screenDepth - worldDepth), 0.0, 1.0);

        if (foamLine < 0.7) {
            color.rgba += 0.2;
        }

        gl_FragColor = color;
    }

 

Summing up


 
We've added buoyancy to submerged objects, applied a moving texture to the surface to simulate caustics, and learned how to use the depth buffer to create dynamic foam lines.
 
 
In the third and final part, we'll add post-processing effects and learn how to use them to create an underwater distortion effect.
 
 

Source code


 
The finished PlayCanvas project can be found here. Our repository also contains a port of the project to Three.js.