Creating tracks in the snow in Unreal Engine 4

 
If you play modern AAA games, you've probably noticed a trend toward snow-covered landscapes. For example, they appear in Horizon Zero Dawn, Rise of the Tomb Raider and God of War. In all of these games, snow has an important feature: you can leave tracks in it!
 
 
This kind of interaction with the environment deepens the player's immersion in the game. It makes the environment feel more realistic, and let's be honest - it's just fun. Why spend long hours building elaborate mechanics when you can simply let the player drop to the ground and make snow angels?
 
 
In this tutorial you will learn the following:
 
 
 
Create tracks by using a scene capture to mask objects close to the ground

Use the mask with the landscape material to create deformable snow

Display tracks in the snow only near the player, as an optimization
 
Note: this tutorial assumes you already know the basics of Unreal Engine. If you are new to it, check out the Unreal Engine for Beginners tutorial first.
 

Let's get to work


 
Download the materials for this tutorial. Unzip them, go to SnowDeformationStarter and open SnowDeformation.uproject. In this tutorial we will create tracks using a character and a few boxes.
 
 

 
Before we begin, note that the method in this tutorial only stores tracks in a limited area rather than across the whole world, because performance depends on the resolution of the render target.
 
 
For example, to store tracks over a large area we would have to increase the resolution. But that also increases the performance cost of the scene capture and the amount of memory used by the render target. To keep things optimized, we must limit both the area and the resolution.
 
 
With that out of the way, let's see what it takes to implement tracks in the snow.
 
 

Implementing tracks in the snow


 
The first thing we need to create tracks is a render target. The render target will be a grayscale mask in which white indicates a track and black indicates no track. We can then project the render target onto the ground and use it to blend textures and displace vertices.
 
 

 
The second thing we need is a way to mask only the objects that affect the snow. We can do this by first rendering those objects to Custom Depth. Then we can use a scene capture with a post process material to mask out every object rendered in Custom Depth. The mask can then be output to the render target.
 
 
Note:
A scene capture is essentially a camera that can output to a render target.
 
The most important aspect of the scene capture is its location. Below is an example of a render target captured from a top-down view. Here the third-person character and the boxes are masked.
 
 

 
At first glance, a top-down capture seems suitable. The shapes look like the meshes, so there shouldn't be any problems, right?
 
 
Not quite. The problem with a top-down capture is that it doesn't capture anything below an object's widest point. Here is an example:
 
 

 
Imagine that the yellow arrows extend all the way down to the ground. For the cube and the cone, the arrow's tip always stays inside the object. For the sphere, however, the tip exits the object as it approaches the ground. But from the camera's point of view, the tip is always inside the sphere. Here's how the camera sees the sphere:
 
 

 
As a result, the mask for the sphere will be larger than it should be, even though its contact area with the ground is small.
 
 
On top of that, with a top-down capture it is hard to determine whether an object is touching the ground at all.
 
 

 
Both of these problems can be solved by capturing from below.
 
 

Capturing from below


 
A capture from below looks like this:
 
 

 
As you can see, the camera now captures the underside of each object, that is, the side that touches the ground. This eliminates the "widest area" problem of a top-down capture.
 
 
To determine whether an object is touching the ground, we can use a post process material that performs a depth check. It checks whether the object's depth exists and whether it is within a specified offset of the ground. If both conditions hold, we can mask that pixel.
 
 

 
Below is an in-engine example with a capture zone extending 20 units above the ground. Notice that the mask only appears once an object passes a certain point. Also notice that the mask becomes whiter as the object approaches the ground.
 
 

 
First, let's create the post process material that performs the depth check.
 
 

Creating the depth check material


 
To perform the depth check, we need two depth buffers: one for the ground and one for the objects that affect the snow. Since the scene capture only sees the ground, Scene Depth will output the ground's depth. To get the depth of the objects, we simply render them to Custom Depth.
 
 
Note:
To save time, I have already set the character and the boxes to render to Custom Depth. If you want other objects to affect the snow, you must enable Render CustomDepth Pass on them as well.
 
First, we need to calculate each pixel's distance to the ground. Open Materials\PP_DepthCheck and create the following:
 
 

 
Next, we need to create the capture zone. To do this, add the highlighted nodes:
 
 

 
Now, if a pixel is within 25 units of the ground, it will appear in the mask. The strength of the masking depends on how close the pixel is to the ground. Click Apply and return to the main editor.
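
The depth check logic can be sketched outside the engine. This is an illustrative approximation, not actual material code: `depth_mask` is a hypothetical helper, `scene_depth` and `custom_depth` stand in for the Scene Depth and Custom Depth nodes, and 25 is the capture-zone offset used above.

```python
def depth_mask(scene_depth, custom_depth, offset=25.0):
    """Approximate the PP_DepthCheck logic.

    scene_depth  - depth of the ground as seen by the scene capture
    custom_depth - depth of the snow-affecting object (Custom Depth pass)
    offset       - height of the capture zone above the ground

    Returns a grayscale mask value: 1 at ground level, fading to 0
    at 'offset' units above the ground.
    """
    # The capture looks up from below, so objects are farther away
    # than the ground; their height above the ground is the difference.
    distance_to_ground = custom_depth - scene_depth
    # Clamp to [0, 1]: objects above the capture zone produce no mask,
    # objects touching the ground produce full white.
    return max(0.0, min(1.0, 1.0 - distance_to_ground / offset))

# An object touching the ground is fully masked:
print(depth_mask(2000.0, 2000.0))   # 1.0
# An object halfway up the capture zone is half masked:
print(depth_mask(2000.0, 2012.5))   # 0.5
# An object above the capture zone is not masked at all:
print(depth_mask(2000.0, 2100.0))   # 0.0
```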
 
 
Next, you need to create a scene capture.
 
 

Creating the scene capture


 
First we need a render target for the scene capture to write to. Go to the RenderTargets folder and create a new Render Target named RT_Capture.
 
 
Now let's create the scene capture. In this tutorial we'll add the scene capture to a Blueprint, since we'll need to script it later. Open Blueprints\BP_Capture and add a Scene Capture Component 2D. Name it SceneCapture.
 
 

 
First we need to set the capture's rotation so that it looks up at the ground. Go to the Details panel and set Rotation to (0, 90, 0).
 
 

 
Next is the projection type. Since the mask is a 2D representation of the scene, we need to eliminate perspective distortion. To do this, set Projection\Projection Type to Orthographic.
 
 

 
Next, we need to tell the scene capture which render target to write to. Set Scene Capture\Texture Target to RT_Capture.
 
 

 
Finally, we need to apply the depth check material. Add PP_DepthCheck to Rendering Features\Post Process Materials. For the post processing to work, we also need to set Scene Capture\Capture Source to Final Color (LDR) in RGB.
 
 

 
Now that the scene capture is configured, we need to specify the size of the capture area.
 
 

Setting the size of the capture area


 
Since it is best to keep the render target's resolution low, we need to use the space efficiently. That means deciding how much area each pixel covers. For example, if the capture area and the render target have the same resolution, we get a 1:1 ratio: each pixel covers a 1×1 area (in world units).
 
 
For tracks in snow, a 1:1 ratio isn't necessary, since we are unlikely to need that much detail. I recommend using a higher ratio, because it lets you increase the size of the capture area while keeping the resolution low. But don't make the ratio too high, or the details will start to get lost. In this tutorial we'll use an 8:1 ratio, meaning each pixel covers 8×8 world units.
 
 
You can change the size of the capture area through the Scene Capture\Ortho Width property. For example, to capture a 1024×1024 area, you would set it to 1024. Since we are using an 8:1 ratio, set it to 2048 (the render target's resolution is 256×256 by default).
 
 

 
This means the scene capture will cover a 2048×2048 area, which is roughly 20×20 meters.
 
 
The landscape material also needs access to the capture size to project the render target correctly. The easiest way to provide it is through a Material Parameter Collection, which is essentially a collection of variables that any material can access.
 
 

Saving the capture size


 
Return to the main editor and go to the Materials folder. Create a Material Parameter Collection, found under Materials & Textures. Rename it to MPC_Capture and open it.
 
 
Then create a new Scalar Parameter and name it CaptureSize. Don't worry about setting its value - we'll do that in Blueprints.
 
 

 
Go back to BP_Capture and add the highlighted nodes to Event BeginPlay. Set Collection to MPC_Capture and Parameter Name to CaptureSize.
 
 

 
Now any material can read the Ortho Width value from the CaptureSize parameter. That's it for the scene capture for now. Click Compile and return to the main editor. The next step is to project the render target onto the ground and use it to deform the landscape.
 
 

Deforming the landscape


 
Open
M_Landscape
and go to the Details panel. Then set the following properties:
 
 
 
Set Two Sided to enabled. Since the scene capture looks up from below, it will only see the back faces of the ground. By default, the engine does not render back faces, which means it would not store the ground's depth in the depth buffer. To fix this, we need to tell the engine to render both sides of the mesh.
 
Set D3D11 Tessellation to Flat Tessellation (you can also use PN Triangles). Tessellation subdivides the mesh's triangles into smaller ones, effectively increasing the mesh's resolution and allowing finer detail when vertices are displaced. Without it, the vertex density would be too low to create believable tracks.
 
 

 
Once tessellation is enabled, World Displacement and Tessellation Multiplier become available.
 
 

 
Tessellation Multiplier controls the amount of tessellation. In this tutorial we'll leave this node unconnected, which means it uses the default value of 1.
 
 
World Displacement takes a vector describing in which direction and how far to move each vertex. To calculate its value, we first need to project the render target onto the ground.
 
 

Projecting the render target


 
To project the render target, we need to calculate its UV coordinates. To do this, create the following setup:
 
 

 
What is going on here:
 
 
 
First we get the XY position of the current vertex. Since we capture from below, the X coordinate is flipped, so we need to flip it back (if we captured from above, this wouldn't be necessary).
 
This part performs two tasks. First, it centers the render target so that its center lies at (0, 0) in world space. Then it converts the coordinates from world space to UV space.
 
 
Next, create the highlighted nodes and connect them to the previous calculation as shown below. For the Texture Sample, set the texture to RT_Capture.
 
 

 
This projects the render target onto the ground. However, every vertex outside the capture area will sample the edges of the render target. This is a problem, because the render target should only be used for vertices inside the capture area. Here's how it looks in the game:
 
 

 
To fix this, we need to mask out all UVs that fall outside the 0 to 1 range (that is, outside the capture area). For this I created the MF_MaskUV0-1 function. It returns 0 if the given UV is outside the 0 to 1 range and 1 if it is inside. Multiplying the render target sample by the result performs the masking.
 
 
Now that the render target is projected, we can use it to blend colors and displace vertices.
 
 

Using the render target


 
Let's start with the color blending. Simply connect 1-x to a Lerp:
 
 

 
Note:
If you're wondering why I use 1-x: it inverts the render target so that the calculations become a little simpler.
 
Now, wherever there is a track, the ground turns brown; wherever there is none, it stays white.
 
 
The next step is vertex displacement. To do this, add the highlighted nodes and connect everything as follows:
 
 

 
This pushes every snow-covered area up by 25 units. Areas without snow get zero displacement, which is what creates the track.
 
 
Note:
You can adjust DisplacementHeight to raise or lower the snow level. Also note that DisplacementHeight uses the same value as the capture offset. Keeping them equal gives an accurate deformation, but there are cases where you may want to adjust them independently, so I left them as separate parameters.
 
Click Apply and return to the main editor. Create an instance of BP_Capture in the level and set its location to (0, 0, -2000) to place it underground. Click Play and walk around with the W, A, S and D keys to deform the snow.
 
 

 
The deformation works, but there are no tracks! That's because the capture overwrites the render target every time it runs. We need a way to make the tracks persistent.
 
 

Creating persistent tracks


 
To achieve persistence, we need a second render target (a persistent buffer) that stores the contents of the capture before it is overwritten. We then add the persistent buffer back into the capture (after it has been overwritten). This creates a loop in which each render target writes into the other. That is how we make the tracks persist.
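
The loop can be sketched conceptually as follows. This is not actual material logic - just a toy model where each render target is a list of grayscale pixel values and `update_buffers` is a hypothetical helper:

```python
def update_buffers(persistent, new_mask):
    """Conceptual sketch of the capture / persistent-buffer loop.

    Each frame the scene capture renders the fresh depth mask plus
    the previous persistent buffer, and the result is copied back
    into the persistent buffer. Values are clamped to [0, 1] like a
    grayscale render target.
    """
    capture = [min(1.0, p + m) for p, m in zip(persistent, new_mask)]
    return capture  # becomes the persistent buffer for the next frame

# Tracks accumulate across frames instead of being overwritten:
buf = [0.0, 0.0, 0.0]
buf = update_buffers(buf, [1.0, 0.0, 0.0])  # frame 1: track at pixel 0
buf = update_buffers(buf, [0.0, 1.0, 0.0])  # frame 2: track at pixel 1
print(buf)  # [1.0, 1.0, 0.0] - both tracks persist
```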
 
 

 
First, let's create the persistent buffer.
 
 

Creating the persistent buffer


 
Go to the RenderTargets folder and create a new Render Target named RT_Persistent. In this tutorial we don't need to change any texture settings, but in your own project make sure both render targets use the same resolution.
 
 
Next, we need a material that copies the capture into the persistent buffer. Open Materials\M_DrawToPersistent and add a Texture Sample node. Set its texture to RT_Capture and connect it as follows:
 
 

 
Now we need to use the draw material. Click Apply, and then open BP_Capture. First, create a dynamic instance of the material (we'll need to pass values to it later). Add the highlighted nodes to Event BeginPlay:
 
 


 
The Clear Render Target 2D nodes clear each render target before use.
 
 
Then open the DrawToPersistent function and add the highlighted nodes:
 
 

 
Next, we need the draw to the persistent buffer to happen every frame, because the capture happens every frame. To do this, add DrawToPersistent to Event Tick.
 
 

 
Finally, we need to add the persistent buffer back into the capture render target.
 
 

Writing back into the capture


 
Click Compile and open PP_DepthCheck. Then add the highlighted nodes, setting the Texture Sample's texture to RT_Persistent:
 
 

 
Now that the render targets write into each other, the tracks persist. Click Apply and close the material. Click Play and start leaving tracks!
 
 

 
The result looks good, but this setup only works for one area of the map. If you walk outside the capture area, tracks stop appearing.
 
 

 
You can solve this by moving the capture area along with the player. That way, tracks will always appear in the area around the player.
 
 
Note:
Since the capture moves, any information outside the capture area is discarded. This means that if you return to an area where there were tracks, they will be gone. In the next tutorial, I'll show how to create partially persistent tracks.
 
 

Moving the capture


 
You might think it's enough to simply match the capture's XY position to the player's XY position. But if you do that, the render target starts to blur. This happens because we would be moving the render target in steps smaller than a pixel; the new pixel position then falls between pixels, and one pixel gets interpolated across several. Here's how that looks:
 
 

 
To eliminate this, we need to move the capture in discrete steps. We compute the size of a pixel in world units and then move the capture in steps of that size. A pixel then never lands between pixels, and the blur disappears.
 
 
First, let's create a parameter that stores the capture's location. The landscape material needs it for the projection calculations. Open MPC_Capture and add a Vector Parameter named CaptureLocation.
 
 

 
Next, update the landscape material to use the new parameter. Close MPC_Capture and open M_Landscape. Change the first part of the projection calculation as follows:
 
 

 
Now the render target will always project onto the capture location. Click Apply and close the material.
 
 
Next, let's make the capture move in discrete steps.
 
 

Moving the capture in discrete steps


 
To calculate the size of a pixel in world units, use the following equation:
 
 
(1 / RenderTargetResolution) * CaptureSize
 
To calculate the new position, we apply the equation below to each component of the position (in our case, the X and Y coordinates).
 
 
(floor(Position / PixelWorldSize) + 0.5) * PixelWorldSize
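
Both equations can be checked with a quick sketch (illustrative function names; 256 and 2048 are the resolution and Ortho Width used in this tutorial):

```python
import math

def pixel_world_size(resolution=256, capture_size=2048.0):
    """First equation: size of one render target pixel in world units."""
    return (1.0 / resolution) * capture_size

def snap_to_pixel_world_size(position, pixel_size):
    """Second equation: snap a position component to the pixel grid
    (what the SnapToPixelWorldSize macro does)."""
    return (math.floor(position / pixel_size) + 0.5) * pixel_size

size = pixel_world_size()
print(size)                                  # 8.0 -> each pixel covers 8x8 units
print(snap_to_pixel_world_size(13.0, size))  # 12.0
print(snap_to_pixel_world_size(16.0, size))  # 20.0
```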
 
Now let's use these in the capture Blueprint. To save time, I created a macro for the second equation called SnapToPixelWorldSize. Open BP_Capture, and then open the MoveCapture function. Then create the following setup:
 
 

 
This calculates the new location and stores the difference between the new and current locations in MoveOffset. If you use a resolution other than 256×256, change the highlighted value.
 
 
Next, add the highlighted nodes:
 
 

 
This moves the capture by the calculated offset and then stores the new capture location in MPC_Capture so the landscape material can use it.
 
 
Finally, we need to update the position every frame. Close the function and, in Event Tick, add MoveCapture before DrawToPersistent.
 
 

 
Moving the capture is only half the solution. We also need to move the persistent buffer; otherwise, the capture and the persistent buffer fall out of sync and produce strange results.
 
 

 

Moving the persistent buffer


 
To shift the persistent buffer, we need to pass it the calculated move offset. Open M_DrawToPersistent and add the highlighted nodes:
 
 

 
This shifts the persistent buffer by the given offset. As in the landscape material, we need to flip the X coordinate and perform the masking. Click Apply and close the material.
 
 
Then we need to pass in the offset. Open BP_Capture, and then open the DrawToPersistent function. Then add the highlighted nodes:
 
 

 
This converts MoveOffset into UV space and passes it to the draw material.
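
The world-to-UV conversion of the offset is a simple division by the capture size. A sketch (illustrative helper name; 2048 is the Ortho Width used throughout):

```python
def move_offset_to_uv(offset_x, offset_y, capture_size=2048.0):
    """Sketch of converting MoveOffset from world units to UV space.

    The persistent buffer is shifted by this UV offset so it stays
    aligned with the moved capture.
    """
    return (offset_x / capture_size, offset_y / capture_size)

# Moving the capture by one pixel (8 world units at the 8:1 ratio)
# shifts the persistent buffer by 1/256 in UV space:
print(move_offset_to_uv(8.0, 0.0))  # (0.00390625, 0.0)
```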
 
 
Click Compile, and then close the Blueprint. Click Play and run around! No matter how far you go, there will always be tracks around you.
 
 

 

Where to go from here?


 
The finished project can be downloaded from here.
 
 
The tracks created in this tutorial aren't just for snow. You can also use them for things like trampled grass (in the next tutorial, I'll show how to build an extended version of this system).
 
 
If you want to do more with landscapes and render targets, I recommend watching Chris Murphy's video Building High-End Gameplay Effects with Blueprint. In it, you'll learn how to create a giant laser that burns the ground and grass!