# Learn OpenGL. Lesson 6.4 - IBL. Specular irradiance

In the previous lesson we prepared our PBR model for the IBL method: for that we had to precompute an irradiance map describing the diffuse part of the indirect lighting. In this lesson we focus on the second, specular, part of the reflectance equation:

$$L_o(p,\omega_o) = \int_\Omega \left(k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o\cdot n)(\omega_i\cdot n)}\right) L_i(p,\omega_i)\, n\cdot\omega_i\, d\omega_i$$

Contents

**Part 1. Getting started**
- OpenGL
- Creating a window
- Hello Window
- Hello Triangle
- Shaders
- Textures
- Transformations
- Coordinate systems
- Camera

**Part 2. Basic lighting**
- Colors
- Lighting basics
- Materials
- Texture maps
- Light sources
- Multiple light sources

**Part 3. Loading 3D models**
- The Assimp library
- The Mesh class
- The Model class

**Part 4. Advanced OpenGL features**
- Depth test
- Stencil test
- Color blending
- Face culling
- Framebuffer
- Cube maps
- Advanced work with data
- Advanced GLSL
- Geometry shader
- Instancing
- Anti-aliasing

**Part 5. Advanced lighting**
- Advanced lighting. The Blinn-Phong model
- Gamma correction
- Shadow maps
- Omnidirectional shadow maps
- Normal mapping
- Parallax mapping
- HDR
- Bloom
- Deferred rendering
- SSAO

**Part 6. PBR**
- Theory
- Analytic light sources
- IBL. Diffuse irradiance
- IBL. Specular irradiance

You can see that the Cook-Torrance specular component (the sub-expression with the $k_s$ factor

) is not constant and depends both on the direction of the incident light and on the view direction. Solving this integral for all possible incident directions combined with all possible view directions in real time is simply not feasible. Epic Games' researchers therefore proposed an approach known as the split sum approximation, which, under a few conditions, allows the data for the specular component to be partially prepared in advance.

In this approach the specular part of the reflectance equation is split into two parts that can be pre-convoluted separately and then combined in the PBR shader as the source of indirect specular lighting. As with the irradiance map, the convolution process takes an HDR environment map as its input.

To understand the split sum approximation, look again at the reflectance equation, keeping only the specular sub-expression (the diffuse part was handled separately in the previous lesson):

$$L_o(p,\omega_o) = \int_\Omega k_s\frac{DFG}{4(\omega_o\cdot n)(\omega_i\cdot n)} L_i(p,\omega_i)\, n\cdot\omega_i\, d\omega_i = \int_\Omega f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\, n\cdot\omega_i\, d\omega_i$$

As with the irradiance map, there is no possibility of solving this integral in real time. We would therefore like to similarly precompute a map for the specular component and, in the main render loop, get by with a simple sample of that map based on the surface normal. However, it is not that simple: the irradiance map could be obtained relatively easily because the integral depended only on $\omega_i$, and the constant Lambertian diffuse term could be moved out of the integral sign. This time the integral depends on more than just $\omega_i$, as is easy to see from the BRDF:

$$f_r(p,\omega_i,\omega_o) = \frac{DFG}{4(\omega_o\cdot n)(\omega_i\cdot n)}$$

The integrand also depends on $\omega_o$, and a pre-computed cube map cannot be sampled with two direction vectors. The position $p$ can be ignored here, for the reasons discussed in the previous lesson. Pre-computing the integral for every possible combination of $\omega_i$ and $\omega_o$ is impossible in real-time applications.

The split sum method from Epic Games solves the problem by splitting the pre-computation into two independent parts whose results can later be combined into the overall pre-computed value. It separates two integrals out of the original specular expression:

$$L_o(p,\omega_o) \approx \int_\Omega L_i(p,\omega_i)\, d\omega_i \;\cdot\; \int_\Omega f_r(p,\omega_i,\omega_o)\, n\cdot\omega_i\, d\omega_i$$

The result of convoluting the first part is usually called the pre-filtered environment map: an environment map pre-convoluted for increasing roughness levels, with each result stored in a successive mip level. The cube map for it is created as follows:

```cpp
unsigned int prefilterMap;
glGenTextures(1, &prefilterMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, prefilterMap);
for (unsigned int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 128, 128, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
```
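Each pre-filtered roughness level lives in one mip of this 128×128 cube map, so the per-face size halves at every level. A trivial standalone helper (illustrative, not part of the lesson's code) makes the numbers concrete:

```cpp
// Face size of mip `level` for a cube map whose level 0 is `base` x `base`.
// For base = 128 and the five levels (0..4) used in this lesson:
// 128, 64, 32, 16, 8.
unsigned int mipSize(unsigned int base, unsigned int level)
{
    return base >> level; // each mip halves the previous one
}
```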

Please note: since samples from prefilterMap will be taken with its mip levels in mind, the minification filter must be set to GL_LINEAR_MIPMAP_LINEAR to enable trilinear filtering. The pre-filtered map is stored at a fairly modest base resolution of 128×128 per face; this is usually sufficient, because reflections become blurred even when the roughness is only slightly above zero:

(Image: the pre-filtered environment map at increasing roughness levels.)

The generalized shape of possible outgoing reflection directions is called the specular lobe. As roughness increases, the lobe grows and widens; its shape also changes with the direction of the incident light. The shape of the lobe therefore depends strongly on the material.

Returning to the microsurface model, you can picture the specular lobe as describing the orientation of the reflection around the halfway vector of the microsurfaces for some given direction of incident light. Realizing that most of the reflected rays lie inside the lobe oriented around the halfway vector, it makes sense to generate sample vectors oriented in a similar way: otherwise many of them are wasted. This approach is called importance sampling.

## Monte Carlo integration and importance sampling

To fully grasp importance sampling, we first need to get acquainted with a mathematical tool known as Monte Carlo integration. It is built on a combination of statistics and probability theory and helps solve a statistical problem numerically over a large population without having to consider every element of that population.

For example, suppose you want to compute the average height of a country's population. To get an exact, reliable result you would have to measure the height of every citizen and average the results.
However, since the population of most countries is rather large, this approach is practically unrealizable: it would require far too many resources. An alternative is to take a smaller subset filled with truly random (unbiased) elements of the original population, measure their heights, and average the result. Even a sample of a hundred people gives an answer that, while not exact, is reasonably close to the true value. The explanation lies in the law of large numbers, whose essence is as follows: the result of a measurement over a smaller subset of size $N$, composed of truly random elements of the original set, will be close to the result measured over the entire original set, and the approximation tends to the true value as $N$ grows.

Monte Carlo integration applies the law of large numbers to solving integrals. Instead of solving an integral over the entire (possibly infinite) set of values of $x$, we take $N$ random sample points and average the results. As $N$ increases, the estimate is guaranteed to approach the exact solution of the integral:

$$O = \int_a^b f(x)\, dx = \frac{1}{N}\sum_{i=0}^{N-1}\frac{f(x)}{pdf(x)}$$

To solve the integral, we evaluate the integrand at $N$ random points within $[a, b]$, sum the results, and divide by the total number of points to average them. The term $pdf(x)$ is the probability density function, describing the probability of a particular sample occurring over the total sample set. When solving the reflectance integral over the hemisphere $\Omega$, we likewise generate $N$ random sample vectors lying inside it and perform a weighted summation of the obtained values.

The Monte Carlo method is a very broad topic and we will not go into further detail here, but one important point remains: there is more than one way to generate the random samples. By default every sample point is completely (uniformly) random, which is what we would expect. But by exploiting certain properties of quasi-random sequences we can generate sets of vectors that are still random yet have interesting properties. For example, when generating random samples for the integration process we can use so-called low-discrepancy sequences, which still produce random sample points, but distributed more evenly over the whole set:

(Image: uniformly random versus low-discrepancy sample points on the unit square.)
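Before moving on, the basic estimator above is easy to try in isolation. Below is a minimal standalone C++ sketch (illustrative, not lesson code) that applies the uniform-pdf estimator to an integral with a known answer, namely the integral of sin(x) over [0, pi], which equals exactly 2:

```cpp
#include <cmath>
#include <random>

// Monte Carlo estimate of the integral of f over [a, b] with n uniform
// samples: (1/n) * sum f(x_i)/pdf(x_i), where the uniform pdf is 1/(b - a).
template <typename F>
double monteCarloIntegrate(F f, double a, double b, int n)
{
    std::mt19937 rng(42); // fixed seed so the sketch is reproducible
    std::uniform_real_distribution<double> dist(a, b);
    const double pdf = 1.0 / (b - a);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += f(dist(rng)) / pdf;
    return sum / n;
}
```

With n = 100000 the estimate lands within a few hundredths of the exact value 2, and the error shrinks roughly as 1/sqrt(n), which is exactly the slow convergence that quasi-random sequences and importance sampling improve on.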

Using a low-discrepancy sequence to generate the Monte Carlo sample vectors is known as quasi-Monte Carlo integration. Quasi-Monte Carlo methods converge considerably faster than the general approach, which makes them very attractive for applications with high performance requirements.

So we now know the general and the quasi-Monte Carlo methods, but there is one more detail that provides an even greater convergence rate: importance sampling. As already noted in this lesson, for specular reflections the directions of the reflected light are confined to a specular lobe whose size and shape depend on the roughness of the reflecting surface. Understanding that any (quasi-)random sample vector outside the lobe contributes nothing to the specular integral, i.e. is useless, it makes sense to concentrate the generated sample vectors inside the lobe, at the cost of biasing the Monte Carlo estimator.

This is the essence of importance sampling: sample vectors are generated within a region oriented around the halfway vector of the microsurfaces, with a shape determined by the roughness of the material. Combining the quasi-Monte Carlo method, a low-discrepancy sequence, and importance-sampled biasing of the sample vectors yields a very high rate of convergence, so we can get by with a noticeably smaller number of sample vectors while still reaching a reasonable estimate. In principle this combination even allows graphics applications to solve the specular integral in real time, although pre-computation still remains a much more advantageous approach.

## A low-discrepancy sequence

In this lesson we will precompute the specular component of the reflectance equation for indirect lighting, using importance sampling with a random low-discrepancy sequence and the quasi-Monte Carlo method. The sequence used is known as the Hammersley sequence, which is based on the Van Der Corput sequence: a sequence that mirrors the binary representation of a number around its binary point. With a few handy bit tricks, it can be generated in shader code very efficiently:

```glsl
float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}
// ----------------------------------------------------------------------------
vec2 Hammersley(uint i, uint N)
{
    return vec2(float(i)/float(N), RadicalInverse_VdC(i));
}
```
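The same bit trick is easy to check on the CPU. This is a direct C++ port of the GLSL above (a host-side verification helper, not part of the lesson's code):

```cpp
#include <cmath>
#include <cstdint>
#include <utility>

// Van Der Corput radical inverse: mirrors the 32-bit binary representation
// of `bits` around the binary point, producing a value in [0, 1).
float radicalInverseVdC(std::uint32_t bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f; // 1 / 2^32
}

// i-th point of the N-point 2D Hammersley set on the unit square.
std::pair<float, float> hammersley(std::uint32_t i, std::uint32_t n)
{
    return { float(i) / float(n), radicalInverseVdC(i) };
}
```

For instance, point 1 of 4 lands at (0.25, 0.5): reversing the bits of 1 gives the binary fraction 0.1 in base 2, i.e. 0.5, while point 2 maps to 0.25 and point 3 to 0.75, filling the interval evenly rather than clumping.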

The GLSL Hammersley function gives us the low-discrepancy sample i out of a total sample set of size N.

At low mip levels of the pre-filtered map, visible seams can appear along the borders of the cube map faces, because by default OpenGL does not filter across face boundaries. Fortunately it can be told to do so with glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS): it is enough to set this flag somewhere in the application initialization code, and this artifact is gone.

## The appearance of bright dots

Since specular reflections generally contain high-frequency detail as well as areas of strongly varying brightness, their convolution requires a large number of sample points to correctly account for the wide spread of values inside the HDR environment reflections. In the example we already take a fairly large number of samples, but for certain scenes and high material roughness levels this will still not be enough, and you will witness many dots appearing around bright areas:

(Image: bright-dot artifacts on the pre-filtered map at higher roughness levels.)

We could keep increasing the number of samples, but that is not a universal fix and under some conditions the artifact will still show through. Instead we can turn to a method by Chetan Jags that reduces it: during the pre-filter convolution, instead of sampling the environment map directly, we sample one of its mip levels, chosen from the probability density function of the integrand and the roughness:

```glsl
float D   = DistributionGGX(NdotH, roughness);
float pdf = (D * NdotH / (4.0 * HdotV)) + 0.0001;

// resolution of a single face of the source cube map
float resolution = 512.0;
float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);

float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel);
```
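To see why this suppresses the dots, here is the same selection logic as a standalone C++ sketch (function and parameter names are mine, not from the lesson). Dense samples (high pdf) cover a small solid angle and read the sharp base level, while sparse samples on rough surfaces cover a large solid angle and read a higher, pre-blurred mip:

```cpp
#include <cmath>

const float PI = 3.14159265359f;

// Mip selection from the trick above: compare the solid angle associated
// with one sample (1 / (sampleCount * pdf)) against the solid angle of one
// source texel (sphere = 4*pi spread over 6 faces of resolution^2 texels).
float chooseMipLevel(float roughness, float pdf,
                     float resolution, float sampleCount)
{
    float saTexel  = 4.0f * PI / (6.0f * resolution * resolution);
    float saSample = 1.0f / (sampleCount * pdf + 0.0001f);
    return (roughness == 0.0f) ? 0.0f : 0.5f * std::log2(saSample / saTexel);
}
```

A perfectly smooth surface always reads mip 0, and for a fixed roughness a lower pdf (a more spread-out sample) selects a higher, blurrier mip, which is what averages away the isolated bright texels.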

Just do not forget to enable trilinear filtering on the environment map so that its mip levels can be sampled successfully:

```cpp
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
```

Also remember to have OpenGL generate the mip levels of that texture, but only after its base mip level has been fully formed:

```cpp
// HDR conversion of the equirectangular environment map into a cube map
[...]
// then generate the mip levels
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
```

This method works surprisingly well, removing almost all (and often all) dots from the filtered map, even at high roughness levels.

## Pre-computing the BRDF

So, we have successfully pre-filtered the environment map and can concentrate on the second part of the split sum approximation: the BRDF. To refresh your memory, here is the full approximation again:

$$L_o(p,\omega_o) \approx \int_\Omega L_i(p,\omega_i)\, d\omega_i \;\cdot\; \int_\Omega f_r(p,\omega_i,\omega_o)\, n\cdot\omega_i\, d\omega_i$$

We have already computed the left part of the sum and recorded the results for different roughness levels in a separate cube map. The right part requires convoluting the BRDF expression over the angle $n\cdot\omega_o$, the surface roughness, and the Fresnel coefficient $F_0$. The process is similar to integrating the specular BRDF against a completely white environment, i.e. with a constant radiance $L_i = 1$. Convoluting the BRDF over three variables is not a trivial task, but $F_0$ can be moved out of the specular BRDF expression:

$$\int_\Omega f_r(p,\omega_i,\omega_o)\, n\cdot\omega_i\, d\omega_i = \int_\Omega f_r(p,\omega_i,\omega_o)\frac{F(\omega_o,h)}{F(\omega_o,h)}\, n\cdot\omega_i\, d\omega_i$$

Here $F$ is the function computing the Fresnel coefficient. Moving the denominator into the BRDF gives the following equivalent form:

$$\int_\Omega \frac{f_r(p,\omega_i,\omega_o)}{F(\omega_o,h)}F(\omega_o,h)\, n\cdot\omega_i\, d\omega_i$$

Replacing the rightmost occurrence of $F$ with the Fresnel-Schlick approximation, we get:

$$\int_\Omega \frac{f_r(p,\omega_i,\omega_o)}{F(\omega_o,h)}\left(F_0 + (1 - F_0)(1 - \omega_o\cdot h)^5\right) n\cdot\omega_i\, d\omega_i$$

Let us denote $(1 - \omega_o\cdot h)^5$ as $\alpha$ to simplify solving for $F_0$:

$$\int_\Omega \frac{f_r}{F}\left(F_0 + (1 - F_0)\alpha\right) n\cdot\omega_i\, d\omega_i$$
$$= \int_\Omega \frac{f_r}{F}\left(F_0 + \alpha - F_0\alpha\right) n\cdot\omega_i\, d\omega_i$$
$$= \int_\Omega \frac{f_r}{F}\left(F_0(1 - \alpha) + \alpha\right) n\cdot\omega_i\, d\omega_i$$

Next we split the expression into two integrals:

$$= F_0\int_\Omega \frac{f_r}{F}(1 - \alpha)\, n\cdot\omega_i\, d\omega_i + \int_\Omega \frac{f_r}{F}\alpha\, n\cdot\omega_i\, d\omega_i$$

$F_0$ is constant under the integral, which is why we could take it out of the integral sign. Expanding $\alpha$ back into its original expression gives the final split-sum form of the BRDF:

$$= F_0\int_\Omega f_r\left(1 - (1 - \omega_o\cdot h)^5\right) n\cdot\omega_i\, d\omega_i + \int_\Omega f_r(1 - \omega_o\cdot h)^5\, n\cdot\omega_i\, d\omega_i$$

The two resulting integrals are, respectively, a scale and a bias for $F_0$. Note that $f_r$ already contains a factor $F$, so both occurrences of $F$ cancel out and disappear from the expression.

Using the approach developed earlier, we can convolute the BRDF over its inputs: the roughness and the angle between the vectors $n$ and $\omega_o$. The result is written into a 2D texture, the BRDF integration map, which will serve as a lookup table in the final shader, where the final result of indirect specular lighting is assembled.

The BRDF convolution shader operates on a flat plane, directly using its two-dimensional texture coordinates as the inputs of the convolution process (NdotV and roughness). The code is much like the pre-filter convolution, except that the sample vector is now processed with the BRDF's geometry function and the rearranged Fresnel-Schlick expression:

```glsl
vec2 IntegrateBRDF(float NdotV, float roughness)
{
    vec3 V;
    V.x = sqrt(1.0 - NdotV*NdotV);
    V.y = 0.0;
    V.z = NdotV;

    float A = 0.0;
    float B = 0.0;

    vec3 N = vec3(0.0, 0.0, 1.0);

    const uint SAMPLE_COUNT = 1024u;
    for (uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = Hammersley(i, SAMPLE_COUNT);
        vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);

        float NdotL = max(L.z, 0.0);
        float NdotH = max(H.z, 0.0);
        float VdotH = max(dot(V, H), 0.0);

        if (NdotL > 0.0)
        {
            float G = GeometrySmith(N, V, L, roughness);
            float G_Vis = (G * VdotH) / (NdotH * NdotV);
            float Fc = pow(1.0 - VdotH, 5.0);

            A += (1.0 - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    A /= float(SAMPLE_COUNT);
    B /= float(SAMPLE_COUNT);
    return vec2(A, B);
}
// ----------------------------------------------------------------------------
void main()
{
    vec2 integratedBRDF = IntegrateBRDF(TexCoords.x, TexCoords.y);
    FragColor = integratedBRDF;
}
```
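The key algebraic step, rewriting Fresnel-Schlick as F0 * (1 - Fc) + Fc with Fc = (1 - cosTheta)^5, is exactly what lets the shader accumulate the two terms A and B independently of F0. A tiny standalone C++ check of that identity (illustrative, not lesson code):

```cpp
#include <cmath>

// Fresnel-Schlick approximation: F = F0 + (1 - F0) * (1 - cosTheta)^5.
float fresnelSchlick(float f0, float cosTheta)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// The scale/bias form used by the BRDF lookup table: F0 * (1 - Fc) + Fc,
// with Fc = (1 - cosTheta)^5. Algebraically identical to fresnelSchlick.
float fresnelScaleBias(float f0, float cosTheta)
{
    float fc = std::pow(1.0f - cosTheta, 5.0f);
    return f0 * (1.0f - fc) + fc;
}
```

Since the two forms agree for every F0 and every angle, storing only the averaged scale (A) and bias (B) is enough to reconstruct the Fresnel-weighted BRDF response for any F0 later.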

As can be seen, the BRDF convolution is an almost literal translation of the mathematics above. The roughness and the angle $\theta$ are taken as inputs, a sample vector is generated with importance sampling and processed with the geometry function and the rearranged Fresnel term of the BRDF. As a result, each sample yields a scale and a bias for $F_0$, which are averaged at the end and returned as a vec2.

The theory lesson mentioned that the geometry term of the BRDF is slightly different when computing IBL, since the coefficient $k$ is defined differently:

$$k_{direct} = \frac{(\alpha + 1)^2}{8}$$
$$k_{IBL} = \frac{\alpha^2}{2}$$

Since the BRDF convolution is part of solving the specular IBL integral, we use the coefficient $k_{IBL}$ when computing the geometry function in the Schlick-GGX model:

```glsl
float GeometrySchlickGGX(float NdotV, float roughness)
{
    float a = roughness;
    float k = (a * a) / 2.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2 = GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
```

Note that the coefficient $k$ is computed from the parameter a. In this case the roughness is passed through as a directly, without the squaring applied in some other interpretations of a.

Finally, to store the result of the BRDF convolution we create a 2D texture at a resolution of 512×512:

```cpp
unsigned int brdfLUTTexture;
glGenTextures(1, &brdfLUTTexture);

// reserve enough memory to store the auxiliary texture
glBindTexture(GL_TEXTURE_2D, brdfLUTTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, 512, 512, 0, GL_RG, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

As recommended by Epic Games, a 16-bit floating-point format is used here. Be sure to set the wrap mode to GL_CLAMP_TO_EDGE to prevent sampling artifacts at the edges.

With all the auxiliary maps prepared, we can complete the main PBR shader. First, the indirect specular reflection is obtained by sampling the pre-filtered environment map along the reflection vector, choosing a mip level based on the roughness:

```glsl
void main()
{
    [...]
    vec3 R = reflect(-V, N);

    const float MAX_REFLECTION_LOD = 4.0;
    vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
    [...]
}
```

At the pre-filtering stage we prepared only five mip levels (zero through four); the constant MAX_REFLECTION_LOD limits sampling to the mip levels that actually exist.

Next, we sample the BRDF integration map based on the roughness and the angle between the normal and the view direction:

```glsl
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec2 envBRDF = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
```

The value fetched from the map contains the scale and the bias for $F_0$ (here the role of $F_0$ is played by F, the Fresnel coefficient computed with roughness taken into account). This gives us the indirect specular term, which we combine with the diffuse IBL part from the previous lesson into the full indirect lighting:

```glsl
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);

vec3 kS = F;
vec3 kD = 1.0 - kS;
kD *= 1.0 - metallic;

vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;

const float MAX_REFLECTION_LOD = 4.0;
vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
vec2 envBRDF = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);

vec3 ambient = (kD * diffuse + specular) * ao;
```
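In plain numbers, the combination is just one multiply-add per channel. A standalone CPU sketch of the same combination step (illustrative, not lesson code):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Combine the two pre-computed halves of the split sum:
// specular = prefilteredColor * (F * scale + bias),
// where (scale, bias) is the texel fetched from the BRDF integration map.
Vec3 splitSumSpecular(const Vec3& prefilteredColor, const Vec3& F,
                      float scale, float bias)
{
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = prefilteredColor[i] * (F[i] * scale + bias);
    return out;
}
```

For example, with a white pre-filtered sample, a typical dielectric F of 0.04, and a lookup texel of (1.0, 0.0), the specular term comes out to 0.04 per channel, exactly the Fresnel reflectance at normal incidence.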

I note that specular is not multiplied by kS, since the Fresnel coefficient is already included in it.

Running the application on the familiar set of spheres with varying metalness and roughness, we finally see the full PBR picture:

(Image: spheres under full IBL lighting.)

Or we can even load a fine model along with prepared PBR textures from Andrew Maximov:

(Image: a PBR-textured model under IBL.)

I think no one needs much convincing that the current lighting model looks far more convincing. Moreover, the lighting looks physically correct regardless of the environment map. Below, several completely different HDR environment maps are used that completely change the character of the lighting, yet all the images look physically plausible even though not a single material parameter was touched! (In fact, this simplification of working with materials is the main practical benefit of the PBR pipeline, and the better picture is, one might say, a pleasant consequence. Translator's note.)

(Image: the same scene under several different HDR environment maps.)

Phew, our journey into the essence of PBR rendering has come out quite voluminous. We walked to the result through a whole series of small steps, and of course much can go wrong on the first attempts. Therefore, in case of any problems, I advise you to look carefully through the example code for the monochrome and textured spheres (and the shader code, of course!), or to ask for advice in the comments.

## What's next?

I hope that by the time you read these lines you have formed a clear understanding of how the PBR rendering model works, and have managed to build and run the test application. In these lessons, all the auxiliary texture maps for the PBR model were computed in our application in advance, before the main render loop. For learning purposes this approach is fine, but not for practical use. First, such preparation should happen only once, not at every application launch. Second, if you decide to add more environment maps, each of them will also have to be processed at startup; and with every map added, the amount of work snowballs.
That is why, in general, the irradiance map and the pre-filtered environment map are prepared once and then stored on disk (the BRDF integration map does not depend on a particular environment map, so it can be computed or loaded just once). It follows that you will need a format for storing HDR cube maps, including their mip levels. Alternatively, you can store and load them using one of the common formats (.dds, for example, supports saving mip levels).

Another important point: to give a deep understanding of the PBR pipeline, these lessons described the full preparation process, including the pre-computation of the auxiliary IBL maps. In your own practice, however, you may just as well use one of the excellent utilities that prepare these maps for you, such as cmftStudio or IBLBaker.

We also have not covered the preparation of reflection probe cube maps and the related processes of cube map interpolation and parallax correction. Briefly, the technique is as follows: we place a number of reflection probe objects in the scene, each capturing a local picture of its surroundings as a cube map, from which all the auxiliary IBL maps are then built. By interpolating between several probes based on the camera's distance, we obtain highly detailed image-based lighting whose quality is limited, in essence, only by the number of probes we are willing to place in the scene. This approach lets the lighting change correctly, for example, when moving from a brightly lit street into the dusk of some room. I will probably write a lesson about reflection probes in the future, but for now I can only recommend the article by Chetan Jags listed below. (An implementation of the probes, and much more, can be seen in the sources of the tutorial author's own renderer. Translator's note.)

## Additional materials

- Real Shading in Unreal Engine 4: an explanation of Epic Games' split sum approximation of the specular component. The code for the IBL PBR lessons is based on this article.
- Physically Based Shading and Image Based Lighting: an excellent article describing how to incorporate the specular IBL component into a PBR pipeline in an interactive application.
- Image Based Lighting: a very extensive and detailed post about specular IBL and related issues, including light probe interpolation.
- Moving Frostbite to PBR: a well-designed and fairly technically detailed presentation on integrating a PBR model into an "AAA" game engine.
- Physically Based Rendering - Part Three: an overview of IBL and PBR from the JMonkeyEngine developers.
- Implementation Notes: Runtime Environment Filtering for Image Based Lighting: detailed notes on the pre-filtering of HDR environment maps and possible optimizations of the sampling process.


You can see that the Cook-Torrance specular term (the part of the integrand weighted by $k_s$) is not constant over the integration domain: it depends on the incoming light direction and *also* on the view direction. Solving this integral for all possible incoming light directions combined with all possible view directions in real time is simply not feasible. Epic Games' researchers therefore proposed an approach known as the *split sum approximation*, which makes it possible to pre-compute part of the data for the specular term, under a few conditions.

The split sum approximation splits the specular part of the reflectance equation into two separate parts that can be convolved individually and later combined in the PBR shader as a source of indirect specular radiance. As with the irradiance map, the convolution process takes an HDR environment map as its input.

To understand the split sum approximation, look again at the reflectance equation, keeping only the specular term (the diffuse part was handled separately in the previous lesson):

$$L_o(p, \omega_o) = \int_\Omega f_r(p, \omega_i, \omega_o)\, L_i(p, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$

As with the irradiance map, there is no way to solve this integral in real time, so we would like to pre-compute a map for the specular term as well and get by with a single sample of that map in the main render loop, based on the surface normal. But it is not that simple: the irradiance map was obtained relatively easily because the integral depended only on $\omega_i$, which allowed the constant Lambertian diffuse term to be moved out of the integral. This time, the integral depends not only on $\omega_i$ but also on $\omega_o$, as is clear from the BRDF:

$$f_r(p, \omega_i, \omega_o) = \frac{D\,F\,G}{4\,(\omega_o \cdot n)(\omega_i \cdot n)}$$

An integrand that depends on two direction vectors is practically impossible to sample from a single pre-computed cubemap. (The position $p$ can again be ignored, for the reasons discussed in the previous lesson.) Pre-computing the integral for every possible combination of $\omega_i$ and $\omega_o$ is out of the question for real-time applications.

Epic Games' split sum approximation solves the problem by splitting the pre-computation into two individual parts whose results can later be combined to obtain the overall pre-computed value. It separates two integrals out of the original specular expression:

$$L_o(p, \omega_o) \approx \int_\Omega L_i(p, \omega_i)\, d\omega_i \;*\; \int_\Omega f_r(p, \omega_i, \omega_o)\, (n \cdot \omega_i)\, d\omega_i$$

The result of the first part is commonly called the *pre-filtered environment map*: the environment map convolved according to this expression, this time with roughness taken into account. Higher roughness levels use more widely scattered sample vectors during the convolution, which produces blurrier results. The convolution result for each successive roughness level is stored in the next mip level of the pre-filtered map. For example, an environment map convolved for five roughness levels contains five mip levels and looks like this:

To generate the sample vectors and their amount of scatter, we make a further approximation: we assume the view direction (and thus the specular reflection direction) always equals the output sample direction $\omega_o$:

```glsl
vec3 N = normalize(w_o);
vec3 R = N;
vec3 V = R;
```

This way, the pre-filter convolution does not need to know the view direction, which makes the pre-computation feasible. On the other hand, we lose the characteristic stretched specular reflections seen when viewing a reflective surface at a grazing angle, as shown in the image below (taken from the Moving Frostbite to PBR publication). In general, this compromise is considered acceptable.

The second part of the split sum expression contains the BRDF of the specular integral. If we assume that the incoming radiance is completely white for every direction (i.e. $L_i(p, \omega_i) = 1.0$

), then it becomes possible to pre-calculate the BRDF's response for each combination of input roughness and input angle between the normal $n$ and the light direction $\omega_i$ (or rather $n \cdot \omega_i$). The Epic Games approach stores the pre-computed BRDF response for every roughness/angle combination in a 2D texture known as the *BRDF integration map*, which is later used as a look-up table (*LUT*). This look-up texture uses its red and green channels to store a scale and a bias to the surface's Fresnel response, which finally gives us the solution of the second part of the split sum expression.

The look-up texture is generated by treating the horizontal texture coordinate (in the range [0.0, 1.0]) as the BRDF's input $n \cdot \omega_i$, and the vertical texture coordinate as the input roughness value.

With this integration map and the pre-filtered environment map, a sample from each can be combined to obtain the final value of the indirect specular integral:

```glsl
float lod             = getMipLevelFromRoughness(roughness);
vec3 prefilteredColor = textureCubeLod(PrefilteredEnvMap, refVec, lod);
vec2 envBRDF          = texture2D(BRDFIntegrationMap, vec2(NdotV, roughness)).xy;
vec3 indirectSpecular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
```

This overview of Epic Games' split sum approximation should give you an idea of how the specular part of the reflectance equation is approximately pre-computed. Now let's try to build these maps ourselves.

## Pre-filtering an HDR environment map

Pre-filtering the environment map is very similar to the convolution we used to obtain the irradiance map. The difference is that we now take roughness into account and store the result for each roughness level in a separate mip level of a new cubemap.

First, we create the cubemap that will hold the pre-filtered result. To allocate memory for the required number of mip levels of the currently bound texture, we simply call glGenerateMipmap():

```cpp
unsigned int prefilterMap;
glGenTextures(1, &prefilterMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, prefilterMap);
for (unsigned int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 128, 128, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
```

Note that since we will sample from *prefilterMap* with its mip levels taken into account, the minification filter has to be set to *GL_LINEAR_MIPMAP_LINEAR* to enable trilinear filtering. The pre-filtered specular reflections are stored in the cubemap faces with a base mip resolution of only 128x128 pixels. For most materials this is quite enough, but if your scene contains a large number of smooth, shiny surfaces (a new car, say), you may need to increase this resolution.

In the previous lesson we convolved the environment map by generating sample vectors uniformly distributed over the hemisphere $\Omega$, using spherical coordinates. While this works well for irradiance, it is far less effective for specular reflections. The physics of specular highlights tells us that the directions of specularly reflected light cluster around the reflection vector $r$ of a surface with normal $n$, even if the roughness is not zero:

The generalized shape of the possible outgoing reflection directions is known as the *specular lobe*. As roughness increases, the lobe grows and widens, and its shape also changes with the direction of the incoming light; the shape of the lobe thus depends strongly on the material. Returning to the microsurface model, one can picture the specular lobe as describing the reflection orientation around the halfway vector of the microfacets for some given incoming light direction. Realizing that most of the reflected light ends up inside the lobe oriented around the halfway vector, it makes sense to generate sample vectors oriented in a similar way; otherwise most of them would be wasted. This approach is called *importance sampling*.

### Monte Carlo integration and importance sampling

To fully grasp importance sampling, we first have to get acquainted with a mathematical tool called Monte Carlo integration. Monte Carlo integration builds on a combination of statistics and probability theory, and helps solving a statistical problem over a large population numerically, without having to consider *every* member of that population.

For example, say you want to compute the average height of the population of a country. To obtain an exact and reliable result, you would have to measure *every* citizen and average their heights. Since the population of most countries is quite large, this approach is practically unrealizable: it requires far too many resources. Another approach is to pick a much smaller, truly random (unbiased) subset of the population, measure their heights and average the result. You can take as few as a hundred people and obtain an answer that, while not exact, is still reasonably close to the true value. The explanation lies in the law of large numbers, whose essence is the following: the result of a measurement over a smaller set of size $N$, composed of truly random elements of the original population, will be close to the true result measured over the whole population, and the approximation tends to the true value as $N$ grows.

Monte Carlo integration applies the law of large numbers to solving integrals. Instead of solving an integral over the entire (possibly infinite) domain of $x$, we take $N$ random sample points and average the results. As $N$ increases, the estimate is guaranteed to converge to the exact solution of the integral:

$$O = \int_a^b f(x)\, dx = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{pdf(x_i)}$$

To solve the integral, we evaluate the integrand at $N$ random points within [a, b], sum the results and divide by the total number of samples to average them. The term $pdf(x)$ stands for the *probability density function*, which tells us how likely each particular value is to occur over the whole sample domain. For example, the pdf of the heights of a population would look something like this:

The graph shows that with random sample points we have a much higher chance of picking someone of height 170 cm than someone of height 150 cm.

When doing Monte Carlo integration, some samples are thus more likely to be generated than others. That is why in any Monte Carlo estimator we divide or multiply the sampled value by the probability of its occurrence according to the pdf. So far, when estimating integrals, the sample points we generated were uniformly distributed: each had exactly the same chance of occurring. Such estimates are *unbiased*, meaning that as the number of samples grows, the estimate converges to the exact solution of the integral.

However, some estimators are *biased*, meaning that the samples are not generated in a truly random fashion but are skewed toward a specific value or direction. Biased estimators let a Monte Carlo estimate converge to the exact solution *much faster*; on the other hand, because of the bias, the estimate may never truly converge. In general, this is an acceptable trade-off, especially in computer graphics, since an exact analytic answer is not required as long as the result looks visually convincing. As we will soon see with importance sampling (which uses a biased estimator), the generated samples are skewed toward specific directions, which we account for by multiplying or dividing each sampled value by the corresponding pdf value.
Monte Carlo integration is quite common in computer graphics, as it is an intuitive and fairly efficient way to estimate continuous integrals numerically: it is enough to take the region or volume to sample over (such as our hemisphere $\Omega$), generate $N$ random sample points inside it, and perform a weighted sum of the results.

Monte Carlo methods are a vast topic, and we will not go into further detail here, but one important point remains: there is more than one way to generate the *random samples*. By default, each sample point is fully (pseudo)random, which is what we usually expect. But by exploiting certain properties of quasi-random sequences, we can generate sets of vectors that are still random-like, yet have interesting properties. For instance, we can use so-called *low-discrepancy sequences*, which keep the sample points random, but distribute them more *evenly* over the whole set:

Monte Carlo integration driven by a low-discrepancy sequence is known as *quasi-Monte Carlo integration*. Quasi-Monte Carlo methods converge considerably faster than the general approach, which makes them very attractive for performance-critical applications.

So, we now know about general and quasi-Monte Carlo integration, but there is one more detail that yields an even higher convergence rate: importance sampling. As already noted in this lesson, the directions of specularly reflected light are confined to a specular lobe whose size and shape depend on the roughness of the reflecting surface. Realizing that any (quasi-)random sample vector outside the lobe contributes nothing to the specular integral, it makes sense to concentrate the generated samples inside the lobe, at the cost of making the Monte Carlo estimator biased.

This is the essence of importance sampling: sample vectors are generated within a region oriented around the halfway vector of the microfacets, with a spread determined by the material's roughness. Combining quasi-Monte Carlo integration, a low-discrepancy sequence and importance-sampling-based biasing of the sample generation gives a very high rate of convergence. Because convergence is reached quickly enough, far fewer sample vectors suffice for a reasonable estimate. In principle, this combination even allows graphics applications to solve the specular integral in real time, although pre-computation remains by far the cheaper option.

### A low-discrepancy sequence
In this lesson we will pre-compute the specular part of the indirect reflectance equation using importance sampling, a random low-discrepancy sequence and quasi-Monte Carlo integration. The sequence we will use is known as the *Hammersley sequence*, described in detail by Holger Dammertz. The Hammersley sequence is in turn based on the *Van der Corput sequence*, which mirrors the binary representation of a number around its decimal point.

Using some clever bit tricks, we can efficiently generate the Van der Corput sequence directly in a shader and use it to obtain the i-th element of the Hammersley sequence out of a total of $N$ samples:

```glsl
float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}
// ----------------------------------------------------------------------------
vec2 Hammersley(uint i, uint N)
{
    return vec2(float(i) / float(N), RadicalInverse_VdC(i));
}
```

The Hammersley() function returns the i-th element of a low-discrepancy sequence over a total sample set of size $N$.

Not all OpenGL drivers support bitwise operations (WebGL and OpenGL ES 2.0, for example), so for such environments an alternative version of the Van der Corput sequence that does not rely on them may be required:

```glsl
float VanDerCorpus(uint n, uint base)
{
    float invBase = 1.0 / float(base);
    float denom   = 1.0;
    float result  = 0.0;

    for (uint i = 0u; i < 32u; ++i)
    {
        if (n > 0u)
        {
            denom   = mod(float(n), 2.0);
            result += denom * invBase;
            invBase = invBase / 2.0;
            n       = uint(float(n) / 2.0);
        }
    }

    return result;
}
// ----------------------------------------------------------------------------
vec2 HammersleyNoBitOps(uint i, uint N)
{
    return vec2(float(i) / float(N), VanDerCorpus(i, 2u));
}
```

Note that, due to restrictions on loop statements in older hardware, this version iterates over all 32 possible bits. As a result, it is not as performant as the bitwise variant, but it runs on any hardware, even without bit operations.

### Importance sampling in the GGX model

Instead of a uniform or purely random (Monte Carlo) distribution of sample vectors over the integration hemisphere $\Omega$, we will generate vectors biased toward the general reflection direction given by the microfacet halfway vector, depending on the surface roughness. The sampling process is similar to what we have seen before: start a loop with a large number of iterations, generate an element of the low-discrepancy sequence, use it to build a sample vector in tangent space, transform it to world space and sample the scene's radiance. What changes is that an element of the low-discrepancy sequence is now used as input for generating the sample vector:

```glsl
const uint SAMPLE_COUNT = 4096u;
for (uint i = 0u; i < SAMPLE_COUNT; ++i)
{
    vec2 Xi = Hammersley(i, SAMPLE_COUNT);
}
```

Additionally, to fully build a sample vector, it has to be oriented and biased toward the specular lobe of the given roughness level. We can take the NDF (normal distribution function) from the theory lesson and combine the GGX NDF into the spherical sample vector generation process as described by Epic Games:

```glsl
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a = roughness * roughness;

    float phi      = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

    // conversion from spherical to Cartesian coordinates
    vec3 H;
    H.x = cos(phi) * sinTheta;
    H.y = sin(phi) * sinTheta;
    H.z = cosTheta;

    // transform from tangent space to world coordinates
    vec3 up        = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent   = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);

    vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
    return normalize(sampleVec);
}
```

3r3161313. 3r31659. 3r3161313. 3r31659. The result is a sample vector, approximately oriented along the median vector of microsurfaces, for a given roughness and element of the low mismatch sequence

out vec4 FragColor; 3r31659. in vec3 localPos; 3r31659. 3r31659. uniform samplerCube environmentMap; 3r31659. uniform float roughness; 3r31659. 3r31659. const float PI = ???; 3r31659. 3r31659. float RadicalInverse_VdC (uint bits); 3r31659. vec2 Hammersley (uint i, uint N); 3r31659. vec3 ImportanceSampleGGX (vec2 Xi, vec3 N, float roughness); 3r31659. 3r31659. void main ()

{

vec3 N = normalize (localPos); 3r31659. vec3 R = N; 3r31659. vec3 V = R; 3r31659. 3r31659. const uint SAMPLE_COUNT = 1024u; 3r31659. float totalWeight = 0.0; 3r31659. vec3 prefilteredColor = vec3 (0.0); 3r31659. for (uint i = 0u; i < SAMPLE_COUNT; ++i)

{

vec2 Xi = Hammersley (i, SAMPLE_COUNT);

vec3 H = ImportanceSampleGGX (Xi, N, roughness);

vec3 L = normalize (???n3 N, R, 3) H) * H - V);

Float NdotL = max (dot (N, L), 0.0);

If (NdotL> 0.0) 3r31659. {

PrefilteredColor + = texture (environmentMap, L). rrb; 3r3165; 3r31659. 3r3161313. 3r31659. We pre-filter the environment map on the basis of some specified roughness, the level of which varies for each mip level of the resulting cubic map (from 0.0 to 1.0), and the result of the filter is stored in a variable

3r31659. 3r31492. 3r31493. prefilterShader.use (); 3r31659. prefilterShader.setInt ("environmentMap", 0); 3r31659. prefilterShader.setMat4 ("projection", captureProjection); 3r31659. glActiveTexture (GL_TEXTURE0); 3r31659. glBindTexture (GL_TEXTURE_CUBE_MAP, envCubemap); 3r31659. 3r31659. glBindFramebuffer (GL_FRAMEBUFFER, captureFBO); 3r31659. unsigned int maxMipLevels = 5; 3r31659. for (unsigned int mip = 0; mip < maxMipLevels; ++mip)

{

//specify the size of the framebuffer based on the current mip-level number

unsigned int mipWidth = 128 * std :: pow (0.? mip);

unsigned int mipHeight = 128 o'clock;) . float roughness = (float) mip /(float) (maxMipLevels - 1);

prefilterShader.setFloat ( "roughness", roughness);

for (unsigned int i = 0; i < 6; ++i)

{

prefilterShader.setMat4 ("view", captureViews

3r3161313. 3r31659. 3r3161313. 3r31659. The result of this action will be the following picture:

3r31659. 3r31558. 3r33942. 3r31655. 3r3161313. 3r31659. 3r3161313. 3r31659. Looks like a very blurred source environment map. If you have a similar result, then, most likely, the process of pre-filtering the HDR environment map is correct. Try experimenting with samples from different mip levels and watch the gradual increase in blurring with each level. 3r3161313. 3r31659. 3r3161313. 3r31659. 3r31611. Pre-filtering convolution artifacts

3r3161313. 3r31659. For most tasks, the described approach works quite well, but sooner or later you will have to meet with various artifacts that the pre-filtering process generates. Here are the most common and methods of dealing with them. 3r3161313. 3r31659. 3r3161313. 3r31659. 3r3990. The manifestation of the seams of the cubic card

```glsl
float VanDerCorpus(uint n, uint base)
{
    float invBase = 1.0 / float(base);
    float denom   = 1.0;
    float result  = 0.0;

    for(uint i = 0u; i < 32u; ++i)
    {
        if(n > 0u)
        {
            denom   = mod(float(n), 2.0);
            result += denom * invBase;
            invBase = invBase / 2.0;
            n       = uint(float(n) / 2.0);
        }
    }

    return result;
}
// ----------------------------------------------------------------------------
vec2 HammersleyNoBitOps(uint i, uint N)
{
    return vec2(float(i)/float(N), VanDerCorpus(i, 2u));
}
```
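As a quick sanity check, the same sequence can be reproduced on the CPU. This is an illustrative addition (the `vanDerCorpus` and `hammersley` names are ours, not from the lesson's code), mirroring the shader logic with integer arithmetic:

```cpp
#include <cassert>
#include <utility>

// CPU mirror of the shader's VanDerCorpus routine: builds the radical
// inverse of n in the given base by reading off its digits.
float vanDerCorpus(unsigned int n, unsigned int base)
{
    float invBase = 1.0f / float(base);
    float denom   = 1.0f;
    float result  = 0.0f;

    for (unsigned int i = 0u; i < 32u; ++i)
    {
        if (n > 0u)
        {
            denom   = float(n % 2u);      // current binary digit
            result += denom * invBase;    // place it after the radix point
            invBase = invBase / 2.0f;
            n       = n / 2u;
        }
    }
    return result;
}

// i-th point of the N-point Hammersley set in [0,1)^2
std::pair<float, float> hammersley(unsigned int i, unsigned int N)
{
    return { float(i) / float(N), vanDerCorpus(i, 2u) };
}
```

The second coordinate interleaves the unit interval (0, 0.5, 0.25, 0.75, ...), which is exactly the low-discrepancy behavior we rely on.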

Note that, due to certain restrictions on loop constructs in old hardware, this implementation runs over all 32 bits. As a result, this version is not as fast as the first one, but it works on any hardware, even in the absence of bit operations.

### Importance sampling in the GGX model

Instead of distributing the generated sample vectors uniformly or randomly (Monte Carlo) over the integration hemisphere $\Omega$, we will generate them biased toward the main direction of reflected light, which is characterized by the halfway vector of the microsurface and depends on the surface roughness. The sampling process itself is similar to the one considered earlier: open a loop with a sufficiently large number of iterations, generate an element of the low-discrepancy sequence, use it to build a sample vector in tangent space, transform that vector to world space and sample the radiance of the scene with it. The only real change is that the low-discrepancy element is now used to generate the sample vector:

```glsl
const uint SAMPLE_COUNT = 4096u;
for(uint i = 0u; i < SAMPLE_COUNT; ++i)
{
    vec2 Xi = Hammersley(i, SAMPLE_COUNT);
}
```

In addition, to fully define a sample vector we need to orient it toward the specular lobe that corresponds to the given roughness. We can take the NDF (normal distribution function) from the theory lesson and combine the GGX NDF with the spherical sample-vector generation process published by Epic Games:

```glsl
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a = roughness*roughness;

    float phi      = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta*cosTheta);

    // conversion from spherical to Cartesian coordinates
    vec3 H;
    H.x = cos(phi) * sinTheta;
    H.y = sin(phi) * sinTheta;
    H.z = cosTheta;

    // transform from tangent space to world coordinates
    vec3 up        = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent   = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);

    vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
    return normalize(sampleVec);
}
```
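The `cosTheta` expression above is not arbitrary: it is the inverted cumulative distribution of the GGX NDF over the hemisphere. A short sketch of the inversion, added here for reference (a standard derivation, not part of the original text; $a$ denotes the shader variable `a`, i.e. the squared roughness, and $\xi$ is the second component of the low-discrepancy pair):

$$P(\theta) = \int_0^{\theta} \frac{2\,a^2 \cos t \,\sin t}{\big((a^2-1)\cos^2 t + 1\big)^2}\,dt = \frac{a^2}{a^2-1}\left(\frac{1}{(a^2-1)\cos^2\theta + 1} - \frac{1}{a^2}\right)$$

Setting $P(\theta) = \xi$ and solving for $\cos\theta$ gives

$$\cos^2\theta = \frac{1-\xi}{1 + (a^2-1)\,\xi},$$

which is exactly the square of the `cosTheta` line in `ImportanceSampleGGX`. The angle $\phi$ is distributed uniformly because the GGX distribution used here is isotropic.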

The result is a sample vector oriented approximately along the halfway vector of the microsurface, for the given roughness and low-discrepancy element *Xi*. Note that Epic Games uses the squared roughness value for better visual quality, which is based on Disney's original research on PBR.

Having finished the Hammersley sequence and the sample-vector generation code, we can now put together the pre-filtering convolution shader:

```glsl
#version 330 core
out vec4 FragColor;
in vec3 localPos;

uniform samplerCube environmentMap;
uniform float roughness;

const float PI = 3.14159265359;

float RadicalInverse_VdC(uint bits);
vec2 Hammersley(uint i, uint N);
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness);

void main()
{
    vec3 N = normalize(localPos);
    vec3 R = N;
    vec3 V = R;

    const uint SAMPLE_COUNT = 1024u;
    float totalWeight = 0.0;
    vec3 prefilteredColor = vec3(0.0);
    for(uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = Hammersley(i, SAMPLE_COUNT);
        vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);

        float NdotL = max(dot(N, L), 0.0);
        if(NdotL > 0.0)
        {
            prefilteredColor += texture(environmentMap, L).rgb * NdotL;
            totalWeight      += NdotL;
        }
    }
    prefilteredColor = prefilteredColor / totalWeight;

    FragColor = vec4(prefilteredColor, 1.0);
}
```

We pre-filter the environment map for a given roughness, whose value varies with each mip level of the target cubemap (from 0.0 to 1.0), and the filtered result is stored in the variable

*prefilteredColor*. The accumulated color is then divided by the total sample weight, where samples with a smaller contribution to the final result (a smaller *NdotL*) also increase the total weight less.

### Storing the pre-filtered results in mip levels

It remains to write the code that directly instructs OpenGL to filter the environment map at the different roughness levels and store the results in a series of mip levels of the target cubemap. Here the code from the lesson on computing the irradiance map comes in handy:

```cpp
prefilterShader.use();
prefilterShader.setInt("environmentMap", 0);
prefilterShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);

glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
unsigned int maxMipLevels = 5;
for (unsigned int mip = 0; mip < maxMipLevels; ++mip)
{
    // resize the framebuffer according to the current mip-level size
    unsigned int mipWidth  = 128 * std::pow(0.5, mip);
    unsigned int mipHeight = 128 * std::pow(0.5, mip);
    glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, mipWidth, mipHeight);
    glViewport(0, 0, mipWidth, mipHeight);

    float roughness = (float)mip / (float)(maxMipLevels - 1);
    prefilterShader.setFloat("roughness", roughness);
    for (unsigned int i = 0; i < 6; ++i)
    {
        prefilterShader.setMat4("view", captureViews[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, prefilterMap, mip);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderCube();
    }
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```
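The sizes and roughness values produced by the loop above can be tabulated ahead of time. A small sketch (the `MipConfig` and `prefilterMipConfigs` names are hypothetical, introduced only for illustration):

```cpp
#include <vector>

struct MipConfig { unsigned int width, height; float roughness; };

// Tabulates the framebuffer size and roughness used for each mip level
// of the pre-filtered cubemap (base size 128, 5 mip levels, as in the text).
std::vector<MipConfig> prefilterMipConfigs(unsigned int baseSize = 128,
                                           unsigned int maxMipLevels = 5)
{
    std::vector<MipConfig> configs;
    for (unsigned int mip = 0; mip < maxMipLevels; ++mip)
    {
        unsigned int size = baseSize >> mip; // halved at every level
        float roughness = (float)mip / (float)(maxMipLevels - 1);
        configs.push_back({ size, size, roughness });
    }
    return configs;
}
```

With the defaults this yields faces of 128, 64, 32, 16 and 8 texels at roughness 0.0, 0.25, 0.5, 0.75 and 1.0 respectively.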

The process is similar to the irradiance-map convolution, but this time the framebuffer is resized at every step, halving it to match each mip level's size. In addition, the mip level we render into has to be passed as a parameter to `glFramebufferTexture2D()`.

The result of running this code is a cubemap containing increasingly blurred reflections on each successive mip level. Such a cubemap can even serve as a data source for a skybox, sampled from any mip level above zero:

```glsl
vec3 envColor = textureLod(environmentMap, WorldPos, 1.2).rgb;
```


The result looks like the following:

*(image: the pre-filtered environment map rendered as a skybox)*

It resembles a heavily blurred version of the source environment map. If you see a similar result, the pre-filtering of the HDR environment map is most likely working correctly. Try sampling from different mip levels and watch the blur gradually increase with each level.

## Pre-filtering convolution artifacts

For most tasks the described approach works well enough, but sooner or later you will run into various artifacts produced by the pre-filtering process. Here are the most common ones, along with ways of dealing with them.

### Cubemap seams showing through

Sampling the pre-filtered cubemap for surfaces of high roughness means reading from mip levels somewhere near the end of the mip chain. When sampling a cubemap, OpenGL does not, by default, linearly interpolate across the faces of the cube. Because the higher mip levels have a lower resolution, and the environment map was convolved with a very large specular lobe, the absence of texture filtering between the faces becomes obvious:

*(image: visible seams between the cubemap faces)*

Fortunately, OpenGL lets us enable such filtering with a simple flag:

```cpp
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
```

It is enough to set this flag somewhere in the application's initialization code, and the artifact is gone.

### Bright dots appearing

Since specular reflections generally contain high-frequency details as well as regions with widely varying brightness, their convolution requires a large number of samples to correctly account for the wide spread of values inside HDR environment reflections. In the example we already take a fairly large number of samples, but for certain scenes and high material roughness this will still not be enough, and you will witness many dots appearing around bright areas:

*(image: bright dots around high-intensity regions of the pre-filtered map)*

We could keep increasing the number of samples, but that is not a universal solution, and under some conditions the artifact would remain. However, we can apply a method by Chetan Jags that reduces it: during the pre-filtering convolution, sample not the environment map directly, but one of its mip levels, chosen from the value of the probability density function of the integrand and the roughness:

```glsl
float D   = DistributionGGX(NdotH, roughness);
float pdf = (D * NdotH / (4.0 * HdotV)) + 0.0001;

// resolution of a single face of the source cubemap
float resolution = 512.0;
float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);

float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel);
```
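The mip-selection heuristic can be mirrored on the CPU to get a feel for it: the lower the PDF of a sample (the less likely its direction), the larger the solid angle it has to cover, and the coarser the chosen mip level. A sketch with a hypothetical helper name:

```cpp
#include <cmath>

// CPU mirror of the mip-selection heuristic from the shader above:
// compares the solid angle covered by one sample with the solid angle
// of a single source texel (constants follow the text).
float chooseMipLevel(float pdf, float roughness,
                     float resolution = 512.0f, int sampleCount = 1024)
{
    const float PI = 3.14159265359f;
    float saTexel  = 4.0f * PI / (6.0f * resolution * resolution);
    float saSample = 1.0f / (float(sampleCount) * pdf + 0.0001f);
    return roughness == 0.0f ? 0.0f : 0.5f * std::log2(saSample / saTexel);
}
```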

Just don't forget to enable trilinear filtering on the environment map so that sampling its mip levels works correctly:

```cpp
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
```

Also remember to let OpenGL generate the texture's mip levels, but only after the base mip level has been fully formed:

```cpp
// convert the equirectangular HDR environment map into a cubemap
[...]
// then generate the mip levels
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
```

This method works surprisingly well, removing almost all (and often all) of the dots in the filtered map, even at high roughness levels.

## Pre-computing the BRDF

So, we have successfully pre-filtered the environment map, and we can now concentrate on the second part of the split-sum approximation: the BRDF. Let's look again at the full record of the approximate solution:

$$L_o(p,\omega_o) \approx \left(\int_\Omega L_i(p,\omega_i)\,d\omega_i\right) \cdot \left(\int_\Omega f_r(p,\omega_i,\omega_o)\,n\cdot\omega_i\,d\omega_i\right)$$

We previously computed the left part of the sum and stored the results for different roughness levels in a separate cubemap. The right part requires convolving the BRDF expression over the following parameters: the angle $n\cdot\omega_o$, the surface roughness and the Fresnel coefficient $F_0$. The process is similar to integrating the specular BRDF over a solid-white environment, that is, with a constant radiance $L_i$ of 1.0. Convolving the BRDF over three variables is not a trivial task, but in this case $F_0$ can be moved out of the expression describing the specular BRDF:

$$\int_\Omega f_r(p,\omega_i,\omega_o)\,n\cdot\omega_i\,d\omega_i = \int_\Omega f_r(p,\omega_i,\omega_o)\,\frac{F(\omega_o,h)}{F(\omega_o,h)}\,n\cdot\omega_i\,d\omega_i$$

Here $F$ is the function describing the Fresnel term. Moving the Fresnel divisor into the BRDF gives the following equivalent form:

$$\int_\Omega \frac{f_r(p,\omega_i,\omega_o)}{F(\omega_o,h)}\,F(\omega_o,h)\,n\cdot\omega_i\,d\omega_i$$

Substituting the rightmost $F$ with the Fresnel-Schlick approximation, we get:

$$\int_\Omega \frac{f_r}{F}\left(F_0 + (1 - F_0)(1 - \omega_o\cdot h)^5\right)n\cdot\omega_i\,d\omega_i$$

Let's denote $(1 - \omega_o\cdot h)^5$ by $\alpha$ to make it easier to solve for $F_0$:

$$\int_\Omega \frac{f_r}{F}\left(F_0 + (1 - F_0)\,\alpha\right)n\cdot\omega_i\,d\omega_i$$

$$\int_\Omega \frac{f_r}{F}\left(F_0 + \alpha - F_0\,\alpha\right)n\cdot\omega_i\,d\omega_i$$

$$\int_\Omega \frac{f_r}{F}\left(F_0(1 - \alpha) + \alpha\right)n\cdot\omega_i\,d\omega_i$$

Next, we split the function into two integrals:

$$F_0\int_\Omega \frac{f_r}{F}\,(1 - \alpha)\,n\cdot\omega_i\,d\omega_i + \int_\Omega \frac{f_r}{F}\,\alpha\,n\cdot\omega_i\,d\omega_i$$

Since $F_0$ is constant under the integral, it could be taken out of the integral sign. Expanding $\alpha$ back into its original form gives the final split-sum record of the BRDF:

$$F_0\int_\Omega \frac{f_r}{F}\left(1 - (1 - \omega_o\cdot h)^5\right)n\cdot\omega_i\,d\omega_i + \int_\Omega \frac{f_r}{F}\,(1 - \omega_o\cdot h)^5\,n\cdot\omega_i\,d\omega_i$$

The two resulting integrals are, respectively, a scale and a bias for the value of $F_0$. Notice that $f_r$ already contains a factor $F$, so both occurrences of $F$ cancel out and disappear from the expression.

Using the approach developed earlier, we can convolve the BRDF with its inputs, the roughness and the angle between the vectors $n$ and $\omega_o$, and write the result into a 2D texture: the *BRDF integration map*, which will serve as a lookup table for the final shader, where the resulting indirect specular lighting is assembled.

The BRDF convolution shader operates on a flat plane, directly using its 2D texture coordinates as the input parameters of the convolution (*NdotV* and *roughness*). The code closely resembles the pre-filtering convolution, except that the sample vector is now processed with the BRDF's geometry function and the transformed Fresnel-Schlick expression:

```glsl
vec2 IntegrateBRDF(float NdotV, float roughness)
{
    vec3 V;
    V.x = sqrt(1.0 - NdotV*NdotV);
    V.y = 0.0;
    V.z = NdotV;

    float A = 0.0;
    float B = 0.0;

    vec3 N = vec3(0.0, 0.0, 1.0);

    const uint SAMPLE_COUNT = 1024u;
    for(uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = Hammersley(i, SAMPLE_COUNT);
        vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);

        float NdotL = max(L.z, 0.0);
        float NdotH = max(H.z, 0.0);
        float VdotH = max(dot(V, H), 0.0);

        if(NdotL > 0.0)
        {
            float G     = GeometrySmith(N, V, L, roughness);
            float G_Vis = (G * VdotH) / (NdotH * NdotV);
            float Fc    = pow(1.0 - VdotH, 5.0);

            A += (1.0 - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    A /= float(SAMPLE_COUNT);
    B /= float(SAMPLE_COUNT);
    return vec2(A, B);
}
// ----------------------------------------------------------------------------
void main()
{
    vec2 integratedBRDF = IntegrateBRDF(TexCoords.x, TexCoords.y);
    FragColor = integratedBRDF;
}
```
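For spot-checking individual texels of the BRDF integration map outside the GPU, the shader above can be transcribed to scalar C++. This is an illustrative sketch (the names are ours, and the bit-reversal Hammersley variant is used), not part of the lesson's code:

```cpp
#include <algorithm>
#include <cmath>

namespace brdf_lut {

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

const float PI = 3.14159265359f;

// Van der Corput radical inverse via bit reversal
static float radicalInverseVdC(unsigned int bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f; // / 2^32
}

static float geometrySchlickGGX(float NdotX, float roughness)
{
    float k = (roughness * roughness) / 2.0f; // IBL remapping of k
    return NdotX / (NdotX * (1.0f - k) + k);
}

// Importance-sample the GGX lobe around N = (0, 0, 1)
static Vec3 importanceSampleGGX(float xi1, float xi2, float roughness)
{
    float a = roughness * roughness;
    float phi = 2.0f * PI * xi1;
    float cosTheta = std::sqrt((1.0f - xi2) / (1.0f + (a*a - 1.0f) * xi2));
    float sinTheta = std::sqrt(1.0f - cosTheta*cosTheta);
    return { std::cos(phi) * sinTheta, std::sin(phi) * sinTheta, cosTheta };
}

// Computes {scale A, bias B} for F0, i.e. one texel of the BRDF LUT
void integrateBRDF(float NdotV, float roughness, float& A, float& B)
{
    Vec3 V { std::sqrt(1.0f - NdotV*NdotV), 0.0f, NdotV };
    A = 0.0f; B = 0.0f;

    const unsigned int SAMPLE_COUNT = 1024u;
    for (unsigned int i = 0u; i < SAMPLE_COUNT; ++i)
    {
        float xi1 = float(i) / float(SAMPLE_COUNT);
        float xi2 = radicalInverseVdC(i);
        Vec3 H = importanceSampleGGX(xi1, xi2, roughness);
        float VdotH = dot(V, H);
        Vec3 L = normalize({ 2.0f*VdotH*H.x - V.x,
                             2.0f*VdotH*H.y - V.y,
                             2.0f*VdotH*H.z - V.z });

        float NdotL  = std::max(L.z, 0.0f);
        float NdotH  = std::max(H.z, 0.0f);
        float VdotHc = std::max(VdotH, 0.0f);

        if (NdotL > 0.0f)
        {
            float G = geometrySchlickGGX(NdotV, roughness)
                    * geometrySchlickGGX(NdotL, roughness);
            float G_Vis = (G * VdotHc) / (NdotH * NdotV);
            float Fc = std::pow(1.0f - VdotHc, 5.0f);
            A += (1.0f - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    A /= float(SAMPLE_COUNT);
    B /= float(SAMPLE_COUNT);
}

} // namespace brdf_lut
```

At roughness 0 every sample degenerates to perfect mirror reflection, so the result collapses to the closed form $A = 1 - (1 - n\cdot\omega_o)^5$, $B = (1 - n\cdot\omega_o)^5$, which makes a convenient check.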

As you can see, the BRDF convolution is an almost literal translation of the math above. We take the input roughness and the angle $n\cdot\omega_o$, generate a sample vector by importance sampling, process it with the geometry function and the transformed Fresnel expression of the BRDF, and for every sample obtain the scale and the bias applied to $F_0$, which are averaged at the end and returned as a *vec2*.

The theory lesson mentioned that the geometry term of the BRDF is slightly different when computing IBL, because the coefficient $k$ is defined differently:

$$k_{direct} = \frac{(\alpha + 1)^2}{8}$$

$$k_{IBL} = \frac{\alpha^2}{2}$$

Since the BRDF convolution is part of the split-sum IBL solution, we use $k_{IBL}$ when computing the geometry function in the Schlick-GGX model:

```glsl
float GeometrySchlickGGX(float NdotV, float roughness)
{
    float a = roughness;
    float k = (a * a) / 2.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2 = GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
```
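To see how the two remappings of $k$ differ, they can be compared numerically; notably, they agree at roughness 1.0. A small sketch (the function names are hypothetical, for illustration only):

```cpp
#include <cassert>
#include <cmath>

// k remapping for direct lighting: k = (roughness + 1)^2 / 8
float kDirect(float roughness)
{
    float r = roughness + 1.0f;
    return (r * r) / 8.0f;
}

// k remapping used when convolving the BRDF for IBL: k = roughness^2 / 2
float kIBL(float roughness)
{
    return (roughness * roughness) / 2.0f;
}
```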

Note that the coefficient $k$ is computed from the parameter *a*, and here *roughness* is not squared when assigned to *a*, unlike the other places where this parameter is used. I am not sure where the inconsistency lies, in the work of Epic Games or in the original Disney paper, but it is worth saying that this direct assignment of *roughness* to *a* produces a BRDF integration map identical to the one presented in the Epic Games publication.

Next, the result of the BRDF convolution is stored in a 512x512 2D texture:

```cpp
unsigned int brdfLUTTexture;
glGenTextures(1, &brdfLUTTexture);

// pre-allocate enough memory for the LUT texture
glBindTexture(GL_TEXTURE_2D, brdfLUTTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, 512, 512, 0, GL_RG, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

As recommended by Epic Games, a 16-bit floating-point format is used here. Be sure to set the wrap mode to *GL_CLAMP_TO_EDGE* in order to avoid edge-sampling artifacts.

Next, we reuse the same framebuffer object and execute the shader over a full-screen quad:

```cpp
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdfLUTTexture, 0);

glViewport(0, 0, 512, 512);
brdfShader.use();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderQuad();

glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

As a result, we obtain a texture map storing the convolved BRDF part of the split-sum expression:

*(image: the BRDF integration map)*

With the pre-filtered environment map and the BRDF convolution texture at hand, we can reconstruct the result of the indirect specular integral from the split-sum approximation. The reconstructed value will then serve as the indirect, or ambient, specular radiance.

## The final IBL reflectance

So, to obtain the value describing the indirect specular component of the overall reflectance expression, we need to glue the computed parts of the split-sum approximation back into a single whole. First, let's add samplers for the pre-computed data to the final shader:

```glsl
uniform samplerCube prefilterMap;
uniform sampler2D   brdfLUT;
```


First, we obtain the indirect specular reflection of the surface by sampling the pre-filtered environment map with the reflection vector. Note that the mip level we sample from is chosen based on the surface roughness; the rougher the surface, the *blurrier* the reflection:

```glsl
void main()
{
    [...]
    vec3 R = reflect(-V, N);

    const float MAX_REFLECTION_LOD = 4.0;
    vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
    [...]
}
```

During the pre-filtering convolution we only generated 5 mip levels (zero through four), so the constant *MAX_REFLECTION_LOD* limits sampling to the mip levels that actually exist.

Next, we sample the BRDF integration map using the roughness and the angle between the normal and the view direction:

```glsl
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec2 envBRDF = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
```

The value fetched from the map contains the scale and the bias for $F_0$ (here the Fresnel coefficient *F* plays that role). The scaled and biased *F* is then combined with the value obtained from the pre-filtered map, giving the approximate solution of the original integral expression: *specular*.

This gives us the solution for the specular part of the reflectance. To obtain the complete PBR IBL solution, it must be combined with the solution for the diffuse part of the reflectance expression, which we obtained in the previous lesson:

```glsl
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);

vec3 kS = F;
vec3 kD = 1.0 - kS;
kD *= 1.0 - metallic;

vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse    = irradiance * albedo;

const float MAX_REFLECTION_LOD = 4.0;
vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
vec2 envBRDF  = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);

vec3 ambient = (kD * diffuse + specular) * ao;
```

Note that *specular* is not multiplied by *kS*, because it already contains the Fresnel coefficient.

Let's launch our test application with the familiar set of spheres of varying metallic and roughness values and look at them in their full PBR glory:

*(image: spheres with varying metallic and roughness under full PBR IBL)*

You can go even further and load a set of PBR-ready textures to get spheres made of real materials:

*(image: textured PBR spheres)*

Or even load a 3D model with prepared PBR textures, kindly provided by Andrew Maximov:

*(image: PBR model by Andrew Maximov)*

I don't think anyone needs much convincing that the current lighting model looks far more believable. Better still, the lighting looks physically plausible regardless of the environment map. Below, several completely different HDR environment maps are used, each changing the character of the lighting entirely, yet every image looks physically correct, even though not a single material parameter had to be adjusted! (*Transl. note:* in fact, this simplification of working with materials is the main practical benefit of a PBR pipeline; the better picture is, one might say, a pleasant side effect.)

*(image: the same scene under several different HDR environment maps)*

Phew, our journey into PBR rendering turned out to be quite a long one. We reached the result through a whole series of small steps, and of course plenty can go wrong on a first attempt. So if anything misbehaves, I advise you to work carefully through the example code for the monochrome and textured spheres (and the shader code, of course!), or to ask for advice in the comments.

## What's next?

I hope that by the time you read these lines you have built a solid understanding of how PBR rendering works, and have a test application up and running. In these lessons, all the auxiliary texture maps needed for the PBR model were computed inside our application, ahead of the main render loop. For learning purposes this approach is fine, but not for practical use. First, such preparation should happen once, not on every application launch. Second, if you decide to add more environment maps, each of them will also have to be processed at startup. And if yet more maps are added? The snowball keeps growing.
That is why, as a rule, the irradiance map and the pre-filtered environment map are prepared once and then stored on disk (the BRDF integration map does not depend on the environment map, so it can be computed or loaded just once). This means you will need a format for storing HDR cube maps together with their mip levels. Alternatively, you can store and load them using one of the common formats (*.dds*, for example, supports saving mip levels).

Another important point: to give a deep understanding of the PBR pipeline, these lessons described the full preparation process, including the precomputation of the auxiliary IBL maps. In practice, however, you may just as well use one of the excellent utilities that will prepare these maps for you: for example, cmftStudio or IBLBaker.

We also have not covered the preparation of *reflection probe* cube maps and the related topics of cube-map interpolation and parallax correction. Briefly, the technique works like this: we place a set of reflection probes in the scene, each of which captures a local picture of its surroundings as a cube map, from which all the auxiliary maps for the IBL model are then built. By interpolating the data of several probes based on the camera's position, we can obtain highly detailed image-based lighting whose quality is limited, essentially, only by the number of probes we are willing to place in the scene. This approach lets the lighting change correctly, for example, when moving from a brightly lit street into the dusk of a dimly lit room. I will probably write a lesson about reflection probes in the future; for now I can only recommend the article by Chetan Jags listed below.

(*Transl. note:* an implementation of reflection probes, and much more, can be found in the tutorial author's source repository.)

## Additional materials

- Real Shading in Unreal Engine 4: an explanation of Epic Games' approach to approximating the specular term with a split sum. The code for the IBL PBR lessons is based on this article.
- Physically Based Shading and Image Based Lighting: an excellent article describing how to incorporate the specular IBL component into the PBR pipeline of an interactive application.
- Image Based Lighting: a very thorough and detailed post by Chetan Jags about specular IBL and related issues, including light-probe interpolation.
- Moving Frostbite to PBR: a well-crafted and technically detailed presentation on integrating a PBR model into an "AAA"-class game engine.
- Physically Based Rendering - Part Three: an overview of IBL and PBR from the JMonkeyEngine developers.
- Implementation Notes: Runtime Environment Filtering for Image Based Lighting: detailed information on pre-filtering HDR environment maps, as well as possible optimizations of the sampling process.