Custom Shadow Mapping
How to implement a custom shadow technique

Introduction

This article explains how you can use Ogre and its shadow technique management to implement custom shadow algorithms. We focus on a well-known texture-based technique called Shadow Mapping (SM). The shader code is written in NVidia's Cg language.

This article is somewhat technical and aimed at advanced users; newcomers who are not yet familiar with Ogre or standard 3D techniques may prefer to begin with something lighter, but they are welcome to give it a try.

What is Shadow Mapping?

Shadow Mapping is an old image-based shadow determination algorithm published by L. Williams in 1978. It has been used extensively since, both in offline rendering (Toy Story by Pixar) and in real-time graphics (most games). It consists of two passes:

  • first, render a 2D depth buffer (called the "depth" or "shadow" map) from the light's point of view
  • second, render the scene from the eye's point of view


In the second pass, the depth map is consulted during lighting computations to decide whether each shaded fragment is in shadow, according to the following algorithm:

  • determine the fragment's xyz position relative to the light
  • compare the fragment's depth z to the depth value stored at position xy in the shadow map
  • if it is greater, the fragment is shadowed, since something closer to the light occludes it
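
In shader terms, the test boils down to a few lines. Here is a simplified sketch (variable names are illustrative; the complete shaders appear later in this article):

// Simplified sketch of the per-fragment shadow test
float4 l_LightPos = mul(p_TextureViewProjection, l_WorldPos); // fragment position in light space
float2 l_UV       = l_LightPos.xy / l_LightPos.w;             // shadow map coordinates
float  l_Stored   = tex2D(p_ShadowMap, l_UV).r;               // closest depth seen by the light
float  l_Lit      = (l_LightPos.z <= l_Stored) ? 1.0 : 0.0;   // 0 means in shadow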


As this article is not a tutorial about shadow mapping itself but about implementing it with Ogre, the reader can refer to http://www.paulsprojects.net/tutorials/smt/smt.html to learn more about the technique.

How to do it with Ogre?

For this, you will need to understand how additive light masking works in Ogre (you will have to set up your materials and shaders properly to work with it). It consists of rendering the scene several times, each time computing a single light contribution whose influence is masked out in areas of shadow. Each single light contribution is added to the previous ones, so that by the end of the rendering job all light contributions have accumulated correctly in the scene. To do this, Ogre categorises the passes of your material into three types:

  • ambient, which handles any effect independent of the lights (light emission, ambient light, reflection mapping, etc.)
  • lighting, which is the pass used to compute each single light contribution
  • decal, which is used to modulate the accumulated lighting with the final texture color (not used here, as the texture color may be used for lighting too)
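
To illustrate additive masking, here is a minimal hand-written sketch of such a material (the material name is hypothetical and only for illustration; iteration once_per_light and scene_blend add are real material-script directives, but Ogre's texture shadow technique sets all of this up for you):

material AdditiveMaskingSketch // hypothetical name, for illustration only
{
    technique
    {
        // Ambient pass: rendered once, lays down light-independent color
        pass ambient
        {
            ambient 1 1 1
            diffuse 0 0 0
            specular 0 0 0 0
        }

        // Lighting pass: repeated once per light, added onto the previous result
        pass lighting
        {
            ambient 0 0 0
            diffuse 1 1 1
            specular 1 1 1 1
            iteration once_per_light
            scene_blend add
        }
    }
}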


First we have to let Ogre know which shader to use when computing the shadow map. We assume no vertex deformation, so all materials may use the same vertex program, which simply projects the geometry onto the screen (see the next section). However, an object with vertex deformation may provide its own vertex shader so that the deformation is reflected in the shadow map. Secondly, we have to let Ogre know which shaders to use when performing the lighting computations for a given light, using the corresponding shadow map. These are similar to your standard shaders, but with the lighting computation restricted to a single light and a new texture as input (the shadow map, in unit 0). Here is the syntax for material scripts:

material
{
    technique default
    {
        pass ambient
        {
            // Not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 1 1 1
            diffuse 0 0 0
            specular 0 0 0 0

            vertex_program_ref AmbientVP { }
            fragment_program_ref AmbientFP { }
            shadow_caster_vertex_program_ref ShadowCasterVP { }
            ...
        }

        pass lighting
        {
            // Not needed for rendering, but as information
            // to lighting pass categorisation routine
            ambient 0 0 0
            diffuse 1 1 1
            specular 1 1 1 1

            vertex_program_ref LightingVP { }
            fragment_program_ref LightingFP { }
            shadow_receiver_vertex_program_ref LightingWithShadowMapVP { }
            shadow_receiver_fragment_program_ref LightingWithShadowMapFP { }
            ...
        }
    }
}


In order to prepare Ogre for shadow mapping, we also indicate that we will use a texture-based shadow algorithm and handle self-shadowing. We will also use a floating-point render target to store the depth map (a standard byte texture can be used too).

SceneManager::setShadowTexturePixelFormat(Ogre::PF_FLOAT16_R);
SceneManager::setShadowTechnique(Ogre::SHADOWTYPE_TEXTURE_ADDITIVE);
SceneManager::setShadowTextureSelfShadow(true);
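
In practice these calls are made on your SceneManager instance. A minimal sketch of a fuller setup might look like the following (the texture size and count are assumptions to adjust for your scene; the caster and receiver materials are defined in the next section):

// Sketch of a typical setup, assuming mSceneMgr is your Ogre::SceneManager*
mSceneMgr->setShadowTechnique(Ogre::SHADOWTYPE_TEXTURE_ADDITIVE);
mSceneMgr->setShadowTexturePixelFormat(Ogre::PF_FLOAT16_R);
mSceneMgr->setShadowTextureSelfShadow(true);
mSceneMgr->setShadowTextureSize(1024);   // resolution of each shadow map (assumption)
mSceneMgr->setShadowTextureCount(1);     // one map per shadow-casting light (assumption)
mSceneMgr->setShadowTextureCasterMaterial("ShadowCaster");
mSceneMgr->setShadowTextureReceiverMaterial("ShadowReceiver");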

Setting up the shadow materials

Shadow materials are global shadow settings used by Ogre when rendering shadows and shadow maps. They are set in Ogre using:

SceneManager::setShadowTextureCasterMaterial("ShadowCaster");
SceneManager::setShadowTextureReceiverMaterial("ShadowReceiver");


The shadow caster material holds the shaders that fill in the depth map (it is detailed in the next section):

vertex_program ShadowCasterVP cg
{
    source v-shadow-caster.cg
    entry_point main
    profiles arbvp1

    default_params
    {
        param_named_auto p_ModelViewProjection worldviewproj_matrix
        param_named_auto p_AmbientLight ambient_light_colour
    }
}

fragment_program ShadowCasterFP cg
{
    source f-shadow-caster.cg
    entry_point main
    profiles arbfp1

    // Store normalized (useful to avoid overflowing) or non-normalized depth?
    //compile_arguments -DSTORE_NORMALIZED_DEPTH

    default_params
    {
        // Only used when storing normalized depth values
        //param_named_auto p_Near near_clip_distance
        //param_named_auto p_Far far_clip_distance
        param_named p_DepthOffset float 0.01
    }
}

material ShadowCaster
{
    technique default
    {
        // Z-write only pass
        pass Z-write
        {
            vertex_program_ref ShadowCasterVP { }
            fragment_program_ref ShadowCasterFP { }
        }
    }
}


The shadow receiver material simply holds the shadow map in the first texture unit:

material ShadowReceiver
{
    technique default
    {
        pass lighting
        {
            texture_unit ShadowMap
            {
                tex_address_mode clamp
                filtering none
            }
        }
    }
}


Starting from these materials and the material of your object, Ogre derives the final material for rendering by combining them. In practice, it provides the shadow caster vertex program if it is not overridden in the material (overriding is only required when vertex deformation occurs) and adds the shadow map to your lighting pass.

As of newer Ogre versions, you can alternatively pull the shadow texture directly into your regular materials. This removes the need for a shadow receiver material. To accomplish this, use content_type as described here: http://www.ogre3d.org/docs/manual/manual_17.html#SEC70

texture_unit
{
    content_type shadow
    tex_address_mode clamp
    filtering none
}

You simply include the above texture unit in your usual lighting passes, addressing it with the texture_viewproj_matrix as you will read below.
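
For example, a lighting pass in one of your own materials could look like the following sketch (the material name is hypothetical; the program references assume the LightingWithShadowMap shaders defined later in this article):

material MyObject // hypothetical name
{
    technique
    {
        pass lighting
        {
            ambient 0 0 0
            diffuse 1 1 1
            specular 1 1 1 1

            vertex_program_ref LightingWithShadowMapVP { }
            fragment_program_ref LightingWithShadowMapFP { }

            texture_unit
            {
                content_type shadow
                tex_address_mode clamp
                filtering none
            }
        }
    }
}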

Writing the depth map

The idea here is to keep the shaders as simple as possible in order to draw the depth map as fast as possible. We therefore use simple ambient shaders that additionally store the depth of each fragment. Only the ambient pass is required, because we only have to draw the depth of the objects in the scene; no lighting computations are done at this point.

Depth values can be stored as normalized values in [0,1] or in a non-normalized way. The normalization method was used before floating-point render targets appeared, when GPUs were restricted to byte components. In this case, more precision is given to objects close to the near plane and less to objects close to the far plane; otherwise depth storage would have very poor precision, since the distance between the far and near planes can be very large. With today's graphics hardware and floating-point targets this problem can be avoided.
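
Concretely, the normalization used in the fragment shader below maps an eye-space depth z to

    depth = (1/near - 1/z) / (1/near - 1/far)

which yields 0 at the near plane and 1 at the far plane. For instance, with near = 1 and far = 100, a fragment at z = 2 is stored as (1 - 0.5) / (1 - 0.01) ≈ 0.505, so half of the representable range is already spent on the first couple of units of depth.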

To avoid precision issues when comparing the depth value stored in the depth map with that of the currently rendered fragment (as seen from the light frustum), we add a small bias to the stored depth in the fragment shader.

Vertex shader:

// Define inputs from application.
struct VertexIn
{
    float4 position  : POSITION;  // Vertex in object-space
    float2 texCoords : TEXCOORD0; // Vertex's Texture Coordinates
};

// Define outputs from vertex shader.
struct Vertex
{
    float4 position  : POSITION;  // Vertex position in screen-space
    float4 color     : COLOR;     // Vertex color
    float2 texCoords : TEXCOORD0; // Vertex Texture Coordinates
    float  depth     : TEXCOORD1; // Vertex depth in eye space
};

Vertex main(VertexIn p_In,
            uniform float4   p_AmbientLight,       // Ambient light in scene
            uniform float4x4 p_ModelViewProjection // Model view projection matrix
            )
{
    Vertex l_Out;

    // Transform vertex position into homogenous screen-space.
    l_Out.position = mul(p_ModelViewProjection, p_In.position);

    // Store depth
    l_Out.depth = l_Out.position.z;

    // Store ambient color
    l_Out.color = p_AmbientLight;

    // Pass texture coordinates to fragment shader
    l_Out.texCoords = p_In.texCoords;

    return l_Out;
}


Fragment shader:

struct Vertex
{
    float4 position  : POSITION;  // Fragment position in screen-space
    float4 color     : COLOR;     // Fragment color
    float2 texCoords : TEXCOORD0; // Fragment's Texture Coordinates
    float  depth     : TEXCOORD1; // Fragment depth in eye-space
};

struct Fragment
{
    float4 color : COLOR0;
};

Fragment main(Vertex p_In
#ifdef STORE_NORMALIZED_DEPTH
              ,uniform float p_Near        // Near distance
              ,uniform float p_Far         // Far distance
#endif
              ,uniform float p_DepthOffset // Depth offset
              )
{
    Fragment l_Out;

#ifdef STORE_NORMALIZED_DEPTH
    // Store normalized depth in [0,1] to avoid overflowing,
    // even when using half precision floating point render target
    float l_Depth = (1.0/p_Near - 1.0/p_In.depth) / (1.0/p_Near - 1.0/p_Far);

    // Use some bias to avoid precision issues
    // TODO: As depth is not distributed uniformly across the range
    // we should bias proportionately to the depth value itself.
    // The absolute bias closer to the camera is lower than the bias further away.
    l_Depth += p_DepthOffset;
#else
    // Store non-normalized depth
    float l_Depth = p_In.depth;

    // Use some bias to avoid precision issues
    l_Depth += p_DepthOffset;
#endif

    // Write the depth value to the depth map
    l_Out.color.r = l_Depth;

    return l_Out;
}

Performing lighting computations

Here come the shaders used to compute each single light contribution. Since lighting with or without the shadow map is very similar, we can use the same shaders, with the few differences wrapped in a #define directive. Ogre automatically provides the transformation needed to compute the fragment position in light space (the view/projection matrix of the current shadow projector) through the texture_viewproj_matrix auto parameter.

vertex_program LightingVP cg
{
    source v-lighting.cg
    entry_point main
    profiles arbvp1

    default_params
    {
        param_named_auto p_ModelView worldview_matrix
        param_named_auto p_InverseModelView inverse_worldview_matrix
        param_named_auto p_ModelViewProjection worldviewproj_matrix
        param_named_auto p_LightPosition light_position_object_space
    }
}

fragment_program LightingFP cg
{
    source f-lighting.cg
    entry_point main
    profiles arbfp1

    default_params
    {
        param_named_auto p_LightDiffuse light_diffuse_colour 0
        param_named_auto p_LightSpecular light_specular_colour 0
        param_named_auto p_LightPower light_power 0
        param_named p_Diffuse float4 0.5 0 0 1
        param_named p_Specular float4 1 1 1 30
    }
}

vertex_program LightingWithShadowMapVP cg
{
    source v-lighting.cg
    entry_point main
    profiles arbvp1

    // Similar to standard lighting but using the shadow map in addition
    compile_arguments -DSHADOW_MAP

    default_params
    {
        param_named_auto p_ModelView worldview_matrix
        param_named_auto p_InverseModelView inverse_worldview_matrix
        param_named_auto p_ModelViewProjection worldviewproj_matrix
        param_named_auto p_LightPosition light_position_object_space
        // Required to express fragment's position in light space
        param_named_auto p_Model world_matrix
        param_named_auto p_TextureViewProjection texture_viewproj_matrix
    }
}

fragment_program LightingWithShadowMapFP cg
{
    source f-lighting.cg
    entry_point main
    profiles arbfp1

    // Similar to standard lighting but using the shadow map in addition
    compile_arguments -DSHADOW_MAP

    default_params
    {
        param_named_auto p_LightDiffuse light_diffuse_colour 0
        param_named_auto p_LightSpecular light_specular_colour 0
        param_named_auto p_LightPower light_power 0
        param_named p_Diffuse float4 0.5 0 0 1
        param_named p_Specular float4 1 1 1 30
    }
}


Vertex Shader:

// Define inputs from application.
struct VertexIn
{
    float4 position  : POSITION;  // Vertex in object-space
    float4 normal    : NORMAL;    // Vertex's Normal
    float2 texCoords : TEXCOORD0; // Vertex's Texture Coordinates
};

// Define outputs from vertex shader.
struct Vertex
{
    float4 position    : POSITION;  // Vertex position in screen-space
    float2 texCoords   : TEXCOORD0; // Vertex texture coordinates
    float3 normal      : TEXCOORD1; // Normal in eye-space
    float3 halfVector  : TEXCOORD2; // Half angle vector in eye space
    float3 lightVector : TEXCOORD3; // Light vector in eye space
#ifdef SHADOW_MAP
    float4 lightPosition : TEXCOORD4; // Vertex position in light space
#endif
};

Vertex main(VertexIn p_In,
            uniform float4x4 p_ModelViewProjection    // Model view projection matrix
            ,uniform float4   p_LightPosition         // Light position in object-space
            ,uniform float4x4 p_ModelView             // Model view matrix
            ,uniform float4x4 p_InverseModelView      // Model view matrix inverted
#ifdef SHADOW_MAP
            ,uniform float4x4 p_Model                 // Model matrix
            ,uniform float4x4 p_TextureViewProjection // Texture view projection matrix
#endif
            )
{
    Vertex l_Out;

    // Compute light position in eye-space
    float4 l_LightPosition4 = mul(p_ModelView, p_LightPosition);
    float3 l_LightPosition3 = l_LightPosition4.xyz;

    // Compute vertex position in eye-space
    float4 l_Position4 = mul(p_ModelView, p_In.position);
    float3 l_Position3 = l_Position4.xyz / l_Position4.w;

    // Transform normal from model-space to eye-space.
    l_Out.normal = normalize(mul(transpose(p_InverseModelView), p_In.normal).xyz);

    // Light vector.
    l_Out.lightVector = l_LightPosition3 - (l_Position3 * l_LightPosition4.w);

    // Half angle vector = light vector + eye vector
    l_Out.halfVector = l_Out.lightVector + (- l_Position3);

#ifdef SHADOW_MAP
    // Compute vertex position in light space
    // First object to world space
    l_Out.lightPosition = mul(p_Model, p_In.position);
    // Then world to light space
    l_Out.lightPosition = mul(p_TextureViewProjection, l_Out.lightPosition);
#endif

    // Transform vertex position into homogenous screen-space.
    l_Out.position = mul(p_ModelViewProjection, p_In.position);

    // Pass texture coordinates to fragment shader
    l_Out.texCoords = p_In.texCoords;

    return l_Out;
}


Fragment Shader:

// Shadow map always comes in texture unit 0,
// so we have to shift all other textures if any...
#ifdef SHADOW_MAP
sampler2D p_ShadowMap : TEXUNIT0;
//sampler2D p_DiffuseMap : TEXUNIT1;
#else
//sampler2D p_DiffuseMap : TEXUNIT0;
#endif

struct Vertex
{
    float4 position    : POSITION;  // Fragment's position in screen-space
    float2 texCoords   : TEXCOORD0; // Fragment's texture coordinates
    float3 normal      : TEXCOORD1; // Fragment's normal in eye-space
    float3 halfVector  : TEXCOORD2; // Fragment's half angle vector in eye-space
    float3 lightVector : TEXCOORD3; // Fragment's light vector in eye-space
#ifdef SHADOW_MAP
    float4 lightPosition : TEXCOORD4; // Fragment's position in light space
#endif
};

struct Fragment
{
    float4 color : COLOR0;
};

Fragment main(Vertex p_In,
              uniform float4 p_LightDiffuse,  // Light diffuse component
              uniform float  p_LightPower,    // Light power
              uniform float4 p_Diffuse,       // Material diffuse component
              uniform float4 p_LightSpecular, // Light specular component
              uniform float4 p_Specular       // Material specular component + specular exponent
              )
{
    Fragment l_Out;

    // Normalized normal.
    float3 l_Normal = normalize(p_In.normal);

    // Normalized light vector.
    float3 l_LightVector = normalize(p_In.lightVector);

    // Normalized half angle vector.
    float3 l_HalfVector = normalize(p_In.halfVector);

    // Diffuse component
    // -----------------
    // Angle between normal and light vector
    float l_CosNL = saturate(dot(l_Normal, l_LightVector));

    // No light can reach back surfaces...
    if (l_CosNL == 0)
        discard;

    l_Out.color.rgb = p_Diffuse.rgb * p_LightDiffuse.rgb * l_CosNL;

    // Specular component
    // ------------------
    // Apply cosine power distribution around mirror direction
    float l_CosNH = saturate(dot(l_Normal, l_HalfVector));
    float l_SpecularPower = pow(l_CosNH, p_Specular.a);
    float3 l_Specular = p_Specular.rgb * p_LightSpecular.rgb * l_SpecularPower;

    // Add specular component
    l_Out.color.rgb += l_Specular.rgb;

    // Modulate by light incoming power
    l_Out.color.rgb *= p_LightPower;

#ifdef SHADOW_MAP
    // Test if fragment is in shadow
    // -----------------------------

    // Compute the distance from light of the rasterized fragment (normalized in [0,1] or not)
#ifdef STORE_NORMALIZED_DEPTH
    float l_LightDistance = p_In.lightPosition.z / p_In.lightPosition.w;
#else
    float l_LightDistance = p_In.lightPosition.z;
#endif

    // Compute fragment position in shadow map (texture) space
    float2 l_ShadowMapTexCoords = float2(p_In.lightPosition.x / p_In.lightPosition.w,
                                         p_In.lightPosition.y / p_In.lightPosition.w);

    // Get the stored nearest fragment distance from light in the shadow map (normalized in [0,1] or not)
    float3 l_ShadowDistance = tex2D(p_ShadowMap, l_ShadowMapTexCoords).rgb;

    // Perform standard shadow map comparison
    float l_Lit = (l_LightDistance <= l_ShadowDistance.r ? 1 : 0);

    // Attenuate the light contribution as necessary to compute the final color
    l_Out.color.rgb *= l_Lit;
#endif

    return l_Out;
}

Conclusion


Some screenshots to finish.

A scene without shadow mapping:
[Image]

The same scene with shadow mapping:
[Image]

The same scene with the shadow map projected onto the geometry:
[Image]

This article was initiated by Luc Claustres, and the original post (http://www.ogre3d.org/forums/viewtopic.php?f=11&t=20840&p=150916#p150916) contains some more screenshots. I have successfully implemented standard shadow mapping as well as some derived algorithms with Ogre. Variance Shadow Mapping (http://www.punkuser.net/vsm/) only requires two small modifications (in addition to writing the proper shader code, of course):

  • add a blur compositor on top of your shadow map (you can find a blur example in the compositor sample, for instance; a sketch of such a compositor script follows this list)

CompositorManager::getSingleton().addCompositor(l_ShadowMapViewport, "ShadowMapBlur");
CompositorManager::getSingleton().setCompositorEnabled(l_ShadowMapViewport, "ShadowMapBlur", true);
  • set bilinear filtering on the shadow map in the shadow receiver material

material ShadowReceiver
{
    technique default
    {
        pass lighting
        {
            texture_unit ShadowMap
            {
                tex_address_mode clamp
                filtering linear linear none
            }
        }
    }
}
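
As for the blur compositor mentioned in the first item, a minimal sketch could look like the following (the ShadowMapBlur name matches the code above, but the two blur materials are hypothetical; you would write them along the lines of the blur example in the compositor sample):

compositor ShadowMapBlur
{
    technique
    {
        // Intermediate render target for the horizontal pass
        texture rt0 target_width target_height PF_FLOAT16_R

        // Horizontal blur into the intermediate target
        target rt0
        {
            input previous
            pass render_quad
            {
                material ShadowBlurHorizontal // hypothetical material
            }
        }

        // Vertical blur back into the output
        target_output
        {
            input none
            pass render_quad
            {
                material ShadowBlurVertical // hypothetical material
            }
        }
    }
}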

Percentage Closer Filtering (http://www.mpi-sb.mpg.de/~brabec/doc/brabec_cgi01.pdf) simply consists of averaging multiple shadow comparisons in order to get various intensity levels instead of a binary result (shadowed or not).
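
For example, the single comparison in the fragment shader above could be replaced by a sketch like this (p_InverseShadowMapSize is a hypothetical parameter holding the reciprocal of the shadow map resolution, passed as a named parameter):

// 4-tap percentage closer filtering sketch (replaces the single comparison)
float l_Lit = 0.0;
for (int i = 0; i < 2; i++)
{
    for (int j = 0; j < 2; j++)
    {
        // Offset the lookup by one texel step in each direction
        float2 l_Offset = float2(i, j) * p_InverseShadowMapSize;
        float l_Stored = tex2D(p_ShadowMap, l_ShadowMapTexCoords + l_Offset).r;
        l_Lit += (l_LightDistance <= l_Stored ? 0.25 : 0.0);
    }
}
// l_Lit now takes values in {0, 0.25, 0.5, 0.75, 1}, giving softer shadow edges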

Many other derived algorithms are possible from this base, such as soft-edged shadows (http://www.gamedev.net/reference/articles/article2193.asp), soft PCF (http://www.cs.utah.edu/classes/cs5610/projects-2005/lha/, http://download.nvidia.com/developer/presentations/2005/SIGGRAPH/PCSS.pdf), and so on.

