One more time, or: Multi-pass techniques and other stuff


Multi-pass techniques

When you render an object, you can define multiple rendering phases (called passes) with different attributes and different shaders to achieve more complex results.

How do we create one?
Simple: you just define another pass (just like your first one) after you close the first pass. What was rendered in the first pass will be carried forward to the next pass, and so on.

```hlsl
technique
{
    pass pass1
    {
        …
    }
    pass pass2
    {
        …
    }
    …
}
```

It’s Broken!

So, let’s begin. In this last chapter of part one, we will look at some examples of how what we have learned so far can solve simple problems you are likely to encounter, and how to combine it into multi-pass techniques.

First

If you’ve tried to blend any kind of model, you will have noticed that you can see through to the other side of the object.
For instance:
Image

While in the sphere’s case this could easily be solved with culling, on more complex shapes it cannot. Observe:
Image

While there are no ‘real’ artifacts, we can still see parts of the far side of the object, namely the polygons there that happen to face the right direction.

The main problem in this case is that polygons from the other side pass the culling test. To solve this, we will need to filter them using the depth buffer.

…But how?
First off, we need a two pass technique:

```hlsl
technique
{
    pass pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction( );
        PixelShader = compile ps_2_0 PixelShaderFunction( );
    }
    pass pass2
    {
    }
}
```


Your instincts might tell you: “opaque in the first pass and blend in the second pass!” If you try this, you will see that the object remains opaque.

Why?
Because during that pass the object, not blending at all, “won” the Z buffer, sending only its own pixels on to the second pass.
Do not be fooled: the second pass will blend only with what the first pass already produced – if the previous pass was opaque, the blending performed by the second pass will generate no transparency towards the background.
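To see why, it helps to write out what alpha blending with srcAlpha/invSrcAlpha actually computes. This is fixed-function behavior, shown here as an illustrative HLSL-style function rather than real pass code:

```hlsl
// Illustrative only: what SrcBlend = SrcAlpha, DestBlend = InvSrcAlpha computes.
// 'dest' is whatever color the previous passes left in the frame buffer.
float4 alphaBlend( float4 src, float4 dest )
{
    return src * src.a + dest * (1 - src.a);
}

// If pass 1 drew the object opaquely, 'dest' already holds the object's own
// pixels, so pass 2 blends the object with itself -- never with the background.
```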

So, how CAN we solve this issue?
To reach a solution, we need some way to write the object into the depth buffer without letting it interact with the background until the second pass.
How? We will disable color-writing during the first pass.

How does this help?
By drawing the object without color-writing, you let the depth buffer filter out the back side of the object. Since in the first pass the object is opaque, its far side will fail the depth test.
All this happens without drawing the object to the screen, meaning the first pass forwards the background with the object 'invisible'.
This lets you blend the originally opaque object with the background by enabling color-writing and blending in the second pass.

The solution:
(Remember: ColorWriteEnable = 0 means all channels off, and 7 means the red, green and blue channels are on)
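For reference, ColorWriteEnable is a bit mask over the color channels, so 7 is simply the three color flags OR-ed together:

```hlsl
// Direct3D color-write flags (one bit per channel):
//   RED = 1, GREEN = 2, BLUE = 4, ALPHA = 8
// ColorWriteEnable = 0;          // write nothing
// ColorWriteEnable = 1 | 2 | 4;  // = 7, write red, green and blue
```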

```hlsl
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction( );
        PixelShader = compile ps_2_0 PixelShaderFunction( );
        ColorWriteEnable = 0;
    }
    pass Pass2
    {
        ColorWriteEnable = 7;
        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
    }
}
```


Image
Hooray!
Thought color-write was useless, did you?

Outline

If you sniff around, you will find all sorts of complicated edge-detection techniques involving things you probably don’t want to mess with at this point.

But say you want to draw just an outline around the object without any complex algorithms – how can you do that?

Again, this will be a two pass technique.

In order to achieve this result, we will use two pairs of vertex + pixel programs. One pair will render an inflated version of the object, and the other will apply a lighting technique.

```hlsl
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 backVS( );
        PixelShader = compile ps_2_0 backPS( );
    }
    pass Pass2
    {
        VertexShader = compile vs_1_1 lightingVS( );
        PixelShader = compile ps_2_0 lightingPS( );
    }
}
```
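The guide does not show backVS and backPS themselves, so here is a minimal, hypothetical sketch of what they might look like, assuming the inflation is done by pushing each vertex out along its normal. The outlineScale and outlineColor parameters are my own names, not part of the original material:

```hlsl
// Hypothetical sketch of the 'inflated' pair of programs.
// outlineScale and outlineColor are assumed parameters, not from the guide.
float4 backVS( float4 pos : POSITION,
               float3 normal : NORMAL,
               uniform float4x4 worldViewProj_m,
               uniform float outlineScale ) : POSITION
{
    // inflate the object by pushing each vertex out along its normal
    pos.xyz += normal * outlineScale;
    return mul(worldViewProj_m, pos);
}

float4 backPS( uniform float4 outlineColor ) : COLOR
{
    // flat, unlit outline color (yellow, say)
    return outlineColor;
}
```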


You will notice, though, that in this state the inflated layer will completely obscure the inner one, even if it is rendered first.

The first thing that comes to mind when attempting this is to cull ccw on the outer layer and cw on the inner layer. On a sphere this works perfectly; on something more complex, it does not.
Image

You may try to render the inner layer first and then render the outer layer while blending.
But doing that creates a ‘balloon’ around the inner part. While a nice effect in its own right, it’s not what we set out for.

Image
Did I mention I like yellow?
The balloon technique:

```hlsl
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 lightingVS( );
        PixelShader = compile ps_2_0 lightingPS( );
    }
    pass Pass2
    {
        CullMode = CW;
        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_1_1 backVS( );
        PixelShader = compile ps_2_0 backPS( );
    }
}
```



How can we do it, then? You’ll be surprised; it’s even simpler than the balloon.

In order to solve this case, we will render the inflated layer first without Z-writing, and then enable Z-writing while rendering the inner layer:

```hlsl
technique Technique0
{
    pass Pass0
    {
        ZWriteEnable = false;
        VertexShader = compile vs_1_1 backVS( );
        PixelShader = compile ps_2_0 backPS( );
    }
    pass Pass1
    {
        ZWriteEnable = true;
        VertexShader = compile vs_1_1 lightingVS( );
        PixelShader = compile ps_2_0 lightingPS( );
    }
}
```



The result:
Image

Why does it work?
Since the inflated layer never writes its depth, it cannot “win” the depth buffer against objects in front of it that do write their own depth.
But while the layer itself is drawn, depth testing is still active, so its pixels are still compared correctly against the other objects already written in world space.

When the second, inner layer is drawn, depth-writing is active again, so it immediately wins the depth test against the outer layer, which wrote nothing into the buffer at all.

Drawbacks

While simple, this solution has its drawbacks – especially when blending.

In FXC, the object will sometimes render correctly and sometimes not. The outer layer blends with blending materials even when it is completely opaque and in front of them.

In OGRE, the problem is more consistent but nonetheless a problem. If a material is alpha-blending in front of this object, you will see whatever resides behind it (meaning the blending ignores this technique’s pixels).

This happens because blending depends on the source and destination pixels, which, in turn, depend on depth.

A better solution

The best way to achieve this result is with the stencil buffer, though that is something we will not touch for the moment (feel free to google it).

Additive lighting

Remember that long, ugly shader we wrote to include more than a single point light with Phong/Blinn? Now we can make a better one using an OGRE script. We could, of course, do the same in FXC, but that requires features we haven’t learned yet (specifically, annotations and scripts).

We will use what is called ‘additive lighting’: looping per light source and adding its contribution to the final color, using a simple shader that handles only one light source and blends additively.

Because of the nature of additive blending, we will first have to render either a black layer or an ambient-only layer to prevent the scene behind the object from leaking into the result (basically rendering a 'canvas').
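Written out, the frame buffer accumulates over the base pass and the per-light iterations as follows (illustrative comments, not pass code):

```hlsl
// scene_blend add computes: result = srcColor + destColor, so after the
// base pass and N light iterations the frame buffer holds:
//
//   ambient + light_0 + light_1 + ... + light_(N-1)
//
// This is why the base layer must be black or ambient-only: without it,
// the lights would accumulate on top of whatever the scene drew behind
// the object.
```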

Currently, FXC cannot loop per light source; because of this, we will use an OGRE material script only, without an FXC example.

The foundation

The first part of this shader (as described above) renders the base shape with a simple ambient color to prevent blending with the rest of the scene.
We will use the simplest shader possible to save computation time.

```hlsl
float4 baseAmbientVS( float4 pos : POSITION,
                      uniform float4x4 worldViewProj_m ) : POSITION
{
    return mul(worldViewProj_m, pos);
}

float4 baseAmbientPS( uniform float4 ambientl ) : COLOR
{
    return ambientl;
}
```


For the second pass, we can use the multi-light type shader from chapter 10, and edit it to work on a single light:

```hlsl
struct vertexIn
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vertexOut
{
    float4 position     : POSITION;
    float3 normal       : TEXCOORD0;
    float3 viewDir      : TEXCOORD1;
    float3 pixelToLight : TEXCOORD2;
};

//--------------------------------------------------------------------------------
// main vertex program
//--------------------------------------------------------------------------------
vertexOut multiTypeLightVS( vertexIn input,
                            uniform float4x4 worldViewProj_m,
                            uniform float4 cameraPos,
                            uniform float4 lightPos )
{
    vertexOut output = (vertexOut)0;

    output.position = mul(worldViewProj_m, input.position);
    output.normal   = input.normal;
    output.viewDir  = cameraPos.xyz - input.position.xyz;

    // w == 0 marks a directional light: lightPos holds a direction, not a position
    if (lightPos.w == 0)
        output.pixelToLight = lightPos.xyz;
    else
        output.pixelToLight = lightPos.xyz - input.position.xyz;

    return output;
}

//--------------------------------------------------------------------------------
// main pixel program
//--------------------------------------------------------------------------------
float4 multiTypeLightPS( vertexOut input,
                         uniform float4 spotLightParams,
                         uniform float3 lightDir,
                         uniform float4 diffColor,
                         uniform float4 specColor,
                         uniform float4 lightAtten,
                         uniform float4 lightDist,
                         uniform float specShine ) : COLOR
{
    float4 diff = float4(0, 0, 0, 0);
    float4 spec = float4(0, 0, 0, 0);

    input.viewDir      = normalize(input.viewDir);
    input.normal       = normalize(input.normal);
    input.pixelToLight = normalize(input.pixelToLight);

    float dotNL = dot(input.pixelToLight, input.normal);

    // lightAtten = (range, constant, linear, quadratic)
    float luminosity = 1 / ( lightAtten.y
                           + lightAtten.z * lightDist.x
                           + lightAtten.w * pow(lightDist.x, 2) );

    float3 halfAng = normalize(input.viewDir + input.pixelToLight);
    float dotNH = dot(input.normal, halfAng);

    // if spotlight params are (1, 0, 0, 1) we have a point, directional or
    // empty light; we handle them the same
    if (spotLightParams.x == 1 && spotLightParams.y == 0 &&
        spotLightParams.z == 0 && spotLightParams.w == 1)
    {
        spec += pow(saturate(dotNH), specShine) * specColor * luminosity;
        diff += saturate(dotNL) * diffColor * luminosity;
    }
    else if (dotNL > 0)
    {
        // if it was not either of the above, we have a spotlight
        float dotPLd = dot(-input.pixelToLight, lightDir);

        // diffuse: full strength inside the inner cone (x), falloff out to the outer cone (y)
        //------------------------------
        if (dotPLd > spotLightParams.x)
            diff += dotNL * diffColor * luminosity;
        else if (dotPLd > spotLightParams.y)
            diff += dotNL
                  * (1 - (spotLightParams.x - dotPLd) / (spotLightParams.x - spotLightParams.y))
                  * diffColor * luminosity;
        //------------------------------
        // specular
        //------------------------------
        if (dotPLd > 0)
            spec += pow(saturate(dotNH), specShine) * specColor * luminosity;
        //------------------------------
    }

    return diff + spec;
}
//--------------------------------------------------------------------------------
```

The material

In order to iterate over lights, we use the iteration rule command.
In our case, we will use the once_per_light rule; do read about it in the OGRE manual.

So, our material will have two vertex programs and two pixel programs:

  • Base ambient VS/PS
  • multiTypeLight VS/PS (lightMasterB_VS/PS)


The first pass will render the object with only the ambient color, and the second with all available lights. Although the Phong shader does accept an ambient color, we will send it an empty color; otherwise, the ambient would be contributed once per light.

The second pass will use additive blending (to keep the previous pass’s results) and an iteration rule.

It should look something like this:

```hlsl
vertex_program baseAmbientVS hlsl
{
    source baseAmbient.hlsl
    entry_point baseAmbientVS
    target vs_1_1

    default_params
    {
        param_named_auto worldViewProj_m worldviewproj_matrix
    }
}

fragment_program baseAmbientPS hlsl
{
    source baseAmbient.hlsl
    entry_point baseAmbientPS
    target ps_2_0
}

vertex_program lightMasterB_VS hlsl
{
    source lightmaster2.hlsl
    target vs_1_1
    entry_point multiTypeLightVS

    default_params
    {
        param_named_auto worldViewProj_m worldviewproj_matrix
        param_named_auto cameraPos camera_position_object_space
        param_named_auto lightPos light_position_object_space 0
    }
}

fragment_program lightMasterB_PS hlsl
{
    source lightmaster2.hlsl
    target ps_2_0
    entry_point multiTypeLightPS

    default_params
    {
        param_named_auto lightDir light_direction_object_space 0
        param_named_auto spotLightParams spotlight_params 0
        param_named_auto diffColor light_diffuse_colour 0
        param_named_auto specColor light_specular_colour 0
        param_named_auto lightAtten light_attenuation 0
        param_named_auto lightDist light_distance_object_space 0
    }
}

material lightMasterB
{
    technique
    {
        pass
        {
            vertex_program_ref baseAmbientVS
            {
            }
            fragment_program_ref baseAmbientPS
            {
            }
        }

        pass
        {
            iteration once_per_light
            scene_blend add

            vertex_program_ref lightMasterB_VS
            {
            }
            fragment_program_ref lightMasterB_PS
            {
                param_named specShine float 55
            }
        }
    }
}
```

Note that:
• I passed some of the information to the programs inside the pass. This is because it is implementation-specific rather than a constant rule (the world-view-projection matrix will never differ, but the desired specular shininess might).
• I used the shader the same way I would have used it for a single light, without the iteration.
• scene_blend add is vital.

WOOHOO!

You finished part one of the guide!