
Where is that wolf?

Or

Per pixel Phong and Blinn

And

Attenuation and Multiple light sources


It sounds like some children's book title, doesn't it?

Per pixel shading


While per vertex shading is fast and simple, it hardly produces the desired result in most cases. This is most noticeable on objects with a low polygon count, and especially in the specular channel.

The solution is to move the calculation of color and reflection to the pixel program.

Theory things


Gouraud produces the pixel's color by interpolating between the colors computed at the vertices. While it works fine on high-polygon, non-complex models (such as a sphere made out of a very large number of polygons), on less predictable shapes it gives a less than desired outcome: the lighting tends to change sharply along the lines and points where vertices meet.

[Image: Gouraud technique – linear interpolation between the vertices]

Instead of interpolating color across the surface, we can interpolate the normal per pixel, and only then compute the reflection. The picture below shows the described approach:
[Image: Phong technique – per-pixel normal interpolation]

The interpolated normal changes slightly per pixel, creating a more subtle and gradual curve, allowing light to be reflected more convincingly. Notice the smooth arc created. As always, better results come at a performance cost: since normalization happens for every pixel instead of every vertex, rendering takes longer.
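In pseudo-HLSL, the two techniques differ only in what gets interpolated (illustrative only: 'interp' stands for the interpolation the rasterizer performs between the three vertices, and 'light' for the diffuse/specular math we build below; neither is a real function):

    // Gouraud: light each vertex, then interpolate the resulting colors
    float4 color = interp(color0, color1, color2);

    // Phong: interpolate the normal, renormalize it, then light per pixel
    float3 N = normalize(interp(N0, N1, N2));
    float4 color = light(N, lightDir, viewDir);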

Task division


In order to achieve this result, we will use the classic Phong model (the same reflection method as before, but per pixel; when people say "Phong shading", this is usually what they mean).

First, we need to redistribute the calculations between the two programs in order to redefine their input and output structs.

All calculations we require that will not change per pixel are to be performed in the vertex program:

  1. Vertex position in world-view-projection space
  2. Vertex normal in world space
  3. Direction-to-light vector
  4. Direction-to-view vector


Program input:

  1. Vertex position
  2. Vertex normal


Program output:

  1. Vertex position in WorldViewProj
  2. Vertex normal in world space
  3. Light direction vector
  4. View direction vector


All calculations that vary per pixel or are directly related to the calculation of color values are to be performed in the pixel program:

  1. Normalize vectors per pixel (normal and direction vectors)
  2. dotNL (normal – light direction dot product)
  3. Reflection vector
  4. dotVR (view direction – reflection dot product)
  5. Diffuse color
  6. Specular color


Program input:

  1. normal from vertex program
  2. light direction from vertex program
  3. view direction from vertex program


Program output:

  • The combined color of the pixel

Compatibility

The first problem you will encounter when trying to pass data from a vertex program to a pixel program is that pixel program inputs can only use one of two semantics: COLORn or TEXCOORDn.

So, how do we pass a normal or a direction vector to the pixel shader?
Well, you may be surprised to hear that TEXCOORDn is used for a lot more than just textures. We will use TEXCOORD slots to deliver data from the vertex program to the pixel program.

So, first things first, the new structs:

struct vertexIn
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vertexOut
{
    float4 position : POSITION;
    float3 normal   : TEXCOORD0;
    float3 lightDir : TEXCOORD1;
    float3 viewDir  : TEXCOORD2;
};

struct pixelIn
{
    float3 normal   : TEXCOORD0;
    float3 lightDir : TEXCOORD1;
    float3 viewDir  : TEXCOORD2;
};



Notice:

  1. TEXCOORDs are indexed, and the indices correspond between vertexOut and pixelIn.
  2. While the vertex program receives a proper NORMAL, it outputs it as a TEXCOORD so the pixel program can read it.
  3. lightDir and viewDir are also passed forward, to be normalized in the pixel shader.

The vertex program

Our vertex program will look very much like it did in the previous tutorial, but with fewer tasks and more return values:

Remember, we are not normalizing in the vertex program.

vertexOut mainVS(vertexIn input)
{
    vertexOut output = (vertexOut)0;
    output.position = mul(input.position, worldViewProj_m);
    output.normal   = mul(input.normal, (float3x3)world_m);
    float3 worldpos = mul(input.position, world_m).xyz;
    output.lightDir = lightPos - worldpos;
    output.viewDir  = cameraPos - worldpos;
    return output;
}



First, as usual, we calculate the WVP position for output.
Second, we calculate the normal and position in world space.
Using the world position of the vertex, we calculate the light direction and view direction, and output them unaltered.

The pixel program

The first thing we have to do is normalize the normal, lightDir and viewDir vectors, since interpolation does not keep them at unit length per pixel:

float4 PixelShaderFunction(pixelIn input) : COLOR
{
    input.lightDir = normalize(input.lightDir);
    input.viewDir  = normalize(input.viewDir);
    input.normal   = normalize(input.normal);


Now, having per-pixel data, we can calculate the diffuse and specular using the same algorithm we used for Gouraud:

    float dotNL = dot(input.lightDir, input.normal);
    float diff = saturate(dotNL);
    float3 ref = (input.normal * 2 * dotNL) - input.lightDir;
    float dotRV = dot(ref, input.viewDir);
    float spec = pow(saturate(dotRV), 15);
    return ambientColor + diff * diffColor + spec * specColor;
}


It may look like not much has changed, but compare the two shaders on a complex model (even the classic ogrehead.mesh used in the OGRE tutorials will suffice) and you will notice a big difference, especially in the specular.

Blinn-Phong

Blinn-Phong is a modification of the Phong shader. Instead of calculating the reflection vector, it calculates an approximation:

  1. First, we create the halfway vector H, built from the light direction and the view direction.
  2. We use the angle between the halfway vector and the surface normal as an approximation of the angle between R and V.


As equations:

H = normalize(L + V)
Specular = pow(dotNH, [how shiny])

[Image: the halfway vector H between L and V]

The new angle will always be smaller than the angle used in Phong, creating a larger specular highlight, but we can compensate with a higher exponent value. That said, they will always be slightly different.
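A common rule of thumb (an approximation I am adding here, not a hard rule) is that the Blinn-Phong exponent needs to be roughly four times the Phong exponent to produce a highlight of similar size:

pow(dotNH, 4 * n) ≈ pow(dotRV, n)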

Code:

float4 PixelShaderFunction(pixelIn input) : COLOR
{
    input.lightDir = normalize(input.lightDir);
    input.viewDir  = normalize(input.viewDir);
    input.normal   = normalize(input.normal);
    float dotNL = dot(input.lightDir, input.normal);
    float diff = saturate(dotNL);
    float3 halfway = normalize(input.viewDir + input.lightDir);
    float dotNH = dot(halfway, input.normal);
    float spec = pow(saturate(dotNH), 25);
    return ambientColor + diff * diffColor + spec * specColor;
}

Why Blinn?


Why approximate, when the result is actually a slower shader (an additional normalization)? Well, the reflection in Blinn tends to keep its shape better at high viewing angles, and tends to have a softer look.

Example:
Blinn-Phong and Phong using the same exponent for specular

See how the reflection in Blinn keeps its round shape even when observed at a high angle, while Phong's tends to stretch.
[Image: Blinn vs. Phong at a high viewing angle]

In the second picture you can see the full reflection. Using the same exponent gives a larger, softer highlight in Blinn.
[Image: Blinn vs. Phong – full reflection]

Each one has its advantages; it depends on the result you are looking for.

Point Light Attenuation

Attenuation is the way the light changes depending on its distance from the object. Read this little article:
http://www.ogre3d.org/tikiwiki/-Point+Light+Attenuation
It is defined by these equations:

Final color = light color * Luminosity
Luminosity  = 1 / Attenuation
Attenuation = Constant + Linear * Distance + Quadratic * Distance^2


As you can see it’s not a big deal, but it is an important part of using lights.
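As a quick sanity check, here is the formula worked through with one set of values of the kind quoted in the linked article for a light with range 100 (treat the exact constants as illustrative):

Constant = 1.0, Linear = 0.045, Quadratic = 0.0075
At Distance = 20:
Attenuation = 1.0 + 0.045 * 20 + 0.0075 * 20^2 = 1.0 + 0.9 + 3.0 = 4.9
Luminosity  = 1 / 4.9 ≈ 0.2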

So, let’s implement:
First, add the new globals:

float linearAtten;
float quadAtten;
float constAtten;
float attenRange;


Now we need to add the calculation to the pixel program:

// You can calculate the length, but you can also ask the material script
// for 'light_distance_object_space'. It gives you the distance from the
// light to the center of the object, which is a good approximation.
float lightDist = length(input.lightDir);
float luminosity = 0;
if (attenRange > lightDist)
{
    luminosity = 1 / (constAtten + linearAtten * lightDist + quadAtten * pow(lightDist, 2));
}
...
return (ambientColor + diff * diffColor + spec * specColor) * luminosity;


As long as lightDir is not yet normalized, its length is the distance to the light, provided you calculated it as 'lightDir = lightPos - worldpos;' in the vertex program.
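In other words, the top of the pixel program has to compute the distance before the normalization we already do; a minimal sketch of that ordering:

    float lightDist = length(input.lightDir);    // distance first...
    input.lightDir  = normalize(input.lightDir); // ...then normalize for the lighting math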

In OGRE

It's quite simple: all you need to do is add a float4 uniform and ask the script to fill it with the light's attenuation (the attenuation values themselves are set on the light from the application).

param_named_auto [name] light_attenuation 0


Order of the array:

OGRE manual wrote:

"...The order of the parameters is range, constant attenuation, linear attenuation, quadric attenuation..."
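So, assuming you declared the uniform as 'float4 atten' (the name is my choice) and bound it with the line above, the attenuation math from earlier maps onto its components like this; a minimal sketch:

    // atten is filled by light_attenuation:
    // atten.x = range, atten.y = constant, atten.z = linear, atten.w = quadratic
    float lightDist = length(input.lightDir); // before normalizing!
    float luminosity = 0;
    if (atten.x > lightDist)
    {
        luminosity = 1 / (atten.y + atten.z * lightDist + atten.w * lightDist * lightDist);
    }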

Multiple light sources

I’m surrounded!

You are most likely going to create more than a single light in your scene, but up until now our shader only handles the light closest to the object. How can we include multiple lights in our lighting technique?

Truth be told, it's a fairly technical thing, but nonetheless an important one to cover. In order to do this, we first need to create arrays so we can hold more than a single light's information; let's declare some new globals:

#define lightCount 3

float4 lightPoses[lightCount];
float4 diffColors[lightCount] : color;
float4 specColors[lightCount] : color;


Make sure you fill the arrays with data in the material properties (just a random example):
[Image: example material parameter values]

Now we need to address this change in the structs; unlike our original vertex program, we now need to output an array of light directions, and feed it into the pixel program:

struct vertexOut
{
    float4 position : POSITION;
    float3 normal   : TEXCOORD0;
    float3 viewDir  : TEXCOORD1;
    float3 lightDir[lightCount] : TEXCOORD2;
};

struct pixelIn
{
    float3 normal   : TEXCOORD0;
    float3 viewDir  : TEXCOORD1;
    float3 lightDir[lightCount] : TEXCOORD2;
};

IMPORTANT NOTE:
Output arrays should be placed LAST. An array of n members occupies the next n TEXCOORD registers. If lightDir were placed in TEXCOORD1 and viewDir in TEXCOORD2, they would have overlapped and caused a critical error.
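To make that concrete, with lightCount defined as 3 the registers are assigned like this (just an illustration of the allocation, not extra code to type in):

    float3 normal               : TEXCOORD0;
    float3 viewDir              : TEXCOORD1;
    float3 lightDir[lightCount] : TEXCOORD2; // occupies TEXCOORD2, 3 and 4
    // the next free semantic would be TEXCOORD5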

Vertex program changes

Our vertex program does not change much; only a loop is added to calculate the light direction per light source. All else stays the same:

vertexOut VertexShaderFunction(vertexIn input)
{
    vertexOut output = (vertexOut)0;
    output.position = mul(input.position, worldViewProj_m);
    output.normal   = mul(input.normal, (float3x3)world_m);
    float3 worldpos = mul(input.position, world_m).xyz;
    output.viewDir  = cameraPos - worldpos;
    for (int i = 0; i < lightCount; i++) // this is the new stuff
    {
        output.lightDir[i] = lightPoses[i].xyz - worldpos;
    }
    return output;
}

Pixel program changes

While the changes to the pixel program are also minimal, they are more noticeable than the ones in the vertex program.
First, two things have not changed: the normal and viewDir (since they do not vary per light source).
The things that did change are:

  • We need variables to hold the accumulated diffuse and specular colors (initialized to black).
  • Normalize the light direction per source.
  • Calculate the diffuse light (intensity and color) per source and add its contribution to the final diffuse.
  • Calculate the specular light (intensity and color) per source and add its contribution to the final specular.


Let's overview the new pixel program (this program uses Phong reflection, but feel free to use Blinn):

float4 PixelShaderFunction(pixelIn input) : COLOR
{
    input.viewDir = normalize(input.viewDir);
    input.normal  = normalize(input.normal);
    float4 diff = float4(0, 0, 0, 0);
    float4 spec = float4(0, 0, 0, 0);
    for (int i = 0; i < lightCount; i++)
    {
        input.lightDir[i] = normalize(input.lightDir[i]);
        float dotNL = dot(input.lightDir[i], input.normal);
        float3 ref = (input.normal * 2 * dotNL) - input.lightDir[i];
        float dotRV = dot(ref, input.viewDir);
        spec += pow(saturate(dotRV), specularIntensity) * specColors[i];
        diff += saturate(dotNL) * diffColors[i];
    }
    return ambientColor + diff + spec;
}


First, we altered the diff and spec variables to be float4, so they can hold a color value. This is because we want a different light color per source, so we can't apply a common color in the final calculation; instead, we accumulate each source's contribution directly into diff and spec.

Now, in a loop, we normalize the light direction and calculate diffuse and specular per source. Then we add each light's contribution to the final color using its own color data.

Lastly, we combine the three channels to a final output.

[Image: Objects affected by three light sources]

Multiple light sources in OGRE

Sending information about multiple lights from an OGRE script is actually a minor modification; OGRE can send arrays just like the ones we created. That means you need to add all the needed arrays (including the number of lights you are sending, if you wish to specify it at run time).

Each light property you can send (such as light_position or light_diffuse_colour) has an _array equivalent (check the OGRE manual, or use a script editor); the extra parameter field is the number of lights (picked by distance from the object) you wish to include.

param_named_auto [array name] light_[type]_colour_array [number of lights]

Type can be either diffuse or specular (or emissive, but we haven’t touched that)
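For example, to fetch the diffuse colors of the three lights nearest the object (the exact line used in the material below):

param_named_auto diffColors light_diffuse_colour_array 3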

OGRE scripts can also create preprocessor defines (like our lightCount), allowing us to reuse the same source code while controlling 'lightCount' from the script.

preprocessor_defines [#define name]=[value]
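In our case this becomes:

preprocessor_defines lightCount=3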


Let’s take our current example and export it:
First, the HLSL file:

struct vertexIn
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
};

struct vertexOut
{
    float4 position : POSITION;
    float3 normal   : TEXCOORD0;
    float3 viewDir  : TEXCOORD1;
    float3 lightDir[lightCount] : TEXCOORD2;
};

struct pixelIn
{
    float3 normal   : TEXCOORD0;
    float3 viewDir  : TEXCOORD1;
    float3 lightDir[lightCount] : TEXCOORD2;
};

vertexOut multiLightVS(vertexIn input,
                       uniform float4x4 worldViewProj_m,
                       uniform float4x4 world_m,
                       uniform float4 cameraPos,
                       uniform float4 lightPoses[lightCount])
{
    vertexOut output = (vertexOut)0;
    output.position = mul(worldViewProj_m, input.position);
    output.normal   = mul((float3x3)world_m, input.normal);
    float3 worldpos = mul(world_m, input.position).xyz;
    output.viewDir  = cameraPos.xyz - worldpos;
    for (int i = 0; i < lightCount; i++)
    {
        output.lightDir[i] = lightPoses[i].xyz - worldpos;
    }
    return output;
}

float4 multiLightPS(pixelIn input,
                    uniform float4 diffColors[lightCount],
                    uniform float4 specColors[lightCount],
                    uniform float4 ambientColor) : COLOR
{
    input.viewDir = normalize(input.viewDir);
    input.normal  = normalize(input.normal);
    float4 diff = float4(0, 0, 0, 0);
    float4 spec = float4(0, 0, 0, 0);
    for (int i = 0; i < lightCount; i++)
    {
        input.lightDir[i] = normalize(input.lightDir[i]);
        float dotNL = dot(input.lightDir[i], input.normal);
        float3 ref = (input.normal * 2 * dotNL) - input.lightDir[i];
        float dotRV = dot(ref, input.viewDir);
        spec += pow(saturate(dotRV), 25) * specColors[i];
        diff += saturate(dotNL) * diffColors[i];
    }
    return ambientColor + diff + spec;
}

And the material:
vertex_program multiLightVS hlsl
{
    source multilight.hlsl
    entry_point multiLightVS
    target vs_2_0
    preprocessor_defines lightCount=3

    default_params
    {
        param_named_auto worldViewProj_m worldviewproj_matrix
        param_named_auto world_m world_matrix
        param_named_auto cameraPos camera_position
        param_named_auto lightPoses light_position_array 3
    }
}

fragment_program multiLightPS hlsl
{
    source multilight.hlsl
    entry_point multiLightPS
    target ps_2_0
    preprocessor_defines lightCount=3

    default_params
    {
        param_named_auto ambientColor ambient_light_colour
        param_named_auto diffColors light_diffuse_colour_array 3
        param_named_auto specColors light_specular_colour_array 3
    }
}

material textry4
{
    technique
    {
        pass
        {
            vertex_program_ref multiLightVS
            {
            }
            fragment_program_ref multiLightPS
            {
            }
        }
    }
}


[Image: The shader in OGRE, using three uniquely colored lights]

Add attenuation, you lazy bastard, how about that?!

I’ll take care of it

Hey, guess what?
OGRE can iterate per light on its own, and you can control it via the script using your original one-light-capable shader!

WHAT??? So why did we write this long and tedious one?!

  • *I* wrote it. I doubt you even copy-pasted.

Four reasons:

  1. It's a design independent of OGRE (though most 3D engines have the ability to loop a specific pass per light).
  2. While the code is somewhat more cumbersome, it's more efficient: when you iterate per light from the pass, you re-render the mesh (both vertices and pixels), which adds a lot of work.
  3. It was an educational torture meant to teach you how to pass arrays of data from the vertex to the pixel program and how to use preprocessor defines (and, well, it was good practice).
  4. The 'simpler' and more flexible way requires blending, which we have yet to learn.


However, this technique has one main drawback: it has a hard-coded loop limit. That means that with a single light available, an additive blend will render once, while this shader will still do its fixed n iterations of work (even if you add a condition to 'jump over' unused lights; GPUs don't really branch like that, it's an illusion created by the high-level language).
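For reference, the OGRE mechanism being teased here is pass-level iteration. A minimal sketch of the relevant directive (see the manual for the full syntax; it is normally combined with the additive blending we will cover later):

    pass
    {
        // re-runs this pass once per point light affecting the object,
        // feeding the single-light shader a different light each time
        iteration once_per_light point
        ...
    }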

Why use additive lighting if it’s so bad for fps?

  1. It's not THAT bad for FPS.
  2. It's easier to implement.
  3. It's more flexible and much more 'modular'.
  4. There is no risk of overlapping data between the vertex and pixel programs.
  5. It's a great example of blending (not really a reason to use it, however).


It's a neat idea, and its code looks much prettier, but it's not always fast (you know, like Java or Python. Wait… Python isn't even pretty to look at… why does it still exist?).

We will review additive lighting in chapter 14 (the last chapter of part one), after we learn blending.