Your very own, first, copy-pasted, good for nothing shader



In the beginning

Before we can start creating our own shaders in FXC, we need to create a project:

1: when you open FXC, you will be greeted with the following window:
… Press new project, put it wherever you want.

2: right click on the material menu (top left), and select ‘add material from new effect’

3: the following menu will appear; select HLSL FX

4: select empty

5: a new material will appear in the top-left menu, and some code will appear in the center -> DELETE THIS CODE

Now we have a clean start.

Basic FXC guidelines

  1. Ctrl + LMB : move the camera across its own plane
  2. Shift + LMB : zoom in/out
  3. Mouse wheel : zoom in/out
  4. Alt + LMB : rotate the camera around the center

Creating primitive objects or lights in the render window:

Unless you are blind, you must have seen this at the top:

Applying material to object

  1. Method A:
    1. press the material in the material menu
    2. drag and drop it onto the object in the render window
  2. Method B:
    1. right-click the object
    2. material/effects
    3. select the desired material


Control thingy

At the top, you will see this:

  1. Effect : create a new shader (like we did by right-clicking the material menu)
  2. Compile : compile the current code
  3. Rebuild all : compile all available code
  4. Analyze : when you reach the point where you build big, complex shaders, this is a nice tool for finding out how they perform. That said, some find it somewhat unreliable and hard to interpret.

The three buttons in the left


  1. if your material or scene has something that depends on time, the first two will help you.
  2. the other one : render system (Direct3D or OpenGL)
    1. Hint: HLSL uses D3D.


Now, I know you’re thinking … “my god… can’t we just see something happen already??”

Well, the time has come to COPY PASTE your first shader.

float4 mainVS(float4 pos : POSITION) : POSITION
{
	return pos;
}

float4 mainPS() : COLOR
{
	return float4(1.0, 0.0, 0.0, 1.0);
}

technique technique0
{
	pass p0
	{
		VertexShader = compile vs_1_1 mainVS();
		PixelShader = compile ps_2_0 mainPS();
	}
}

  1. paste it (or write it yourself; it doesn’t matter, it’s not like you know what it means)
  2. press compile in the control thingy
  3. create a sphere
  4. apply the material to the sphere

The result will be a solid red shape in the render window.


Saving yourself from Alma

Now that we have satisfied your need to see something happen, we can start dissecting what just happened, understand what goes on inside the most basic shader ever created, and learn what semantics are.

By the way: if you move around in the render window, you will notice that the sphere sticks to your face whatever you do. This is not a mistake; it’s the main reason I call it Alma.


So what just happened?

Our little program contains three parts:

  1. a vertex shader(mainVS)
  2. a pixel shader(mainPS)
  3. a rendering technique

say what?

What are shaders then? What are the operations we see there? Here it comes…

A 3D object is a static thing. However you look at it, that sphere will not change.

Shaders provide the means to manipulate the way things are rendered onto the screen. While the object itself will stay the same (data-wise), the way it looks to the client will differ, according to the tasks we design the shader to perform.

The GPU runs each shader program once per vertex / pixel.

 The interfering tip box

Not satisfied with the explanation?
Feeling smart?

Google “GPU programmable pipeline”

The vertex shader

This shader type lets you manipulate how a vertex is rendered. The vertex data is collected from the original object. Consequently, a vertex program must, at minimum, output the vertex’s position.
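As a taste of what a vertex shader can do, here is a sketch of one that transforms its vertex instead of passing it through unchanged. The `wvp` parameter and the WORLDVIEWPROJECTION semantic are assumptions here: they are one common way for the host application to supply a transform, and we have not covered shader parameters yet.

```hlsl
// Sketch only: assumes the host supplies a world-view-projection matrix
// (here bound through the WORLDVIEWPROJECTION semantic).
float4x4 wvp : WORLDVIEWPROJECTION;

float4 transformVS(float4 pos : POSITION) : POSITION
{
	// Move the vertex from object space into clip space, so the object
	// no longer follows the camera around.
	return mul(pos, wvp);
}
```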

The pixel shader

This shader type (sometimes called a fragment program) allows you to program how pixels (or fragments) are rendered. Consequently, a pixel program must, at minimum, output the pixel’s color.
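For example, since the pixel shader decides every pixel’s color, swapping the constant it returns recolors the whole object. A minimal variation on our mainPS (the function name is just an invented label):

```hlsl
float4 greenPS() : COLOR
{
	// float4(R, G, B, A): solid green, fully opaque.
	return float4(0.0, 1.0, 0.0, 1.0);
}
```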


The technique

Techniques describe the order in which the object is rendered. A material can have multiple techniques. Each technique has at least one pass.

A pass is a single render of the object. In more complex techniques, you can have multiple passes, creating an elaborate rendering sequence.
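A multi-pass technique might be sketched like this. mainVS and mainPS are the programs from this tutorial, while outlinePS is an invented name standing in for a second pixel shader you would write yourself:

```hlsl
technique twoPasses
{
	pass p0 // first render of the object
	{
		VertexShader = compile vs_1_1 mainVS();
		PixelShader = compile ps_2_0 mainPS();
	}
	pass p1 // second render, drawn on top of the first
	{
		VertexShader = compile vs_1_1 mainVS();
		PixelShader = compile ps_2_0 outlinePS();
	}
}
```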


So what are these caps-locked additions that seem to be everywhere inside the code?

These are semantics.

Semantics describe the purpose of a parameter to the GPU. As mentioned earlier, each shader runs once per vertex/pixel. After the program executes, data can be returned. But how do you explain to the GPU what to do with a collection of four float numbers? Semantics exist for exactly that purpose.

For instance:

Our vertex shader receives a float4 (the vertex position in object space) and, leaving it unchanged, passes it forward.

Take a look at the vertex shader declaration:

float4 mainVS(float4 pos : POSITION) : POSITION
{
	return pos;
}

The program’s input is:

float4 pos : POSITION

This means that the vertex program will receive a float4 vector, described as being a position.

Look at the program declaration itself:

float4 mainVS(float4 pos : POSITION) : POSITION

This means: return a float4 vector and flag it as a position.

Now, semantics have laws; if we changed the return value of the program to float2, we would get an error.

Why? Because in order to qualify as a position, a vector must contain exactly four values: XYZW.

Same goes for a color – a color must contain four values: RGBA.

And the list goes on.

As we progress, we will study more and more types of semantics, and what they are used for.
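To make the pattern concrete, here is a sketch of semantics attached to the members of a struct rather than to bare parameters, a form we will use once our shaders output more than one value. The struct and its name are invented for illustration:

```hlsl
// Hypothetical output struct: each member carries its own semantic,
// and the same size laws apply to each.
struct VSOutput
{
	float4 position : POSITION; // a position must be float4 (XYZW)
	float4 color    : COLOR0;   // a color must be float4 (RGBA)
};
```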

A closer look

Now, you might be wondering: “Where is the main loop? Who invokes these methods? How does it know what to send?”

In order to understand what’s happening, we must first look at the technique:

technique technique0
{
	pass p0
	{
		VertexShader = compile vs_1_1 mainVS();
		PixelShader = compile ps_2_0 mainPS();
	}
}

A technique provides instructions on how you want the object to be rendered. A technique must have at least one pass.

A pass is a single render of the object. In here, we provide the pass with two programs: a vertex shader, and a pixel shader.

Declaring one of these programs looks like this:

[program type] = compile [profile] [function name]();

  • [program type] : either VertexShader or PixelShader.
  • compile [profile] : the shader profile is the ‘compilation target’ against which we want the shader to compile.
  • [function name] : our program.

A word about profiles

(Also called target)

We choose the version according to the functions and operations we use in the program. You should always use the lowest version possible. That said, some may stop being supported one day (as of now, all ps_1_x profiles are no longer supported).

Notice that the vertex shader is compiled using vs_1_1, while the pixel shader is compiled with ps_2_0.

Where was that closer look? I asked some questions!

So, we now understand (presumably) what the technique is. Who invokes all of this? Where is the loop?

Short answer: the iron curtain. That is, the metal case that covers your PC.

We can make a hypothetical loop of what’s happening:

[Rendering a frame to your screen] 
	[Rendering the sphere] 
		[Using technique you wrote] 
			[Render first pass] 
				[Perform instructions] 
				[Call vertex program for each vertex] 
				[Call pixel program for each pixel] 
			[Render next pass]… 
	[Sphere has been rendered combining all passes] 
	[Rendering other stuff] 

It might feel somewhat unnatural, but we provide only instructions.

You will also notice that when the technique calls the vertex shader, we provide no arguments, even though we declared that it must receive a position. All parameters flagged with semantics are filled automatically with the appropriate data.

How does it know what position to send? The program is flagged as a vertex shader; as such, a parameter flagged as a position is automatically filled with the current vertex’s position.
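The same automatic filling works for any number of flagged parameters. In this sketch (the function name and the unused normal are only for illustration), the GPU would fill both arguments from the current vertex:

```hlsl
float4 litVS(float4 pos : POSITION, float3 normal : NORMAL) : POSITION
{
	// Both pos and normal arrive automatically; nobody "calls" this
	// function with explicit arguments. The normal is unused here.
	return pos;
}
```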

Further ahead, when we begin to write more complex shaders that require more arguments (some of which might not be flagged), we will learn how to send and receive different types of data.