Per pixel lighting and Bump Mapping


The basic tool is the dot product. With it, you can measure the angle between two vectors. Let's look at two specific vectors: the normal vector and the light vector. The light vector points from a given vertex toward the light. The normal vector is perpendicular to the surface of the object.

If these two vectors point in the same direction, the surface faces the light and is fully lit. On the other hand, if they point in opposite directions, the light cannot reach the surface. So using the result of their dot product, we can compute the amount of light received at a given pixel.

Now where does bump mapping come in? The goal of bump mapping is to alter the lighting to simulate details on the surface. By perturbing the normal a bit across the surface, we simulate the fact that the surface is not flat.

[Figure] Black: a face. Red: its normals and how it will look. Top: no bump. Bottom: bump mapping.

Normal map


All vectors are normalized.

"L" is the light vector
"N" is the normal
"H" is the half-angle vector: H = normalize(E + L), where E is the unit vector pointing from the pixel to the camera
"." is the dot product

Basic lighting: diffuse reflection
Intensity = N . L

Phong lighting: diffuse lighting + specular lighting
Intensity = N.L + (H.N)^n
The new term adds specular lighting: a highlight that appears when the light is reflected directly toward the camera. The exponent n controls the shininess of the surface.

Tangent space

Of course all vectors must be stored in the same space: camera space, world space, object space or... tangent space.

Most of the time we use tangent space, because it works in every case.
The tangent space is made of three vectors: the tangent, the binormal and the normal. The tangent is the vector along which the U texture coordinate of the vertices increases. The binormal is the vector along which the V texture coordinate increases. The normal is... the normal!
Most of the time the three vectors are orthogonal. In some cases they are not, but lighting will still work.

Of course each vertex has a different tangent space.

To compute the tangent space at a given vertex, you may use:

//X being the cross product
For each face {
	//Tangent/binormal contribution along the x axis
	(x,y,z) = (p2->x-p1->x, p2->u-p1->u, p2->v-p1->v) X (p3->x-p1->x, p3->u-p1->u, p3->v-p1->v)
	if (x != 0) {
		p1->Tx += -y/x;  p1->Bx += -z/x;
		p2->Tx += -y/x;  p2->Bx += -z/x;
		p3->Tx += -y/x;  p3->Bx += -z/x;
	}
	//Same along the y axis
	(x,y,z) = (p2->y-p1->y, p2->u-p1->u, p2->v-p1->v) X (p3->y-p1->y, p3->u-p1->u, p3->v-p1->v)
	if (x != 0) {
		p1->Ty += -y/x;  p1->By += -z/x;
		p2->Ty += -y/x;  p2->By += -z/x;
		p3->Ty += -y/x;  p3->By += -z/x;
	}
	//Same along the z axis
	(x,y,z) = (p2->z-p1->z, p2->u-p1->u, p2->v-p1->v) X (p3->z-p1->z, p3->u-p1->u, p3->v-p1->v)
	if (x != 0) {
		p1->Tz += -y/x;  p1->Bz += -z/x;
		p2->Tz += -y/x;  p2->Bz += -z/x;
		p3->Tz += -y/x;  p3->Bz += -z/x;
	}
}
For each vertex {
	newN = T X B
	//Check the direction of newN against a classic normal computation:
	//sometimes it needs to be reversed
	if (newN . oldN > 0)
		N = newN
	else
		N = -newN
}

Once you have your three vectors (T, B and N), all required vectors must be converted into this basis. For this you use three dot products:
"." being the dot product
L = (Lo . T , Lo . B , Lo . N) where Lo is the light vector in object space.
N = (No . T , No . B , No . N) where No is the normal vector in object space.
N = (0, 0, 1) in most cases, since N == No (unless you alter it with bump mapping).

Now you need to convert the vectors into colors. First normalize the vector. Then:

R = 127 + x*127
G = 127 + y*127
B = 127 + z*127


On one hand, we can use the basic OpenGL lighting. The problem is that it computes lighting only at each vertex, not at each pixel. The lighting is then interpolated across the triangle between the vertices. This is just basic Gouraud shading.

Let's have some more fun. We will implement per pixel lighting. We can use just the basic diffuse lighting, or the whole Phong equation, as you want; it just requires some more passes to add the different terms of the equation. Now, how do we compute this damn per pixel dot product?

Fortunately, we have an OpenGL extension to help us: the DOT3 texture environment extension (GL_ARB_texture_env_dot3), which computes a per pixel dot product. We store the normal vector(s) in a texture called a normal map, and we store the light vector in the color of the vertex. So the normal map (through the texture coordinates) gives us the proper per pixel normal (including the bump effect if you want), and OpenGL interpolates the light vector when interpolating the color across a triangle.

Now let's set up DOT3. I hope you know how to use multitexturing. Let's set up a texture unit to compute DOT3 between the texel color (the normal map) and the primary color (the interpolated light vector):
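The original setup code appears to be missing here. A sketch of what it would look like with the GL_ARB_texture_env_combine and GL_ARB_texture_env_dot3 combiners (the normalMap texture handle is illustrative):

```c
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, normalMap);
glEnable(GL_TEXTURE_2D);

/* Combine mode: dot product of the texel (the per pixel normal) and
 * the primary color (the interpolated light vector). DOT3 expands
 * both operands from [0,255] back to [-1,1] before the dot product. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
```

This is a state-setup fragment: it needs a valid OpenGL context and the two extensions to be present at runtime.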

That's all, folks!

Where do we get the normal map?

Simple choice:

email : Sly