Page 4: Lighting in Shaders
CS559 Spring 2021 Sample Solution, Workbook 11
Written by CS559 course staff
You can try out the example solutions here. The interesting code is on page 6, page 7 and page 9.
The first thing we usually need to do in a shader is compute lighting. The simple shaders from page 2 didn’t have lighting (so all sides of the cube looked the same).
We discussed the equations for a simple lighting model (Phong) in class. You can find the shader code for this all over the web and even in some of the required readings.
If you recall, in order to compute lighting at a point, we need to know:
1. The local geometry (mainly the normal vector; we usually don’t need the position)
2. Information about the surface properties (such as its color)
3. Information about the lights (color, intensity, direction)
4. Information about the camera (so we have the eye direction for specular computations)
The geometry (#1) is different for every point, so we’ll need to pass it to the shader as a varying variable.
Information about the surface is constant for the object; it goes into uniform variables. We could pass per-vertex colors, or do a texture lookup (in which case the texture is a uniform, but we’ll get to that later).
Information about the lights is constant for the scene, we can either pass it as a uniform variable, or hard code it into the shaders.
Observe that we are performing the lighting calculation (computing the color) in the fragment shader, which means we are doing it per-pixel (or per-fragment). This means a lot of lighting calculations. That’s OK, because the graphics hardware is fast. However, we could have computed the lighting per-vertex, which would have given a color per-vertex. That color would be interpolated to give colors for each pixel. In per-pixel (or per-fragment) lighting we interpolate the normal vector, and compute the color at each pixel. We also might have per-vertex colors that we interpolate and use as part of the lighting calculation.
Simple Lighting
Let’s try a simple example. We’ll make a purely diffuse surface lit by a single directional light source. The lighting equation is:
$$c = c_d * (\hat{n} \cdot \hat{l}) * l_d$$
where $c$ is the resulting light color, $c_d$ is the surface color, $l_d$ is the light color, $\hat{n}$ is the unit normal vector, and $\hat{l}$ is the unit light vector (the direction the light comes from).
This is quite simple in code. To make it even simpler, I will assume that $l_d$ is white.
In the vertex shader, we can do everything as we have been, except that now we have to pass the normal vector. There is one catch: the normal vectors are in the object’s local coordinate system. Just as we transform the object’s positions by the “model” matrix to get them into “world” coordinates, we need to provide a similar transformation to the normals. It turns out that if you transform an object by a matrix M, you have to transform its normals by a different matrix N (which is the adjoint, or inverse-transpose, of M). The math for this is discussed in Section 6.2.2 of Fundamentals of Computer Graphics. THREE provides the normal matrices for us.
So, when we transform the vertex to get its final position, we also transform the normals using the normalMatrix that THREE gives us. There is one slight catch: notice that we transform the position by modelViewMatrix because we need to know where the vertex is going to end up in view coordinates (we need both the modeling matrix and the viewing matrix). The normalMatrix in THREE is similar: it tells us what direction the normal will be pointing in view (not world) coordinates. This is documented on the WebGLProgram page.
So, our vertex program (which is in shaders/1104.vs, with comments) looks like:
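The course file itself isn’t reproduced here, but based on the description above, a minimal sketch of what shaders/1104.vs does (using THREE’s built-in attributes and uniforms) would be:

```glsl
// varying: passes the view-space normal to the fragment shader
varying vec3 v_normal;

void main() {
    // transform the normal by THREE's normalMatrix so it is correct
    // in view coordinates (note: no projection is applied to it)
    v_normal = normalMatrix * normal;

    // the usual position transform: model-view, then projection
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```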


Again, notice how we need to declare a varying variable, and that we have to compute the transformed normal (that is, transformed the same way the object is). Also notice that the normal is not transformed by the projection: we don’t want the lighting affected by perspective.
The action happens in the fragment shader (shaders/110401.fs), which computes the lighting equation.
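Again, the file isn’t shown here; this is a sketch of what shaders/110401.fs computes (the specific light direction and surface color values are assumptions, not the actual file’s contents):

```glsl
// interpolated (view-space) normal from the vertex shader
varying vec3 v_normal;

// constants: light direction and surface color
// (the light color l_d is assumed white, so it drops out of the equation)
const vec3 lightDir = vec3(0.0, 0.0, 1.0);
const vec3 baseColor = vec3(1.0, 0.8, 0.4);

void main() {
    // renormalize: interpolation does not preserve unit length
    vec3 nhat = normalize(v_normal);

    // the diffuse term; abs() gives two-sided lighting
    float brightness = abs(dot(nhat, lightDir));

    gl_FragColor = vec4(brightness * baseColor, 1.0);
}
```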


Let’s discuss this part by part.
First, we declare some “global” variables. We declare the varying vector v_normal to receive normal information from the vertex shader. (Note that we choose to omit v_position, which isn’t used by this fragment shader.) We also declare two constants, the light direction vector lightDir and the surface color baseColor; these correspond to $\hat{l}$ and $c_d$ in the equation.
In the shader itself, the first thing we do is compute nhat (which is $\hat{n}$). We need to renormalize the vector: because the fragment normal is computed by linear interpolation of the vertex normals, it may no longer be unit length (even if the vertex normals were unit length).
Then we compute the dot product, just as in the equation. One slight deviation: we take the absolute value of this, so if the normal is facing inward we still get the same lighting. This makes sure things work for two-sided lighting.
Finally, we use this brightness amount to change the color.
There is a hidden trick here: the normal vector is in the view (or camera coordinate) system. The z-axis is perpendicular to the image plane (basically, pointing towards the camera). If you look at the results, you’ll see it as if the light is where the camera is. Notice how the light on the sphere is brightest at the part that points towards the camera. You should also notice that although this is diffuse lighting, it changes as the camera moves (because the light is moving with the camera).
110401.js (110401.html) is similar to the previous examples, but make sure you understand the shaders shaders/1104.vs and shaders/110401.fs before going on.
Light Parameters and Camera Coordinates
Usually, we like to think about lights in “world coordinates”, not coordinates that move with the camera. So the previous example is inconvenient: the light was attached to the camera. If we wanted to have the light defined in the world (for example, coming from straight above, (0,1,0), as if it were the sun at noon, or a light in the ceiling), we’re stuck.
It turns out this is a common problem. In many graphics systems, there is no notion of “world coordinates”; there are just camera coordinates. All other coordinate systems are up to the programmer. The fact that we have “world coordinates” is our own convention.
There are a few things we could do; here are two general approaches:
1. We could compute the normals in world coordinates. Unfortunately, while THREE gives us normalMatrix, which is the adjoint of the modelViewMatrix, it has no equivalent predefined uniform for the adjoint of the modelMatrix. We would have to compute it ourselves and make our own uniform variable.
2. We could transform the lights into view coordinates by transforming them by the viewing matrix. This is actually what THREE (and most graphics systems) do.
Let’s try both approaches and make a light from vertically above (with the same diffuse material).
In 110402a.js (110402a.html), we’ll try approach #2 first: transforming the lights. The simplest thing to do would be to apply the view transformation in the fragment shader, rewriting it as:
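A sketch of that rewrite (assuming the same v_normal varying and surface color as before; THREE provides viewMatrix as a built-in uniform in the fragment shader):

```glsl
varying vec3 v_normal;

// light direction given in WORLD coordinates: straight above
const vec3 lightDirWorld = vec3(0.0, 1.0, 0.0);
const vec3 baseColor = vec3(1.0, 0.8, 0.4); // assumed surface color

void main() {
    vec3 nhat = normalize(v_normal);

    // transform the light direction into view coordinates;
    // w = 0 so the translation part of the view matrix is ignored
    vec3 lightDir = normalize((viewMatrix * vec4(lightDirWorld, 0.0)).xyz);

    float brightness = abs(dot(nhat, lightDir));
    gl_FragColor = vec4(brightness * baseColor, 1.0);
}
```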


This works (note how the light comes from above, so the square is dark):
Notice that because I am doing “two-sided” lighting (with that abs), the light comes both from above and below (the top and bottom of the sphere are lit).
The downside is that this is really inefficient. We are doing a matrix multiply to change the light direction once for every fragment. That’s a lot of work that we don’t need to be doing. We could have transformed the light direction once and made it a uniform.
The alternative would be to make the light direction a uniform variable. The problem with this is that when we create uniform variables, we don’t know what the camera will be (or have the view matrix). For THREE’s built-in lights, this is implemented in the render loop, so that the appropriate light directions are computed just before rendering, when the view matrix is known. THREE provides mechanisms for performing these kinds of “pre-rendering” computations, but we won’t discuss them. An in-between hack would be to perform the multiplication in the vertex shader, so it happens 3 times per triangle (rather than for each pixel).
We could use a similar strategy to define our own “model matrix adjoint” uniform; we would need to recompute it every time the model matrix changed. Again, THREE has ways to do this, but we aren’t going to take time to learn about them.
But here’s a hack you can use: usually, the modeling matrices are just rotations, translations, and uniform scales. For the normals, we can ignore the translation. For the rotation, remember that (1) the adjoint is the inverse transpose and (2) the transpose of a rotation is its inverse. So, for rotations, the adjoint is the matrix itself. The only issue is the uniform scale, which does change the length of the vectors; but since we have to normalize them anyway, that doesn’t matter.
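Using that hack, a vertex shader can transform the normal with the upper-left 3x3 of the model matrix directly. This is a sketch of the idea (the actual shaders/110402b.vs may differ; GLSL ES 1.00 can’t construct a mat3 from a mat4, so we build it from the columns):

```glsl
varying vec3 v_normal;

void main() {
    // use the upper-left 3x3 of the model matrix as the normal transform;
    // valid when the model matrix is only rotations, translations, and
    // uniform scales (the scale is undone by normalize() in the fragment shader)
    mat3 normalModelMatrix = mat3(modelMatrix[0].xyz,
                                  modelMatrix[1].xyz,
                                  modelMatrix[2].xyz);
    v_normal = normalModelMatrix * normal;

    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```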
You can look at 110402b.html (and its associated shaders/110402b.vs and shaders/110402b.fs) to see that the code is different, but the result looks the same.
Actually, to make sure it’s different, edit shaders/110402b.fs: (1) change the direction of the lighting so the square isn’t just dark, and (2) change the lighting equation so it is “1-sided” (so only the side of the square and sphere that faces the light will be lit; the back will be dark). Because lighting is in world space, you can move the camera to the back side to check. (There are points for this.)
Specular Lighting
Specular lighting is a little trickier: we need to account for the camera. Once again, THREE provides the camera position. But it is even easier than that: in view coordinates, the camera is at the origin, so we know where it is! Computing the view direction is easy.
We were going to ask you to write this yourself; the equation is in the lectures and readings. But one of your very kind TAs wrote it for you. We changed it to do everything in view coordinates.
The shaders are in shaders/1104.vs and shaders/110403.fs; the box is set up by 110403.js (110403.html).
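The core of a specular term in view coordinates looks something like this sketch (the names, light direction, and constants are assumptions, not the contents of shaders/110403.fs; it also assumes the vertex shader passes the view-space position as v_position):

```glsl
varying vec3 v_normal;
varying vec3 v_position; // fragment position in view coordinates

const vec3 lightDir = vec3(0.0, 1.0, 0.0);  // assumed light direction
const vec3 specColor = vec3(1.0, 1.0, 0.6); // assumed specular color
const float shininess = 20.0;               // assumed specular exponent

void main() {
    vec3 nhat = normalize(v_normal);

    // in view coordinates the camera sits at the origin,
    // so the view direction is just the negated position
    vec3 viewDir = normalize(-v_position);

    // reflect() expects the incoming light direction, hence the negation
    vec3 r = reflect(-lightDir, nhat);
    float spec = pow(max(dot(r, viewDir), 0.0), shininess);

    gl_FragColor = vec4(spec * specColor, 1.0);
}
```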
But this only has specular! The object is generally dark. Make two simple changes to shaders/110403.fs to show that you understand it:

Add some diffuse lighting, so the object has a specular (highlight) and general diffuse lighting. Remember, you add these together. Be sure to “clamp” the total color so it doesn’t exceed 1.0.

Change the specular color to white  so the object has yellow diffuse reflection, and white specular. This will make it look more like plastic than metal.
Then, in 110403.txt, explain how we know that this is correct (1 sentence is OK): tell us what a mix of specular and diffuse should look like, and how we can tell it really is combining the two.
Using THREE’s Lights
Of course, to really do things correctly and make them blend into our scenes, we should use the lights that are defined in the THREE scene so our objects using our shaders have the same lighting as those using THREE’s shaders.
Doing this requires:
- Setting up uniforms that receive information about THREE’s lights. Fortunately, THREE will set this up for us. We just need to use some poorly documented parts of THREE (the UniformsLib).
- In our shaders, we need to declare all the uniforms that THREE provides.
- In our shaders, we need to loop over all of the lights and sum up their contributions.
- When we create the material, we need to turn lights on.
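In code, the material setup looks roughly like this (a sketch against the THREE API; the shader source strings and the extra baseColor uniform are placeholders, not part of the course code):

```javascript
// sketch: a ShaderMaterial wired up to THREE's light uniforms
const material = new THREE.ShaderMaterial({
    // merge THREE's predefined light uniforms with our own
    uniforms: THREE.UniformsUtils.merge([
        THREE.UniformsLib.lights,
        { baseColor: { value: new THREE.Color(0xffcc66) } }
    ]),
    vertexShader: vertexShaderSource,     // must declare THREE's light uniforms
    fragmentShader: fragmentShaderSource, // and loop over the lights
    lights: true                          // tell THREE to fill in the light info
});
```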
The upside is that THREE gives lighting information in view space, so the issues discussed above are taken care of.
You can see an example in the Framework Demos (look at Shader Test 9).
Things get even trickier if we want to do shadows.
We will not require you to figure out how to use THREE’s lights in a shader; it will be sufficient for the exercises (future pages) to make a simple directional light source in camera coordinates. However, you can make your shaders work with THREE’s lights for bonus points.
Summary: Lighting in Shaders
Short version: we’ll let THREE take care of it. We might want to do a little simple lighting to add to our more interesting shaders (next).
Next, on Procedural Textures, we’ll try something more interesting.