Page 4: Lighting in Shaders

CS559 Spring 2023 Sample Solution

The first thing we usually need to do in a shader is compute lighting. The simple shaders from page 2 didn’t have lighting (so all sides of the cube looked the same).

We discussed (or will discuss) the equations for a simple lighting model (Phong) in class. You can find the shader code for this all over the web and even in some of the required readings.

If you recall, in order to compute lighting at a point, we need to know:

  1. The local geometry (mainly the normal vector - we usually don’t need the position)
  2. Information about the surface properties (such as its color)
  3. Information about the lights (color, intensity, direction)
  4. Information about the camera (so we have the eye direction for specular computations)

The geometry (#1) is different for every point - we’ll need to pass it to the shader as a varying variable.

Information about the surface is constant for the object, so it goes into uniform variables. We could pass per-vertex colors, or do a texture lookup (in which case the texture is a uniform - but we'll get to that later).

Information about the lights is constant for the scene; we can either pass it as a uniform variable or hard-code it into the shaders.
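In GLSL, these choices show up as declarations. Here is a minimal sketch of a fragment shader with one of each (the u_ variable names are made up for illustration; the real shaders below hard-code the surface and light as constants instead):

varying vec3 v_normal;        // per-vertex geometry (#1), interpolated across each triangle

uniform vec3 u_surfaceColor;  // per-object surface information (#2), set from JavaScript
uniform vec3 u_lightDir;      // per-scene light information (#3), set from JavaScript

void main() {
    // placeholder body so this compiles; the real lighting math comes below
    gl_FragColor = vec4(u_surfaceColor * abs(dot(normalize(v_normal), u_lightDir)), 1);
}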

Observe that we are performing the lighting calculation (computing the color) in the fragment shader, which means we are doing it per pixel (or fragment). This means a lot of lighting calculations. That’s OK, because the graphics hardware is fast. However, we could have computed the lighting per-vertex, which would have given a color per-vertex. That color would be interpolated to give colors for each pixel. In per-pixel (or per-fragment) lighting we interpolate the normal vector, and compute the color on each pixel. We also might have per-vertex colors that we interpolate and use as part of the lighting calculation.
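For contrast, here is a sketch of what a per-vertex (Gouraud) version of the simple diffuse lighting developed below could look like. The color is computed in the vertex shader and passed in a varying (v_color is a made-up name); the matching fragment shader would just output the interpolated v_color:

varying vec3 v_color;

const vec3 lightDir = vec3(0, 0, 1);     // a view-space light (assumed)
const vec3 baseColor = vec3(1, .8, .4);

void main() {
    vec4 pos = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * pos;

    // lighting is computed once per vertex; the rasterizer then
    // interpolates v_color across the triangle
    vec3 nhat = normalize(normalMatrix * normal);
    v_color = abs(dot(nhat, lightDir)) * baseColor;
}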

Simple Lighting

Let’s try a simple example. We’ll make a purely diffuse surface lit by a single directional light source. The lighting equation is:

$$c = c_d \, (\hat{n} \cdot \hat{l}) \, l_d$$

Where c is the resulting light color, $c_d$ is the surface color, $l_d$ is the light color, $\hat{n}$ is the unit normal vector, and $\hat{l}$ is the unit light vector (the direction the light comes from).

This is quite simple in code. To make it even simpler, I will assume that $l_d$ is white.
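Since white light means $l_d = (1,1,1)$, the equation reduces to:

$$c = c_d \, (\hat{n} \cdot \hat{l})$$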

In the vertex shader, we can do everything as we have been, except that now we have to pass the normal vector. There is one catch: the normal vectors are in the object's local coordinate system. Just as we transform the object's positions by the "model" matrix to get them into "world" coordinates, we need a similar transformation for the normals. It turns out that if you transform an object by a matrix M, you have to transform its normals by a different matrix N (the adjoint, or inverse-transpose, of M). The math for this is discussed in Section 6.2.2 of Fundamentals of Computer Graphics. THREE provides the normal matrices for us.
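Briefly, here is why the inverse-transpose appears: a normal $n$ is perpendicular to every tangent $t$ on the surface ($n^T t = 0$). Tangents transform by $M$, so the transformed normal $n'$ must be perpendicular to $Mt$, and $n' = (M^{-1})^T n$ is:

$$n'^{\,T}(Mt) = \left((M^{-1})^{T} n\right)^{T} M t = n^{T} M^{-1} M t = n^{T} t = 0$$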

So, when we transform the vertex to get its final position, we also transform the normals using the normalMatrix that THREE gives us. There is one slight catch: notice that we transform the position by modelViewMatrix because we need to know where the vertex is going to end up in view coordinates (we need both the modeling matrix and the viewing matrix). The normalMatrix in THREE is similar: it tells us what direction the normal will be pointing in view (not world) coordinates. This is documented on the WebGLProgram page.

So, our vertex program (which is in shaders/10-04.vs - with comments) looks like:

varying vec3 v_normal;
varying vec3 v_position;

void main() {
    // compute the position in view space
    vec4 pos = (modelViewMatrix * vec4(position,1.0));
    
    // the main output of the shader (the vertex position)
    gl_Position = projectionMatrix * pos;
    
    // pass position to fragment shader
    v_position = pos.xyz;
    
    // compute the view-space normal and pass it to fragment shader
    v_normal = normalMatrix * normal;
}

Again, notice how we need to declare a varying variable, and that we have to compute the transformed normal (transformed the same way the object is). Also notice that the normal is not transformed by the projection: we don't want the lighting affected by perspective.

The action happens in the fragment shader (shaders/10-04-01.fs), which computes the lighting equation.

varying vec3 v_normal;

// note that this is in VIEW COORDINATES
const vec3 lightDir = vec3(0,0,1);
const vec3 baseColor = vec3(1,.8,.4);

void main()
{
    // we need to renormalize the normal since it was interpolated
    vec3 nhat = normalize(v_normal);

    // deal with two sided lighting
    // light comes from above and below (use clamp rather than abs to get one sided)
    float light = abs(dot(nhat, lightDir));

    // brighten the base color
    gl_FragColor = vec4(light * baseColor,1);
}

Let’s discuss this part by part.

First, we declare some “global” variables. We declare the varying vector v_normal to receive normal information from the vertex shader. (Note that we choose to omit v_position, which isn’t used by this fragment shader.) We also declare two constants, the light direction vector lightDir and the surface color baseColor - these correspond to $\hat{l}$ and $c_d$ in the equation.

In the shader itself, the first thing we do is compute nhat (which is $\hat{n}$). We need to renormalize the vector: because the fragment normal is computed by linear interpolation of the vertex normals, it may no longer be unit length (even if the vertex normals were). For example, halfway between the unit vectors (1,0,0) and (0,1,0), the interpolated vector is (.5,.5,0), which has length of only about 0.71.

Then we compute the dot product - just as in the equation. One slight deviation: we take the absolute value of this, so if the normal is facing inward we still get the same lighting. This makes sure things work for two-sided lighting.
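As the comment in the code suggests, a one-sided version would clamp instead of taking the absolute value, so inward-facing normals get no light:

    // one-sided variant: surfaces facing away from the light go black
    float light = clamp(dot(nhat, lightDir), 0.0, 1.0);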

Finally, we use this brightness amount to change the color.

There is a hidden trick here: the normal vector is in the view (or camera coordinate) system. The z-axis is perpendicular to the image plane (basically, pointing towards the camera). If you look at the results, you’ll see it as if the light is where the camera is. Notice how the light on the sphere is brightest at the part that points towards the camera. You should also notice that although this is diffuse lighting, it changes as the camera moves (because the light is moving with the camera).

10-04-01.js (10-04-01.html) is similar to the previous examples, but make sure you understand the shaders shaders/10-04.vs and shaders/10-04-01.fs before going on.

Light Parameters and Camera Coordinates

Usually, we like to think about lights in "world coordinates", not in coordinates that move with the camera. That makes the previous example inconvenient: the light was effectively attached to the camera. If we want the light defined in the world (for example, coming from straight above - (0,1,0) - as if it were the sun at noon, or a light in the ceiling), we're stuck.

It turns out this is a common problem. In many graphics systems, there is no notion of the “world coordinates” - there are just camera coordinates. All other coordinate systems are up to the programmer. The fact that we have “world coordinates” is our own convention.

There are a few things we could do; here are two general approaches:

  1. We could compute the normals in world coordinates. Unfortunately, while THREE gives us normalMatrix which is the adjoint of the modelViewMatrix, it has no equivalent pre-defined uniform for the adjoint of the modelMatrix. We have to compute it ourselves, and make our own uniform variable.
  2. We could transform the lights into view coordinates by transforming them by the viewing matrix. This is actually what THREE (and most graphics systems) do.

Let’s try both approaches and make a light from vertically above (with the same diffuse material).

In 10-04-02a.js (10-04-02a.html), we'll try approach #2 first: transforming the lights. The simplest thing to do would be to apply the view transformation in the fragment shader, re-writing it as:

varying vec3 v_normal;

// note that this is in WORLD COORDINATES
const vec3 lightDirWorld = vec3(0,1,0);
const vec3 baseColor = vec3(1,.8,.4);

void main()
{
    // we need to renormalize the normal since it was interpolated
    vec3 nhat = normalize(v_normal);

    // get the lighting vector in the view coordinates
    // warning: this is REALLY wasteful!
    vec3 lightDir = normalize(viewMatrix * vec4(lightDirWorld, 0)).xyz;

    // deal with two sided lighting
    float light = abs(dot(nhat, lightDir));

    // brighten the base color
    gl_FragColor = vec4(light * baseColor,1);
}

This works (note how the light comes from above, so the sides of the cube are dark).

Notice that because I am doing “two sided” lighting (with that abs), the light comes both from above and below (the top and bottom of the objects are lit).

The downside is that this is really inefficient. We are doing a matrix multiply to change the light direction once for every fragment. That's a lot of work that we don't need to be doing. We could have transformed the light direction once and made it a uniform.

The alternative would be to make the light direction a uniform variable. The problem with this is that when we create uniform variables, we don't yet know what the camera will be (or have the view matrix). For THREE's built-in lights, this is implemented in the render loop: the appropriate light directions are computed just before rendering, when the view matrix is known. THREE provides mechanisms for performing these kinds of "pre-rendering" computations, but we won't discuss them. An in-between hack would be to perform the multiplication in the vertex shader, so it happens once per vertex (rather than once for each pixel).
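Here is a sketch of that in-between hack: the vertex shader transforms the light direction and passes it along (v_lightDir is a made-up name). The fragment shader would then use normalize(v_lightDir) in place of a constant lightDir, since the interpolated vector may not be unit length:

varying vec3 v_normal;
varying vec3 v_lightDir;

// note that this is in WORLD COORDINATES
const vec3 lightDirWorld = vec3(0, 1, 0);

void main() {
    vec4 pos = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * pos;
    v_normal = normalMatrix * normal;

    // transform the light once per vertex rather than once per fragment;
    // w = 0 because this is a direction, so any translation is ignored
    v_lightDir = (viewMatrix * vec4(lightDirWorld, 0.0)).xyz;
}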

We could use a similar strategy to define our own "model matrix adjoint" uniform; we would need to recompute it every time the model matrix changed. Again, THREE has ways to do this, but we aren't going to take time to learn about them.

But here's a hack you can use: usually, the modeling matrices are just rotations, translations, and uniform scales. For the normals, we can ignore the translation. For the rotation, remember that (1) the adjoint is the inverse-transpose and (2) the transpose of a rotation is its inverse - so, for rotations, the adjoint is the matrix itself. The only remaining issue is the uniform scale, which does change the length of the vectors; but since we have to normalize them anyway, that doesn't matter.
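With that hack, approach #1 needs no extra uniform at all: the vertex shader can transform the normal by the model matrix directly. A sketch, valid only under the rotation/translation/uniform-scale assumption above:

varying vec3 v_normal;    // now in WORLD coordinates

void main() {
    vec4 pos = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * pos;

    // for rotations (plus translations and uniform scales) the adjoint is,
    // up to scale, the model matrix itself; w = 0 drops the translation,
    // and the fragment shader's normalize washes out any uniform scale
    v_normal = (modelMatrix * vec4(normal, 0.0)).xyz;
}

A fragment shader can then normalize v_normal and dot it directly with a light direction given in world coordinates.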

You can look at 10-04-02b.html (and its associated shaders/10-04-02b.vs and shaders/10-04-02b.fs) to see how the code is different, though the result looks the same.

To make sure it really is different, edit shaders/10-04-02b.fs to (1) change the direction of the lighting so the sides of the cube aren't totally dark, and (2) change the lighting equation so it is "1-sided" (only the sphere and at most three sides of the cube - the ones facing the light - will be lit; the back will be dark). Because lighting is in world space, you can move the camera to the back side to check. (There are points for this.)

Specular Lighting

Specular lighting is a little trickier - we need to account for the camera. Once again, THREE provides the camera position. But it is even easier than that: in view coordinates, the camera is at the origin, so we know where it is! Computing the view direction is easy.

We were going to ask you to write this yourself - the equation is in the lectures and readings. But, a very kind TA from a previous year wrote it for you. We changed it to do everything in view coordinates.

The shaders are in shaders/10-04.vs and shaders/10-04-03.fs; the box is set up by 10-04-03.js (10-04-03.html).
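To give the flavor of the computation, here is a minimal sketch of Phong specular in view coordinates. This is not the actual code in shaders/10-04-03.fs - the constants (light direction, color, exponent) are made up:

varying vec3 v_normal;
varying vec3 v_position;    // view-space position, passed by shaders/10-04.vs

const vec3 lightDir = vec3(0, 0, 1);    // view-space light (assumed)
const vec3 specColor = vec3(1, .8, .4);
const float shininess = 30.0;           // made-up exponent

void main() {
    vec3 nhat = normalize(v_normal);

    // in view coordinates the camera sits at the origin,
    // so the direction toward the eye is just -position
    vec3 eyeDir = normalize(-v_position);

    // reflect the incoming light about the normal (Phong)
    vec3 r = reflect(-lightDir, nhat);
    float spec = pow(max(dot(r, eyeDir), 0.0), shininess);

    gl_FragColor = vec4(spec * specColor, 1);
}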

But this only has specular lighting! The object is generally dark. Make two simple changes to shaders/10-04-03.fs to show that you understand it:

  1. Add some diffuse lighting, so the object has a specular (highlight) and general diffuse lighting. Remember, you add these together. Be sure to “clamp” the total color so it doesn’t exceed 1.0.

  2. Change the specular color to white - so the object has yellow diffuse reflection, and white specular. This will make it look more like plastic than metal.

Then, in 10-04-03.txt, explain how we know that this is correct (1 sentence is OK) - tell us what a mix of specular and diffuse should look like, and how we can tell it really is combining the two.

Using THREE’s Lights

Of course, to do things correctly and make them blend into our scenes, we should use the lights that are defined in the THREE scene so our objects using our shaders have the same lighting as those using THREE’s shaders.

Doing this requires:

  1. Setting up uniforms that receive information about THREE’s lights. Fortunately, THREE will set this up for us. We just need to use some poorly documented parts of THREE (the UniformsLib).
  2. In our shaders, we need to declare all the uniforms that THREE provides.
  3. In our shaders, we need to loop over all of the lights and sum up their contributions (see the sketch after this list).
  4. When we create the material we need to turn the lights on.
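For directional lights, steps 2 and 3 look roughly like the sketch below. This follows THREE's UniformsLib/ShaderChunk conventions as best I know them - the exact struct and uniform names can differ between THREE versions, so check the THREE source (or the demo) rather than trusting this:

// THREE defines NUM_DIR_LIGHTS when the material has lights turned on
#if NUM_DIR_LIGHTS > 0
struct DirectionalLight {
    vec3 direction;    // already in VIEW coordinates
    vec3 color;
};
uniform DirectionalLight directionalLights[NUM_DIR_LIGHTS];
#endif

varying vec3 v_normal;
const vec3 baseColor = vec3(1, .8, .4);

void main() {
    vec3 nhat = normalize(v_normal);
    vec3 light = vec3(0.0);

#if NUM_DIR_LIGHTS > 0
    // sum the (one-sided) diffuse contribution of every directional light
    for (int i = 0; i < NUM_DIR_LIGHTS; i++) {
        light += max(dot(nhat, directionalLights[i].direction), 0.0)
               * directionalLights[i].color;
    }
#endif

    gl_FragColor = vec4(light * baseColor, 1);
}

On the JavaScript side, the uniforms come from merging THREE.UniformsLib.lights into the material's uniforms (THREE.UniformsUtils.merge is one way to do it), and the material must be created with lights: true - that is step 4.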

The upside is that THREE gives lighting information in view space, so the issues discussed above are taken care of.

You can see an example in the demos (look at Shader Test 9).

Things get even trickier if we want to do shadows.

We will not require you to figure out how to use THREE’s lights in a shader - it will be sufficient for the exercises (future pages) to make a simple directional light source in camera coordinates. However, you can make your shaders work with THREE’s lights for advanced points.

Summary: Lighting in Shaders

Short version: we’ll let THREE take care of it. We might want to do a little simple lighting to add to our more interesting shaders (next).

On Next: Procedural Textures, we'll try something more interesting.

Page 4 Rubric (5 points total)

Box 10-04-02b (1 pt): change diffuse lighting (direction and 2-sidedness)
Box 10-04-03 (2 pts): add diffuse lighting
Box 10-04-03 (1 pt): change specular to white
Box 10-04-03 (1 pt): explain how you know it is correct