Page 1: Shader Basics

CS559 Spring 2023 Sample Solution

We will discuss shaders thoroughly in class. This is meant as a reminder (or maybe a cue to review the lecture notes, or ask a question in class if there is something you don’t understand). It should also serve as a modest tutorial to review the concepts.

Learning about Shaders

There are at least 4 different things that you have to understand in order to really use shaders effectively.

  1. The basic concepts of what shaders are, and why they are the way they are. This really means understanding how the hardware works and where the shaders fit into the overall process of rendering. If you don’t understand this basic stuff, shaders won’t make sense. Unless you understand how the pieces work, the rules about writing shaders will seem arbitrary.
  2. The language that we write shaders in. In some sense, this is a detail: shading languages are very similar, and differ only slightly in their syntax and built-in functionality. For us, the language is GLSL (GLSL-ES to be precise).
  3. The ways in which the shader programs communicate with other programs. This includes the interface between shaders and the “host” program (in our case, the JavaScript code), as well as the way in which different shaders talk to each other. Appreciating these basic mechanisms is important, since the communications can be a hassle.
  4. The ways in which a specific system uses the mechanisms of #3 in order to provide convenient access to the graphics information. For example, in THREE, there need to be mechanisms to provide shaders with information about the lights in a scene.

The most important thing for class is #1 - since if you understand what is going on, everything else will fall into place. GLSL (#2) will be a practical matter, since you will want to write some shaders. Connecting with the host program (#3) isn’t that interesting - but it can be hard. Fortunately, THREE.js helps make some parts easier (#4). We’ll just use the simple parts (for example, connecting to THREE’s complex lighting system can be tricky).

In class we’ll only really talk about vertex shaders and fragment shaders (also known as pixel shaders). There are other kinds of shaders, but they are optional and not used as often. Until you master these two, the others are even less likely to make sense.

In fact, in class we’ll focus on the fragment shaders. We’ll need to write vertex shaders (since vertex and fragment shaders always appear in pairs), but most of the work we do will be on fragment shaders.

Things You Need To Know Before Starting

You need to understand the basic concepts - shader programming will not make sense without them. Also, these basic concepts will still hold if you program in some other system. And we’ll ask you about this stuff on the exam.

Be sure to read enough of the required readings to understand the model of what shaders do and why they do it.

Here are some concepts to make sure you understand:

  1. The pipeline of how triangles go from a program to the screen. Knowing where the shaders fit into this process helps explain why they do what they do - and only what they do.
  2. The concepts of a vertex shader and a fragment shader (sometimes known as a pixel shader). You should know why we always need one of each.
  3. The different types of variables used to pass information to and from shaders. Understand uniform, attribute, and varying variables, and what each is used for.
  4. The need for attribute buffers to pass information about vertices.
  5. What kinds of things you can (and cannot) do in the shaders.
  6. What a fragment is - roughly it’s a pixel (we used to call them “pixel shaders”), but “fragment” is the more correct term.

If you start by trying to learn the GLSL language, or trying to read a fragment shader, it will seem really arbitrary. These basics tell you the rules.
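To make the variable types in #3 concrete, here is a hypothetical set of GLSL-ES declarations showing the three qualifiers. The particular names (time, position, vColor) are made up for illustration:

```glsl
// Hypothetical declarations illustrating the three kinds of shader variables
// (GLSL-ES 1.0 style, as used in WebGL).

// uniform: constant over the whole draw - set by the host (JavaScript) program
uniform float time;

// attribute: per-vertex data, read from the attribute buffer
// (available only in vertex shaders)
attribute vec3 position;

// varying: written by the vertex shader, interpolated across the triangle,
// then read by the fragment shader
varying vec3 vColor;
```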

Overview: What are all the pieces?

There are a bunch of concepts - and then there are the places where they get used in a program. Here we’ll try to make a connection.

  1. Before we draw any triangles, we need to define the shader programs that will be run on the graphics hardware for every vertex (the vertex shader) and for every fragment (the fragment shader). In THREE, these shaders are defined as part of a Material, so we can have different programs for each Material. When we make a material, the compiler is run to compile the shader programs. In fact, when we change the material, the compiler may need to be run again (which is why we need to be careful about changing materials). Most THREE materials use built-in shaders, but the special ShaderMaterial allows us to provide our own programs.
  2. When we draw the triangles for an object (a set of triangles that a set of shaders will process), our host program needs to send those triangles to the graphics hardware. It does this by putting all of the information about the vertices into an attribute buffer, which is a data structure that stores the vertices in a way that makes it efficient to send to the graphics hardware. THREE takes care of this: if we define a Geometry, it builds the buffers for us; if we use a BufferGeometry we have to work with the data structures directly. The attribute buffer has all the information about our vertices: our shaders will be given the information about vertices one at a time.
  3. As the triangles are drawn, for each vertex, a vertex shader is run. The shader executions for each vertex are independent. The vertex shader is passed the attributes of the one vertex it is meant to process. This is why vertex splitting is a big deal: if a vertex is “split” (so it has different attributes for different triangles), it has to become two vertices so each one can be processed separately. The result of vertex shading is that each vertex has some new properties associated with it, including a screen-space position.
  4. The vertices are re-assembled into triangles, and the rasterizer creates a list of fragments (pixels) that the triangle “covers”. Each of these fragments is then sent to be shaded. After this stage, each fragment is processed independently. The properties of the vertices of the triangle are interpolated to provide values for each fragment.
  5. For each fragment, a fragment shader is run. The fragments are processed independently (effectively one at a time, but in parallel). The fragment shader program processes one fragment - it gets the information about the fragment as varying variables (values determined by interpolating across the vertices of the triangle that produced the fragment). The fragment shader can determine the color of the fragment, and it is allowed to change the depth. However, a fragment’s position in the image cannot be changed - that would cause it to become a different fragment!
  6. After the fragment is processed, the fragment is tested and, assuming it passes the tests, the color and depth values are written to the image. The most important test is the depth test (z-buffer), which checks that the fragment is not occluded by other fragments.
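The interpolation in steps 4 and 5 can be sketched in plain JavaScript. This is a hypothetical illustration (the function name and data are made up) - the real interpolation happens in hardware and is perspective-corrected, which this simple version ignores:

```javascript
// Sketch of how the rasterizer interpolates a varying across a triangle.
// Each vertex carries a varying value (here, an RGB color as a 3-element array).
// A fragment's value is the weighted average of the three vertex values,
// with the weights given by the fragment's barycentric coordinates.
function interpolateVarying(bary, v0, v1, v2) {
  // bary = [b0, b1, b2]: the fragment's barycentric coordinates (sum to 1)
  return v0.map((_, i) => bary[0] * v0[i] + bary[1] * v1[i] + bary[2] * v2[i]);
}

// A fragment at the centroid of the triangle gets the average of the colors.
const red = [1, 0, 0], green = [0, 1, 0], blue = [0, 0, 1];
const centroid = interpolateVarying([1 / 3, 1 / 3, 1 / 3], red, green, blue);
console.log(centroid); // each channel is 1/3
```

This also shows why varying variables must come in matched pairs: the fragment shader can only read values that the vertex shader wrote for all three vertices.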

All this has been true for all the triangles we’ve been drawing throughout this semester. The only thing that is new is that we’re going to write the shader programs in steps 3 and 5, rather than using the ones built in to THREE. In the past, you’ve used materials such as MeshStandardMaterial for which THREE has built-in shader programs. (You can try to read them if you want!)
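To connect this back to THREE, here is a minimal sketch of supplying our own programs via ShaderMaterial (this assumes THREE is loaded; the uniform name time and the shader bodies are made up for illustration):

```javascript
// Sketch: a ShaderMaterial pairs a vertex shader and a fragment shader,
// and declares the uniforms the host program will pass to them.
const material = new THREE.ShaderMaterial({
  uniforms: {
    // a hypothetical uniform we could update from JavaScript each frame
    time: { value: 0 }
  },
  vertexShader: `
    void main() {
      // THREE injects position, modelViewMatrix, and projectionMatrix for us
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform float time;
    void main() {
      gl_FragColor = vec4(abs(sin(time)), 0.5, 0.5, 1.0);
    }
  `
});
```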

So What Do Shaders Actually Do?

A vertex shader is a small program that runs on one vertex (each vertex is independent). It receives as input the attributes of that vertex and things that are constant (the same over all vertices) called uniforms. The only thing a vertex shader can do is add new attributes to the vertex. It must provide the screen-space position of the vertex. It may optionally provide other attributes (we’ll call these output attributes, or varying variables) that can be used in later stages.
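As a minimal sketch, a vertex shader might look like the following. This is written in raw GLSL-ES style, where we declare everything ourselves (in THREE’s ShaderMaterial, the standard declarations like position and the matrices are injected automatically, so there you would omit them); the varying vColor is a hypothetical output attribute:

```glsl
// Minimal vertex shader sketch: computes the required screen-space position
// and passes one output attribute (a varying) to later stages.
attribute vec3 position;       // per-vertex input from the attribute buffer
uniform mat4 modelViewMatrix;  // constant over all vertices
uniform mat4 projectionMatrix; // constant over all vertices
varying vec3 vColor;           // output attribute, interpolated later

void main() {
  // hypothetical: derive a color from the vertex position
  vColor = position * 0.5 + 0.5;
  // the one required output: where this vertex lands on screen
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```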

A fragment shader is a small program that runs on one fragment (think of this as a pixel of a triangle). Each fragment is processed independently. It receives as input the properties of the fragment. These are determined by interpolating the “output attributes” (varying variables) from the vertices. The inputs to a fragment shader are these varying variables as well as the values that are constant over the whole object (the uniform variables). The outputs of the fragment shader are more properties of the fragment. The most important one is the color (the thing that will get written to the image, if the fragment makes it through the tests).
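The matching fragment shader sketch, again in raw GLSL-ES style, assumes a varying vColor was written by the paired vertex shader (the name is our hypothetical choice - it just has to match):

```glsl
// Minimal fragment shader sketch: reads an interpolated varying and
// produces the fragment's color.
precision mediump float; // required in raw GLSL-ES (THREE adds this for you)
varying vec3 vColor;     // interpolated from the vertex shader's outputs

void main() {
  // gl_FragColor is the color written to the image
  // (if the fragment survives the depth test and the other tests)
  gl_FragColor = vec4(vColor, 1.0);
}
```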

Summary: Vertex and Fragment Shaders

Make sure that you understand what a vertex shader is and what a fragment shader is. If you don’t, re-read this page, watch the lectures again, or read one of the recommended readings.

To help you check your basic understanding, answer the following 2 questions - a sentence or two for each is OK. This is mainly for you to check you are ready to go on (but we’ll give you a few points for writing something into 10-01-01.txt).

Questions (to answer in 10-01-01.txt):

  1. What are the inputs and outputs of a vertex shader?
  2. What are the inputs and outputs of a fragment shader?

Now that we know what vertex and fragment shaders are, we can look at some simple examples on the next page: Simple Shaders.

Page 1 Rubric (2 points total)
Points (2):
Box 10-01-01
2 pt
answered basic questions (1 pt each)