📗 The color of a vertex or a pixel is the weighted sum of the following terms (a combined equation is given at the end of this list):
➩ Emissive (object produces light): \(c_{e}\).
➩ Ambient light: \(c_{a} \cdot l_{a}\).
➩ (Sum over all light sources) diffuse light: \(\left(\hat{N} \cdot \hat{L}\right) \cdot c_{l} \cdot c_{d}\).
➩ (Sum over all light sources) specular light: \(\left(\hat{E} \cdot \hat{R}\right)^{p} \cdot c_{l} \cdot c_{s}\).
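➩ Combined, one common way of writing the full model (the sum is over all light sources, and the weights are folded into the color constants): \(c = c_{e} + c_{a} \cdot l_{a} + \sum \left(\left(\hat{N} \cdot \hat{L}\right) \cdot c_{l} \cdot c_{d} + \left(\hat{E} \cdot \hat{R}\right)^{p} \cdot c_{l} \cdot c_{s}\right)\).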
📗 List of vector notations:
➩ \(\hat{N}\): unit normal vector (perpendicular to the surface or fake normal).
➩ \(\hat{L}\): unit direction of light.
➩ \(\hat{E}\): unit direction of eye (or camera).
➩ \(\hat{R}\): unit direction of the perfect mirror reflection of the light (or \(2 \left(\hat{L} \cdot \hat{N}\right) \hat{N} - \hat{L}\)).
📗 Note: another way to write the specular term replaces \(\hat{E} \cdot \hat{R}\) with \(\hat{N} \cdot \hat{H}\) (used in Professor Gleicher's lecture and workbooks; the two give similar, but not identical, highlights), where \(\hat{H} = \dfrac{\hat{L} + \hat{E}}{\left\|\hat{L} + \hat{E}\right\|}\) is the half-way vector between the light direction and the eye direction.
📗 \(\hat{N} \cdot \hat{L} = \left\|\hat{N}\right\| \left\|\hat{L}\right\| \cos\left(\theta\right) = \cos\left(\theta\right)\) since \(\hat{N}, \hat{L}\) are unit vectors, where \(\theta\) is the angle between vectors \(\hat{N}, \hat{L}\).
📗 Lambert's cosine law says observed light intensity is proportional to cosine of the angle between the light direction and surface normal. Therefore, diffuse light intensity is \(\cos\left(\theta\right) = \hat{N} \cdot \hat{L}\): Link.
📗 \(\left(\hat{N} \cdot \hat{L}\right) \hat{N} = \dfrac{\hat{N} \cdot \hat{L}}{\hat{N} \cdot \hat{N}} \hat{N} = proj_{\hat{N}} \hat{L}\), the projection of \(\hat{L}\) onto \(\hat{N}\), since \(\hat{N}\) is a unit vector.
📗 The Phong model approximates the specular light intensity by a power of the cosine of the angle between the reflected light direction and the eye direction, \(\cos\left(\theta\right)^{p} = \left(\hat{R} \cdot \hat{E}\right)^{p}\), where \(\theta\) is now the angle between \(\hat{R}\) and \(\hat{E}\); the power \(p\) is called the Phong coefficient or shininess coefficient: Link.
📗 Get the position and normal from attributes in the vertex shader, transform them into camera space (modelViewMatrix * position and normalMatrix * normal), and copy them to varyings, as in the sketch below.
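📗 A minimal vertex shader sketch, assuming a THREE.js ShaderMaterial (which declares the attributes position and normal and the uniforms modelViewMatrix, normalMatrix, and projectionMatrix automatically); the varying names v_position and v_normal are illustrative:

    // Pass camera-space position and normal to the fragment shader.
    varying vec3 v_position;
    varying vec3 v_normal;
    void main() {
        vec4 p = modelViewMatrix * vec4(position, 1.0); // camera-space position
        v_position = p.xyz;
        v_normal = normalMatrix * normal;               // camera-space normal
        gl_Position = projectionMatrix * p;
    }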
📗 Use interpolated varying position p and normal n in the fragment shader.
📗 Light position l and colors are uniform variables in the fragment shader.
📗 Given these variables:
➩ \(\hat{N}\) is N = normalize(n) (need to normalize again after interpolation).
➩ \(\hat{L}\) is L = normalize(l - p) (vector from the surface position to the light).
➩ \(\hat{E}\) is E = normalize(-1.0 * p) (since camera is at the origin).
➩ \(\hat{R}\) is R = normalize(reflect(-L, N)) (reflect \(\hat{L}\) on the surface with normal \(\hat{N}\)): Doc.
📗 Usually, the cosine angles are clamped between 0 and 1: clamp(dot(N, L), 0.0, 1.0) and clamp(dot(E, R), 0.0, 1.0).
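📗 A minimal fragment shader sketch of the model above, assuming the same THREE.js ShaderMaterial setup (which supplies the precision declaration) and assuming the light position and colors are passed in as uniforms; the uniform names here are illustrative, not Professor Gleicher's:

    varying vec3 v_position;
    varying vec3 v_normal;
    uniform vec3 u_light_position;   // l, assumed to be given in camera space
    uniform vec3 u_light_color;      // c_l
    uniform vec3 u_emissive_color;   // c_e
    uniform vec3 u_ambient_color;    // c_a * l_a, folded into one constant
    uniform vec3 u_diffuse_color;    // c_d
    uniform vec3 u_specular_color;   // c_s
    uniform float u_shininess;       // p
    void main() {
        vec3 N = normalize(v_normal);                      // re-normalize after interpolation
        vec3 L = normalize(u_light_position - v_position);
        vec3 E = normalize(-1.0 * v_position);             // camera is at the origin
        vec3 R = normalize(reflect(-L, N));
        float diffuse = clamp(dot(N, L), 0.0, 1.0);
        float specular = pow(clamp(dot(E, R), 0.0, 1.0), u_shininess);
        vec3 color = u_emissive_color + u_ambient_color
                   + diffuse * u_light_color * u_diffuse_color
                   + specular * u_light_color * u_specular_color;
        gl_FragColor = vec4(color, 1.0);
    }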
📗 Professor Gleicher's implementation of Phong model: Link.
📗 Write different functions mapping \(x\) from -1 to 1 and \(y\) from -1 to 1 to values between 0 and 1 so that they have the same value when sampled on a 3 pixel by 3 pixel grid.
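➩ One possible pair, under the assumption that the samples are taken at \(x, y \in \left\{-1, 0, 1\right\}\): \(f_{1}\left(x, y\right) = \dfrac{x + 1}{2}\) and \(f_{2}\left(x, y\right) = \dfrac{x + 1}{2} + \dfrac{1}{4} \sin\left(\pi x\right) \sin\left(\pi y\right)\) (clamped to \(\left[0, 1\right]\)) agree at every sample point, since \(\sin\left(\pi x\right) = 0\) for \(x \in \left\{-1, 0, 1\right\}\), yet differ between samples, so the coarse grid cannot tell them apart (aliasing).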
📗 The problem of aliasing cannot be fully fixed: when it is not possible to have more pixels, using the existing pixels better is called anti-aliasing.
📗 Aliasing cannot be fixed after it has already happened (the points have already been sampled). Filtering is about (consistently) getting rid of potential problems before they happen.
📗 In general, sharp edges (jaggies) and small points (lost between pixels) are bad: the solution is to make them bigger and make them smooth (or blur them).
📗 Anti-aliasing considers blurry to be better than wrong.
📗 (1) Super-sampling (sampling at several points within each pixel); see the sketch after this list.
➩ Sample more points within a pixel, for example, five points at the center (default), \(\dfrac{1}{4}\) pixel up, \(\dfrac{1}{4}\) pixel down, \(\dfrac{1}{4}\) pixel to the left, and \(\dfrac{1}{4}\) pixel to the right.
➩ To figure out the value of some variable a fraction of a pixel up, down, left, or right, use the derivative functions dFdx and dFdy (the change in the variable per pixel in x and y) and add the scaled change to the current value: Doc and Doc.
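📗 A sketch of the five-point idea as a standalone WebGL 1 (GLSL ES 1.00) fragment shader; the varying v_coord and the pattern f are illustrative placeholders:

    #extension GL_OES_standard_derivatives : enable  // dFdx / dFdy need this extension in WebGL 1
    precision mediump float;
    varying float v_coord;   // some quantity varying across the surface (illustrative)
    // Placeholder pattern: stripes made with a step function (prone to aliasing).
    float f(float v) { return step(0.5, fract(v)); }
    void main() {
        float dx = 0.25 * dFdx(v_coord);   // change in v_coord a quarter pixel to the right
        float dy = 0.25 * dFdy(v_coord);   // change in v_coord a quarter pixel up (or down)
        // Average the pattern at the center and the four quarter-pixel offsets.
        float value = (f(v_coord) + f(v_coord + dx) + f(v_coord - dx)
                     + f(v_coord + dy) + f(v_coord - dy)) / 5.0;
        gl_FragColor = vec4(vec3(value), 1.0);
    }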
📗 (2) Edge smoothing (blurring).
➩ Instead of the step function, use the smoothstep function: Doc.
➩ Not blurry enough leads to aliasing; too blurry looks bad.
➩ A blur about one pixel wide is appropriate. Use dFdx or dFdy to figure out the change in the variable within 1 pixel, or fwidth = abs(dFdx) + abs(dFdy) (see the sketch after this list): Doc.
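📗 A sketch of the one-pixel blur as a standalone WebGL 1 (GLSL ES 1.00) fragment shader, assuming an edge where v_coord (an illustrative varying) crosses 0.5:

    #extension GL_OES_standard_derivatives : enable  // fwidth needs this extension in WebGL 1
    precision mediump float;
    varying float v_coord;   // some quantity varying across the surface (illustrative)
    void main() {
        // fwidth(v_coord) = abs(dFdx(v_coord)) + abs(dFdy(v_coord)),
        // roughly the change in v_coord over one pixel (kept away from zero).
        float w = max(fwidth(v_coord), 0.00001);
        // Replace step(0.5, v_coord) with a transition about one pixel wide.
        float value = smoothstep(0.5 - 0.5 * w, 0.5 + 0.5 * w, v_coord);
        gl_FragColor = vec4(vec3(value), 1.0);
    }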
📗 dFdx is the difference between the value of F of the pixel and the pixel on the left (or right).
📗 dFdy is the difference between the value of F of the pixel and the pixel above (or below).
📗 A group of four pixels (a 2 pixel by 2 pixel block) is usually computed together, so whether dFdx and dFdy use the pixel on the left or the right (above or below) depends on where the pixel sits within its group.