📗 Another alternative is blending between multiple different meshes (sets of vertices), called morphing: Wikipedia.
📗 Multiple meshes can be stored in the same geometry as morph targets.
📗 All meshes are sent to the hardware, and vertex interpolation is done between targets by changing the weights: \(p = w_{1} p_{1} + w_{2} p_{2} + w_{3} p_{3} + ...\) is the position of a vertex given the positions of the same vertex in 3 (or more) morph targets.
📗 In THREE.js, THREE.Mesh has properties THREE.Mesh.morphTargetDictionary (list of morph targets) and THREE.Mesh.morphTargetInfluences (weights on each morph target). An example: Link.
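📗 A minimal sketch of setting up one morph target and weighting it, assuming a recent THREE.js version (the box geometry and the doubling of the vertices are made-up example choices):
```javascript
// Build a box and add one morph target: the same vertices scaled outward.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const base = geometry.attributes.position;
const target = [];
for (let i = 0; i < base.count; i++) {
  target.push(base.getX(i) * 2, base.getY(i) * 2, base.getZ(i) * 2);
}
geometry.morphAttributes.position = [new THREE.Float32BufferAttribute(target, 3)];
const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
// morphTargetInfluences[0] is the weight w1 on the first target:
// 0 shows the base mesh, 1 shows the target, 0.5 blends halfway.
mesh.morphTargetInfluences[0] = 0.5;
```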
📗 A simple model of light computes the color as the (weighted) sum of four components:
➩ Emissive: object itself produces light.
➩ Ambient: a constant amount of the surface color of the object, independent of any light direction.
➩ Diffuse: \(\hat{N} \cdot \hat{L}\), where \(\hat{N}\) is the surface normal, \(\hat{L}\) is the direction of the light source: Wikipedia.
➩ Specular: \(\left(\hat{E} \cdot \hat{R}\right)^{p} = \left(\hat{E} \cdot \left(2 \left(\hat{L} \cdot \hat{N}\right) \hat{N} - \hat{L}\right)\right)^{p}\), where \(\hat{E}\) is the direction of the camera (E stands for eye), \(\hat{R}\) is the direction of the perfect mirror reflection of the light source, and \(p\) is the Phong exponent for shininess: Wikipedia.
📗 More details about this in a later lecture on Shaders.
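📗 A small sketch of evaluating the diffuse and specular terms with THREE.Vector3 (the vectors and the Phong exponent are made-up example values):
```javascript
// N: surface normal, L: direction to the light, E: direction to the eye.
// All three must be unit vectors so the dot products are cosines.
const N = new THREE.Vector3(0, 1, 0);
const L = new THREE.Vector3(1, 1, 0).normalize();
const E = new THREE.Vector3(0, 1, 1).normalize();
const p = 32; // Phong exponent: larger means a smaller, sharper highlight
const diffuse = Math.max(N.dot(L), 0);
// R = 2 (L . N) N - L, the mirror reflection of the light direction.
const R = N.clone().multiplyScalar(2 * L.dot(N)).sub(L);
const specular = Math.pow(Math.max(E.dot(R), 0), p);
```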
📗 In a normal map, the RGB values of each pixel represent the XYZ components of the normal vector.
📗 A normal map is difficult to draw by hand accurately, but the normal vectors can be computed and converted to RGB values stored as the pixel values of an image.
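📗 A sketch of the standard encoding that maps each normal component from \([-1, 1]\) to \([0, 255]\) (the function name is made up):
```javascript
// Convert a unit normal (THREE.Vector3) to an RGB pixel value.
function normalToRGB(n) {
  const v = n.clone().normalize();
  return [
    Math.round((v.x * 0.5 + 0.5) * 255),
    Math.round((v.y * 0.5 + 0.5) * 255),
    Math.round((v.z * 0.5 + 0.5) * 255),
  ];
}
// A normal pointing straight out of the surface, (0, 0, 1), maps to
// (128, 128, 255), which is why normal maps look mostly blue.
```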
📗 Normal and bump maps change the normal of a surface.
➩ They do not change the silhouette (side view) of the object.
➩ They do not change the shape (geometry) or the triangles.
➩ They do not cause occlusions or shadows.
📗 There is also a displacement map that changes the positions of the vertices. CORRECTION: it does not alter the stored geometry; it just renders it differently with a different vertex shader.
📗 In THREE.js, THREE.MeshStandardMaterial.displacementMap can be set to a texture to displace the vertices.
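📗 A minimal sketch, assuming a placeholder height image (displacementScale controls how far each vertex moves along its normal):
```javascript
const loader = new THREE.TextureLoader();
const material = new THREE.MeshStandardMaterial({
  displacementMap: loader.load("height.png"), // assumed grayscale height image
  displacementScale: 0.2,
});
// Displacement is per vertex, so the geometry needs enough subdivisions.
const mesh = new THREE.Mesh(new THREE.SphereGeometry(1, 128, 128), material);
```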
📗 UV coordinates are used to look up the color of a point on a triangle.
➩ Every point on the triangle can be represented by its barycentric coordinates (similar to the u position on a curve, but with two free parameters): \(p = \alpha p_{1} + \beta p_{2} + \gamma p_{3}\) with \(\alpha + \beta + \gamma = 1\), where \(p_{1}, p_{2}, p_{3}\) are the vertices of the triangle.
➩ The corresponding point on the texture image can be found as: \(\begin{bmatrix} u \\ v \end{bmatrix} = \alpha \begin{bmatrix} u_{1} \\ v_{1} \end{bmatrix} + \beta \begin{bmatrix} u_{2} \\ v_{2} \end{bmatrix} + \gamma \begin{bmatrix} u_{3} \\ v_{3} \end{bmatrix}\) where \(\begin{bmatrix} u_{1} \\ v_{1} \end{bmatrix} , \begin{bmatrix} u_{2} \\ v_{2} \end{bmatrix} , \begin{bmatrix} u_{3} \\ v_{3} \end{bmatrix}\) are the UV values associated with \(p_{1}, p_{2}, p_{3}\) respectively.
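📗 A small sketch of this interpolation, assuming \(\alpha + \beta + \gamma = 1\) (the function name is made up):
```javascript
// uv1, uv2, uv3 are THREE.Vector2 UV values at the three vertices.
function interpolateUV(uv1, uv2, uv3, alpha, beta, gamma) {
  return new THREE.Vector2()
    .addScaledVector(uv1, alpha)
    .addScaledVector(uv2, beta)
    .addScaledVector(uv3, gamma);
}
// The center of the triangle (alpha = beta = gamma = 1/3) gets the
// average of the three UV values.
```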
📗 Built-in buffer geometries in THREE.js have preset UV values, so mapping textures onto the surfaces of cubes, spheres, etc., is simple.
📗 A pixel on screen may cover an area in the texture image.
📗 In this case, a MIP map (also called an image pyramid: a set of copies of the same image at different sizes) can be pre-computed and stored, and smaller versions of the texture image can be used to look up the color: Wikipedia.
📗 Either the nearest MIP map can be used or linear interpolation between two MIP maps can be used, and within a MIP map, either the nearest pixel or bi-linear interpolation can be used.
📗 Having both linear interpolation between two MIP maps and bi-linear interpolation between pixels is also called tri-linear interpolation.
📗 In THREE.js, THREE.Texture.minFilter can be set to THREE.NearestMipmapNearestFilter, THREE.NearestMipmapLinearFilter, THREE.LinearMipmapNearestFilter, or THREE.LinearMipmapLinearFilter: Link.
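📗 A minimal sketch of picking the filters (the file name is a placeholder):
```javascript
const texture = new THREE.TextureLoader().load("checker.png");
// Tri-linear: linear interpolation between two MIP levels, plus bi-linear
// interpolation between pixels within each level.
texture.minFilter = THREE.LinearMipmapLinearFilter;
texture.magFilter = THREE.LinearFilter; // magnification has no MIP levels
```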
📗 A single triangle can have more than one material property.
➩ For THREE.MeshStandardMaterial, there are metalnessMap and roughnessMap.
➩ For THREE.MeshPhongMaterial or THREE.MeshLambertMaterial, there is specularMap.
➩ For both, there are alphaMap (transparency), aoMap (ambient occlusion, or pre-computed self-shadow), emissiveMap (glow), and lightMap (pre-computed light).
📗 Each of these maps takes an image as the texture and uses the RGB values of its pixels to change the corresponding material property at the given UV values.
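📗 A sketch of combining several property maps on one material (the file names are placeholders; note that in some THREE.js versions aoMap reads a second UV set):
```javascript
const loader = new THREE.TextureLoader();
const material = new THREE.MeshStandardMaterial({
  map: loader.load("color.png"),          // base color per UV
  roughnessMap: loader.load("rough.png"), // per-pixel roughness
  metalnessMap: loader.load("metal.png"), // per-pixel metalness
  aoMap: loader.load("ao.png"),           // pre-computed self-shadowing
});
```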
📗 Since the sky box is fixed, its reflection on the objects is fixed too.
📗 For scenes with multiple objects or moving objects, the reflection can change:
➩ Draw the scene without reflection.
➩ Use a camera (cube camera) to take a picture at the position of the object.
➩ Use this picture as the texture (environment map) of the object.
➩ Repeat this multiple times.
📗 In THREE.js:
➩ Create a cube camera with let camera = new THREE.CubeCamera(near, far, new THREE.WebGLCubeRenderTarget(size));: Doc.
➩ Change the position of the cube camera and take a picture using camera.update(renderer, scene);. It might be useful to make obj.visible = false; before taking the picture and obj.visible = true; after taking the picture.
➩ This only captures a single bounce in THREE.js (there cannot be reflections inside reflections using just environment maps): Doc. A sketch of the full loop follows below.
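📗 A sketch of the whole loop, assuming renderer, scene, and camera already exist:
```javascript
const renderTarget = new THREE.WebGLCubeRenderTarget(256);
const cubeCamera = new THREE.CubeCamera(0.1, 100, renderTarget);
const material = new THREE.MeshStandardMaterial({
  envMap: renderTarget.texture, metalness: 1, roughness: 0,
});
const sphere = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 32), material);
scene.add(sphere);

function animate() {
  cubeCamera.position.copy(sphere.position); // take the picture from the object
  sphere.visible = false;                    // so the object does not see itself
  cubeCamera.update(renderer, scene);        // renders the six cube faces
  sphere.visible = true;
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
animate();
```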
📗 Textures are used to fake normals, material properties, environment reflections, and shadows because they can be implemented efficiently in the graphics hardware.
📗 More on the graphics pipeline and interaction with the graphics hardware in the last third of the semester.
📗 Notes and code adapted from the course taught by Professor Michael Gleicher.
📗 Please use Ctrl+F5, Shift+F5, Shift+Command+R, Incognito mode, or Private Browsing to refresh the cached JavaScript: Code.
📗 If there is an issue with TopHat during the lectures, please submit your answers on paper (include your Wisc ID and answers) or through this Google Form at the end of the lecture.