Page 2: Linear, Affine, Projective
CS559 Spring 2021 Sample Solution  Workbook 4
Written by CS559 course staff
You can try out the example solutions here. The interesting code is on page 4 and page 5.
On this page, we’ll finally get to doing things in terms of vector / linear algebra.
Again, this is more of a review than a workbook. Read it to have the notation that we can use going forward.
Linear Transformations
Many useful transformations can be written as linear combinations of the input variables.
$$ x' = ax + by $$
$$ y' = cx + dy $$
The “new x” is some multiple of the old x, added to some multiple of the old y. Note that for the 2D case, there are 4 parameters (a,b,c,d) which correspond to the amount that new x depends on the old y (b), etc. In fact, a slightly different naming might make this clearer.
$$ x' = a_{xx} x + a_{xy} y $$
$$ y' = a_{yx} x + a_{yy} y $$
where the four parameters have names like $ a_{yx} $ to denote the amount that the new y depends on the old x.
Notice that the 4 parameters ($a_{xx}, a_{xy}, a_{yx}, a_{yy}$
) naturally form a square. The rows are all the variables that are used to compute each new variable (so the top row is for new x). The columns are the ways that each old variable affects the new ones (so the first column is the amount that the old x variable affects the new x and y).
Hopefully, you recognize this square of numbers as a matrix, and the operation performed in that equation as multiplying a matrix by a vector. So if we write:
$ \mathbf{x'} = \mathbf{Ax} $
where x' and x are vectors (of length 2), and A is a 2x2 matrix, you won’t be too surprised (other than that, for now, we are ignoring whether each vector is a column vector or a row vector).
If you’re not comfortable with the matrix notation and matrix multiplication, now might be a good time to review it.
Even if you are familiar with matrices, make sure that you see what each row and column does in the matrix multiplication.
The matrix multiplication says that the first element of the result (x') comes from the top row of the matrix ($a_{xx}, a_{xy}$) combined with the input vector ((x, y), also written $\mathbf x$). The operation between the two vectors is called a dot product. In fact, you can think of matrix-by-vector multiplication as the process of taking the dot product of each row of the matrix with the vector we’re multiplying by (on the right).
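That row-by-row dot product view is easy to sketch in code (a minimal illustration; the helper names `dot` and `matVec` are ours, not from the text):

```javascript
// Multiply a 2x2 matrix by a 2-vector: take the dot product of each
// row of the matrix with the vector.
const dot = (u, v) => u[0] * v[0] + u[1] * v[1];
const matVec = (A, v) => [dot(A[0], v), dot(A[1], v)];

// Example: a 90-degree rotation sends the x axis (1, 0) to (0, 1).
const rot90 = [[0, -1], [1, 0]];
console.log(matVec(rot90, [1, 0])); // the new point (0, 1)
```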
A linear transformation is a transformation where the function is multiplication by some matrix; specifically, we multiply with the vector on the right. So, the transformation (function) $ f_A $ using the matrix A would be $ f_A(\mathbf x) = \mathbf A \mathbf x $.
Here are some important facts about linear transformations. If you took a linear algebra class, you probably proved all of them already.
- Zero is preserved. If you put in the zero vector, you get the zero vector out.
- Composition of functions is multiplication of matrices. If we have two transformations $ f_A $ and $ f_B $ that we apply as the composition $ f_B(f_A(\mathbf x)) $, this is the same as $ (f_B \circ f_A)(\mathbf x) $, which is $ \mathbf B \mathbf A \mathbf x $.
- Because matrix multiplication is associative, you can multiply the matrices first: $ \mathbf B \mathbf A \mathbf x = (\mathbf B \mathbf A) \mathbf x = \mathbf B (\mathbf A \mathbf x) $.
- The set of linear transformations is closed under composition: any sequence of linear transformations is a linear transformation, so for any sequence of linear transformations there is a single linear transformation that has the same result.
- The order of application matters. Matrix multiplication does not commute.
Now is a good time to read section 6.1 of FCG4_Ch06.pdf (0.5mb), which will review all of this, and show you the matrices corresponding to the transformations that we learned about.
Translations (Affine Transformations)
Hopefully, you noticed that translation is not a linear transformation. If nothing else, it violates property #1 above (zero is preserved). Translation adds a vector to all points, including zero.
The combination of a linear transformation and a translation is called an affine transformation. Affine transformations are a larger class that contains the linear transformations (a linear transformation is an affine transformation with a zero translation).
We typically write affine transformations by multiplying first and adding second. So, for matrix A and translation vector t, we write $ f_{A,t}(\mathbf x) = \mathbf A \mathbf x + \mathbf t $
.
Writing this out to expose what the matrix multiply does emphasizes that it can be viewed as a separate equation for each coordinate (each row of the equation):
$$ \mathbf{x'} = \begin{bmatrix} x'_x \\ x'_y \end{bmatrix} = f_{A,t}(\mathbf x) = \mathbf A \mathbf x + \mathbf t = \begin{bmatrix} a_{xx} & a_{xy} \\ a_{yx} & a_{yy} \end{bmatrix} \begin{bmatrix} x_x \\ x_y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} $$
or
$$ x'_x = a_{xx} x_x + a_{xy} x_y + t_x \\ x'_y = a_{yx} x_x + a_{yy} x_y + t_y $$
It is tempting to treat that 2x3 “rectangle” of numbers as a matrix. However, we cannot use matrix multiplication for composition or application: you can’t multiply two 2x3 matrices together, nor does it make sense to multiply a 2x3 matrix by a 2-vector on the right.
Affine transformations are closed under composition, and have the property that any sequence of affine transformations can be represented as a single affine transformation. However, the composition is messy notationally. We can’t use the nice matrix notation where function composition is matrix multiplication. Unless we use the trick in the next section.
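To see the notational mess, here is a sketch of composing two affine transformations directly, without the trick (the helpers `matVec`, `add`, `affine` and the sample transformations are our own illustration):

```javascript
// An affine transformation is f(x) = A x + t.
const matVec = (A, v) => [
  A[0][0] * v[0] + A[0][1] * v[1],
  A[1][0] * v[0] + A[1][1] * v[1],
];
const add = (u, v) => [u[0] + v[0], u[1] + v[1]];
const affine = (A, t) => (x) => add(matVec(A, x), t);

// Composing g(f(x)) = B (A x + tA) + tB = (B A) x + (B tA + tB):
// the matrix parts multiply, but the translations mix with the matrices,
// so there is no single "multiply" that composes the pair (A, t).
const f = affine([[2, 0], [0, 2]], [0, 0]); // scale by 2
const g = affine([[1, 0], [0, 1]], [5, 5]); // translate by (5, 5)
console.log(g(f([1, 1]))); // the point (7, 7)
```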
Homogeneous Coordinates
Several transformations that we want to have in 2D are not linear. But we like linear transformations.
The counterintuitive solution will be to stop using 2D points. We’ll embed the 2D space that we want to work in into a 3D space, do linear operations in this 3D space, and then interpret the points in 3D back in 2D.
This will turn out to have all sorts of advantages as we do more graphics because many important operations will turn into simple linear operations in a higher dimension. We’ll start with the simple version that will allow us to do affine transformations (including translations) as linear transformations.
The trick of using an n+1 dimensional space to represent n dimensional points is known as homogeneous coordinates. For this workbook, we’ll be using 3D homogeneous coordinates to represent 2D spaces. In the future, we’ll use 4D homogeneous coordinates to represent 3D spaces.
We’ll call the last coordinate of the 3 coordinates w. So our three coordinates in homogeneous space will be (x,y,w). Calling the extra dimension w will be a reminder that it is special, and the name will still work when we get to 4D homogeneous coordinates for 3D space.
Basic version: we’ll represent the 2D point (x,y) by the 3-vector (x,y,1). In this workbook, we’ll focus on the case where the last coordinate (w) is 1.
If you want the math version: homogeneous space treats all points along a line through the origin as equivalent. Our 2D space will be the plane w=1 inside the 3D space. For any line through the origin, we project the entire line to its intersection with that plane. So if we have a point (x,y,w) in 3D, it’s equivalent to the point (x/w, y/w, 1), which we interpret as (x/w, y/w) in our 2D space.
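The “divide by w” projection is tiny in code (a sketch; `toCartesian` is our name for it, not standard):

```javascript
// A homogeneous point (x, y, w) projects to the 2D point (x/w, y/w).
// All points on the same line through the origin give the same result.
const toCartesian = ([x, y, w]) => [x / w, y / w];

console.log(toCartesian([2, 3, 1])); // (2, 3)
console.log(toCartesian([4, 6, 2])); // also (2, 3): same line through the origin
```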
If we change our 2D points into homogeneous coordinates, our transformations become 3x3 matrices to transform the 3D homogeneous points. An affine transformation in 2D becomes a linear transformation in the 3D homogeneous space. If we had the 2D affine transformation $ \mathbf{x'} = \mathbf A \mathbf x + \mathbf t $
we write the following in 3D:
$$ \begin{bmatrix} x'_x \\ x'_y \\ x'_w \end{bmatrix} = \begin{bmatrix} a_{xx}&a_{xy}&t_x \\ a_{yx}&a_{yy}&t_y \\0&0&1 \end{bmatrix} \begin{bmatrix} x_x\\x_y\\1 \end{bmatrix} $$
If you multiply this out, you’ll see that $ x'_w $
will always end up being 1. You’ll also notice that the top two rows end up being exactly the same expressions as in Box 2.
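Multiplying it out in code confirms both claims (a sketch; `applyH` is an illustrative helper of our own):

```javascript
// Apply a 3x3 homogeneous matrix to a homogeneous point (x, y, w).
const applyH = (M, p) =>
  M.map((row) => row[0] * p[0] + row[1] * p[1] + row[2] * p[2]);

// Scale by 2 and translate by (5, 5), written as a single matrix.
const M = [[2, 0, 5], [0, 2, 5], [0, 0, 1]];
console.log(applyH(M, [1, 1, 1])); // (7, 7, 1): the bottom row keeps w = 1
```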
The cool thing about this: affine transformations in 2D are linear transformations in the 3D homogeneous space. We can represent transformations as matrices. Transformation composition is matrix multiplication.
Now you can read the rest of FCG4_Ch06.pdf (0.5mb) (you can skip 6.2 for now as it covers 3D transformations). Section 6.3 discusses homogeneous coordinates. You should also read Hart05jan19.pdf (2.7mb) which covers the same material in a different way.
Projective Coordinates
Homogeneous coordinates are sometimes called projective because we need to project points from the higher-dimensional space back to our original space. This will be useful when we get to 3D, since it will let us do perspective transformations (which involve a projection).
If the bottom row of the projective transformation (the one we apply to homogeneous coordinates) is zero except for a 1 in the lower right corner, the transformation is affine in the original space. If the point we transform starts out with a 1 for its w coordinate, it will end up with a 1 for its w coordinate.
However, if the bottom row of the matrix is not 0,0,1, or the w of the point is not 1, we will end up with a number other than 1 for the new w, and we need to remember to “divide by w” to convert back to 2D when the time comes.
To illustrate how this can work, consider the scaling transformation. We can write it two different ways:
$$ \begin{bmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 1&0&0\\0&1&0\\0&0&1/s \end{bmatrix} $$
Notice that these give the same result. If we transform the 2D point (x, y) using the first matrix, we get (sx, sy, 1), which is the 2D point (sx, sy). If we use the second matrix, we get (x, y, 1/s), which after division by w (to convert back to 2D) gives us (sx, sy).
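We can check the equivalence numerically (a sketch with s = 2 and an arbitrary point; the helper names are ours):

```javascript
// Apply a 3x3 homogeneous matrix to a point, then divide by w.
const applyH = (M, p) =>
  M.map((row) => row[0] * p[0] + row[1] * p[1] + row[2] * p[2]);
const project = ([x, y, w]) => [x / w, y / w];

const s = 2;
const scaleA = [[s, 0, 0], [0, s, 0], [0, 0, 1]];     // scale x and y directly
const scaleB = [[1, 0, 0], [0, 1, 0], [0, 0, 1 / s]]; // scale by shrinking w
console.log(project(applyH(scaleA, [3, 4, 1]))); // (6, 8)
console.log(project(applyH(scaleB, [3, 4, 1]))); // also (6, 8), after dividing by w = 1/2
```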
Reading Matrices
It is important to understand how the matrices work so we can “read” them to understand what a specific matrix does. This will also let us “write” matrices directly (without having to compose them from multiple simple transformations).
$$ \begin{bmatrix} x'_x \\ x'_y \\ x'_w \end{bmatrix} = \begin{bmatrix} a_{xx}&a_{xy}&t_x \\ a_{yx}&a_{yy}&t_y \\0&0&1 \end{bmatrix} \begin{bmatrix} x_x\\x_y\\1 \end{bmatrix} $$
We can read this matrix as follows:
- The first column ($ a_{xx}, a_{yx}, 0 $) tells us the new direction and scale of the x axis. For every unit of x (in $x_x$), we move along this vector in the new space.
- The second column ($ a_{xy}, a_{yy}, 0 $) tells us the new direction and scale of the y axis.
- The third column ($ t_x, t_y, 1 $) tells us the new position of the origin. It’s where the zero vector will go.
So, we can also use this backwards: if we know the transformation we want, we can create the matrix that achieves it. Consider trying to transform the red box into the blue box:
From the picture, we can see that we want the origin to go to (2,1), we want the x axis to go to (4,2) and we want the y axis to go to (1,3). We can use this to create the matrix:
$$ \begin{bmatrix} 4 & 1 & 2 \\ 2 & 3 & 1 \\ 0 & 0 & 1 \end{bmatrix} $$
We could have created this matrix as the composition of basic transformations. But in this case, it was easier just to build the matrix by looking where things went.
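We can verify the “reading” by applying the matrix to the origin and the axis directions (a sketch; `applyH` is an illustrative helper of our own):

```javascript
// Apply a 3x3 homogeneous matrix to a homogeneous point or direction.
const applyH = (M, p) =>
  M.map((row) => row[0] * p[0] + row[1] * p[1] + row[2] * p[2]);

const M = [[4, 1, 2], [2, 3, 1], [0, 0, 1]];
console.log(applyH(M, [0, 0, 1])); // the origin goes to (2, 1)
console.log(applyH(M, [1, 0, 0])); // the x-axis direction becomes (4, 2)
console.log(applyH(M, [0, 1, 0])); // the y-axis direction becomes (1, 3)
```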
Make sure that you can do this! It makes for a good exam question, but it’s also useful when you need to construct matrices to put objects in particular places.
Matrices and Transformations
If we look at the matrix:
$$ \begin{bmatrix} 2 & 0 & 5 \\ 0 & 2 & 5 \\ 0 & 0 & 1 \end{bmatrix} $$
We should be able to see that this scales both X and Y by a factor of 2, and translates things by 5 in each direction.
We could “apply” this matrix with the code:
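The code block itself is missing from this copy of the page. A likely version, using the HTML Canvas 2D API (assuming a drawing context `ctx`), is `ctx.translate(5, 5); ctx.scale(2, 2);`. The sketch below mimics what Canvas does internally with a tiny matrix helper of our own (not Canvas source):

```javascript
// Canvas stores the current transform as [a, b, c, d, e, f], meaning the
// matrix [[a, c, e], [b, d, f], [0, 0, 1]]. Each call right-multiplies it.
function multiply([a1, b1, c1, d1, e1, f1], [a2, b2, c2, d2, e2, f2]) {
  return [
    a1 * a2 + c1 * b2,
    b1 * a2 + d1 * b2,
    a1 * c2 + c1 * d2,
    b1 * c2 + d1 * d2,
    a1 * e2 + c1 * f2 + e1,
    b1 * e2 + d1 * f2 + f1,
  ];
}
const translate = (tx, ty) => [1, 0, 0, 1, tx, ty];
const scale = (sx, sy) => [sx, 0, 0, sy, 0, 0];

// translate(5, 5) then scale(2, 2) builds [[2, 0, 5], [0, 2, 5], [0, 0, 1]].
const M = multiply(translate(5, 5), scale(2, 2));
console.log(M); // a=2, b=0, c=0, d=2, e=5, f=5
```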


Any points we draw after these transformations would be affected by the matrix. Canvas actually builds this matrix by multiplying two transformations together (a translation and a scale).
For computers, it’s easier to multiply matrices. For thinking about transformations, sometimes it is easier to think about what the transformation does and create the matrix. For example, if we want to consider what happens when we reverse the order of the transformations:
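This code block is also missing; presumably it reversed the two calls (in Canvas terms, `ctx.scale(2, 2); ctx.translate(5, 5);`). A sketch using a minimal mock context of our own invention:

```javascript
// A tiny mock of how Canvas accumulates its current transform:
// each call right-multiplies the current matrix (row-major 3x3 here).
const ctx = {
  m: [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
  mul(B) {
    this.m = this.m.map((row) =>
      [0, 1, 2].map((j) => row[0] * B[0][j] + row[1] * B[1][j] + row[2] * B[2][j]));
  },
  scale(sx, sy)     { this.mul([[sx, 0, 0], [0, sy, 0], [0, 0, 1]]); },
  translate(tx, ty) { this.mul([[1, 0, tx], [0, 1, ty], [0, 0, 1]]); },
};

// Scale first, then translate: the translation happens in the scaled
// coordinate system, so it moves things twice as far.
ctx.scale(2, 2);
ctx.translate(5, 5);
console.log(ctx.m); // the product: [[2, 0, 10], [0, 2, 10], [0, 0, 1]]
```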


We could multiply the matrices together (and get a different result):
$$
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} 1 & 0 & 5 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix} =
\begin{bmatrix} 2 & 0 & 10 \\ 0 & 2 & 10 \\ 0 & 0 & 1 \end{bmatrix}
$$
Or, we could think about how the translation now applies in the scaled coordinate system, so it moves things farther. Or we could think about how the translation is applied to the points first, and then scaled (so the translation amount is multiplied). With either of these, we could then build up the new matrix.
Try to solve this problem without writing out the matrices. It is in the html box file 040201.html because we want you to type the answer into the html file so it shows up here (change the ?? to numbers). We do this weird thing (having you edit the html file) because we have no other good way to put questions into a workbook. (Editing this file is worth points in the workbook.)
Here’s one that is a little trickier, and doesn’t have the answer on the page already (040202.html):
And another one with a rotation (040203.html). Remember that $ \pi / 2 $
is 90 degrees, and think about what this does to the coordinate system. You shouldn’t need to write out the matrices and multiply them.
And, as long as you’ve gotten used to editing html files to change ?? to numbers, here is a chance to make sure you understand how to apply homogeneous transformation matrices to points (040204.html):
Summary
On this page, we made the connection between transformations and matrices. This will allow you to read the books (which discuss transformations in terms of matrices). But it will also let you use matrices in code, which we will start to do on Page 3 (Matrices in Code).