The goal of optimization is to produce better code (fewer instructions, and, more importantly, code that runs faster). However, it is important not to change the behavior of the program (what it computes)!
We will look at the following ways to improve a program:
- peephole optimization
- moving loop-invariant computations out of loops
- strength reduction in for-loops
- copy propagation (and the constant folding it enables)
Peephole Optimization

The idea behind peephole optimization is to examine the code "through a small window," looking for special cases that can be improved. Below are some common optimizations that can be performed this way. Note that in all cases that involve removing an instruction, it is assumed that the instruction is not the target of a branch.
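Two of the patterns referred to in the questions below are removing a redundant load (a load from a location whose value is already in a register) and replacing a jump to a jump. Here is a sketch of both, in the style of the generated code shown later (the labels and registers in this sketch are illustrative, not taken from the example):

    sw   $t0, 0($sp)     # store $t0
    lw   $t0, 0($sp)     # redundant load: $t0 already holds this value,
                         # so the lw can be removed

    b    _L1             # jump to a jump: since _L1 immediately branches
    ...                  # to _L2, this can be replaced by  b _L2
_L1:
    b    _L2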
Consider the following program:
public class Opt {
   public static void main() {
       int a;
       int b;
       if (true) {
         if (true) {
             b = 0;
         }
         else {
            b = 1;
         }
         return;
       }
       a = 1;
       b = a;
   }
}
Question 1:
The code generated for this program contains opportunities for the first two kinds of peephole optimization (removing a redundant load, and replacing a jump to a jump). Can you explain how those opportunities arise just by looking at the source code?
Question 2: Below is the generated code. Verify your answer to question 1 by finding the opportunities for the two kinds of optimization. What other opportunity for removing redundant code is common in this example?
    .text
    .globl main
main:                       # FUNCTION ENTRY
    sw    $ra, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    sw    $fp, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    addu  $fp, $sp, 8
    subu  $sp, $sp, 8
                            # STATEMENTS
                            # if-then
    li    $t0, 1
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    beq   $t0, 0, _L0
                            # if-then-else
    li    $t0, 1
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    beq   $t0, 0, _L1
    li    $t0, 0
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    sw    $t0, -12($fp)
    b     _L2
_L1:
    li    $t0, 1
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    sw    $t0, -12($fp)
_L2:
                            # return
    b     main_Exit
_L0:
    li    $t0, 1
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    sw    $t0, -8($fp)
    lw    $t0, -8($fp)
    sw    $t0, 0($sp)       # PUSH
    subu  $sp, $sp, 4
    lw    $t0, 4($sp)       # POP
    addu  $sp, $sp, 4
    sw    $t0, -12($fp)
                            # FUNCTION EXIT
main_Exit:
    lw    $ra, 0($fp)
    move  $sp, $fp          # restore SP
    lw    $fp, -4($fp)      # restore FP
    jr    $ra               # return
Optimization #1: Loop-Invariant Code Motion
The ideas behind this optimization are:
- An expression computed inside a loop is loop-invariant if its value does not change from one iteration to the next (for example, because none of its operands is assigned to inside the loop).
- Such an expression is recomputed on every iteration even though it produces the same value each time; computing the value once, before the loop, gives the same result with less work.
Example:
for (i=0; i<100; i++) {
    for (j=0; j<100; j++) {
        for (k=0; k<100; k++) {
            A[i][j][k] = i*j*k;
        }
    }
}
In this example, i*j
is invariant with respect to the inner loop.
But there are more loop-invariant expressions; to find them,
we need to look at a lower-level version of this code.
If we assume the following (offsetA is an illustrative name for the offset of A in the current stack frame):
- each array element occupies 4 bytes,
- the array is stored in row-major order, so element A[i][j][k] is at offset i*40000 + j*400 + k*4 from the start of the array (each i-slice contains 100*100 elements, and each row contains 100), and
- the start of the array is at address FP - offsetA, where FP is the frame pointer.
So the code for the inner loop is actually something like:
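The following is a reconstruction consistent with the instruction counts tabulated below (tmp and val are illustrative temporaries):

    for (i=0; i<100; i++) {
        for (j=0; j<100; j++) {
            for (k=0; k<100; k++) {
                // lvalue: the address of A[i][j][k]
                tmp = FP - offsetA;       // 1 subtraction
                tmp = tmp + i * 40000;    // 1 multiplication, 1 addition
                tmp = tmp + j * 400;      // 1 multiplication, 1 addition
                tmp = tmp + k * 4;        // 1 multiplication, 1 addition
                // rvalue: i*j*k
                val = i * j;              // 1 multiplication
                val = val * k;            // 1 multiplication
                store val at address tmp  // 1 indexed store
            }
        }
    }

In this version, FP - offsetA is invariant with respect to all three loops; the partial sum through i * 40000 is invariant with respect to the j and k loops; and both i * j and the partial sum through j * 400 are invariant with respect to the inner (k) loop.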
We can move the computations of the loop-invariant expressions out of their loops, assigning the values of those expressions to new temporaries, and then using the temporaries in place of the expressions. When we do that for the example above, we get:
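Here is a reconstruction of that result, using the temporaries tmp0, tmp1, tmp2, and temp that appear in the tables below (val and addr are illustrative names):

    tmp0 = FP - offsetA;               // invariant w.r.t. all three loops
    for (i=0; i<100; i++) {
        tmp1 = tmp0 + i * 40000;       // invariant w.r.t. the j and k loops
        for (j=0; j<100; j++) {
            tmp2 = tmp1 + j * 400;     // invariant w.r.t. the k loop
            temp = i * j;              // invariant w.r.t. the k loop
            for (k=0; k<100; k++) {
                val  = temp * k;       // rvalue: 1 multiplication
                addr = tmp2 + k * 4;   // lvalue: 1 multiplication, 1 addition
                store val at address addr
            }
        }
    }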
Here is a comparison of the original code and the optimized code (the number of instructions performed in the innermost loop, which is executed 1,000,000 times):
| Original Code | New Code |
| --- | --- |
| 5 multiplications (3 for lvalue, 2 for rvalue) | 2 multiplications (1 for lvalue, 1 for rvalue) |
| 1 subtraction; 3 additions (for lvalue) | 1 addition (for lvalue) |
| 1 indexed store | 1 indexed store |
Questions: Is this code motion always safe? Is it always profitable?

Safety

If evaluating the expression might cause an error, then there is a possible problem if the expression might not be executed in the original, unoptimized code. For example:
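Consider this sketch of the problem (the variable names are illustrative). The division below is loop-invariant, but when n is zero the loop body never executes, so the original program never divides by zero; the "optimized" program does:

    // original: a / n is evaluated only if the loop body runs
    for (i = 0; i < n; i++) {
        x = a / n;       // loop-invariant
        ...
    }

    // after (unsafe) code motion: a / n is evaluated even when n == 0
    t = a / n;           // divide-by-zero when n == 0
    for (i = 0; i < n; i++) {
        x = t;
        ...
    }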
What about preserving the order of events? For example, if the unoptimized code performed some output and then had a runtime error, is it valid for the optimized code simply to have the runtime error (without producing the output)? Also note that changing the order of floating-point computations may change the result, due to differing precision.
Profitability

If the computation might not execute in the original program, moving the computation might actually slow the program down!
Moving a computation is both safe and profitable if one of the following holds:
- it can be determined that the loop will execute at least once, and the computation is guaranteed to execute whenever the loop does (for example, it is not nested inside any condition in the loop body); or
- evaluating the expression cannot cause an error, and the loop is very likely to execute at least once.
What are some examples of loops for which the compiler can be sure that the loop will execute at least once?
Optimization #2: Strength Reduction

The basic idea here is to take advantage of patterns in for-loops to replace expensive operations, like multiplications, with cheaper ones, like additions.
The particular pattern that we will handle takes the general form of a loop where:
- L is the loop index,
- M and C do not change inside the loop (and either may be absent), and
- the loop body uses an induction expression of the form L * M + C.

That is, the loop looks like this:
for L from B to E do {
    ...
    ... = L * M + C
    ...
}
Consider the sequences of values for L and for the induction expression:
| Iteration # | L | L * M + C |
| --- | --- | --- |
| 1 | B | B * M + C |
| 2 | B + 1 | (B + 1) * M + C = B * M + M + C |
| 3 | B + 1 + 1 | (B + 1 + 1) * M + C = B * M + M + M + C |

Each value of L * M + C is exactly M more than the previous one. So instead of computing L * M + C from scratch on every iteration, we can keep the current value of the expression in a temporary and just add M at the end of each iteration:
ind = B * M + C       // initialize ind to the first value of the expression
for L from B to E do {
    ...
    ... = ind         // use ind instead of recalculating the expression
    ...
    ind = ind + M     // increment ind by M at the end of the loop
}
Note that instead of doing a multiplication and an addition each time around the loop, we now do just one addition each time. Although in this example we've removed a multiplication, in general we are replacing a multiplication with an addition (that is why this optimization is called reduction in strength). Although this pattern may seem restrictive, in practice many loops fit into this template, especially since we allow M or C to be absent. In particular, if there were no C, the original induction expression would be: L * M, and that would be replaced inside the loop by: ind = ind + M; an addition replaces a multiplication.
Some languages actually have for-loops with the syntax used above (for i from low to high do ...), but other languages (including Java) do not use that syntax. Must a Java compiler give up on performing this optimization, or might it be able to recognize opportunities in some cases?
As mentioned above, many loops naturally fit this strength-reduction template. Now let's see how to apply the optimization to the example code we used to illustrate moving loop-invariant computations out of the loop. Below is the code we had after moving the loop-invariant computations, with each induction expression identified by a number:
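One plausible rendering of that code, with each induction expression identified in a comment (the temporary names match the table below; val and addr are illustrative):

    tmp0 = FP - offsetA;
    for (i=0; i<100; i++) {
        tmp1 = tmp0 + i * 40000;       // induction expression #1
        for (j=0; j<100; j++) {
            tmp2 = tmp1 + j * 400;     // induction expression #2
            temp = i * j;              // induction expression #3
            for (k=0; k<100; k++) {
                val  = temp * k;       // induction expression #4
                addr = tmp2 + k * 4;   // induction expression #5
                store val at address addr
            }
        }
    }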
| Original Expression | Loop Index (L) | Multiply Term (M) | Addition Term (C) |
| --- | --- | --- | --- |
| #1: tmp0 + i * 40000 | i | 40000 | tmp0 |
| #2: tmp1 + j * 400 | j | 400 | tmp1 |
| #3: i * j | j | i | 0 |
| #4: temp * k | k | temp | 0 |
| #5: tmp2 + k * 4 | k | 4 | tmp2 |
After performing the reduction in strength optimizations:
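One plausible result is sketched below, assuming each induction expression gets its own temporary (ind1 through ind5, illustrative names), initialized to the expression's first value just before the corresponding loop and incremented by that expression's M at the bottom of the loop body:

    tmp0 = FP - offsetA;
    ind1 = tmp0;                    // #1: first value is tmp0 + 0*40000
    for (i=0; i<100; i++) {
        tmp1 = ind1;
        ind2 = tmp1;                // #2: first value is tmp1 + 0*400
        ind3 = 0;                   // #3: first value is i*0
        for (j=0; j<100; j++) {
            tmp2 = ind2;
            temp = ind3;
            ind4 = 0;               // #4: first value is temp*0
            ind5 = tmp2;            // #5: first value is tmp2 + 0*4
            for (k=0; k<100; k++) {
                val  = ind4;
                addr = ind5;
                store val at address addr
                ind4 = ind4 + temp; // add M = temp
                ind5 = ind5 + 4;    // add M = 4
            }
            ind2 = ind2 + 400;      // add M = 400
            ind3 = ind3 + i;        // add M = i
        }
        ind1 = ind1 + 40000;        // add M = 40000
    }

Note that the innermost loop, which is executed 1,000,000 times, now contains no multiplications at all.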
Suppose that the index variable is incremented by something other than one each time around the loop. For example, consider a loop of the form:
for (i=low; i<=high; i+=2) ...

Can strength reduction still be performed? If yes, what changes must be made to the proposed algorithm?
Optimization #3: Copy Propagation

A copy statement is an assignment of the form x = y (including the case where y is a literal). Copy propagation replaces a later use of x with y, when it is safe to do so (the precise conditions are given below). Examples:
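For instance (illustrative code), a use of x can be replaced with the variable or literal that was copied into x:

    x = y;
    a = x + b;      // can become:  a = y + b

    x = 5;
    c = x * 2;      // can become:  c = 5 * 2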
Question: Why is this a useful transformation?
Answers:
1. If all of the uses of x that are reached by the copy statement x = y are replaced with y, then the copy statement itself becomes useless code and can be removed.
2. If y is a literal, the uses of x become uses of a constant; the generated code can then include the constant directly (for example, as an immediate operand) instead of loading a variable's value from memory.
 It's worth noting that compilers work hard to keep values in registers and avoid loads. This process is called register allocation. As such, it's possible that all operand values are already in registers when an operation occurs, so it may not necessarily be true that we're saving a load here. Nevertheless, it's always better to use constants where possible.
Furthermore, this kind of copy propagation can lead to opportunities for constant folding: evaluating, at compile time, an expression that involves only literals. For example:
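Here is a small illustration (the variable names are arbitrary): once the literal has been propagated, the right-hand side involves only literals, and the compiler can do the arithmetic itself:

    x = 5;
    y = x + 2;      // copy propagation gives:  y = 5 + 2
                    // constant folding then gives:  y = 7

Copy propagation can also enable loop-invariant code motion, as the following example shows: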
    while (...) {
        x = a * b;    // loop-invariant
        y = x * c;
        ...
    }

Move "a * b" out of the loop:

    tmp1 = a * b;
    while (...) {
        x = tmp1;
        y = x * c;
        ...
    }

Note that at this point, even if c is not modified in the loop, we cannot move "x * c" out of the loop, because x gets its value inside the loop. However, after we do copy propagation:

    tmp1 = a * b;
    while (...) {
        x = tmp1;
        y = tmp1 * c;
        ...
    }

"tmp1 * c" can also be moved out of the loop:

    tmp1 = a * b;
    tmp2 = tmp1 * c;
    while (...) {
        x = tmp1;
        y = tmp2;
        ...
    }
Given a definition d that is a copy statement, x = y, and a use u of x, we must determine whether the two properties hold that permit the use of x to be replaced with y: (1) use u is reached only by definition d, and (2) the value of y cannot change between definition d and use u.
The first property (use u is reached only by definition d) is best solved using the standard "reaching-definitions" dataflow-analysis problem, which computes, for each definition of a variable x, all of the uses of x that might be reached by that definition. Note that this property can also be determined by doing a backward depth-first or breadth-first search in the control-flow graph, starting at use u and terminating a branch of the search when a definition of x is reached. If definition d is the only definition encountered in the search, then it is the only one that reaches use u. (This technique will, in general, be less efficient than doing reaching-definitions analysis.)
The second property (that variable y cannot change its value between definition d and use u) can also be verified using dataflow analysis, or using a backward search in the control-flow graph starting at u and quitting at d. If no definition of y is encountered during the search, then its value cannot change, and the copy propagation can be performed. Note that when y is a literal, property (2) is always satisfied.
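To make the two properties concrete, here are minimal sketches of a violation of each (illustrative code):

    // property (1) violated: the use of x is reached by two definitions
    if (cond) {
        x = y;       // definition d
    } else {
        x = z;       // another definition of x also reaches the use
    }
    a = x;           // use u: cannot replace x with y

    // property (2) violated: y changes between the definition and the use
    x = y;           // definition d
    y = y + 1;       // y is redefined before the use
    b = x;           // use u: x holds y's old value, so cannot replace x with y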
Below is our running example (after doing reduction in strength). Each copy statement is marked to show whether or not it can be propagated. In this particular example, each variable x that is defined in a copy statement reaches only one use. The comments indicate which copies cannot be propagated, in each case because of a violation of property (1); in this example there are no instances where property (2) is violated.
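A reconstruction of that example, based on the strength-reduced code sketched earlier (YES means the copy can be propagated; NO means it cannot, with the property-(1) violation given in the comment):

    tmp0 = FP - offsetA;
    ind1 = tmp0;               // NO: the use of ind1 is also reached by ind1 = ind1 + 40000
    for (i=0; i<100; i++) {
        tmp1 = ind1;           // YES: only definition of tmp1; ind1 does not change before the use
        ind2 = tmp1;           // NO: the use of ind2 is also reached by ind2 = ind2 + 400
        ind3 = 0;              // NO: the use of ind3 is also reached by ind3 = ind3 + i
        for (j=0; j<100; j++) {
            tmp2 = ind2;       // YES
            temp = ind3;       // YES
            ind4 = 0;          // NO: the use of ind4 is also reached by ind4 = ind4 + temp
            ind5 = tmp2;       // NO: the use of ind5 is also reached by ind5 = ind5 + 4
            for (k=0; k<100; k++) {
                val  = ind4;   // YES
                addr = ind5;   // YES
                store val at address addr
                ind4 = ind4 + temp;
                ind5 = ind5 + 4;
            }
            ind2 = ind2 + 400;
            ind3 = ind3 + i;
        }
        ind1 = ind1 + 40000;
    }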