LR Parsing Overview

There are several different kinds of bottom-up parsing. We will discuss an approach called LR parsing, which includes SLR, LALR, and LR parsers. LR means that the input is scanned left-to-right, and that a rightmost derivation, in reverse, is constructed. SLR means "simple" LR, and LALR means "look-ahead" LR.
Every SLR(1) grammar is also LALR(1), and every LALR(1) grammar is also LR(1), so SLR is the most limited of the three, and LR is the most general. In practice, it is pretty easy to write an LALR(1) grammar for most programming languages (i.e., the "power" of an LR parser isn't usually needed). A disadvantage of LR parsers is that their tables can be very large. Therefore, parser generators like Yacc and Java Cup produce LALR(1) parsers.
Let's start by considering the advantages and disadvantages of the LR parsing family:
Advantages
Recall that top-down parsers use a stack. The contents of the stack represent a prediction of what the rest of the input should look like. The symbols on the stack, from top to bottom, should "match" the remaining input, from first to last token. For example, consider the following grammar
Grammar: | |||
$S$ | $\longrightarrow$ | $\varepsilon$ | ( $S$ ) | [ $S$ ] |
and suppose we are parsing the input string ([]). At one point during the parse, after the first parenthesis has been consumed, the stack contains
[ S ] ) EOF

(with the top-of-stack at the left). This is a prediction that the remaining input will start with a '[', followed by zero or more tokens matching an S, followed by the tokens ']' and ')', in that order, followed by end-of-file.
Bottom-up parsers also use a stack, but in this case, the stack represents a summary of the input already seen, rather than a prediction about input yet to be seen. For now, we will pretend that the stack symbols are terminals and nonterminals (as they are for predictive parsers). This isn't quite true, but it makes our introduction to bottom-up parsing easier to understand.
A bottom-up parser is also called a "shift-reduce" parser because it performs two kinds of operations: shift operations and reduce operations. A shift operation simply moves the next input token from the input to the top of the stack. A reduce operation is only possible when the top N symbols on the stack match the right-hand side of some production in the grammar. A reduce operation pops those N symbols off the stack and pushes the nonterminal on the left-hand side of the production.
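As a concrete illustration, the two operations can be sketched as plain stack manipulations in Python (a hypothetical sketch, not part of any real parser; grammar symbols are represented as strings, with the top of the stack at the right end of the list):

```python
# A minimal sketch of the two shift-reduce operations, treating the
# stack symbols as terminals and nonterminals (as in this introduction).
def shift(stack, tokens):
    """Move the next input token onto the top of the stack."""
    stack.append(tokens.pop(0))

def reduce(stack, lhs, rhs):
    """Replace the top len(rhs) stack symbols, which must match the
    right-hand side rhs, with the left-hand-side nonterminal lhs."""
    n = len(rhs)
    assert stack[-n:] == list(rhs), "top of stack does not match the production"
    del stack[-n:]
    stack.append(lhs)

stack, tokens = [], ["id", "+", "id"]
shift(stack, tokens)        # stack is now ["id"]
reduce(stack, "F", ["id"])  # reduce by F -> id; stack is now ["F"]
```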
One way to think about LR parsing is that the parse tree for a given input is built, starting at the leaves and working up towards the root. More precisely, a reverse rightmost derivation is constructed.
Recall that a derivation (using a given grammar) is performed as follows: start with the start nonterminal, then repeatedly choose some nonterminal X in the current string and replace it with the right-hand side of a production X $\longrightarrow$ $\alpha$, until no nonterminals remain.
A rightmost derivation is one in which the rightmost nonterminal is always the one chosen.
Rightmost Derivation

CFG
$E$ $\longrightarrow$ $E$ + $T$ | $T$
$T$ $\longrightarrow$ $T$ * $F$ | $F$
$F$ $\longrightarrow$ id | ( $E$ )
Rightmost derivation (of the input id + id * id)

$E$ $\Longrightarrow$ $E$ + $T$ $\Longrightarrow$ $E$ + $T$ * $F$ $\Longrightarrow$ $E$ + $T$ * id $\Longrightarrow$ $E$ + $F$ * id $\Longrightarrow$ $E$ + id * id $\Longrightarrow$ $T$ + id * id $\Longrightarrow$ $F$ + id * id $\Longrightarrow$ id + id * id
Note that both the rightmost derivation and the bottom-up parse have 8 steps. Step 1 of the derivation corresponds to step 8 of the parse; step 2 of the derivation corresponds to step 7 of the parse; etc. Each step of building the parse tree (adding a new nonterminal as the parent of some existing subtrees) is called a reduction (that's where the "reduce" part of "shift-reduce" parsing comes from).
The difference between SLR, LALR, and LR parsers is in the tables that they use. Those tables use different techniques to determine when to do a reduce step, and, if there is more than one grammar rule with the same right-hand side, which left-hand-side nonterminal to push.
Example: a bottom-up parse of the input id + id * id
Stack | Input | Action
 | id + id * id | shift (id)
id | + id * id | reduce by $F$ $\longrightarrow$ id
$F$ | + id * id | reduce by $T$ $\longrightarrow$ $F$
$T$ | + id * id | reduce by $E$ $\longrightarrow$ $T$
$E$ | + id * id | shift (+)
$E$ + | id * id | shift (id)
$E$ + id | * id | reduce by $F$ $\longrightarrow$ id
$E$ + $F$ | * id | reduce by $T$ $\longrightarrow$ $F$
$E$ + $T$ | * id | shift (*)
$E$ + $T$ * | id | shift (id)
$E$ + $T$ * id | | reduce by $F$ $\longrightarrow$ id
$E$ + $T$ * $F$ | | reduce by $T$ $\longrightarrow$ $T$ * $F$
$E$ + $T$ | | reduce by $E$ $\longrightarrow$ $E$ + $T$
$E$ | | accept
(NOTE: the top of stack is to the right; the reverse rightmost derivation is formed by concatenating the stack with the remaining input at each reduction step)
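This note can be checked mechanically. The following Python sketch (an illustration; the action list is transcribed from the table above) replays the parse and records the sentential form, stack plus remaining input, just before each reduction; reading the recorded forms in reverse yields exactly the rightmost derivation:

```python
# Replay the shift-reduce trace for "id + id * id" and collect the
# sentential form (stack + remaining input) just before each reduction.
actions = [
    ("shift",), ("reduce", "F", ["id"]), ("reduce", "T", ["F"]),
    ("reduce", "E", ["T"]), ("shift",), ("shift",),
    ("reduce", "F", ["id"]), ("reduce", "T", ["F"]), ("shift",),
    ("shift",), ("reduce", "F", ["id"]),
    ("reduce", "T", ["T", "*", "F"]), ("reduce", "E", ["E", "+", "T"]),
]

stack, tokens = [], ["id", "+", "id", "*", "id"]
forms = []
for act in actions:
    if act[0] == "shift":
        stack.append(tokens.pop(0))
    else:
        _, lhs, rhs = act
        forms.append(" ".join(stack + tokens))  # form before reducing
        del stack[-len(rhs):]
        stack.append(lhs)
forms.append(" ".join(stack + tokens))          # final form: just E

# Reversing the recorded forms gives the rightmost derivation.
rightmost = [
    "E", "E + T", "E + T * F", "E + T * id", "E + F * id",
    "E + id * id", "T + id * id", "F + id * id", "id + id * id",
]
assert list(reversed(forms)) == rightmost
```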
As mentioned above, the symbols pushed onto the parser's stack are not actually terminals and nonterminals. Instead, they are the states of a finite-state machine that represents the parsing process (more on this soon).
All LR Parsers use two tables: the action table and the goto table. The action table is indexed by the top-of-stack symbol and the current token, and it tells which of the four actions to perform: shift, reduce, accept, or reject. The goto table is used during a reduce action as explained below.
Above we said that a shift action means to push the current token onto the stack. In fact, we actually push a state symbol onto the stack. Each "shift" action in the action table includes the state to be pushed.
Above, we also said that when we reduce using the grammar rule A $\longrightarrow$ $\alpha$, we pop $\alpha$ off of the stack and then push A. In fact, if $\alpha$ contains N symbols, we pop N states off of the stack. We then use the goto table to determine what to push: the goto table is indexed by a state t and a nonterminal A, where t is the state on top of the stack after popping N times.
Here's pseudo code for the parser's main loop:
push initial state s0
a = scan()
do forever
    t = top-of-stack (state) symbol
    switch action[t, a] {
        case shift s:
            push(s)
            a = scan()
        case reduce by A → alpha:
            for i = 1 to length(alpha) do pop() end
            t = top-of-stack symbol
            push(goto[t, A])
        case accept:
            return( SUCCESS )
        case error:
            call the error handler
            return( FAILURE )
    }
end do
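To make the loop concrete, here is a runnable Python sketch of the same algorithm. The action and goto tables below are hand-built for the small parameter-list grammar used later in these notes (plist → ( idlist ), idlist → id | idlist id); treat the specific table entries as an illustration rather than something derived in the text so far:

```python
# Table-driven LR parsing loop, following the pseudocode above.
# action[state, token] is ("shift", s), ("reduce", lhs, rhs_len), or ("accept",);
# goto_[state, nonterminal] gives the state to push after a reduction.
action = {
    (0, "("): ("shift", 2),
    (1, "$"): ("accept",),
    (2, "id"): ("shift", 4),
    (3, ")"): ("shift", 5), (3, "id"): ("shift", 6),
    (4, ")"): ("reduce", "idlist", 1), (4, "id"): ("reduce", "idlist", 1),
    (5, "$"): ("reduce", "plist", 3),
    (6, ")"): ("reduce", "idlist", 2), (6, "id"): ("reduce", "idlist", 2),
}
goto_ = {(0, "plist"): 1, (2, "idlist"): 3}

def parse(tokens):
    tokens = tokens + ["$"]      # end-of-file marker
    stack = [0]                  # push initial state s0
    a = tokens.pop(0)            # a = scan()
    while True:
        t = stack[-1]            # top-of-stack state symbol
        act = action.get((t, a))
        if act is None:
            return "FAILURE"     # empty table entry: reject
        if act[0] == "shift":
            stack.append(act[1])
            a = tokens.pop(0)
        elif act[0] == "reduce":
            _, lhs, n = act
            for _ in range(n):   # pop one state per right-hand-side symbol
                stack.pop()
            stack.append(goto_[(stack[-1], lhs)])
        else:
            return "SUCCESS"     # accept
```

For example, parse(["(", "id", "id", ")"]) returns "SUCCESS", while parse(["(", ")"]) returns "FAILURE".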
Remember, all LR parsers use this same basic algorithm. As mentioned above, for all LR parsers, the states that are pushed onto the stack represent the states in an underlying finite-state machine. Each state represents "where we might be" in a parse; each "place we might be" is represented (within the state) by an item. What's different for the different LR parsers is how the items (and thus the states) are defined, and how the reduce entries of the action table are filled in.
SLR means simple LR; it is the weakest member of the LR family (i.e., every SLR grammar is also LALR and LR, but not vice versa). To understand SLR parsing we'll use a new example grammar (a very simple grammar for parameter lists):
$PList$ $\longrightarrow$ ( $IDList$ )
$IDList$ $\longrightarrow$ id | $IDList$ id
Building the Action and Goto Tables for an SLR Parser
Definition of an SLR item: an item is a grammar production with a dot (.) somewhere in its right-hand side; the dot marks how much of the right-hand side we have already seen.
The item "PList $\longrightarrow$ . lparens IDList rparens" can be thought of as meaning "we may be parsing a PList, but so far we haven't seen anything".
The item "PList $\longrightarrow$ lparens . IDList rparens" means "we may be parsing a PList, and so far we've seen a lparens".
The item "PList $\longrightarrow$ lparens IDList . rparens" means "we may be parsing a PList, and so far we've seen a lparens and parsed an IDList".
We need 2 operations on sets of items: Closure and Goto
Closure

To compute Closure($I$), where $I$ is a set of items:

1. Initially, Closure($I$) contains every item in $I$.
2. While there exists an item in Closure($I$) of the form X $\longrightarrow$ $\alpha$ . B $\beta$, and a production B $\longrightarrow$ $\gamma$ such that the item B $\longrightarrow$ . $\gamma$ is not yet in Closure($I$): add B $\longrightarrow$ . $\gamma$ to Closure($I$).
The idea is that the item "X $\longrightarrow$ $\alpha$ . B $\beta$" means "we may be trying to parse an X, and so far we've parsed all of $\alpha$, so the next thing we'll parse may be a B". And the item "B $\longrightarrow$ . $\gamma$" also means that the next thing we'll parse may be a B (in particular, a B that derives $\gamma$), but we haven't seen any part of it yet.
Example 1: Closure({ PList $\longrightarrow$ . lparens IDList rparens })
We'll begin by putting the initial item into the Closure (Step 1 above). So far, our set is: { PList $\longrightarrow$ . lparens IDList rparens}
Now, we will do step 2, checking the set we have built for productions of the form B $\longrightarrow$ $\gamma$, where the item B $\longrightarrow$ . $\gamma$ is not in the set. There's only one item that we can check, and the symbol to the immediate right of the dot is lparens, which is a terminal symbol. Obviously, there are no productions of the form B $\longrightarrow$ $\gamma$ with a terminal symbol on the left-hand side, so there's nothing else to check.
With Step 2 exhausted, we can return the set we've built up: Closure({ PList $\longrightarrow$ . lparens IDList rparens }) = { PList $\longrightarrow$ . lparens IDList rparens }
Example 2: Closure({ PList $\longrightarrow$ lparens . IDList rparens })

As with the previous example, we put the initial item into the Closure. So far, our set is { PList $\longrightarrow$ lparens . IDList rparens }
For step 2, we begin by selecting the only item in our working set, PList $\longrightarrow$ lparens . IDList rparens. We now look for productions with a left-hand side of IDList, since that's the symbol to the immediate right of the dot. One production of this form is "IDList $\longrightarrow$ id". Since the item IDList $\longrightarrow$ . id is not in the Closure yet, we add it. Our set so far is { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id}
We know that the item that we just added, IDList $\longrightarrow$ . id will not yield any more items, because the symbol immediately to the right of the dot is a terminal. However, we still haven't captured every production with IDList on the left-hand side, which we need to check because of our initial item. The grammar also has the production IDList $\longrightarrow$ IDList id, so we add the item "IDList $\longrightarrow$ . IDList id" to the closure. At this point, our working set is { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id}
The new item that we added has IDList immediately to the right of the dot. Fortunately, we've already added an item for every production of the grammar with IDList on the left-hand side. Thus, we can pronounce our working set complete:
Closure({ PList $\longrightarrow$ lparens . IDList rparens }) = { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id}
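The Closure computation can be sketched in Python. Here items are represented as (lhs, rhs, dot) triples, with dot giving the position of the dot within the right-hand side; this representation is an assumption of the sketch, not part of the notes:

```python
# Items are (lhs, rhs, dot): e.g. PList -> lparens . IDList rparens
# is ("PList", ("lparens", "IDList", "rparens"), 1).
GRAMMAR = [
    ("PList", ("lparens", "IDList", "rparens")),
    ("IDList", ("id",)),
    ("IDList", ("IDList", "id")),
]

def closure(items, grammar=GRAMMAR):
    result = set(items)                        # step 1: start with I itself
    changed = True
    while changed:                             # step 2: repeat until stable
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs):                 # B = symbol just right of the dot
                b = rhs[dot]
                for (l, r) in grammar:
                    if l == b and (l, r, 0) not in result:
                        result.add((l, r, 0))  # add B -> . gamma
                        changed = True
    return result

# Example 1: the closure adds nothing (lparens is a terminal).
ex1 = closure({("PList", ("lparens", "IDList", "rparens"), 0)})
# Example 2: the closure adds both IDList productions.
ex2 = closure({("PList", ("lparens", "IDList", "rparens"), 1)})
```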
Now that we have defined the Closure of a set of items, we can use it to define the Goto operation. The basic idea is that $I$ tells us where we might be in the parse, and Goto($I$, X) tells us where we might be after parsing an X. Here is the definition:

Goto($I$, X) = Closure({ A $\longrightarrow$ $\alpha$ X . $\beta$ | the item A $\longrightarrow$ $\alpha$ . X $\beta$ is in $I$ })
Example 1: Goto($I$1, X1), where $I$1 = { PList $\longrightarrow$ . lparens IDList rparens } and X1 = lparens.

Let us begin by defining an intermediate set $\mathcal{W}$. We build $\mathcal{W}$ by taking each item from $I$1 that has X1 immediately to the right of the dot (of which there is only one) and advancing the dot past X1.

Thus, $\mathcal{W}$ = { PList $\longrightarrow$ lparens . IDList rparens }
With $\mathcal{W}$ in hand, we are ready to perform the Goto operation by computing Closure($\mathcal{W}$) = Closure( { PList $\longrightarrow$ lparens . IDList rparens} ) . We already computed this closure above as { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id}, so we are done:
Goto($I$1, X1) = { PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id}
Example 2: Goto($I$2, X2), where

$I$2 = Goto($I$1, X1)
X2 = IDList

The inner Goto operation is the result of Example 1, so we can substitute that result directly. Expanded, the problem statement is:

Goto({ PList $\longrightarrow$ lparens . IDList rparens , IDList $\longrightarrow$ . id, IDList $\longrightarrow$ . IDList id }, IDList)

We again build the intermediate set $\mathcal{W}$ by advancing the dot past X2 = IDList in every item that has IDList immediately to the right of the dot:
Item in $I$2 of the form A $\longrightarrow$ $\alpha$ . X $\beta$ | Item of the form A $\longrightarrow$ $\alpha$ X . $\beta$
PList $\longrightarrow$ lparens . IDList rparens | PList $\longrightarrow$ lparens IDList . rparens
IDList $\longrightarrow$ . IDList id | IDList $\longrightarrow$ IDList . id
Thus, $\mathcal{W}$ = { PList $\longrightarrow$ lparens IDList . rparens, IDList $\longrightarrow$ IDList . id }
We can now take the closure of $\mathcal{W}$ to complete the operation. This turns out to be trivial: in no item of $\mathcal{W}$ is the dot immediately followed by a nonterminal, so the closure yields no additional items. Thus, Goto($I$2, X2) = Closure($\mathcal{W}$) = $\mathcal{W}$ = { PList $\longrightarrow$ lparens IDList . rparens, IDList $\longrightarrow$ IDList . id }
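Goto can be sketched the same way: advance the dot past X in every item that permits it, then take the Closure of the result. The code below (illustrative, reusing the (lhs, rhs, dot) item representation) reproduces the two worked Goto examples:

```python
GRAMMAR = [
    ("PList", ("lparens", "IDList", "rparens")),
    ("IDList", ("id",)),
    ("IDList", ("IDList", "id")),
]

def closure(items, grammar=GRAMMAR):
    result = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs):
                for (l, r) in grammar:
                    if l == rhs[dot] and (l, r, 0) not in result:
                        result.add((l, r, 0))
                        changed = True
    return result

def goto(items, x, grammar=GRAMMAR):
    # W: items of I with the dot immediately before X, dot advanced past X
    w = {(lhs, rhs, dot + 1)
         for (lhs, rhs, dot) in items
         if dot < len(rhs) and rhs[dot] == x}
    return closure(w, grammar)

i1 = closure({("PList", ("lparens", "IDList", "rparens"), 0)})
i2 = goto(i1, "lparens")     # Goto(I1, X1) from Example 1
result = goto(i2, "IDList")  # Goto(I2, X2) from Example 2
```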
Our ultimate goal is to create the parser's Action and Goto tables. Those tables are built from an FSM whose states are sets of items. To build the FSM:

1. Add a new production S' $\longrightarrow$ S, where S is the start nonterminal of the grammar, and make the start state Closure({ S' $\longrightarrow$ . S }).
2. For each state $I$ and each grammar symbol X such that Goto($I$, X) is nonempty: add the state Goto($I$, X), if it is not already present, and add an edge labeled X from $I$ to Goto($I$, X).
3. Repeat step 2 until no new states or edges can be added.
Example
grammar

S' → plist
plist → ( idlist )
idlist → ID
idlist → idlist ID
Corresponding SLR FSM
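A sketch of this construction in Python for the augmented grammar above (the worklist organization is an assumption of the sketch; for this grammar the construction yields seven states):

```python
# Build the SLR FSM (the collection of item sets plus labeled edges)
# for the augmented grammar S' -> plist, plist -> ( idlist ),
# idlist -> ID, idlist -> idlist ID.
GRAMMAR = [
    ("S'", ("plist",)),
    ("plist", ("(", "idlist", ")")),
    ("idlist", ("ID",)),
    ("idlist", ("idlist", "ID")),
]

def closure(items):
    result = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs):
                for (l, r) in GRAMMAR:
                    if l == rhs[dot] and (l, r, 0) not in result:
                        result.add((l, r, 0))
                        changed = True
    return frozenset(result)     # hashable, so states can go in a set

def goto(items, x):
    return closure({(lhs, rhs, dot + 1) for (lhs, rhs, dot) in items
                    if dot < len(rhs) and rhs[dot] == x})

def build_fsm():
    start = closure({("S'", ("plist",), 0)})
    states, edges, work = {start}, {}, [start]
    while work:
        state = work.pop()
        # every symbol that appears immediately after a dot in this state
        symbols = {rhs[dot] for (_, rhs, dot) in state if dot < len(rhs)}
        for x in symbols:
            nxt = goto(state, x)
            edges[(state, x)] = nxt
            if nxt not in states:
                states.add(nxt)
                work.append(nxt)
    return states, edges

states, edges = build_fsm()
```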
Given the FSM, here's how to build the Action and Goto tables:

- If there is an edge labeled with terminal a from state $I$ to state $J$, then Action[$I$, a] = shift $J$.
- If state $I$ contains an item A $\longrightarrow$ $\alpha$ . (with the dot at the far right), then Action[$I$, a] = reduce by A $\longrightarrow$ $\alpha$ for every terminal a in FOLLOW(A).
- If state $I$ contains the item S' $\longrightarrow$ S . , then Action[$I$, $] = accept.
- Every empty Action entry means reject (error).
- If there is an edge labeled with nonterminal A from state $I$ to state $J$, then Goto[$I$, A] = $J$.
Example
FOLLOW(idlist) = { ), ID }
FOLLOW(plist) = { $ }
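These FOLLOW sets can be reproduced mechanically. The sketch below implements the standard FOLLOW computation for this grammar; it takes a shortcut in FIRST that is safe here only because the grammar has no $\varepsilon$-productions:

```python
GRAMMAR = [
    ("S'", ("plist",)),
    ("plist", ("(", "idlist", ")")),
    ("idlist", ("ID",)),
    ("idlist", ("idlist", "ID")),
]
NONTERMS = {lhs for (lhs, _) in GRAMMAR}

def first(symbol):
    """FIRST of a single grammar symbol (no epsilon productions here)."""
    if symbol not in NONTERMS:
        return {symbol}          # FIRST of a terminal is itself
    result = set()
    for (lhs, rhs) in GRAMMAR:
        if lhs == symbol and rhs[0] != symbol:   # skip trivial left recursion
            result |= first(rhs[0])
    return result

def follow_sets():
    follow = {nt: set() for nt in NONTERMS}
    follow["S'"] = {"$"}         # end-of-file follows the start nonterminal
    changed = True
    while changed:
        changed = False
        for (lhs, rhs) in GRAMMAR:
            for i, sym in enumerate(rhs):
                if sym in NONTERMS:
                    # what can follow sym: FIRST of the next symbol, or
                    # FOLLOW(lhs) if sym ends the right-hand side
                    new = first(rhs[i + 1]) if i + 1 < len(rhs) else follow[lhs]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True
    return follow

follow = follow_sets()
```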
Not every grammar is SLR(1). If a grammar is not SLR(1), there will be a conflict in the SLR Action table. There is a conflict if some entry of the table contains more than one action. There are two possible kinds of conflicts:
A shift/reduce conflict means that it is not possible to determine, based only on the top-of-stack state symbol and the current token, whether to shift or to reduce. This kind of conflict arises when one state contains two items of the following form: X $\longrightarrow$ $\alpha$ . a $\beta$ (which calls for shifting the terminal a) and Y $\longrightarrow$ $\gamma$ . with a in FOLLOW(Y) (which calls for reducing by Y $\longrightarrow$ $\gamma$).
A reduce/reduce conflict means that it is not possible to determine, based only on the top-of-stack state symbol and the current token, whether to reduce by one grammar rule or by another grammar rule. This kind of conflict arises when one state contains two items of the form X $\longrightarrow$ $\alpha$ . and Y $\longrightarrow$ $\beta$ . , where FOLLOW(X) and FOLLOW(Y) both contain some terminal a.
A non-SLR(1) grammar
This grammar causes a shift/reduce conflict
grammar
S' $\longrightarrow$ S

relevant part of the FSM
This grammar causes a reduce/reduce conflict:
grammar
Follow sets
Follow(A) = { d, e }

relevant part of the FSM