- Parse tree
- First and follow sets
- Top-down parsing
- Bottom-up parsing
Parsing is the process of verifying that a sequence of tokens conforms to the rules of a formal grammar. It also involves constructing a (sometimes implicit) parse tree. A program that performs parsing is known as a parser.
Note: Some compilers combine lexing and parsing into a single phase.
There are three general types of parser covered in these notes: LL parsers (recursive descent and table-driven predictive parsers), LR(0) parsers, and SLR parsers.
Top-down parsers build parse trees from the root to the leaves, whereas bottom-up parsers build parse trees by starting at the leaves and ending at the root [1, P. 192].
In the context of parsing, a grammar is a set of productions that describes how to form valid strings according to a formal language’s syntax.
Note: grammars can’t describe all aspects of a language, e.g. a context-free grammar can’t define that an identifier must be defined before it’s used [1, P. 209].
Terminals are the lexical elements used in specifying the rules for a grammar [1, P. 197].
Nonterminals are syntactic variables that denote sets of strings [1, P. 197].
A production is a rewrite rule specifying a symbol substitution that can be made.
A production head is a sequence of symbols (containing at least one nonterminal). A production body is a sequence of terminals and nonterminals that can replace the head.
A CFG (Context-Free Grammar) is a formal grammar where productions can be applied regardless of the context of a nonterminal. In other words, in a CFG every production head is a single nonterminal.
Formally, a CFG consists of:
- A finite set of nonterminals N.
- A finite set of terminals T, disjoint from N.
- A start symbol S (S ∈ N).
- A finite set of productions P, where each production has the form A → α, with A ∈ N and α ∈ (N ∪ T)*.
Every construct that can be described by a regular expression can be described by a CFG, but the converse is not true [1, P. 205].
A language that can be generated by a CFG is a context-free language [1, P. 200].
If two grammars generate the same language then the grammars are said to be equivalent [1, P. 200].
In these notes, productions are written in the following form:

*expression* → *expression* + *term*

Other productions would then define the nonterminals *expression* and *term*, which are italicized.
A set of productions with a common head can be written using the vertical bar (|), e.g. *expression* → *expression* + *term* | *term*.
A derivation of a string for a grammar is a sequence of rule applications (productions) that transform the start symbol into a string.
A derivation can be represented as a tree (a parse tree).
A derivation proves that a string is an instance of an expression [1, P. 200].
The symbol ⇒* means “derives in zero or more steps”.
The symbol ⇒⁺ means “derives in one or more steps”.
A sentential form is a string that is derivable from the start symbol. Formally, if S ⇒* α, where S is the start symbol of a grammar G, then α is a sentential form of G.
A sentence of G is a sentential form with no nonterminals. The language generated by a grammar G, written L(G), is its set of sentences [1, P. 200].
In leftmost derivations, the leftmost nonterminal in each sentential form is chosen for expansion (denoted ⇒_lm).
In rightmost derivations (canonical derivations), the rightmost nonterminal is chosen (denoted ⇒_rm).
At each step in a derivation, there are two decisions to be made:
- Which nonterminal should be replaced.
- Which production should replace it.
A grammar is left-recursive if it has a nonterminal A such that there is a derivation A ⇒⁺ Aα for some string α [1, P. 212].
Immediate left recursion is where there is a production A → Aα.
Top-down parsing cannot handle left-recursive grammars, so the recursion must first be eliminated. Immediate left recursion A → Aα | β can be eliminated with the rule:

A → βA′
A′ → αA′ | ε
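As a sketch, this rewrite can be automated. The helper below is hypothetical (production bodies are tuples of symbol names, and `"eps"` marks the empty string):

```python
def eliminate_immediate_left_recursion(head, bodies):
    """Rewrite A -> A alpha | beta as A -> beta A', A' -> alpha A' | eps.

    `bodies` is a list of tuples of symbol names; "eps" marks the empty string.
    """
    new = head + "'"
    recursive = [b[1:] for b in bodies if b and b[0] == head]      # the alphas
    nonrecursive = [b for b in bodies if not b or b[0] != head]    # the betas
    if not recursive:
        return {head: bodies}       # nothing to eliminate
    return {
        head: [b + (new,) for b in nonrecursive],
        new: [a + (new,) for a in recursive] + [("eps",)],
    }

# E -> E + T | T   becomes   E -> T E',  E' -> + T E' | eps
print(eliminate_immediate_left_recursion("E", [("E", "+", "T"), ("T",)]))
```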
Left factoring is the process of rewriting productions that share a common prefix. It can be done to make a grammar suitable for predictive parsing [1, P. 214].
For example, A → αβ₁ | αβ₂ becomes:

A → αA′
A′ → β₁ | β₂
An ambiguous grammar produces more than one parse tree for a given sentence [1, P. 202].
Ambiguous grammars are bad since they leave the meaning of the program undefined.
One way to remove ambiguity is to rewrite the grammar unambiguously. Another approach is to use disambiguating declarations. The most popular forms are associativity declarations and precedence declarations, e.g. `%left '+'` declares `+` to be left-associative in bison.
An LL(k) grammar is a grammar that can be parsed by an LL parser with k tokens of lookahead, e.g. an LL(1) grammar can be parsed by an LL parser with 1 token of lookahead.
An LL parser is a top-down parser that parses from left to right, constructing a left-most derivation.
A grammar is guaranteed not to be LL(1) if it is any of:
- Not left factored
- Left recursive
Note that even a grammar that is left factored and not left recursive is still not guaranteed to be LL(1).
An LR(k) grammar is a grammar that can be parsed by an LR parser with k tokens of lookahead.
An LR parser is a bottom-up parser that reads input from left to right, constructing a rightmost derivation in reverse.
BNF (Backus-Naur Form) is a syntax for context-free grammars.
```
<addition> ::= <number> + <number>
<number>  ::= <sign> <integer> | <integer>
<integer> ::= <digit> | <digit> <integer>
<digit>   ::= 0|1|2|3|4|5|6|7|8|9
<sign>    ::= + | -
```
A parse tree is a graphical representation of a derivation in which each non-leaf node represents the application of a production [1, P. 201].
The leaves of a parse tree are labelled by nonterminals or terminals [1, P. 201].
Parse trees ignore the order in which symbols in sentential form are replaced [1, P. 202].
An AST (Abstract Syntax Tree) is a tree representation of the syntactic structure of a program.
ASTs are similar to parse trees except that an AST’s interior nodes represent programming constructs, whereas a parse tree’s interior nodes represent nonterminals [1, P. 69].
Parse trees are sometimes called concrete syntax trees to distinguish them from ASTs.
An AST is a key data structure in a compiler.
First and follow sets are used to help construct top-down and bottom-up parsers.
To compute FIRST(X) for all grammar symbols X:

- If X is a terminal, FIRST(X) = {X}.
- If X is a nonterminal and X → Y₁Y₂…Yₖ is a production for some k ≥ 1, add a to FIRST(X) if a ∈ FIRST(Yᵢ) for some i, and ε ∈ FIRST(Y₁), …, FIRST(Yᵢ₋₁). If ε ∈ FIRST(Yⱼ) for all j = 1, …, k, then add ε to FIRST(X).
- If X → ε is a production, then add ε to FIRST(X).
FOLLOW(A) for a nonterminal A is the set of terminals that can appear immediately to the right of A in some sentential form.
To compute FOLLOW(A) for all nonterminals A:

- Place $ in FOLLOW(S) (where S is the start symbol and $ is the input right end marker).
- If there is a production A → αBβ, add everything in FIRST(β) to FOLLOW(B) (except ε).
- If there is a production A → αB, or a production A → αBβ where ε ∈ FIRST(β), add everything in FOLLOW(A) to FOLLOW(B).
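These rules can be run to a fixed point. A minimal sketch, assuming a toy grammar E → T E′, E′ → + T E′ | ε, T → id, with `"eps"` standing for ε and `"$"` as the end marker:

```python
EPS = "eps"

# Assumed toy grammar: E -> T E',  E' -> + T E' | eps,  T -> id
grammar = {
    "E":  [("T", "E'")],
    "E'": [("+", "T", "E'"), (EPS,)],
    "T":  [("id",)],
}
nonterminals = set(grammar)

def first_of_string(symbols, first):
    """FIRST of a sequence of grammar symbols."""
    result = set()
    for X in symbols:
        f = first[X] if X in nonterminals else {X}
        result |= f - {EPS}
        if EPS not in f:
            return result
    result.add(EPS)            # every symbol in the sequence can derive eps
    return result

# FIRST: apply the rules until no set changes.
first = {A: set() for A in nonterminals}
changed = True
while changed:
    changed = False
    for A, bodies in grammar.items():
        for body in bodies:
            f = first_of_string(body, first)
            if not f <= first[A]:
                first[A] |= f
                changed = True

# FOLLOW: $ goes in FOLLOW(start); then apply the two rules to a fixed point.
follow = {A: set() for A in nonterminals}
follow["E"].add("$")
changed = True
while changed:
    changed = False
    for A, bodies in grammar.items():
        for body in bodies:
            for i, B in enumerate(body):
                if B not in nonterminals:
                    continue
                f = first_of_string(body[i + 1:], first)
                add = (f - {EPS}) | (follow[A] if EPS in f else set())
                if not add <= follow[B]:
                    follow[B] |= add
                    changed = True

print(first)   # e.g. FIRST(E') contains '+' and 'eps'
print(follow)  # e.g. FOLLOW(T) contains '+' and '$'
```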
Top-down parsing is the process of producing a parse tree from an input sequence starting from the root by following the rewriting rules of a grammar [1, P. 217].
At each step of a top-down algorithm, you must determine the production to be applied for a nonterminal. Once a production has been chosen, the rest of the parsing involves matching the production body with the input string [1, P. 217].
A recursive descent parser is a top-down LL parser.
A recursive descent parser implements a procedure for each nonterminal. See Recursive descent parser Wiki article for full details.
A left-recursive grammar can cause a recursive descent parser to enter an infinite loop [1, P. 220].
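For illustration, here is a minimal recursive descent parser for an assumed grammar expr → term ('+' term)*, term → DIGIT, with one procedure per nonterminal; the iteration replaces the left-recursive form expr → expr + term:

```python
# A minimal recursive descent parser for the assumed grammar
#   expr -> term ('+' term)*      (iteration instead of left recursion)
#   term -> DIGIT
class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def expect(self, tok):
        if self.peek() != tok:
            raise SyntaxError(f"expected {tok!r}, got {self.peek()!r}")
        self.pos += 1

    def expr(self):                 # one procedure per nonterminal
        node = self.term()
        while self.peek() == "+":
            self.expect("+")
            node = ("+", node, self.term())   # left-associative AST node
        return node

    def term(self):
        tok = self.peek()
        if tok is None or not tok.isdigit():
            raise SyntaxError(f"expected a digit, got {tok!r}")
        self.pos += 1
        return int(tok)

print(Parser(["1", "+", "2", "+", "3"]).expr())  # ('+', ('+', 1, 2), 3)
```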
Predictive parsing is a special case of recursive descent parsing that doesn’t require backtracking [1, P. 222].
Predictive parsers work on LL(1) grammars since the proper production to apply for a nonterminal can be determined by looking at only the current input symbol [1, P. 222].
Predictive parsing can be implemented using a predictive parsing table. A predictive parsing table is a 2D array T[A, a] where A is a nonterminal, a is a terminal (or the special end symbol $), and an entry T[A, a] is a production.
An LL(1) grammar will produce a parsing table where each entry uniquely identifies a production [1, P. 225].
A table T can be constructed for a CFG G with the following rules:

- For each production A → α in G:
  - For each terminal a in FIRST(α), add A → α to T[A, a].
  - If ε ∈ FIRST(α), for each terminal b in FOLLOW(A), add A → α to T[A, b]. If ε ∈ FIRST(α) and $ ∈ FOLLOW(A), add A → α to T[A, $].
The following pseudocode implements a table-driven predictive parser:
```
let a be the first symbol of w
let X be the top stack symbol
while X != $:
    if X == a:
        pop the stack and let a be the next symbol of w
    else if X is a terminal:
        error()
    else if T[X, a] is an error entry:
        error()
    else if T[X, a] == X -> Y_1 Y_2 ... Y_n:
        output the production X -> Y_1 Y_2 ... Y_n
        pop the stack
        push Y_n, ..., Y_2, Y_1 onto the stack, with Y_1 on top
    let X be the top stack symbol
```
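The pseudocode can be sketched in Python for an assumed toy LL(1) grammar E → id E′, E′ → + id E′ | ε (the table entries below are hand-built, not computed):

```python
EPS = "eps"

# Hand-built LL(1) table for the assumed grammar E -> id E', E' -> + id E' | eps
table = {
    ("E",  "id"): ["id", "E'"],
    ("E'", "+"):  ["+", "id", "E'"],
    ("E'", "$"):  [EPS],
}

def parse(tokens):
    tokens = tokens + ["$"]
    stack = ["$", "E"]              # start symbol above the end marker
    i = 0
    output = []                     # the productions applied, in order
    while stack[-1] != "$":
        X, a = stack[-1], tokens[i]
        if X == a:                  # X is a terminal matching the input
            stack.pop()
            i += 1
        elif (X, a) in table:       # expand nonterminal X using the table
            body = table[(X, a)]
            output.append((X, body))
            stack.pop()
            for sym in reversed(body):
                if sym != EPS:      # eps pushes nothing
                    stack.append(sym)
        else:
            raise SyntaxError(f"unexpected token {a!r}")
    if tokens[i] != "$":
        raise SyntaxError("trailing input")
    return output

print(parse(["id", "+", "id"]))
```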
Bottom-up parsing reduces a string to the start symbol by inverting productions (begin with input token sequence, end with start symbol).
Most parser generator tools use bottom-up parsing.
Shift-reduce parsing is a form of bottom-up parsing.
In shift-reduce parsing, a string is split into two substrings. The left substring contains terminals and nonterminals, and the right substring is unexamined. The division can be represented by a bar (|).
There are two primary moves: shift and reduce. Shift reads one token of input. Reduce applies a reduction to the right end of the left string.
A reduction is the inverse of a derivation step: it replaces a substring matching a production body with the production's head nonterminal. A complete reduction sequence, reversed, produces a rightmost derivation [1, P. 235].
A shift-reduce parser is normally implemented with a stack, where shift pushes a terminal onto the stack and reduce pops symbols from the stack and pushes a nonterminal onto the stack.
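A toy stack-based trace, assuming the grammar E → E + id | id; greedily reducing whenever the stack top matches a production body happens to work for this grammar, though a real shift-reduce parser uses handles and lookahead to decide:

```python
# A toy shift-reduce trace for the assumed grammar E -> E + id | id.
def trace(tokens):
    productions = [("E", ["E", "+", "id"]), ("E", ["id"])]
    stack, moves = [], []
    i = 0
    while True:
        for head, body in productions:          # try to reduce
            if stack[-len(body):] == body:
                del stack[-len(body):]          # pop the body...
                stack.append(head)              # ...and push the head
                moves.append(f"reduce {head} -> {' '.join(body)}")
                break
        else:                                   # no reduction applies
            if i == len(tokens):
                break
            stack.append(tokens[i])             # shift one token
            moves.append(f"shift {tokens[i]}")
            i += 1
    return stack, moves

stack, moves = trace(["id", "+", "id"])
print(stack)   # a successful parse leaves only the start symbol: ['E']
print(moves)
```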
A handle is a substring that matches the body of a production, and whose reduction allows further reductions back to the start symbol. Handles only appear on the top of the stack, never inside [1, Pp. 235-7].
An item is a production with a dot (.) somewhere in the production body that indicates how much of a production has been seen at given point in the parsing process [1, P. 242].
The items (also known as LR(0) items) for a production A → XYZ are:

A → ·XYZ
A → X·YZ
A → XY·Z
A → XYZ·

The production A → ε generates the single item A → · [1, P. 242].
A viable prefix is a prefix of a right-sentential form that does not continue past the right end of the rightmost handle of that sentential form. As long as a viable prefix is on the stack, no parsing error has been detected [1, P. 256].
For any grammar, the set of viable prefixes is a regular language and therefore can be validated by a finite automaton. You can build an NFA that takes the stack as its input and either accepts or rejects it. The NFA can then be converted to a DFA using the powerset construction.
The rules to construct an NFA that recognizes the viable prefixes of a grammar G:

- Add a dummy production S′ → S to G (where S is the start symbol).
- The NFA states are the items of G (including the extra production).
- For an item E → α·Xβ, add a transition on X from E → α·Xβ to E → αX·β (X is a nonterminal or terminal).
- For an item E → α·Xβ and a production X → γ (X is a nonterminal), add an ε-transition from E → α·Xβ to X → ·γ.
- Every state is an accepting state.
- The start state is S′ → ·S.
When the NFA has been converted into a DFA, each state is a set of items representing the possible current state of the automaton. These states are known as the canonical collection of items.
An item I is valid for a viable prefix γ if the DFA, run on input γ, terminates in a state containing I.
A conflict occurs if there are multiple actions leading to a valid parse at a given step. A shift-reduce conflict is when a grammar allows both a shift and a reduce for the same item. A reduce-reduce conflict is when there are two or more possible reductions.
An LR(0) parser is a shift-reduce parser that uses zero tokens of lookahead to determine what action to take.
Let S be the state in which the DFA terminates when run on the stack, and let t be the next input token.
At each stage, an LR(0) parser will reduce by X → β if S contains the item X → β·. An LR(0) parser will shift if S contains an item X → β·tω.
Since there is zero lookahead, in any configuration of the parser there must be an unambiguous action that it can take.
An SLR parser is a type of LR parser.
An SLR parser improves on LR(0) with the modification that a reduction by X → β is only made if S contains the item X → β· and t ∈ FOLLOW(X).
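As a sketch, the available actions can be read off a state's item set; items here are hypothetical (head, body, dot) triples and FOLLOW sets are assumed precomputed. More than one resulting action signals a conflict:

```python
def actions(items, next_token, follow):
    """Collect the SLR actions available in a DFA state for the next token."""
    acts = set()
    for head, body, dot in items:
        if dot == len(body):                   # completed item X -> beta .
            if next_token in follow[head]:     # SLR: reduce only on FOLLOW(X)
                acts.add(("reduce", head, body))
        elif body[dot] == next_token:          # X -> beta . t gamma: can shift
            acts.add(("shift", next_token))
    return acts

# For the ambiguous grammar E -> E + E | id, the state containing
# { E -> E + E .,  E -> E . + E } has a shift-reduce conflict on "+":
state = {("E", ("E", "+", "E"), 3), ("E", ("E", "+", "E"), 1)}
print(actions(state, "+", {"E": {"+", "$"}}))
```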
SLR parsing algorithm:

- Let M be the DFA for the viable prefixes of G.
- Let |x₁x₂…xₙ$ be the initial configuration.
- Repeat until the configuration is S|$:
  - Let α|ω be the current configuration.
  - Run M on the current stack α.
  - If M rejects α, report a parsing error.
  - If M accepts α with items I, let a be the next input symbol.
  - Shift if X → β·aγ ∈ I.
  - Reduce by X → β if X → β· ∈ I and a ∈ FOLLOW(X).
  - Report a parsing error if neither applies.
In the case of conflicts when following this algorithm, the grammar is not an SLR grammar.
- A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman, *Compilers: Principles, Techniques, and Tools* (2nd Edition). USA: Addison-Wesley Longman Publishing Co., Inc., 2006.