Earley Parsing

In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars. The algorithm, named after its inventor Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation in 1968 (and later appeared in abbreviated, more legible form in a journal). Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general case, O(''n''^3), where ''n'' is the length of the parsed string, quadratic time for unambiguous grammars, O(''n''^2), and linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.


Earley recogniser

The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.


The algorithm

In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and ''a'' represents a terminal symbol.

Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.

Input position 0 is the position prior to input. Input position ''n'' is the position after accepting the ''n''th token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a ''state set''. Each state is a tuple (X → α • β, ''i''), consisting of
* the production currently being matched (X → α β)
* the current position in that production (visually represented by the dot •)
* the position ''i'' in the input at which the matching of this production began: the ''origin position''

(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)

A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.

The state set at input position ''k'' is called S(''k''). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: ''prediction'', ''scanning'', and ''completion''.
* ''Prediction'': For every state in S(''k'') of the form (X → α • Y β, ''j'') (where ''j'' is the origin position as above), add (Y → • γ, ''k'') to S(''k'') for every production in the grammar with Y on the left-hand side (Y → γ).
* ''Scanning'': If ''a'' is the next symbol in the input stream, for every state in S(''k'') of the form (X → α • ''a'' β, ''j''), add (X → α ''a'' • β, ''j'') to S(''k''+1).
* ''Completion'': For every state in S(''k'') of the form (Y → γ •, ''j''), find all states in S(''j'') of the form (X → α • Y β, ''i'') and add (X → α Y • β, ''i'') to S(''k'').

Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is. The algorithm accepts if (X → γ •, 0) ends up in S(''n''), where (X → γ) is the top-level rule and ''n'' the input length; otherwise it rejects.
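As a concrete (if informal) illustration, a state of this kind can be represented directly as a small data structure. The following Python sketch is not part of Earley's presentation; the names (Item, next_symbol, and so on) are choices made here for clarity:

from typing import NamedTuple, Optional, Tuple

class Item(NamedTuple):
    """One Earley state: (X -> alpha . beta, origin)."""
    head: str                 # X, the nonterminal on the left-hand side
    body: Tuple[str, ...]     # the full right-hand side, alpha beta
    dot: int                  # index of the dot within body
    origin: int               # i, the input position where matching began

    def finished(self) -> bool:
        # No symbol remains to the right of the dot.
        return self.dot == len(self.body)

    def next_symbol(self) -> Optional[str]:
        # The symbol immediately to the right of the dot, if any.
        return self.body[self.dot] if self.dot < len(self.body) else None

    def advance(self) -> "Item":
        # Move the dot one symbol to the right (used by scanning and completion).
        return self._replace(dot=self.dot + 1)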


Pseudocode

Adapted from ''Speech and Language Processing'' by Daniel Jurafsky and James H. Martin:

DECLARE ARRAY S;

function INIT(words)
    S ← CREATE_ARRAY(LENGTH(words) + 1)
    for k ← from 0 to LENGTH(words) do
        S[k] ← EMPTY_ORDERED_SET

function EARLEY_PARSE(words, grammar)
    INIT(words)
    ADD_TO_SET((γ → •S, 0), S[0])

    for k ← from 0 to LENGTH(words) do
        for each state in S[k] do  // S[k] can expand during this loop
            if not FINISHED(state) then
                if NEXT_ELEMENT_OF(state) is a nonterminal then
                    PREDICTOR(state, k, grammar)   // nonterminal
                else do
                    SCANNER(state, k, words)       // terminal
            else do
                COMPLETER(state, k)
        end
    end
    return S

procedure PREDICTOR((A → α•Bβ, j), k, grammar)
    for each (B → γ) in GRAMMAR_RULES_FOR(B, grammar) do
        ADD_TO_SET((B → •γ, k), S[k])
    end

procedure SCANNER((A → α•aβ, j), k, words)
    if k < LENGTH(words) and a ⊂ PARTS_OF_SPEECH(words[k]) then
        ADD_TO_SET((A → αa•β, j), S[k+1])
    end

procedure COMPLETER((B → γ•, x), k)
    for each (A → α•Bβ, j) in S[x] do
        ADD_TO_SET((A → αB•β, j), S[k])
    end
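The pseudocode above can be rendered in Python more or less line for line. The sketch below reuses the Item class sketched earlier and, for simplicity, matches terminals by string equality rather than by part-of-speech lookup; like the textbook formulation, it can mishandle certain grammars with nullable nonterminals. The grammar encoding (a dict from nonterminal to a list of right-hand-side tuples) is an assumption made here, not something fixed by the algorithm:

def earley_recognise(words, grammar, start):
    # grammar: dict mapping each nonterminal to a list of right-hand sides,
    # each a tuple of symbols; any symbol not in the dict is a terminal.
    # Returns the state sets S(0)..S(n) and whether the input is accepted.
    GAMMA = "<GAMMA>"                      # synthetic top-level rule GAMMA -> start
    n = len(words)
    S = [[] for _ in range(n + 1)]

    def add(k, item):
        if item not in S[k]:               # duplicate states are not added
            S[k].append(item)

    add(0, Item(GAMMA, (start,), 0, 0))    # seed S(0) with the top-level rule

    for k in range(n + 1):
        i = 0
        while i < len(S[k]):               # S(k) can grow while it is processed
            item = S[k][i]
            i += 1
            sym = item.next_symbol()
            if sym is None:                # completion: item is finished
                for other in S[item.origin]:
                    if other.next_symbol() == item.head:
                        add(k, other.advance())
            elif sym in grammar:           # prediction: sym is a nonterminal
                for rhs in grammar[sym]:
                    add(k, Item(sym, tuple(rhs), 0, k))
            elif k < n and words[k] == sym:   # scanning: sym matches the next token
                add(k + 1, item.advance())

    accepted = Item(GAMMA, (start,), 1, 0) in S[n]
    return S, accepted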


Example

Consider the following simple grammar for arithmetic expressions:

<P> ::= <S>                     # the start rule
<S> ::= <S> "+" <M> | <M>
<M> ::= <M> "*" <T> | <T>
<T> ::= "1" | "2" | "3" | "4"

With the input:

2 + 3 * 4

This is the sequence of state sets:

S(0): • 2 + 3 * 4
(1)  P → • S          (0)    # start rule
(2)  S → • S + M      (0)    # predict from (1)
(3)  S → • M          (0)    # predict from (1)
(4)  M → • M * T      (0)    # predict from (3)
(5)  M → • T          (0)    # predict from (3)
(6)  T → • number     (0)    # predict from (5)

S(1): 2 • + 3 * 4
(1)  T → number •     (0)    # scan from S(0)(6)
(2)  M → T •          (0)    # complete from (1) and S(0)(5)
(3)  M → M • * T      (0)    # complete from (2) and S(0)(4)
(4)  S → M •          (0)    # complete from (2) and S(0)(3)
(5)  S → S • + M      (0)    # complete from (4) and S(0)(2)
(6)  P → S •          (0)    # complete from (4) and S(0)(1)

S(2): 2 + • 3 * 4
(1)  S → S + • M      (0)    # scan from S(1)(5)
(2)  M → • M * T      (2)    # predict from (1)
(3)  M → • T          (2)    # predict from (1)
(4)  T → • number     (2)    # predict from (3)

S(3): 2 + 3 • * 4
(1)  T → number •     (2)    # scan from S(2)(4)
(2)  M → T •          (2)    # complete from (1) and S(2)(3)
(3)  M → M • * T      (2)    # complete from (2) and S(2)(2)
(4)  S → S + M •      (0)    # complete from (2) and S(2)(1)
(5)  S → S • + M      (0)    # complete from (4) and S(0)(2)
(6)  P → S •          (0)    # complete from (4) and S(0)(1)

S(4): 2 + 3 * • 4
(1)  M → M * • T      (2)    # scan from S(3)(3)
(2)  T → • number     (4)    # predict from (1)

S(5): 2 + 3 * 4 •
(1)  T → number •     (4)    # scan from S(4)(2)
(2)  M → M * T •      (2)    # complete from (1) and S(4)(1)
(3)  M → M • * T      (2)    # complete from (2) and S(2)(2)
(4)  S → S + M •      (0)    # complete from (2) and S(2)(1)
(5)  S → S • + M      (0)    # complete from (4) and S(0)(2)
(6)  P → S •          (0)    # complete from (4) and S(0)(1)

(Here ''number'' abbreviates the four terminal alternatives "1" | "2" | "3" | "4".)

The state (P → S •, 0) represents a completed parse. This state also appears in S(3) and S(1), which are complete sentences.
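For illustration, the Python sketch from the previous section can be run on this grammar (the encoding below is an assumption of that sketch, not part of the example itself):

grammar = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("1",), ("2",), ("3",), ("4",)],
}
state_sets, accepted = earley_recognise("2 + 3 * 4".split(), grammar, start="P")
print(accepted)                                   # True
print(Item("P", ("S",), 1, 0) in state_sets[5])   # the completed parse (P -> S ., 0) is in S(5)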


Constructing the parse forest

Earley's dissertation briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.

Another method is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
* Predicted items have a null SPPF pointer.
* The scanner creates an SPPF node representing the non-terminal it is scanning.
* Then when the scanner or completer advance an item, they add a derivation whose children are the node from the item whose dot was advanced, and the one for the new symbol that was advanced over (the non-terminal or completed item).

SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
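The SPPF structure described above can be sketched in Python as follows; this is only an illustrative skeleton (the node construction performed during scanning and completion is omitted), and all names are choices made here:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union

@dataclass(frozen=True)
class DottedRule:
    # An LR(0) item: a production with a dot position, used as a node label
    # for partially recognised right-hand sides.
    head: str
    body: Tuple[str, ...]
    dot: int

# A node label: a symbol or an LR(0) item, plus the extent (i, j) of the
# input substring this node derives.
Label = Tuple[Union[str, DottedRule], int, int]

@dataclass
class SPPFNode:
    label: Label
    # Each entry is one derivation: a pair of child pointers (either may be None).
    # A node with more than one entry is "packed" and records an ambiguity.
    families: List[Tuple[Optional["SPPFNode"], Optional["SPPFNode"]]] = field(default_factory=list)

    def add_family(self, left, right):
        self.families.append((left, right))

def get_node(nodes, label):
    # Nodes are shared: at most one node exists for a given label.
    if label not in nodes:
        nodes[label] = SPPFNode(label)
    return nodes[label]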


Optimizations

Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve an improvement of an order of magnitude.


See also

* CYK algorithm
* Context-free grammar
* Parsing algorithms

