Cocke–Younger–Kasami Algorithm

In computer science, the Cocke–Younger–Kasami algorithm (alternatively called CYK, or CKY) is a parsing algorithm for context-free grammars published by Itiroo Sakai in 1961. The algorithm is named after some of its rediscoverers: John Cocke, Daniel Younger, Tadao Kasami, and Jacob T. Schwartz. It employs bottom-up parsing and dynamic programming.

The standard version of CYK operates only on context-free grammars given in Chomsky normal form (CNF). However, any context-free grammar may be algorithmically transformed into a CNF grammar expressing the same language. The importance of the CYK algorithm stems from its high efficiency in certain situations. Using big ''O'' notation, the worst case running time of CYK is \mathcal{O}\left(n^3 \cdot |G|\right), where n is the length of the parsed string and |G| is the size of the CNF grammar G. This makes it one of the most efficient parsing algorithms in terms of worst-case asymptotic complexity, although other algorithms exist with better average running time in many practical scenarios.


Standard form

The dynamic programming algorithm requires the context-free grammar to be rendered into Chomsky normal form (CNF), because it tests for possibilities to split the current sequence into two smaller sequences. Any context-free grammar that does not generate the empty string can be represented in CNF using only production rules of the forms A \rightarrow \alpha (where \alpha is a terminal symbol) and A \rightarrow B\,C; to allow for the empty string, one can explicitly allow S \to \varepsilon, where S is the start symbol.


Algorithm


As pseudocode

The algorithm in pseudocode is as follows:

 let the input be a string I consisting of n characters: a1 ... an.
 let the grammar contain r nonterminal symbols R1 ... Rr, with start symbol R1.
 let P[n,n,r] be an array of booleans. Initialize all elements of P to false.
 let back[n,n,r] be an array of lists of backpointing triples. Initialize all elements of back to the empty list.

 for each s = 1 to n
     for each unit production Rv → as
         set P[1,s,v] = true

 for each l = 2 to n                -- Length of span
     for each s = 1 to n-l+1        -- Start of span
         for each p = 1 to l-1      -- Partition of span
             for each production Ra → Rb Rc
                 if P[p,s,b] and P[l-p,s+p,c] then
                     set P[l,s,a] = true
                     append <p,b,c> to back[l,s,a]

 if P[n,1,1] is true then
     I is a member of the language
     return back  -- by retracing the steps through back, one can easily construct all possible parse trees of the string.
 else
     return "not a member of language"
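For concreteness, the recognition phase above can be sketched in Python. The grammar encoding (dictionaries mapping terminals and pairs of nonterminals to the nonterminals that can produce them) and the function name are assumptions made for this illustration, not part of the original pseudocode.

 # Minimal CYK recognizer sketch (grammar encoding assumed for illustration).
 # terminal_rules: maps a terminal symbol to the set of nonterminals A with a rule A -> terminal.
 # binary_rules: maps a pair (B, C) to the set of nonterminals A with a rule A -> B C.
 def cyk_recognize(tokens, terminal_rules, binary_rules, start_symbol):
     n = len(tokens)
     # table[l][s] is the set of nonterminals deriving the span of length l+1 starting at position s (0-based).
     table = [[set() for _ in range(n)] for _ in range(n)]
     for s, tok in enumerate(tokens):
         table[0][s] = set(terminal_rules.get(tok, ()))
     for length in range(2, n + 1):          # length of span
         for s in range(n - length + 1):     # start of span
             for p in range(1, length):      # partition of span
                 for b in table[p - 1][s]:
                     for c in table[length - p - 1][s + p]:
                         table[length - 1][s] |= binary_rules.get((b, c), set())
     return start_symbol in table[n - 1][0]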


Probabilistic CYK (for finding the most probable parse)

This version of the algorithm recovers the most probable parse given the probabilities of all productions.

 let the input be a string I consisting of n characters: a1 ... an.
 let the grammar contain r nonterminal symbols R1 ... Rr, with start symbol R1.
 let P[n,n,r] be an array of real numbers. Initialize all elements of P to zero.
 let back[n,n,r] be an array of backpointing triples.

 for each s = 1 to n
     for each unit production Rv → as
         set P[1,s,v] = Pr(Rv → as)

 for each l = 2 to n                -- Length of span
     for each s = 1 to n-l+1        -- Start of span
         for each p = 1 to l-1      -- Partition of span
             for each production Ra → Rb Rc
                 prob_splitting = Pr(Ra → Rb Rc) * P[p,s,b] * P[l-p,s+p,c]
                 if prob_splitting > P[l,s,a] then
                     set P[l,s,a] = prob_splitting
                     set back[l,s,a] = <p,b,c>

 if P[n,1,1] > 0 then
     find the parse tree by retracing through back
     return the parse tree
 else
     return "not a member of language"
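A corresponding sketch of the probabilistic version in Python, using the same table layout; the rule-probability encoding and function name are assumptions made for this illustration.

 # Probabilistic (Viterbi) CYK sketch; rule-probability encoding assumed for illustration.
 # terminal_probs: maps (A, terminal) to Pr(A -> terminal).
 # binary_probs: maps (A, B, C) to Pr(A -> B C).
 def cyk_most_probable_parse(tokens, terminal_probs, binary_probs, start_symbol):
     n = len(tokens)
     best = {}  # best[(l, s, A)] = highest probability that A derives the length-l span starting at s (1-based)
     back = {}  # back[(l, s, A)] = (p, B, C), the split that achieves that probability
     for s, tok in enumerate(tokens, start=1):
         for (A, term), prob in terminal_probs.items():
             if term == tok and prob > best.get((1, s, A), 0.0):
                 best[(1, s, A)] = prob
     for l in range(2, n + 1):              # length of span
         for s in range(1, n - l + 2):      # start of span
             for p in range(1, l):          # partition of span
                 for (A, B, C), rule_prob in binary_probs.items():
                     score = rule_prob * best.get((p, s, B), 0.0) * best.get((l - p, s + p, C), 0.0)
                     if score > best.get((l, s, A), 0.0):
                         best[(l, s, A)] = score
                         back[(l, s, A)] = (p, B, C)
     if best.get((n, 1, start_symbol), 0.0) > 0.0:
         return best[(n, 1, start_symbol)], back  # retrace through back to reconstruct the parse tree
     return None  # not a member of the language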


As prose

In informal terms, this algorithm considers every possible substring of the input string and sets P[l,s,v] to true if the substring of length l starting from s can be generated from the nonterminal R_v. Once it has considered substrings of length 1, it goes on to substrings of length 2, and so on. For substrings of length 2 and greater, it considers every possible partition of the substring into two parts, and checks to see if there is some production A \to B \; C such that B matches the first part and C matches the second part. If so, it records A as matching the whole substring. Once this process is completed, the input string is generated by the grammar if the entry for the substring covering the entire input contains the start symbol.


Example

This is an example grammar:

 S → NP VP
 VP → VP PP
 VP → V NP
 VP → eats
 PP → P NP
 NP → Det N
 NP → she
 V → eats
 P → with
 N → fish
 N → fork
 Det → a

Now the sentence ''she eats a fish with a fork'' is analyzed using the CYK algorithm. In the following table, in P[i,j,k], i is the number of the row (starting at the bottom at 1), and j is the number of the column (starting at the left at 1). For readability, the CYK table for ''P'' is represented here as a 2-dimensional matrix ''M'' containing a set of non-terminal symbols, such that R_k is in M[i,j] if, and only if, P[i,j,k] is true.

 i = 7 | S     |       |       |      |      |      |
 i = 6 |       | VP    |       |      |      |      |
 i = 5 |       |       |       |      |      |      |
 i = 4 | S     |       |       |      |      |      |
 i = 3 |       | VP    |       |      | PP   |      |
 i = 2 | S     |       | NP    |      |      | NP   |
 i = 1 | NP    | V, VP | Det   | N    | P    | Det  | N
         she     eats    a      fish   with   a      fork

In the above example, since the start symbol ''S'' is in M[7,1], the sentence can be generated by the grammar.
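For illustration, this grammar and sentence can be encoded for the cyk_recognize sketch given in the pseudocode section (the dictionary encoding is an assumption of that sketch, not part of the original example):

 # Example grammar from above, encoded for the cyk_recognize sketch.
 terminal_rules = {
     'she':  {'NP'},
     'eats': {'V', 'VP'},
     'a':    {'Det'},
     'fish': {'N'},
     'with': {'P'},
     'fork': {'N'},
 }
 binary_rules = {
     ('NP', 'VP'): {'S'},
     ('VP', 'PP'): {'VP'},
     ('V', 'NP'):  {'VP'},
     ('P', 'NP'):  {'PP'},
     ('Det', 'N'): {'NP'},
 }
 sentence = "she eats a fish with a fork".split()
 # cyk_recognize(sentence, terminal_rules, binary_rules, 'S')  # expected result: True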


Extensions


Generating a parse tree

The above algorithm is a recognizer that will only determine if a sentence is in the language. It is simple to extend it into a parser that also constructs a parse tree, by storing parse tree nodes as elements of the array, instead of booleans. The node is linked to the array elements that were used to produce it, so as to build the tree structure. Only one such node in each array element is needed if only one parse tree is to be produced. However, if all parse trees of an ambiguous sentence are to be kept, it is necessary to store in the array element a list of all the ways the corresponding node can be obtained in the parsing process. This is sometimes done with a second table B[n,n,r] of so-called ''backpointers''. The end result is then a shared forest of possible parse trees, where common tree parts are factored between the various parses. This shared forest can conveniently be read as an ambiguous grammar generating only the sentence parsed, but with the same ambiguity as the original grammar, and the same parse trees up to a very simple renaming of non-terminals.
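A minimal sketch of the retracing step in Python, assuming the backpointer layout back[(l, s, A)] = (p, B, C) used in the pseudocode above (the function name and tuple representation are illustrative assumptions):

 # Rebuild one parse tree from a backpointer table (layout assumed as in the pseudocode above).
 # back[(l, s, A)] = (p, B, C): nonterminal A spans l symbols starting at position s (1-based),
 # split after p symbols into a left child B and a right child C.
 def build_tree(back, tokens, l, s, symbol):
     entry = back.get((l, s, symbol))
     if entry is None:
         # A length-1 span produced directly by a terminal rule: symbol -> token.
         return (symbol, tokens[s - 1])
     p, left_sym, right_sym = entry
     left = build_tree(back, tokens, p, s, left_sym)
     right = build_tree(back, tokens, l - p, s + p, right_sym)
     return (symbol, left, right)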


Parsing non-CNF context-free grammars

As pointed out by Lange and Leiß, the drawback of all known transformations into Chomsky normal form is that they can lead to an undesirable bloat in grammar size. The size of a grammar is the sum of the sizes of its production rules, where the size of a rule is one plus the length of its right-hand side. Using g to denote the size of the original grammar, the size blow-up in the worst case may range from g^2 to 2^{2g}, depending on the transformation algorithm used. For use in teaching, Lange and Leiß propose a slight generalization of the CYK algorithm, "without compromising efficiency of the algorithm, clarity of its presentation, or simplicity of proofs".


Parsing weighted context-free grammars

It is also possible to extend the CYK algorithm to parse strings using weighted and stochastic context-free grammars. Weights (probabilities) are then stored in the table P instead of booleans, so P[i,j,A] will contain the minimum weight (maximum probability) that the substring from i to j can be derived from A. Further extensions of the algorithm allow all parses of a string to be enumerated from lowest to highest weight (highest to lowest probability).


Numerical stability

When the probabilistic CYK algorithm is applied to a long string, the splitting probability can become very small due to multiplying many probabilities together. This can be dealt with by summing log-probabilities instead of multiplying probabilities.
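In code this amounts to replacing the product in the probabilistic update with a sum of logarithms; a minimal sketch (names assumed for illustration):

 import math

 # Log-space version of the splitting score used in probabilistic CYK (names assumed).
 # A probability of 0 corresponds to -infinity in log space.
 LOG_ZERO = -math.inf

 def log_splitting_score(log_rule_prob, log_left, log_right):
     # log(Pr(rule) * P_left * P_right) = log Pr(rule) + log P_left + log P_right
     return log_rule_prob + log_left + log_right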


Valiant's algorithm

The worst case running time of CYK is \Theta(n^3 \cdot |G|), where ''n'' is the length of the parsed string and |G| is the size of the CNF grammar ''G''. This makes it one of the most efficient algorithms for recognizing general context-free languages in practice. Valiant (1975) gave an extension of the CYK algorithm. His algorithm computes the same parsing table as the CYK algorithm; yet he showed that algorithms for efficient multiplication of matrices with 0-1-entries can be utilized for performing this computation. Using the Coppersmith–Winograd algorithm for multiplying these matrices, this gives an asymptotic worst-case running time of O(n^{2.38} \cdot |G|). However, the constant term hidden by the big ''O'' notation is so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers, and this approach requires subtraction and so is only suitable for recognition. The dependence on efficient matrix multiplication cannot be avoided altogether: Lee (2002) has proved that any parser for context-free grammars working in time O(n^{3-\varepsilon} \cdot |G|) can be effectively converted into an algorithm computing the product of (n \times n)-matrices with 0-1-entries in time O(n^{3-\varepsilon/3}), and this was extended by Abboud et al. to apply to a constant-size grammar.


See also

* GLR parser
* Earley parser
* Packrat parser
* Inside–outside algorithm


References




External links


* Interactive Visualization of the CYK algorithm
* Exorciser, a Java application to generate exercises in the CYK algorithm as well as finite state machines, Markov algorithms, etc.