Sliding window based part-of-speech tagging is used to assign a part of speech to each word in a text. A high percentage of the words in a natural language can, out of context, be assigned more than one part of speech; the proportion of such ambiguous words is typically around 30%, although it depends greatly on the language. Resolving this ambiguity is important in many areas of natural language processing. In machine translation, for example, changing the part of speech of a word can dramatically change its translation.

Sliding window based part-of-speech taggers are programs which assign a single part of speech to a given lexical form of a word by looking at a fixed-size "window" of words around the word to be disambiguated (a minimal sketch of this idea follows the list below). The two main advantages of this approach are:
* It is possible to train the tagger automatically, eliminating the need for a manually tagged corpus.
* The tagger can be implemented as a finite-state automaton (a Mealy machine).
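To make the window idea concrete, here is a minimal sketch in Python. Everything in it (the lexicon, the rule table, the tag names) is invented for illustration, and it simplifies the real algorithm by assuming the neighbouring words are unambiguous; the formal treatment below works with word classes instead.

 # Hypothetical full-form lexicon: word -> set of possible tags.
 LEXICON = {
     "he":     {"PRON"},
     "runs":   {"VERB", "NOUN"},
     "from":   {"ADP"},
     "danger": {"NOUN"},
 }
 
 # Hypothetical disambiguation table for a window of one word on each
 # side; a real tagger would learn these entries from a corpus.
 RULES = {
     ("PRON", "runs", "ADP"): "VERB",
 }
 
 def tag(sentence):
     words = [w.lower() for w in sentence.split()]
     # Unambiguous words keep their single tag; ambiguous ones start as None.
     tags = [next(iter(LEXICON[w])) if len(LEXICON[w]) == 1 else None
             for w in words]
     for i, w in enumerate(words):
         if tags[i] is None:
             left = tags[i - 1] if i > 0 else "#"             # "#" = delimiter
             right = tags[i + 1] if i + 1 < len(words) else "#"
             tags[i] = RULES.get((left, w, right),
                                 min(LEXICON[w]))             # arbitrary fallback
     return list(zip(words, tags))
 
 print(tag("He runs from danger"))
 # [('he', 'PRON'), ('runs', 'VERB'), ('from', 'ADP'), ('danger', 'NOUN')]

Because the decision at each position depends only on a bounded window, the whole procedure can be compiled into a finite-state transducer, which is what makes the Mealy machine implementation mentioned above possible.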


Formal definition

Let
:\Gamma = \{ \gamma_1, \gamma_2, \ldots, \gamma_{|\Gamma|} \}
be the set of grammatical tags of the application, that is, the set of all possible tags which may be assigned to a word, and let
:W = \{ w_1, w_2, \ldots \}
be the vocabulary of the application. Let
:T : W \rightarrow P(\Gamma)
be a function for morphological analysis which assigns each word w its set of possible tags, T(w) \subseteq \Gamma, and which can be implemented by a full-form lexicon or a morphological analyser. Let
:\Sigma = \{ \sigma_1, \sigma_2, \ldots, \sigma_{|\Sigma|} \}
be the set of word classes, which in general will be a partition of W with the restriction that, for each \sigma \in \Sigma, all of the words w \in \sigma receive the same set of tags; that is, all of the words in each word class \sigma belong to the same ambiguity class.

Normally, \Sigma is constructed so that for high-frequency words each word class contains a single word, while for low-frequency words each word class corresponds to a single ambiguity class. This gives good performance on high-frequency ambiguous words without requiring too many parameters in the tagger.

With these definitions it is possible to state the problem in the following way: given a text w[1] w[2] \ldots w[L] \in W^*, each word w[t] is assigned a word class \sigma[t] \in \Sigma (either by using the lexicon or the morphological analyser) in order to obtain an ambiguously tagged text \sigma[1] \sigma[2] \ldots \sigma[L] \in \Sigma^*. The job of the tagger is to produce a tagged text \gamma[1] \gamma[2] \ldots \gamma[L] (with \gamma[t] \in T(\sigma[t])) that is as correct as possible.

A statistical tagger looks for the most probable tag sequence for an ambiguously tagged text \sigma[1] \sigma[2] \ldots \sigma[L]:
:\gamma^*[1] \ldots \gamma^*[L] = \operatorname{argmax}_{\gamma[t] \in T(\sigma[t])} p(\gamma[1] \ldots \gamma[L] \mid \sigma[1] \ldots \sigma[L])
Using Bayes' formula, this is converted into:
:\gamma^*[1] \ldots \gamma^*[L] = \operatorname{argmax}_{\gamma[t] \in T(\sigma[t])} p(\gamma[1] \ldots \gamma[L]) \, p(\sigma[1] \ldots \sigma[L] \mid \gamma[1] \ldots \gamma[L])
where p(\gamma[1] \gamma[2] \ldots \gamma[L]) is the probability of a particular tag sequence (syntactic probability) and p(\sigma[1] \ldots \sigma[L] \mid \gamma[1] \ldots \gamma[L]) is the probability that this tag sequence corresponds to the text \sigma[1] \ldots \sigma[L] (lexical probability). In a Markov model, these probabilities are approximated as products.
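As an illustration of these definitions, the following sketch builds \Sigma by grouping a toy vocabulary into ambiguity classes, i.e. by the value of T(w). The words and tags are invented; a production tagger would additionally promote each high-frequency word to its own singleton class, exactly as described above.

 from collections import defaultdict
 
 # T : W -> P(Γ), as a full-form lexicon (frozensets so tag sets can be keys).
 T = {
     "he":     frozenset({"PRON"}),
     "runs":   frozenset({"VERB", "NOUN"}),
     "walks":  frozenset({"VERB", "NOUN"}),
     "from":   frozenset({"ADP"}),
     "danger": frozenset({"NOUN"}),
 }
 
 # Group words by ambiguity class: each distinct tag set is one word class.
 sigma = defaultdict(set)
 for w, tags in T.items():
     sigma[tags].add(w)
 
 for tags, words in sigma.items():
     print(sorted(tags), "->", sorted(words))
 # e.g. ['NOUN', 'VERB'] -> ['runs', 'walks']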
The syntactic probabilities are modelled by a first-order Markov process:
:p(\gamma[1] \gamma[2] \ldots \gamma[L]) = \prod_{t=0}^{L} p(\gamma[t+1] \mid \gamma[t])
where \gamma[0] and \gamma[L+1] are delimiter symbols. Lexical probabilities are independent of context:
:p(\sigma[1] \sigma[2] \ldots \sigma[L] \mid \gamma[1] \gamma[2] \ldots \gamma[L]) = \prod_{t=1}^{L} p(\sigma[t] \mid \gamma[t])
One form of tagging is to approximate the first probability formula:
:p(\gamma[1] \gamma[2] \ldots \gamma[L] \mid \sigma[1] \sigma[2] \ldots \sigma[L]) = \prod_{t=1}^{L} p(\gamma[t] \mid C_{t}^{(-)} \sigma[t] C_{t}^{(+)})
where C_{t}^{(-)} = \sigma[t-N_{(-)}] \sigma[t-N_{(-)}+1] \ldots \sigma[t-1] is the left context of size N_{(-)} and C_{t}^{(+)} = \sigma[t+1] \sigma[t+2] \ldots \sigma[t+N_{(+)}] is the right context of size N_{(+)}. In this way the sliding window algorithm only has to take into account a context of size N_{(-)} + N_{(+)} + 1. For most applications N_{(-)} = N_{(+)} = 1. For example, to tag the ambiguous word "runs" in the sentence "He runs from danger", only the tags of the words "He" and "from" need to be taken into account.
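Under this approximation, disambiguating each position reduces to a lookup over the windowed context. The sketch below uses N_{(-)} = N_{(+)} = 1; the probability table is invented for illustration, and in practice it would be estimated from data, for instance by the unsupervised training described in the paper under Further reading.

 # p(γ | left class, word class, right class): hypothetical estimates.
 P = {
     ("PRON", "VERB|NOUN", "ADP"): {"VERB": 0.9, "NOUN": 0.1},
 }
 
 def disambiguate(left, word_class, right, possible_tags):
     """Most probable tag of an ambiguous word given a one-word window."""
     dist = P.get((left, word_class, right), {})
     # Unseen contexts fall back to a deterministic choice over the tags.
     return max(sorted(possible_tags), key=lambda g: dist.get(g, 0.0))
 
 # The word "runs" (class "VERB|NOUN") between "He" (PRON) and "from" (ADP):
 print(disambiguate("PRON", "VERB|NOUN", "ADP", {"VERB", "NOUN"}))   # VERB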


Further reading

* Sanchez-Villamil, E., Forcada, M. L., and Carrasco, R. C. (2005). "Unsupervised training of a finite-state sliding-window part-of-speech tagger". ''Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence'', vol. 3230, pp. 454-463.