Brown clustering is a hard hierarchical agglomerative clustering problem based on distributional information, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter V. de Souza, Jennifer Lai, and Robert Mercer. It is typically applied to text, grouping words into clusters that are assumed to be semantically related by virtue of their having been embedded in similar contexts.


Introduction

In natural language processing, Brown clustering or IBM clustering is a form of hierarchical clustering of words based on the contexts in which they occur, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter de Souza, Jennifer Lai, and Robert Mercer of IBM in the context of language modeling. The intuition behind the method is that a class-based language model (also called a cluster n-gram model), i.e. one where probabilities of words are based on the classes (clusters) of previous words, is used to address the data sparsity problem inherent in language modeling. Jurafsky and Martin give the example of a flight reservation system that needs to estimate the likelihood of the bigram "to Shanghai" without having seen this in a training set. The system can obtain a good estimate if it can cluster "Shanghai" with other city names, then make its estimate based on the likelihood of phrases such as "to London", "to Beijing" and "to Denver".
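To make the class-based estimate concrete, the following is a minimal Python sketch of that factorisation on an invented toy corpus. The corpus, the hard cluster assignment, and the word choices are all hypothetical; the point is only to show why an unseen bigram such as "to Shanghai" can still receive a sensible probability once "Shanghai" shares a class with observed city names.

```python
from collections import Counter

# Invented toy corpus; "shanghai" occurs as a word, but never after "to".
corpus = ("we fly to london we fly to beijing we fly to denver "
          "shanghai is lovely").split()

# Hypothetical hard clustering; in practice it would come from Brown clustering.
cluster_of = {"we": "C_PRON", "fly": "C_VERB", "to": "C_TO",
              "london": "C_CITY", "beijing": "C_CITY", "denver": "C_CITY",
              "shanghai": "C_CITY", "is": "C_VERB", "lovely": "C_ADJ"}

word_count = Counter(corpus)
class_count = Counter(cluster_of[w] for w in corpus)
class_bigram = Counter((cluster_of[a], cluster_of[b])
                       for a, b in zip(corpus, corpus[1:]))

def class_bigram_prob(prev_word, word):
    """Pr(w_i | w_{i-1}) estimated as Pr(w_i | c_i) * Pr(c_i | c_{i-1})."""
    c_prev, c = cluster_of[prev_word], cluster_of[word]
    p_word_given_class = word_count[word] / class_count[c]
    p_class_transition = class_bigram[(c_prev, c)] / class_count[c_prev]
    return p_word_given_class * p_class_transition

# The bigram "to shanghai" never occurs in the corpus, yet the class model
# assigns it a non-zero probability because "shanghai" shares a class with
# the observed city names.
print(class_bigram_prob("to", "shanghai"))   # 0.25 on this toy corpus
```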


Technical definition

Brown groups items (i.e., types) into classes, using a binary merging criterion based on the log-probability of a text under a class-based language model, i.e. a probability model that takes the clustering into account. Thus, average mutual information (AMI) is the optimization function, and merges are chosen such that they incur the least loss in global mutual information. As a result, the output can be thought of not only as a binary tree but, perhaps more helpfully, as a sequence of merges terminating with one big class of all words. This model has the same general form as a hidden Markov model, reduced to bigram probabilities in Brown's solution to the problem. MI is defined as:

:\operatorname{MI}(c_i, c_j) = \Pr(\langle c_i, c_j\rangle) \log_2 \frac{\Pr(\langle c_i, c_j\rangle)}{\Pr(\langle c_i, *\rangle)\,\Pr(\langle *, c_j\rangle)}

Finding the clustering that maximizes the likelihood of the data is computationally expensive. The approach proposed by Brown et al. is a greedy heuristic.

The work also suggests using Brown clusterings as a simplistic bigram class-based language model. Given cluster membership indicators c_i for the tokens w_i in a text, the probability of the word instance w_i given the preceding word w_{i-1} is given by:

:\Pr(w_i \mid w_{i-1}) = \Pr(w_i \mid c_i) \Pr(c_i \mid c_{i-1})

This has been criticised as being of limited utility, as it only ever predicts the most common word in any class, and so is restricted to one word type per class; this is reflected in the low relative reduction in perplexity found when using this model, as reported by Brown et al.
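As a rough illustration of the merging criterion, here is a brute-force Python sketch, not Brown et al.'s optimised algorithm: each word type starts in its own class, the AMI of the class bigram distribution is recomputed for every candidate merge, and the pair whose merge loses the least AMI is merged, until one class remains. The toy corpus is invented, and the naive recomputation is only meant to make the criterion concrete.

```python
from collections import Counter
from math import log2

corpus = "the cat sat the dog sat the cat ran the dog ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(bigrams.values())

# Each cluster is a frozenset of word types; initially one word per cluster.
clusters = [frozenset([w]) for w in set(corpus)]

def ami(clusters):
    """Average mutual information of the class bigram distribution."""
    def cls(w):
        return next(c for c in clusters if w in c)
    joint = Counter()
    for (a, b), n in bigrams.items():
        joint[(cls(a), cls(b))] += n / total
    left, right = Counter(), Counter()
    for (c1, c2), p in joint.items():
        left[c1] += p
        right[c2] += p
    return sum(p * log2(p / (left[c1] * right[c2]))
               for (c1, c2), p in joint.items())

merges = []
while len(clusters) > 1:
    # Pick the merge that keeps AMI as high as possible (least loss).
    best = max(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: ami(
            [c for k, c in enumerate(clusters) if k not in ij]
            + [clusters[ij[0]] | clusters[ij[1]]]
        ),
    )
    i, j = best
    merges.append((clusters[i], clusters[j]))
    clusters = ([c for k, c in enumerate(clusters) if k not in (i, j)]
                + [clusters[i] | clusters[j]])

for a, b in merges:          # the merge sequence defines the binary tree
    print(set(a), "+", set(b))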


Variations

Other works have examined trigrams in their approaches to the Brown clustering problem. Brown clustering as proposed generates a fixed number of output classes. It is important to choose the correct number of classes, which is task-dependent. The cluster memberships of words resulting from Brown clustering can be used as features in a variety of machine-learned natural language processing tasks.

A generalization of the algorithm was published in the AAAI conference in 2016, including a succinct formal definition of the 1992 version and then also the general form. Core to this is the concept that the classes considered for merging do not necessarily represent the final number of classes output, and that altering the number of classes considered for merging directly affects the speed and quality of the final result.

There are no known theoretical guarantees on the greedy heuristic proposed by Brown et al. (as of February 2018). However, the clustering problem can be framed as estimating the parameters of the underlying class-based language model: it is possible to develop a consistent estimator for this model under mild assumptions.
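One common way of turning the cluster memberships into features, as mentioned above, is to read off each word's path in the binary merge tree as a bit string and emit prefixes of that path at several lengths, so that coarser and finer groupings are both available to the downstream model. The sketch below assumes such bit strings; the paths and prefix lengths are invented for illustration and are not part of the original algorithm.

```python
# Hypothetical word -> merge-tree path mapping (bit strings are invented).
brown_path = {
    "london": "001110",
    "beijing": "001111",
    "monday": "01010",
    "tuesday": "01011",
}

def cluster_features(word, prefix_lengths=(2, 4, 6)):
    """Emit the full path plus path prefixes as string-valued features."""
    path = brown_path.get(word)
    if path is None:
        return ["path=UNK"]
    feats = [f"path={path}"]
    feats += [f"path[:{k}]={path[:k]}" for k in prefix_lengths if k <= len(path)]
    return feats

print(cluster_features("london"))
# ['path=001110', 'path[:2]=00', 'path[:4]=0011', 'path[:6]=001110']
print(cluster_features("monday"))
```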


See also

* Feature learning




External links


How to tune Brown clustering