Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.
An information retrieval technique using latent semantic structure was patented in 1988 (US Patent 4,839,853, now expired) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).
Overview
Occurrence matrix
LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance.
This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
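As an illustration, the following Python sketch builds a small term-document count matrix and applies a simple tf-idf weighting with NumPy. The toy documents and the particular idf formula (log of N over document frequency, without smoothing) are assumptions for the example; real systems typically use a library implementation.

```python
import numpy as np

# Toy corpus: three short "documents" (hypothetical example).
docs = [
    "human machine interface for computer applications",
    "user interface management system",
    "graph of paths in trees",
]

# Build the vocabulary (rows = unique terms) and the raw count matrix X,
# whose columns are documents and whose entries are term frequencies.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# tf-idf weighting: scale each count by log(N / df), so that terms
# appearing in many documents are down-weighted and rare terms up-weighted.
df = np.count_nonzero(X, axis=1)          # document frequency of each term
idf = np.log(len(docs) / df)
X_tfidf = X * idf[:, None]
print(X_tfidf.shape)                       # (n_terms, n_documents)
```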
Rank lowering
After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for these approximations:
* The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an ''approximation'' (a "least and necessary evil").
* The original term-document matrix is presumed ''noisy'': for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a ''de-noisified matrix'' (a better matrix than the original).
* The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually ''in'' each document, whereas we might be interested in all words ''related to'' each document—generally a much larger set due to synonymy.
The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
:: {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
Derivation
Let X be a matrix where element (i, j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:
: X = \begin{bmatrix} x_{1,1} & \dots & x_{1,j} & \dots & x_{1,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{m,1} & \dots & x_{m,j} & \dots & x_{m,n} \end{bmatrix}
Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:
: \mathbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,j} & \dots & x_{i,n} \end{bmatrix}
Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:
: \mathbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{i,j} \\ \vdots \\ x_{m,j} \end{bmatrix}
Now the dot product \mathbf{t}_i^T \mathbf{t}_p between two term vectors gives the correlation between the terms over the set of documents. The matrix product X X^T contains all these dot products. Element (i, p) (which is equal to element (p, i)) contains the dot product \mathbf{t}_i^T \mathbf{t}_p ( = \mathbf{t}_p^T \mathbf{t}_i). Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: \mathbf{d}_j^T \mathbf{d}_q = \mathbf{d}_q^T \mathbf{d}_j.
Now, from the theory of linear algebra, there exists a decomposition of X such that U and V are orthogonal matrices and \Sigma is a diagonal matrix. This is called a singular value decomposition (SVD):
: X = U \Sigma V^T
The matrix products giving us the term and document correlations then become
: X X^T = (U \Sigma V^T)(U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T
: X^T X = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
Since \Sigma \Sigma^T and \Sigma^T \Sigma are diagonal we see that U must contain the eigenvectors of X X^T, while V must be the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of \Sigma \Sigma^T, or equally, by the non-zero entries of \Sigma^T \Sigma. Now the decomposition looks like this:
: X = U \Sigma V^T = \begin{bmatrix} \mathbf{u}_1 & \dots & \mathbf{u}_l \end{bmatrix} \begin{bmatrix} \sigma_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \sigma_l \end{bmatrix} \begin{bmatrix} \mathbf{v}_1^T \\ \vdots \\ \mathbf{v}_l^T \end{bmatrix}
The values \sigma_1, \dots, \sigma_l are called the singular values, and \mathbf{u}_1, \dots, \mathbf{u}_l and \mathbf{v}_1, \dots, \mathbf{v}_l the left and right singular vectors.
Notice the only part of U that contributes to \mathbf{t}_i is the i-th row. Let this row vector be called \hat{\mathbf{t}}_i^T. Likewise, the only part of V^T that contributes to \mathbf{d}_j is the j-th column, \hat{\mathbf{d}}_j. These are ''not'' the eigenvectors, but ''depend'' on ''all'' the eigenvectors.
It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank-k approximation to X with the smallest error (Frobenius norm). More importantly, we can now treat the term and document vectors as a "semantic space". The row "term" vector \hat{\mathbf{t}}_i^T then has k entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector \hat{\mathbf{d}}_j is an approximation in this lower-dimensional space. We write this approximation as
: X_k = U_k \Sigma_k V_k^T
You can now do the following:
* See how related documents j and q are in the low-dimensional space by comparing the vectors \Sigma_k \hat{\mathbf{d}}_j and \Sigma_k \hat{\mathbf{d}}_q (typically by cosine similarity).
* Compare terms i and p by comparing the vectors \Sigma_k \hat{\mathbf{t}}_i and \Sigma_k \hat{\mathbf{t}}_p. Note that \hat{\mathbf{t}} is now a column vector.
* Cluster document and term vector representations using traditional clustering algorithms like k-means with similarity measures like cosine.
* Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.
To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:
: \hat{\mathbf{d}}_j = \Sigma_k^{-1} U_k^T \mathbf{d}_j
Note here that the inverse of the diagonal matrix \Sigma_k may be found by inverting each nonzero value within the matrix.
This means that if you have a query vector q, you must do the translation \hat{\mathbf{q}} = \Sigma_k^{-1} U_k^T \mathbf{q} before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:
: \hat{\mathbf{t}}_i^T = \mathbf{t}_i^T V_k \Sigma_k^{-1}
: \hat{\mathbf{t}}_i = (\mathbf{t}_i^T V_k \Sigma_k^{-1})^T = \Sigma_k^{-1} V_k^T \mathbf{t}_i
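The steps above can be followed directly in code. The following is a minimal sketch in Python using NumPy; a small random matrix stands in for a real term-document matrix, and the variable names (X, U_k, s_k, doc_vecs, q) are illustrative only, not part of any standard API.

```python
import numpy as np

# Minimal sketch of the derivation above, assuming X is a small
# term-document count matrix (terms = rows, documents = columns).
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(12, 9)).astype(float)   # hypothetical data

# Full SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep the k largest singular values (rank-k approximation X_k = U_k S_k V_k^T).
k = 3
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Low-dimensional document vectors: columns of S_k V_k^T (one per document).
doc_vecs = np.diag(s_k) @ Vt_k

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Compare documents 0 and 1 in the semantic space.
print(cosine(doc_vecs[:, 0], doc_vecs[:, 1]))

# Fold a query (a "mini document" of raw term counts) into the same space:
# q_hat = Sigma_k^{-1} U_k^T q, then compare Sigma_k q_hat with Sigma_k d_hat.
q = np.zeros(X.shape[0]); q[[2, 5]] = 1.0          # hypothetical query terms
q_hat = np.diag(1.0 / s_k) @ U_k.T @ q
print(cosine(np.diag(s_k) @ q_hat, doc_vecs[:, 0]))
```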
Applications
The new low-dimensional space typically can be used to:
* Compare the documents in the low-dimensional space (data clustering, document classification).
* Find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval).
* Find relations between terms (synonymy and polysemy).
* Given a query of terms, translate it into the low-dimensional space, and find matching documents (information retrieval).
* Find the best similarity between small groups of terms in a semantic way (i.e., in the context of a knowledge corpus), as for example in multiple-choice question (MCQ) answering models.
* Expand the feature space of machine learning / text mining systems.
* Analyze word association in a text corpus.
Synonymy and polysemy are fundamental problems in natural language processing:
* Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
* Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.
Commercial applications
LSA has been used to assist in performing prior art searches for patents.
Applications in human memory
The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. Researchers also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as the semantic proximity effect.
When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.
Another model, termed Word Association Spaces (WAS), is also used in memory studies by collecting free association data from a series of experiments; it includes measures of word relatedness for over 72,000 distinct word pairs.
Implementation
The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.
A fast, incremental, low-memory, large-matrix SVD algorithm has been developed. MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution.
In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.
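As a sketch of this kind of approach, SciPy's ARPACK-based svds routine computes only the leading k singular triplets of a sparse matrix, avoiding a dense full-rank SVD. The matrix below is random and purely illustrative.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical large, sparse term-document matrix (random, for illustration).
X = sparse_random(10000, 2000, density=0.001, format="csr", random_state=0)

# Compute only the k largest singular values/vectors via ARPACK-style
# iteration, without ever forming a dense matrix.
k = 100
U_k, s_k, Vt_k = svds(X, k=k)

# svds returns singular values in ascending order; reverse for convenience.
order = np.argsort(s_k)[::-1]
U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]
print(s_k[:5])
```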
Limitations
Some of LSA's drawbacks include:
* The resulting dimensions might be difficult to interpret. For instance, in
:: {(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
:the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
:: {(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)}
:will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. Though, the (1.3452 * car + 0.2828 * bottle) component could be justified on account of the fact that both bottles and cars have transparent and opaque parts, are man made and with high probability contain logos/words on their surface; thus, in many ways these two concepts "share semantics." That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task as opposed to a simple word/class/concept assignment task.
* LSA can only partially capture polysemy (i.e., multiple meanings of a word) because each occurrence of a word is treated as having the same meaning due to the word being represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" are considered the same. The behavior results in the vector representation being an ''average'' of all the word's different meanings in the corpus, which can make it difficult for comparison. However, the effect is often lessened due to words having a predominant sense throughout a corpus (i.e. not all meanings are equally likely).
* Limitations of the bag of words model (BOW), where a text is represented as an unordered collection of words. To address some of the limitations of the bag of words model, a multi-gram dictionary can be used to find direct and indirect associations as well as higher-order co-occurrences among terms.
* The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.
Alternative methods
Semantic hashing
In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. A deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method.
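The following toy sketch illustrates only the address-lookup idea described above; it substitutes random-projection sign codes for the trained deep network that semantic hashing actually uses, and the data, code length, and sizes are invented for the example.

```python
import numpy as np

# Toy illustration of address lookup: hash documents to short binary codes,
# then probe the query's address plus all addresses within Hamming distance 1.
rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(500, 1000)).astype(float)   # docs x terms
planes = rng.normal(size=(1000, 16))                         # 16-bit codes

def code(v):
    # Sign of random projections stands in for a learned binary code.
    return tuple((v @ planes > 0).astype(int))

# Index documents by their binary address.
table = {}
for i, v in enumerate(counts):
    table.setdefault(code(v), []).append(i)

# Query: exact address, then flip one bit at a time.
q = counts[0]
addr = np.array(code(q))
candidates = list(table.get(tuple(addr), []))
for bit in range(len(addr)):
    flipped = addr.copy()
    flipped[bit] ^= 1
    candidates += table.get(tuple(flipped), [])
print(sorted(set(candidates))[:10])
```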
Latent semantic indexing
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.
[Deerwester, S., et al, Improving Information Retrieval with Latent Semantic Indexing, Proceedings of the 51st Annual Meeting of the American Society for Information Science 25, 1988, pp. 36–40.]
LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
Called " indexing" because of its ability to correlate related terms that are in a collection of text, it was first applied to text at
Bellcore
iconectiv is a supplier of network planning and network management services to telecommunications providers. Known as Bellcore after its establishment in the United States in 1983 as part of the break-up of the Bell System, the company's name ...
in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria.
Benefits of LSI
LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.
LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text.
[Landauer, T., et al., Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report, in M. I. Jordan, M. J. Kearns & S. A. Solla (Eds.), Advances in Neural Information Processing Systems 10, Cambridge: MIT Press, 1998, pp. 45–51.] Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories. LSI uses ''example'' documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.
Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic
concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.
LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.). This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.
Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
LSI has proven to be a useful solution to a number of conceptual matching problems. The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.
LSI timeline
*Mid-1960s – Factor analysis technique first described and tested (H. Borko and M. Bernick)
*1988 – Seminal paper on LSI technique published
*1989 – Original patent granted
*1992 – First use of LSI to assign articles to reviewers
*1994 – Patent granted for the cross-lingual application of LSI (Landauer et al.)
*1995 – First use of LSI for grading essays (Foltz, et al., Landauer et al.)
*1999 – First implementation of LSI technology for intelligence community for analyzing unstructured text (SAIC)
*2002 – LSI-based product offering to intelligence-based government agencies (SAIC)
Mathematics of LSI
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the resulting matrices to identify the concepts contained in the text.
Term-document matrix
LSI begins by constructing a term-document matrix, A, to identify the occurrences of the unique terms within a collection of documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, a_{ij}, initially representing the number of times the associated term appears in the indicated document, \mathrm{tf}_{ij}. This matrix is usually very large and very sparse.
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, a_{ij} of A, to be the product of a local term weight, l_{ij}, which describes the relative frequency of a term in a document, and a global weight, g_i, which describes the relative frequency of the term within the entire collection of documents.
Some common local weighting functions are the following.
* Binary: l_{ij} = 1 if the term exists in the document, or else 0
* Term frequency: l_{ij} = \mathrm{tf}_{ij}, the number of occurrences of term i in document j
* Log: l_{ij} = \log(\mathrm{tf}_{ij} + 1)
* Augnorm: l_{ij} = \left( \frac{\mathrm{tf}_{ij}}{\max_i \mathrm{tf}_{ij}} + 1 \right) / 2
Some common global weighting functions are the following.
* Binary: g_i = 1
* Normal: g_i = 1 / \sqrt{\sum_j \mathrm{tf}_{ij}^2}
* GfIdf: g_i = \mathrm{gf}_i / \mathrm{df}_i, where \mathrm{gf}_i is the total number of times term i occurs in the whole collection, and \mathrm{df}_i is the number of documents in which term i occurs
* Idf (inverse document frequency): g_i = \log_2 \frac{n}{1 + \mathrm{df}_i}
* Entropy: g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}, where p_{ij} = \frac{\mathrm{tf}_{ij}}{\mathrm{gf}_i}
Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets. In other words, each entry a_{ij} of A is computed as:
: g_i = 1 + \sum_j \frac{p_{ij} \log p_{ij}}{\log n}
: a_{ij} = g_i \, \log(\mathrm{tf}_{ij} + 1)
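A small sketch of this log-entropy weighting in Python, assuming tf is a raw term-by-document count matrix held as a NumPy array (the function name log_entropy is ours for illustration, not a library routine):

```python
import numpy as np

def log_entropy(tf):
    """Apply log local weighting and entropy global weighting to counts."""
    tf = np.asarray(tf, dtype=float)
    n = tf.shape[1]                      # number of documents
    gf = tf.sum(axis=1, keepdims=True)   # global frequency of each term
    p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    g = 1.0 + plogp.sum(axis=1) / np.log(n)          # global entropy weight g_i
    return g[:, None] * np.log(tf + 1.0)             # a_ij = g_i * log(tf_ij + 1)

tf = np.array([[1, 0, 3], [0, 2, 1], [4, 4, 4]])     # toy counts
print(log_entropy(tf))
```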
Rank-reduced singular value decomposition
A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix, A, into three other matrices—an ''m'' by ''r'' term-concept vector matrix T, an ''r'' by ''r'' singular values matrix S, and an ''n'' by ''r'' concept-document vector matrix, D, which satisfy the following relations:
: A = T S D^T
: T^T T = I_r \quad D^T D = I_r
: S_{1,1} \geq S_{2,2} \geq \ldots \geq S_{r,r} > 0 \quad S_{i,j} = 0 \text{ where } i \neq j
In the formula, A is the supplied ''m'' by ''n'' weighted matrix of term frequencies in a collection of text where ''m'' is the number of unique terms, and ''n'' is the number of documents. T is a computed ''m'' by ''r'' matrix of term vectors where ''r'' is the rank of A—a measure of its unique dimensions ≤ min(''m,n''). S is a computed ''r'' by ''r'' diagonal matrix of decreasing singular values, and D is a computed ''n'' by ''r'' matrix of document vectors.
The SVD is then truncated to reduce the rank by keeping only the largest ''k'' ≪ ''r'' diagonal entries in the singular value matrix S, where ''k'' is typically on the order of 100 to 300 dimensions.
This effectively reduces the term and document vector matrix sizes to ''m'' by ''k'' and ''n'' by ''k'' respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:
: A ≈ A''k'' = T''k'' S''k'' D''k''^T
Efficient LSI algorithms only compute the first ''k'' singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.
Querying and augmenting LSI vector spaces
The computed T''k'' and D''k'' matrices define the term and document vector spaces, which with the computed singular values, S''k'', embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, ''d'', for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called ''folding in''. Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method (such as the one described in ) is needed.
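As an illustration of folding in, the sketch below projects a new weighted document vector into an existing LSI space using d = a^T T_k S_k^{-1}; the matrices here are randomly generated stand-ins for an index computed as described above.

```python
import numpy as np

def fold_in(a_new, T_k, s_k):
    # d_new = a_new^T T_k S_k^{-1}: coordinates of the new document
    # in the existing k-dimensional LSI document space.
    return (a_new @ T_k) / s_k

# Hypothetical shapes: 5000 indexed terms, k = 200 dimensions.
rng = np.random.default_rng(0)
T_k = rng.normal(size=(5000, 200))          # stand-in term-concept matrix
s_k = np.linspace(50.0, 1.0, 200)           # stand-in singular values
a_new = np.zeros(5000)
a_new[[10, 42, 99]] = 1.5                   # weighted counts of known terms;
                                            # terms unseen at SVD time are ignored
d_new = fold_in(a_new, T_k, s_k)
print(d_new.shape)                          # (200,)
```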
Additional uses of LSI
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.
LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization. Below are some other ways in which LSI is being used:
* Information discovery (eDiscovery, Government/Intelligence community, Publishing)
* Automated document classification (eDiscovery, Government/Intelligence community, Publishing)
* Text summarization (eDiscovery, Publishing)
* Relationship discovery (Government, Intelligence community, Social Networking)
* Automatic generation of link charts of individuals and organizations (Government, Intelligence community)
* Matching technical papers and grants with reviewers (Government)
* Online customer support (Customer Management)
* Determining document authorship (Education)
* Automatic keyword annotation of images
* Understanding software source code (Software Engineering)
* Filtering spam (System Administration)
* Information visualization [Landauer, T., Laham, D., and Derr, M., From Paragraph to Graph: Latent Semantic Analysis for Information Visualization, Proceedings of the National Academy of Sciences, 101, 2004, pp. 5214–5219.]
* Essay scoring (Education)
* Literature-based discovery
* Stock returns prediction
* Dream Content Analysis (Psychology)
LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.
Challenges to LSI
Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications involving more than 30 million documents that were fully processed through the matrix and SVD computations are common in some LSI applications. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.
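For example, gensim builds an LSI model from a streamed bag-of-words corpus roughly as follows; exact API details may vary between gensim versions, and the corpus and topic count here are purely illustrative.

```python
# Sketch of building and querying an LSI model with gensim.
from gensim import corpora, models, similarities

texts = [doc.lower().split() for doc in [
    "human machine interface for computer applications",
    "user interface management system",
    "graph of paths in trees",
]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]          # sparse bag-of-words

# Streamed, incremental LSI: the full dense matrix is never held in memory.
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# Query by folding a new document into the LSI space and ranking by cosine.
index = similarities.MatrixSimilarity(lsi[corpus])
query = dictionary.doc2bow("human computer interface".lower().split())
print(list(index[lsi[query]]))
```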
Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents). However, recent studies indicate that 50–1000 dimensions are suitable depending on the size and nature of the document collection.[Landauer, Thomas K., and Dumais, Susan T., Latent Semantic Analysis, Scholarpedia, 3(11):4356, 2008.] Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality.[Landauer, T. K., Foltz, P. W., & Laham, D. (1998), Introduction to Latent Semantic Analysis, Discourse Processes, 25, 259–284.] When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
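One way to apply the prediction-error idea is to scan candidate dimensionalities with a downstream classifier and cross-validation. The sketch below uses scikit-learn's TruncatedSVD as a stand-in for LSI topics; the corpus, labels, and candidate values of k are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical labelled corpus; in practice use your own documents and labels.
docs = ["good cheap laptop", "great phone battery",
        "terrible screen", "awful keyboard"] * 25
labels = np.array([1, 1, 0, 0] * 25)

# Scan candidate dimensionalities and keep the one with the lowest
# cross-validated prediction error on the downstream task.
for k in (2, 4, 8):
    pipeline = make_pipeline(
        TfidfVectorizer(),
        TruncatedSVD(n_components=k, random_state=0),   # LSI-style topics
        LogisticRegression(max_iter=1000),
    )
    score = cross_val_score(pipeline, docs, labels, cv=5).mean()
    print(k, round(1.0 - score, 3))                     # prediction error
```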
See also
* Coh-Metrix
* Compound term processing
* Distributional semantics
* Explicit semantic analysis
* Latent semantic mapping
* Latent semantic structure indexing
* Principal components analysis
* Probabilistic latent semantic analysis
* Spamdexing
* Word vector
* Topic model
** Latent Dirichlet allocation
References
Further reading
* Original article where the model was first exposed.
* Illustration of the application of LSA to document retrieval (PDF).
External links
Articles on LSA
Latent Semantic Analysis: a Scholarpedia article on LSA written by Tom Landauer, one of the creators of LSA.
Talks and demonstrations
LSA Overview: talk by Prof. describing LSA, its applications in Information Retrieval, and its connections to probabilistic latent semantic analysis.
Complete LSA sample code in C# for Windows
The demo code includes enumeration of text files, filtering stop words, stemming, making a document-term matrix and SVD.
Implementations
Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.
Sense Clusters
an Information Retrieval-oriented perl implementation of LSA
S-Space Package
a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA
Semantic Vectors
applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices
Infomap Project
an NLP-oriented C implementation of LSA (superseded by semanticvectors project)
Text to Matrix Generator
A MATLAB Toolbox for generating term-document matrices from text collections, with support for LSA
* Gensim contains a Python implementation of LSA for matrices larger than RAM.