In information theory, linguistics, and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965. Levenshtein distance may also be referred to as ''edit distance'', although that term may also denote a larger family of distance metrics known collectively as edit distance. It is closely related to pairwise string alignments.


Definition

The Levenshtein distance between two strings a, b (of length |a| and |b| respectively) is given by \operatorname{lev}(a, b) where

:\operatorname{lev}(a, b) = \begin{cases} |a| & \text{if } |b| = 0, \\ |b| & \text{if } |a| = 0, \\ \operatorname{lev}\big(\operatorname{tail}(a),\operatorname{tail}(b)\big) & \text{if } a[0] = b[0], \\ 1 + \min \begin{cases} \operatorname{lev}\big(\operatorname{tail}(a), b\big) \\ \operatorname{lev}\big(a, \operatorname{tail}(b)\big) \\ \operatorname{lev}\big(\operatorname{tail}(a), \operatorname{tail}(b)\big) \end{cases} & \text{otherwise,} \end{cases}

where the \operatorname{tail} of some string x is a string of all but the first character of x, and x[n] is the nth character of the string x, counting from 0. Note that the first element in the minimum corresponds to deletion (from a to b), the second to insertion and the third to replacement. This definition corresponds directly to the naive recursive implementation.


Example

For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following 3 edits change one into the other, and there is no way to do it with fewer than 3 edits:

# kitten → sitten (substitution of "s" for "k"),
# sitten → sittin (substitution of "i" for "e"),
# sittin → sitting (insertion of "g" at the end).
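The three edits above can be checked mechanically. The following sketch is a direct (and deliberately naive, exponential-time) Python transcription of the recursive definition; the function name `lev` is chosen here for illustration and is not from the article:

```python
def lev(a: str, b: str) -> int:
    # Direct transcription of the recursive definition (exponential time).
    if not b:
        return len(a)          # b exhausted: delete the rest of a
    if not a:
        return len(b)          # a exhausted: insert the rest of b
    if a[0] == b[0]:
        return lev(a[1:], b[1:])           # first characters match: skip both
    return 1 + min(lev(a[1:], b),          # deletion
                   lev(a, b[1:]),          # insertion
                   lev(a[1:], b[1:]))      # substitution

print(lev("kitten", "sitting"))  # → 3
```

This is only practical for short strings; the dynamic-programming formulations later in the article avoid the repeated work.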


Upper and lower bounds

The Levenshtein distance has several simple upper and lower bounds. These include:

* It is at least the absolute value of the difference of the sizes of the two strings.
* It is at most the length of the longer string.
* It is zero if and only if the strings are equal.
* If the strings have the same size, the Hamming distance is an upper bound on the Levenshtein distance. The Hamming distance is the number of positions at which the corresponding symbols in the two strings are different.
* The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality).

An example where the Levenshtein distance between two strings of the same length is strictly less than the Hamming distance is given by the pair "flaw" and "lawn". Here the Levenshtein distance equals 2 (delete "f" from the front; insert "n" at the end). The Hamming distance is 4.
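The bounds and the "flaw"/"lawn" example can be verified with a short sketch. Both helpers below (`lev`, a standard two-row dynamic-programming implementation, and `hamming`) are illustrative names introduced here, not code from the article:

```python
def lev(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance (row by row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def hamming(a: str, b: str) -> int:
    # Hamming distance: positions where equal-length strings differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

a, b = "flaw", "lawn"
d = lev(a, b)
assert abs(len(a) - len(b)) <= d <= max(len(a), len(b))  # size bounds
assert d <= hamming(a, b)                                # Hamming upper bound
print(d, hamming(a, b))  # → 2 4
```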


Applications

In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural-language translation based on translation memory.

The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.

In linguistics, the Levenshtein distance is used as a metric to quantify the linguistic distance, or how different two languages are from one another. It is related to mutual intelligibility: the higher the linguistic distance, the lower the mutual intelligibility, and the lower the linguistic distance, the higher the mutual intelligibility.


Relationship with other edit distance metrics

There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance,

* the Damerau–Levenshtein distance allows the transposition of two adjacent characters alongside insertion, deletion, and substitution;
* the longest common subsequence (LCS) distance allows only insertion and deletion, not substitution;
* the Hamming distance allows only substitution, hence, it only applies to strings of the same length;
* the Jaro distance allows only transposition.

Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
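The difference between these metrics shows up on concrete pairs. The sketch below (Python; both helper functions are illustrative implementations written for this comparison, not library calls) contrasts Levenshtein distance with LCS distance, where a substitution must be simulated by a deletion plus an insertion:

```python
from functools import lru_cache

def lev(a: str, b: str) -> int:
    # Levenshtein: insertion, deletion, substitution each cost 1.
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0: return j
        if j == 0: return i
        return min(d(i - 1, j) + 1,                       # deletion
                   d(i, j - 1) + 1,                       # insertion
                   d(i - 1, j - 1) + (a[i - 1] != b[j - 1]))  # substitution
    return d(len(a), len(b))

def lcs_distance(a: str, b: str) -> int:
    # LCS distance: only insertion and deletion are allowed,
    # so replacing a character effectively costs 2.
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0: return j
        if j == 0: return i
        if a[i - 1] == b[j - 1]:
            return d(i - 1, j - 1)
        return min(d(i - 1, j), d(i, j - 1)) + 1
    return d(len(a), len(b))

print(lev("kitten", "sitting"), lcs_distance("kitten", "sitting"))  # → 3 5
```

The two substitutions in the "kitten" → "sitting" example each cost 2 under LCS distance, which is why the second number is larger.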


Computation


Recursive

This is a straightforward, but inefficient, recursive Haskell implementation of a lDistance function that takes two strings, ''s'' and ''t'', and returns the Levenshtein distance between them:

 lDistance :: Eq a => [a] -> [a] -> Int
 lDistance [] t = length t  -- If s is empty, the distance is the number of characters in t
 lDistance s [] = length s  -- If t is empty, the distance is the number of characters in s
 lDistance (a : s') (b : t') =
   if a == b
     then lDistance s' t'   -- If the first characters are the same, they can be ignored
     else
       1 + minimum          -- Otherwise try all three possible actions and select the best one
         [ lDistance (a : s') t'  -- Character is inserted (b inserted)
         , lDistance s' (b : t')  -- Character is deleted (a deleted)
         , lDistance s' t'        -- Character is replaced (a replaced with b)
         ]

This implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times. A more efficient method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible suffixes might be stored in an array M, where M[i][j] is the distance between the last i characters of string s and the last j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is in the table in the last row and column, representing the distance between all of the characters in s and all the characters in t.
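The suffix table described in the previous paragraph can be rendered directly. The following sketch (Python; the names `lev` and `M` follow the paragraph's notation but the code itself is an illustrative rendering, not from the article) builds M one row at a time starting with row 0:

```python
def lev(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # M[i][j] = distance between the last i characters of s
    #           and the last j characters of t.
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):
        M[0][j] = j                  # empty suffix of s vs last j chars of t
    for i in range(1, m + 1):
        M[i][0] = i                  # last i chars of s vs empty suffix of t
        for j in range(1, n + 1):
            # s[m - i] and t[n - j] are the first characters of the suffixes.
            cost = 0 if s[m - i] == t[n - j] else 1
            M[i][j] = min(M[i - 1][j] + 1,         # deletion
                          M[i][j - 1] + 1,         # insertion
                          M[i - 1][j - 1] + cost)  # substitution
    return M[m][n]   # last row and column: all of s vs all of t

print(lev("kitten", "sitting"))  # → 3
```

Each entry is computed once, so the running time drops from exponential to proportional to the product of the two lengths.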


Iterative with full matrix

(Note: This section uses 1-based strings instead of 0-based strings.)

Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed. This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article ''The String-to-string correction problem'' by Robert A. Wagner and Michael J. Fischer.

This is a straightforward pseudocode implementation for a function LevenshteinDistance that takes two strings, ''s'' of length ''m'', and ''t'' of length ''n'', and returns the Levenshtein distance between them:

 function LevenshteinDistance(char s[1..m], char t[1..n]):
   // for all i and j, d[i, j] will hold the Levenshtein distance between
   // the first i characters of s and the first j characters of t
   declare int d[0..m, 0..n]

   set each element in d to zero

   // source prefixes can be transformed into empty string by
   // dropping all characters
   for i from 1 to m:
     d[i, 0] := i

   // target prefixes can be reached from empty source prefix
   // by inserting every character
   for j from 1 to n:
     d[0, j] := j

   for j from 1 to n:
     for i from 1 to m:
       if s[i] = t[j]:
         substitutionCost := 0
       else:
         substitutionCost := 1

       d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                          d[i, j-1] + 1,                   // insertion
                          d[i-1, j-1] + substitutionCost)  // substitution

   return d[m, n]

The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i, j] operations. At the end, the bottom-right element of the array, d[m, n], contains the answer.
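The pseudocode translates almost line for line into a runnable sketch. This Python version (illustrative, with 0-based string indexing shifted into the 1-based matrix) mirrors the full-matrix algorithm:

```python
def levenshtein_distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # d[i][j] = Levenshtein distance between the first i characters
    # of s and the first j characters of t.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i              # drop all i source characters
    for j in range(1, n + 1):
        d[0][j] = j              # insert all j target characters
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            substitution_cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                      # deletion
                          d[i][j - 1] + 1,                      # insertion
                          d[i - 1][j - 1] + substitution_cost)  # substitution
    return d[m][n]

print(levenshtein_distance("kitten", "sitting"))  # → 3
```

Keeping the whole matrix, rather than just the final distance, is what makes it possible to trace back an optimal edit sequence.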


Iterative with two matrix rows

It turns out that only two rows of the table (the previous row and the current row being calculated) are needed for the construction, if one does not want to reconstruct the edited input strings. The Levenshtein distance may be calculated iteratively using the following algorithm:

 function LevenshteinDistance(char s[0..m-1], char t[0..n-1]):
   // create two work vectors of integer distances
   declare int v0[n + 1]
   declare int v1[n + 1]

   // initialize v0 (the previous row of distances)
   // this row is A[0][i]: edit distance from an empty s to t;
   // that distance is the number of characters to append to s to make t.
   for i from 0 to n:
     v0[i] = i

   for i from 0 to m - 1:
     // calculate v1 (current row distances) from the previous row v0

     // first element of v1 is A[i + 1][0]
     // edit distance is delete (i + 1) chars from s to match empty t
     v1[0] = i + 1

     // use formula to fill in the rest of the row
     for j from 0 to n - 1:
       // calculating costs for A[i + 1][j + 1]
       deletionCost := v0[j + 1] + 1
       insertionCost := v1[j] + 1
       if s[i] = t[j]:
         substitutionCost := v0[j]
       else:
         substitutionCost := v0[j] + 1

       v1[j + 1] := minimum(deletionCost, insertionCost, substitutionCost)

     // copy v1 (current row) to v0 (previous row) for next iteration
     // since data in v1 is always invalidated, a swap without copy could be more efficient
     swap v0 with v1

   // after the last swap, the results of v1 are now in v0
   return v0[n]

Hirschberg's algorithm combines this method with divide and conquer. It can compute the optimal edit sequence, and not just the edit distance, in the same asymptotic time and space bounds.
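A direct rendering of the two-row algorithm in Python (illustrative; the function name is chosen here, and tuple assignment stands in for the pseudocode's swap):

```python
def levenshtein_two_rows(s: str, t: str) -> int:
    m, n = len(s), len(t)
    v0 = list(range(n + 1))   # previous row: distances from empty prefix of s
    v1 = [0] * (n + 1)        # current row, overwritten each iteration
    for i in range(m):
        v1[0] = i + 1         # delete i + 1 characters of s to match empty t
        for j in range(n):
            deletion_cost = v0[j + 1] + 1
            insertion_cost = v1[j] + 1
            substitution_cost = v0[j] + (s[i] != t[j])
            v1[j + 1] = min(deletion_cost, insertion_cost, substitution_cost)
        v0, v1 = v1, v0       # swap rows instead of copying
    return v0[n]              # after the final swap the result is in v0

print(levenshtein_two_rows("kitten", "sitting"))  # → 3
```

This reduces the space requirement from the full m-by-n matrix to two rows of length n + 1.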


Automata

Levenshtein automata efficiently determine whether a string has an edit distance lower than a given constant from a given string.


Approximation

The Levenshtein distance between two strings of length n can be approximated to within a factor of (\log n)^{O(1/\varepsilon)}, where \varepsilon > 0 is a free parameter to be tuned, in time O(n^{1+\varepsilon}).


Computational complexity

It has been shown that the Levenshtein distance of two strings of length n cannot be computed in time O(n^{2-\varepsilon}) for any \varepsilon greater than zero unless the strong exponential time hypothesis is false.


See also

* agrep
* Damerau–Levenshtein distance
* diff
* Dynamic time warping
* Euclidean distance
* Homology of sequences in genetics
* Hamming distance
* Hunt–Szymanski algorithm
* Jaccard index
* Locality-sensitive hashing
* Longest common subsequence problem
* Lucene (an open source search engine that implements edit distance)
* Manhattan distance
* Metric space
* MinHash
* Optimal matching algorithm
* Numerical taxonomy
* Sørensen similarity index



External links

* Rosetta Code implementations of Levenshtein distance