Wagner–Fischer algorithm
In computer science, the Wagner–Fischer algorithm is a dynamic programming algorithm that computes the edit distance between two strings of characters.
History
The Wagner–Fischer algorithm has a history of multiple invention. Navarro lists the following inventors of it, with date of publication, and acknowledges that the list is incomplete:[1]:43
- Vintsyuk, 1968
- Needleman and Wunsch, 1970
- Sankoff, 1972
- Sellers, 1974
- Wagner and Fischer, 1974
- Lowrance and Wagner, 1975
Calculating distance
The Wagner–Fischer algorithm computes edit distance based on the observation that if we reserve a matrix to hold the edit distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.
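Equivalently, with unit costs the entries of the matrix satisfy the recurrence below, where $[\cdot]$ is the Iverson bracket, equal to 1 if its condition holds and 0 otherwise:

$$d[i,0] = i, \qquad d[0,j] = j,$$
$$d[i,j] = \min\bigl(\, d[i-1,j] + 1,\ \ d[i,j-1] + 1,\ \ d[i-1,j-1] + [\,s_i \neq t_j\,] \,\bigr) \quad (i, j \ge 1),$$

with the three terms corresponding to a deletion, an insertion, and a match or substitution, respectively.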
A straightforward implementation, as pseudocode for a function EditDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them, looks as follows. Note that the input strings are one-indexed, while the matrix d is zero-indexed, and [i..k] is a closed range.
int EditDistance(char s[1..m], char t[1..n])
    // For all i and j, d[i, j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t.
    // Note that d has (m+1) x (n+1) values.
    let d be a 2-d array of int with dimensions [0..m, 0..n]

    for i in [0..m]
        d[i, 0] ← i  // the distance of any first string to an empty second string
                     // (transforming the string of the first i characters of s into
                     // the empty string requires i deletions)
    for j in [0..n]
        d[0, j] ← j  // the distance of any second string to an empty first string

    for j in [1..n]
        for i in [1..m]
            if s[i] = t[j] then
                d[i, j] ← d[i-1, j-1]    // no operation required
            else
                d[i, j] ← minimum of
                (
                    d[i-1, j] + 1,    // a deletion
                    d[i, j-1] + 1,    // an insertion
                    d[i-1, j-1] + 1   // a substitution
                )

    return d[m, n]
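For concreteness, the pseudocode translates directly into Python. The following is a sketch; the function name and the zero-based string indexing are ours, not part of the original pseudocode:

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance between s and t via the full (m+1) x (n+1) matrix."""
    m, n = len(s), len(t)
    # d[i][j] = distance between the first i characters of s and the first j of t.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # i deletions turn s[:i] into the empty string
    for j in range(n + 1):
        d[0][j] = j  # j insertions turn the empty string into t[:j]
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:       # Python strings are zero-indexed
                d[i][j] = d[i - 1][j - 1]  # characters match: no operation
            else:
                d[i][j] = min(d[i - 1][j] + 1,      # deletion
                              d[i][j - 1] + 1,      # insertion
                              d[i - 1][j - 1] + 1)  # substitution
    return d[m][n]
```

For example, edit_distance("kitten", "sitting") returns 3.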
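For example, for s = "kitten" and t = "sitting", the algorithm fills in the following matrix, in which row i corresponds to the first i characters of "kitten" and column j to the first j characters of "sitting":

          s  i  t  t  i  n  g
       0  1  2  3  4  5  6  7
    k  1  1  2  3  4  5  6  7
    i  2  2  1  2  3  4  5  6
    t  3  3  2  1  2  3  4  5
    t  4  4  3  2  1  2  3  4
    e  5  5  4  3  2  2  3  4
    n  6  6  5  4  3  3  2  3

The bottom-right entry d[6, 7] = 3 is the edit distance: replace k with s, replace e with i, and insert g at the end.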
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.
Proof of correctness
As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:
- It is initially true on row and column 0, because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] into t[1..j] by simply adding all j characters.
- If s[i] = t[j], and we can transform s[1..i-1] into t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
- Otherwise, the distance is the minimum of the three possible ways to do the transformation:
  - If we can transform s[1..i] into t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
  - If we can transform s[1..i-1] into t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
  - If we can transform s[1..i-1] into t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
- The operations required to transform s[1..m] into t[1..n] are of course the operations required to transform all of s into all of t, and so d[m,n] holds our result.

This proof fails to validate that the number placed in d[i,j] is in fact minimal; that is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show that one of the three is not minimal.
Possible modifications
Possible modifications to this algorithm include:
- We can adapt the algorithm to use less space, O(m) instead of O(mn), since it only requires that the previous row and the current row be stored at any one time (see the sketch after this list).
- We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
- We can normalize the distance to the interval [0, 1].
- If we are only interested in the distance when it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k+1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shortest string.[2]
- We can give different penalty costs to insertion, deletion, and substitution. We can also give penalty costs that depend on which characters are inserted, deleted, or substituted.
- This algorithm parallelizes poorly, due to its large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
- By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m (1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.[3]
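As an illustration of the first modification above, here is a minimal sketch of the two-row version in Python; the function name is ours, and the loop order mirrors the pseudocode above (with the outer loop over j, the two stored length-(m+1) slices are the columns d[., j-1] and d[., j] of the matrix, which gives the O(m) space bound):

```python
def edit_distance_two_rows(s: str, t: str) -> int:
    """Levenshtein distance in O(m) space: only two slices of d are kept alive."""
    m, n = len(s), len(t)
    prev = list(range(m + 1))  # d[., 0]: i deletions reach the empty string
    for j in range(1, n + 1):
        cur = [j] + [0] * m    # d[0, j] = j: j insertions build t[:j]
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                cur[i] = prev[i - 1]          # no operation required
            else:
                cur[i] = min(prev[i] + 1,     # deletion
                             cur[i - 1] + 1,  # insertion
                             prev[i - 1] + 1) # substitution
        prev = cur
    return prev[m]
```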
Sellers' variant for string search
By initializing the first row of the matrix with zeros, we obtain a variant of the Wagner–Fischer algorithm that can be used for fuzzy search of a string in a text.[1] This modification gives the end-positions of matching substrings of the text. To determine the start-position of a matching substring, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position.[4]
The resulting algorithm is by no means efficient, but was at the time of its publication (1980) one of the first algorithms that performed approximate search.[1]
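A minimal sketch of this variant in Python; the function name, the threshold parameter k, and the choice to report 1-based end positions are ours, for illustration:

```python
def fuzzy_search(pattern: str, text: str, k: int) -> list:
    """Report 1-based end positions j in text where some substring ending
    at j matches pattern within edit distance k (Sellers' variant)."""
    m = len(pattern)
    prev = list(range(m + 1))  # column j = 0: i deletions of pattern[:i]
    matches = []
    for j in range(1, len(text) + 1):
        # The first row is all zeros: a match may start anywhere in the text.
        cur = [0] * (m + 1)
        for i in range(1, m + 1):
            if pattern[i - 1] == text[j - 1]:
                cur[i] = prev[i - 1]          # characters match
            else:
                cur[i] = min(prev[i] + 1,     # deletion from pattern
                             cur[i - 1] + 1,  # insertion into pattern
                             prev[i - 1] + 1) # substitution
        if cur[m] <= k:
            matches.append(j)
        prev = cur
    return matches
```

For example, fuzzy_search("abc", "xabcx", 0) returns [4], the position of the last character of the embedded "abc".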
References
- [1] Navarro, Gonzalo (2001). "A guided tour to approximate string matching". ACM Computing Surveys. 33 (1): 31–88. doi:10.1145/375360.375365.
- [2] Gusfield, Dan (1997). Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge, UK: Cambridge University Press. ISBN 0-521-58519-8.
- [3] Allison, L. (September 1992). "Lazy Dynamic-Programming can be Eager". Information Processing Letters. 43 (4): 207–212. doi:10.1016/0020-0190(92)90202-7.
- [4] Bruno Woltzenlogel Paleo (2007). "An approximate gazetteer for GATE based on Levenshtein distance". Student Section of the European Summer School in Logic, Language and Information (ESSLLI).