StringDistances

This Julia package computes various distances between strings (UTF-8 encoded).

Syntax

The function compare returns a similarity score between two strings. The score is always between 0 and 1: a value of 0 means the strings are completely different and a value of 1 means they are completely similar.

using StringDistances
compare(Hamming(), "martha", "martha")
#> 1.0
compare(Hamming(), "martha", "marhta")
#> 0.6666666666666667

Distances

Edit Distances

Q-gram Distances

Q-gram distances compare the set of all substrings of length q in each string.
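For instance, the 2-grams of a word can be listed as follows. This is a minimal sketch for illustration: the qgrams helper below is not part of the package, and it assumes ASCII input (the package itself handles full UTF-8).

```julia
# Illustrative helper, not part of StringDistances:
# list the substrings of length q in an ASCII string.
qgrams(s, q) = [s[i:i+q-1] for i in 1:length(s)-q+1]

qgrams("william", 2)
#> ["wi", "il", "ll", "li", "ia", "am"]
```

A q-gram distance then scores two strings by how much their multisets of q-grams overlap.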

Others

Distance Modifiers

The package includes distance "modifiers" that can be applied to any distance.

  • Winkler boosts the similarity score of strings with common prefixes. The Winkler adjustment was originally defined for the Jaro similarity score, but this package defines it for any string distance.

    compare(Jaro(), "martha", "marhta")
    #> 0.9444444444444445
    compare(Winkler(Jaro()), "martha", "marhta")
    #> 0.9611111111111111
    
    compare(QGram(2), "william", "williams")
    #> 0.9230769230769231
    compare(Winkler(QGram(2)), "william", "williams")
    #> 0.9538461538461539
    
  • Modifiers from the Python library fuzzywuzzy. One difference from the Python library is that these modifiers are defined for any distance, not just the Levenshtein one.

    • Partial returns the maximal similarity score between the shorter string and substrings of the longer string.

      compare(Levenshtein(), "New York Yankees", "Yankees")
      #> 0.4375
      compare(Partial(Levenshtein()), "New York Yankees", "Yankees")
      #> 1.0
      
    • TokenSort adjusts for differences in word order by reordering words alphabetically.

      compare(RatcliffObershelp(), "mariners vs angels", "angels vs mariners")
      #> 0.44444
      compare(TokenSort(RatcliffObershelp()),"mariners vs angels", "angels vs mariners")
      #> 1.0
      
    • TokenSet adjusts for differences in word order and word count by comparing the intersection of the two strings with each string.

      compare(Jaro(),"mariners vs angels", "los angeles angels at seattle mariners")
      #> 0.559904
      compare(TokenSet(Jaro()),"mariners vs angels", "los angeles angels at seattle mariners")
      #> 0.944444
      
    • TokenMax combines the scores of the base distance and the Partial, TokenSort, and TokenSet modifiers, with penalty terms depending on string lengths.

      compare(TokenMax(RatcliffObershelp()),"mariners vs angels", "los angeles angels at seattle mariners")
      #> 0.855
      

Compare vs Evaluate

The function compare returns a similarity score: a value of 0 means completely different and a value of 1 means completely similar. In contrast, the function evaluate returns the literal distance between two strings, with a value of 0 meaning completely similar. Some distances are between 0 and 1; others are unbounded.

compare(Levenshtein(), "New York", "New York")
#> 1.0
evaluate(Levenshtein(), "New York", "New York")
#> 0

Which distance should I use?

As a rule of thumb,

  • Standardize strings before comparing them (case, whitespace, accents, abbreviations, ...)
  • Only consider using one of the edit distances if word order matters.
  • The distance TokenMax(RatcliffObershelp()) is a good choice to link names or addresses across datasets.
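Standardization can be as simple as lowercasing and trimming whitespace before calling compare. The normalize helper below is an assumption for illustration, not part of the package:

```julia
# Hypothetical normalization step (not provided by StringDistances):
normalize(s) = lowercase(strip(s))

normalize("  New YORK ")
#> "new york"

# then compare the normalized strings, e.g.
# compare(TokenMax(RatcliffObershelp()), normalize(a), normalize(b))
```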
