Efficient string matching in Apache Spark

mrt*_*nsd 26 python fuzzy-search string-matching apache-spark pyspark

Using an OCR tool I extracted texts from screenshots (each about 1–5 sentences). However, when manually verifying the extracted texts, I noticed that several errors occur from time to time.

Given the text "Hello there! I really like Spark ❤️!", I noticed that:

1) Letters like "I", "!" and "l" get replaced by "|".

2) Emojis are not extracted correctly and get replaced by other characters or left out.

3) Whitespace is removed from time to time.

As a result, I might end up with a string like this: "Hello there 7l | real|y like Spark!"

Since I am trying to match these strings against a dataset containing the correct texts (in this case "Hello there! I really like Spark ❤️!"), I am looking for an efficient way to match strings in Spark.

Can anyone suggest an efficient algorithm in Spark that lets me compare the extracted texts (~100,000) against my dataset (~100 million)?

hi-*_*zir 29

I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a bunch of ML transformers to get the best matches. You will need a Tokenizer (or split):

import org.apache.spark.ml.feature.RegexTokenizer

val tokenizer = new RegexTokenizer()
  .setPattern("")
  .setInputCol("text")
  .setMinTokenLength(1)
  .setOutputCol("tokens")
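As a quick sanity check (my own addition, assuming a SparkSession with spark.implicits._ in scope): with an empty pattern the tokenizer splits the input into single characters, lowercasing by default, while minTokenLength=1 drops empty tokens.

tokenizer.transform(Seq("Hello!").toDF("text"))
  .select("tokens")
  .show(false)
// expected output: [h, e, l, l, o, !]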

NGram (for example a 3-gram):

import org.apache.spark.ml.feature.NGram

val ngram = new NGram()
  .setN(3)
  .setInputCol("tokens")
  .setOutputCol("ngrams")
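NGram joins consecutive tokens with spaces, so the character tokens above become character trigrams. Continuing the sanity check (again my own illustration, not from the original answer):

ngram.transform(tokenizer.transform(Seq("Hello!").toDF("text")))
  .select("ngrams")
  .show(false)
// expected output: [h e l, e l l, l l o, l o !]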

Vectorizer (for example CountVectorizer or HashingTF):

import org.apache.spark.ml.feature.HashingTF

val vectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
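One detail worth noting: MinHash treats any non-zero entry as set membership, so the raw n-gram counts from HashingTF are effectively binarized anyway. To make that explicit, HashingTF has a binary toggle; this variant is optional and not part of the original answer:

val binaryVectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
  .setBinary(true) // emit 0/1 term presence instead of counts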

and LSH:

import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}

// Increase numHashTables in practice.
val lsh = new MinHashLSH()
  .setInputCol("vectors")
  .setOutputCol("lsh")
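As the comment says, a single hash table is rarely enough in practice; more tables reduce the false negatives of the approximate join at the cost of extra computation. A tuned variant might look like this (the value 5 is purely illustrative; swap it in for lsh in the pipeline below):

val lshTuned = new MinHashLSH()
  .setNumHashTables(5) // illustrative; tune for recall vs. cost
  .setInputCol("vectors")
  .setOutputCol("lsh")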

Combine these with a Pipeline:

import org.apache.spark.ml.Pipeline

val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))

Fit it on example data:

val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")
val db = Seq(
  "Hello there ! I really like Spark ??!", 
  "Can anyone suggest an efficient algorithm"
).toDF("text")

val model = pipeline.fit(db)

Transform both:

val dbHashed = model.transform(db)
val queryHashed = model.transform(query)

and join:

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
+--------------------+--------------------+------------------+                  
|            datasetA|            datasetB|           distCol|
+--------------------+--------------------+------------------+
|[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
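The join can return several candidates per query, so a natural follow-up (my sketch, not part of the original answer) is to keep only the closest database row for each query string by grouping on the query side:

import org.apache.spark.sql.functions.{col, min}

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75)
  .groupBy(col("datasetB.text"))              // one row per query string
  .agg(min(col("distCol")).alias("bestDist")) // distance of the closest match
  .show

To also keep the matching database text, a window over datasetB.text ordered by distCol would do the job.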

The same approach can be used in PySpark:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")

db = spark.createDataFrame([
    "Hello there ! I really like Spark ??!", 
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")


model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# |            datasetA|            datasetB|           distCol|
# +--------------------+--------------------+------------------+
# |[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+
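If only a handful of queries need to be matched, MinHashLSHModel also provides approxNearestNeighbors, which takes a single pre-vectorized key instead of a whole dataset. A minimal sketch, shown in Scala to mirror the walkthrough above (the PySpark call is analogous):

import org.apache.spark.ml.linalg.Vector

// Use the vectorized form of the first query row as the lookup key.
val keyVector = queryHashed.select("vectors").head.getAs[Vector](0)

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxNearestNeighbors(dbHashed, keyVector, 2)
  .show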

Comments

  • I am struggling to compute the Levenshtein distance between two tables of 10 million and 70 million rows, which of course takes a very long time. Two questions: how fast is the approach described in this answer, and how would you do it without Spark? (2 upvotes)