jac*_*enc 59 python nlp fuzzy-comparison
What I am trying to accomplish is a program that reads in a file and compares each sentence against the original sentence. A sentence that is a perfect match to the original gets a score of 1, and a sentence that is the total opposite gets a 0. All other fuzzy sentences get a grade somewhere between 1 and 0.
I am not sure which operation to use to do this in Python 3.
I have included sample text, in which Text 1 is the original and the other strings below it are the comparisons.
Text 1: It was a dark and stormy night. I was all alone sitting on a red chair. I was not completely alone as I had three cats.
Text 20: It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines. // should score high but not 1
Text 21: It was a murky and tempestuous night. I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines. // should score lower than Text 20
Text 22: I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines. It was a murky and tempestuous night. // should score lower than Text 21 but NOT 0
Text 24: It was a dark and stormy night. I was not alone. I was not sitting on a red chair. I had three cats. // should score a 0!
con*_*gus 96
There is a package called fuzzywuzzy. Install it via pip:
pip install fuzzywuzzy
Usage is simple:
>>> from fuzzywuzzy import fuzz
>>> fuzz.ratio("this is a test", "this is a test!")
96
The package is built on top of difflib. Why not just use that, you ask? Apart from being simpler, it has a number of different matching methods (like token order insensitivity, partial string matching) which make it more powerful in practice. The process.extract functions are especially useful: they find the best matching strings and ratios from a set. From their readme:
Partial Ratio
>>> fuzz.partial_ratio("this is a test", "this is a test!")
100
Token Sort Ratio
>>> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
90
>>> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100
Token Set Ratio
>>> fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
84
>>> fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
100
Process
>>> choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
>>> process.extract("new york jets", choices, limit=2)
[('New York Jets', 100), ('New York Giants', 78)]
>>> process.extractOne("cowboys", choices)
("Dallas Cowboys", 90)
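Applied to the question's own texts, a minimal sketch (fuzz.ratio returns an integer from 0 to 100, so dividing by 100 gives the 0-to-1 scale the question asks for):

from fuzzywuzzy import fuzz

original = ("It was a dark and stormy night. I was all alone sitting on a red chair. "
            "I was not completely alone as I had three cats.")
text_20 = ("It was a murky and stormy night. I was all alone sitting on a crimson chair. "
           "I was not completely alone as I had three felines.")

# scale the 0-100 integer down to the 0-1 range the question asks for
score = fuzz.ratio(original, text_20) / 100.0
print(score)  # high, but below 1.0, as Text 20 should be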
mac*_*mac 80
There is a module in the standard library (called difflib) that will compare strings and return a score based on their similarity. The SequenceMatcher class should do what you are after.
Edit: A quick example from the Python prompt:
>>> from difflib import SequenceMatcher as SM
>>> s1 = ' It was a dark and stormy night. I was all alone sitting on a red chair. I was not completely alone as I had three cats.'
>>> s2 = ' It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines.'
>>> SM(None, s1, s2).ratio()
0.9112903225806451
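And a minimal sketch ranking all of the question's comparison texts against the original, using nothing outside the standard library:

from difflib import SequenceMatcher

original = ("It was a dark and stormy night. I was all alone sitting on a red chair. "
            "I was not completely alone as I had three cats.")
candidates = {
    "Text 20": "It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines.",
    "Text 21": "It was a murky and tempestuous night. I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines.",
    "Text 22": "I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines. It was a murky and tempestuous night.",
    "Text 24": "It was a dark and stormy night. I was not alone. I was not sitting on a red chair. I had three cats.",
}

# ratio() already returns a float in [0, 1], which is the scale the question wants
for name, text in candidates.items():
    print(name, round(SequenceMatcher(None, original, text).ratio(), 3))

Keep in mind that SequenceMatcher only measures surface similarity, so Text 24 will still score well above 0 even though its meaning is the opposite of the original.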
HTH!
hob*_*obs 15
fuzzyset is much faster than fuzzywuzzy (difflib) for both indexing and searching.
from fuzzyset import FuzzySet

# index the candidate sentences, one per line
corpus = """It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines
It was a murky and tempestuous night. I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines
I was all alone sitting on a crimson cathedra. I was not completely alone as I had three felines. It was a murky and tempestuous night.
It was a dark and stormy night. I was not alone. I was not sitting on a red chair. I had three cats."""
corpus = [line.lstrip() for line in corpus.split("\n")]
fs = FuzzySet(corpus)

# get() returns (score, matched_string) pairs, best match first
query = "It was a dark and stormy night. I was all alone sitting on a red chair. I was not completely alone as I had three cats."
fs.get(query)
# [(0.873015873015873, 'It was a murky and stormy night. I was all alone sitting on a crimson chair. I was not completely alone as I had three felines')]
Warning: be careful not to mix unicode and bytes in your fuzzyset.
This task is called paraphrase identification, which is an active area of natural language processing research. I have linked several state-of-the-art papers, and you can find open-source code on GitHub for many of them.
Note that all of the answers to this question assume some kind of string/surface similarity between the two sentences, while in reality two sentences with little string similarity can be semantically similar.
If you're interested in that kind of similarity, you can use Skip-Thoughts. Install the software following the GitHub guide and go to the paraphrase detection section in the readme:
import skipthoughts
model = skipthoughts.load_model()
vectors = skipthoughts.encode(model, X_sentences)
This converts your sentences (X_sentences) into vectors. Afterwards you can find the similarity of two vectors with:
import scipy.spatial.distance

similarity = 1 - scipy.spatial.distance.cosine(vectors[0], vectors[1])
Here we assume vectors[0] and vectors[1] are the vectors corresponding to X_sentences[0] and X_sentences[1], the pair whose score you want to find.
There are other models that turn sentences into vectors as well, which you can find here.
Once you have turned your sentences into vectors, similarity is just a matter of finding the cosine similarity between those vectors.
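As an illustration of that last step, here is a minimal sketch of cosine similarity on two small numpy vectors (the numbers are hypothetical stand-ins for real sentence encodings):

import numpy as np

# hypothetical sentence vectors; in practice these come from the encoder above
v0 = np.array([0.1, 0.9, 0.3])
v1 = np.array([0.2, 0.8, 0.4])

# cosine similarity = dot product divided by the product of the vector norms
similarity = float(np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1)))
print(similarity)  # close to 1.0 for vectors pointing in nearly the same direction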
Update for 2020: Google has released a new model called BERT, built on the deep learning framework Tensorflow. There is also an implementation that many people find easier to use, called Transformers. These programs take two phrases or sentences and can be trained to say whether the two phrases/sentences are the same or not. To train them you need a number of sentence pairs labeled 1 or 0 (depending on whether or not they have the same meaning). You train these models with your training data (already labeled data), and then you can use the trained model to make predictions on a new pair of phrases/sentences. You can find out how to train (they call it fine-tuning) these models on the corresponding github pages or in many other places (such as this).
Labeled English training data is also already available, called MRPC (Microsoft Research Paraphrase Corpus). Note that multilingual and language-specific versions of BERT also exist, so this model can be extended to (e.g. trained for) other languages as well.
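As a rough sketch of what prediction with such a fine-tuned model can look like (this assumes the transformers and torch packages are installed and uses bert-base-cased-finetuned-mrpc, a publicly available checkpoint fine-tuned on MRPC; treating index 1 as the "paraphrase" class is an assumption about that checkpoint's label order):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# assumption: a BERT checkpoint fine-tuned on MRPC
model_name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

s1 = "It was a dark and stormy night. I was all alone sitting on a red chair."
s2 = "It was a murky and stormy night. I was all alone sitting on a crimson chair."

# encode the sentence pair and classify it
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# softmax turns the logits into probabilities; index 1 is assumed
# to be the "is a paraphrase" class for this checkpoint
probability = torch.softmax(logits, dim=-1)[0, 1].item()
print(probability)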