python ocr split tokenize nltk
I am using NLTK to process some text extracted from a PDF file. I can recover the text in full, but there are many cases where the spaces between words are not captured, so I end up with ifI instead of if I, thatposition instead of that position, or andhe's instead of and he's.
My question is: how can I use NLTK to find the words it does not recognize / has not learned, and check whether there is a more likely "nearby" combination of words? Is there a more elegant way to do this check than simply taking each unrecognized word, splitting it one character at a time, and seeing whether that yields two recognizable words?
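For reference, here is a minimal sketch of the brute-force check the question describes, assuming the NLTK "words" corpus has been downloaded (nltk.download('words')); the helper name split_unknown_word is hypothetical:

import nltk
from nltk.corpus import words

# Build a vocabulary of known English words (requires nltk.download('words')).
english_vocab = set(w.lower() for w in words.words())

def split_unknown_word(token):
    """Try every split point and return the first one where both halves
    are recognized dictionary words, or None if no split works."""
    for i in range(1, len(token)):
        left, right = token[:i], token[i:]
        if left.lower() in english_vocab and right.lower() in english_vocab:
            return left, right
    return None

print(split_unknown_word("thatposition"))  # e.g. ('that', 'position')

This works for simple cases but scales poorly and cannot rank competing splits, which is why a spell-checking library is a better fit.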
I would suggest switching to pyenchant instead, since it is a more robust solution for this kind of problem. You can download pyenchant here. Here is an example of how to get the result once it is installed:
>>> text = "IfI am inthat position, Idon't think I will." # note the lack of spaces
>>> from enchant.checker import SpellChecker
>>> checker = SpellChecker("en_US")
>>> checker.set_text(text)
>>> for error in checker:
for suggestion in error.suggest():
if error.word.replace(' ', '') == suggestion.replace(' ', ''): # make sure the suggestion has exact same characters as error in the same order as error and without considering spaces
error.replace(suggestion)
break
>>> checker.get_text()
"If I am in that position, I don't think I will." # text is now fixed