I need to split on the slash and then report the tags. This is the hunspell dictionary format. I tried to find a class on GitHub that could do this, but could not find one.
# vi test.txt
test/S
boy
girl/SE
home/
house/SE123
man/E
country
wind/ES
Code:
from collections import defaultdict
myl = defaultdict(list)
with open('test.txt') as f:
    for l in f:
        l = l.rstrip()
        try:
            tags = l.split('/')[1]
            myl[tags].append(l.split('/')[0])
            for t in tags:
                myl[t].append(l.split('/')[0])
        except:
            pass
Output:
defaultdict(list,
            {'S': ['test', 'test', 'girl', 'house', 'wind'],
             'SE': ['girl'],
             'E': ['girl', 'house', 'man', 'man', 'wind'],
             '': ['home'],
             'SE123': ['house'],
             '1': ['house'],
             '2': ['house'],
             '3': ['house'],
             'ES': ['wind']})
The SE group should have 3 words, 'girl', 'wind' and 'house'. There should be no ES group, since it contains the same tags as 'SE' and is covered by it, while SE123 should stay as it is. How do I achieve this?
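In other words, a query such as 'SE' should match every word whose tag string contains all of the queried letters, in any order, while the stored tag strings themselves (like SE123) stay untouched. A minimal sketch of that lookup rule (the words_with_tags helper and the entries list are only for illustration, not hunspell API):

def words_with_tags(entries, query):
    # keep every word whose tag set contains all of the queried tags
    q = set(query)
    return [word for word, tags in entries if q <= set(tags)]

entries = [('test', 'S'), ('girl', 'SE'), ('home', ''),
           ('house', 'SE123'), ('man', 'E'), ('wind', 'ES')]
print(words_with_tags(entries, 'SE'))  # ['girl', 'house', 'wind']
print(words_with_tags(entries, 'ES'))  # ['girl', 'house', 'wind'] -- same group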
Update:
I managed to add bigrams, but how do I add 3-, 4- and 5-grams?
from collections import defaultdict
import nltk

myl = defaultdict(list)
with open('hi_IN.dic') as f:
    for l in f:
        l = l.rstrip()
        try:
            tags = l.split('/')[1]
            ntags = ''.join(sorted(tags))
            myl[ntags].append(l.split('/')[0])
            for t in tags:
                myl[t].append(l.split('/')[0])
            bigrm = list(nltk.bigrams([i for i in tags]))
            nlist = [x + y for x, y in bigrm]
            for t1 in nlist:
                t1a = ''.join(sorted(t1))
                myl[t1a].append(l.split('/')[0])
        except:
            pass
I thought it would help if I sorted the tags at the source:
with open('test1.txt', 'w') as nf:
    with open('test.txt') as f:
        for l in f:
            l = l.rstrip()
            try:
                tags = l.split('/')[1]
            except IndexError:
                nline = l
            else:
                ntags = ''.join(sorted(tags))
                nline = l.split('/')[0] + '/' + ntags
            nf.write(nline + '\n')
This creates a new file test1.txt with the tags sorted, but the problem of trigrams (and longer n-grams) is still unsolved.
I downloaded a sample file:
!wget https://raw.githubusercontent.com/wooorm/dictionaries/master/dictionaries/en-US/index.dic
The report produced with the grep command is correct:
!grep 'P.*U' index1.dic
CPU/M
GPU
aware/PU
cleanly/PRTU
common/PRTUY
conscious/PUY
easy/PRTU
faithful/PUY
friendly/PRTU
godly/PRTU
grateful/PUY
happy/PRTU
healthy/PRTU
holy/PRTU
kind/PRTUY
lawful/PUY
likely/PRTU
lucky/PRTU
natural/PUY
obtrusive/PUY
pleasant/PTUY
prepared/PU
reasonable/PU
responsive/PUY
righteous/PU
scrupulous/PUY
seemly/PRTU
selfish/PUY
timely/PRTU
truthful/PUY
wary/PRTU
wholesome/PU
willing/PUY
worldly/PTU
worthy/PRTU
The Python report using bigrams on the file with sorted tags does not contain all of the words listed above:
myl['PU']
['aware',
'aware',
'conscious',
'faithful',
'grateful',
'lawful',
'natural',
'obtrusive',
'prepared',
'prepared',
'reasonable',
'reasonable',
'responsive',
'righteous',
'righteous',
'scrupulous',
'selfish',
'truthful',
'wholesome',
'wholesome',
'willing']
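A likely reason for the gap: nltk.bigrams only produces adjacent pairs, so a word such as cleanly/PRTU never yields the pair 'PU', because P and U are not adjacent even after sorting. A sketch of one way around this, using itertools.combinations to generate every tag subset up to a given size, which would also cover the 3-, 4- and 5-gram case asked about above (the tag_ngrams name and the max_n parameter are illustrative only):

from itertools import combinations

def tag_ngrams(tags, max_n=5):
    # all sorted subsets of the tag string, from pairs up to max_n letters
    stags = sorted(tags)
    grams = []
    for n in range(2, min(max_n, len(stags)) + 1):
        grams.extend(''.join(c) for c in combinations(stags, n))
    return grams

print(tag_ngrams('PRTU'))
# ['PR', 'PT', 'PU', 'RT', 'RU', 'TU', 'PRT', 'PRU', 'PTU', 'RTU', 'PRTU']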
If I understand it correctly, this is more a matter of constructing a data structure that, for a given tag, builds the correct list. We can do this by constructing a dictionary that only takes individual tags into account. Afterwards, when one queries for multiple tags, we calculate the intersection. This keeps the representation compact and also makes it convenient to extract, for example, all elements with tag AC, which would list the elements tagged ABCD, ACD, ZABC, etc.
We can thus construct a parser:
from collections import defaultdict

class Hunspell(object):

    def __init__(self, data):
        self.data = data

    def __getitem__(self, tags):
        if not tags:
            return self.data.get(None, [])
        elements = [self.data.get(tag, ()) for tag in tags]
        data = set.intersection(*map(set, elements))
        return [e for e in self.data.get(tags[0], ()) if e in data]

    @staticmethod
    def load(f):
        data = defaultdict(list)
        for line in f:
            try:
                element, tags = line.rstrip().split('/', 1)
                for tag in tags:
                    data[tag].append(element)
                data[None].append(element)
            except ValueError:
                pass  # element with no tags
        return Hunspell(dict(data))
The list processing at the end of __getitem__ retrieves the elements in the correct order.
We can then load the file into memory:
>>> with open('test.txt') as f:
...     h = Hunspell.load(f)
and query arbitrary keys:
>>> h['SE']
['girl', 'house', 'wind']
>>> h['ES']
['girl', 'house', 'wind']
>>> h['1']
['house']
>>> h['']
['test', 'girl', 'home', 'house', 'man', 'wind']
>>> h['S3']
['house']
>>> h['S2']
['house']
>>> h['SE2']
['house']
>>> h[None]
['test', 'girl', 'home', 'house', 'man', 'wind']
>>> h['4']
[]
Querying for a non-existent tag results in an empty list. So the "intersection" process is postponed until the call is made. We could in fact generate all possible intersections in advance, but that would result in a large data structure, and potentially a lot of wasted work.
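For example, with k distinct tags there are up to 2^k possible tag combinations, so materializing every intersection up front would mostly be wasted effort. A middle ground, sketched below on top of the Hunspell class above (the CachedHunspell subclass and its _cache attribute are my own illustration, not part of the parser), is to memoize each query the first time it is asked; sorting the key makes 'SE' and 'ES' share one cache entry:

class CachedHunspell(Hunspell):
    def __init__(self, data):
        super().__init__(data)
        self._cache = {}  # canonical tag string -> result list

    def __getitem__(self, tags):
        # canonicalize so that 'SE' and 'ES' hit the same cache entry
        key = ''.join(sorted(tags)) if tags else None
        if key not in self._cache:
            self._cache[key] = super().__getitem__(tags)
        return self._cache[key]

Repeated queries such as h['SE'] and h['ES'] then compute the intersection only once.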