NLTK makes it easy to compute bigrams of words. What about letters?

ist*_*ses 6 python nlp nltk n-gram

I've seen plenty of documentation online about how Python's NLTK makes it easy to compute bigrams of words.

What about letters?

What I'd like to do is plug in a dictionary and have it tell me the relative frequencies of different letter pairs.

Ultimately I'd like to make some kind of Markov process to generate likely-looking (but fake) words.
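
For reference, a minimal sketch of that generation step: a letter-level Markov chain trained on a plain word list. The train/generate helpers and the '^'/'$' boundary markers are illustrative assumptions, not taken from the answers below.

import random
from collections import Counter, defaultdict

def train(words):
    """Count letter-to-letter transitions over a word list."""
    transitions = defaultdict(Counter)
    for word in words:
        padded = '^' + word + '$'  # '^' marks word start, '$' marks word end
        for a, b in zip(padded, padded[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions):
    """Walk the transition table from '^' until '$' is drawn."""
    letter, out = '^', []
    while True:
        counts = transitions[letter]
        letter = random.choices(list(counts), weights=list(counts.values()))[0]
        if letter == '$':
            return ''.join(out)
        out.append(letter)

print(generate(train(['alpha', 'beta', 'gamma', 'delta'])))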

mik*_*iku 5

Here's an example using Counter from the collections module (it gives absolute counts; see the note after the example for turning them into a relative frequency distribution):

#!/usr/bin/env python

import sys
from collections import Counter
from itertools import islice
from pprint import pprint

def split_every(n, iterable):
    """Yield successive chunks of n characters.

    Note: the chunks do not overlap ('ab', 'cd', ...), unlike the
    sliding-window bigrams in the other answer.
    """
    i = iter(iterable)
    piece = ''.join(islice(i, n))
    while piece:
        yield piece
        piece = ''.join(islice(i, n))

def main(text):
    """Return a Counter of letter n-gram frequencies for text."""
    freqs = Counter()
    for pair in split_every(2, text):  # adjust n here
        freqs[pair] += 1
    return freqs

if __name__ == '__main__':
    with open(sys.argv[1]) as handle:
        freqs = main(handle.read()) 
        pprint(freqs.most_common(10))

Usage:

$ python 14168601.py lorem.txt
[('t ', 32),
 (' e', 20),
 ('or', 18),
 ('at', 16),
 (' a', 14),
 (' i', 14),
 ('re', 14),
 ('e ', 14),
 ('in', 14),
 (' c', 12)]
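
The Counter above holds absolute counts; to get the relative frequencies asked about in the question, divide each count by the total. A minimal sketch (Python 3, where / is true division; rel_freqs is an illustrative name):

total = sum(freqs.values())
rel_freqs = {pair: count / total for pair, count in freqs.items()}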


vpe*_*kar 5

If you only need bigrams, you don't need NLTK. You can simply do the following:

from collections import Counter
text = "This is some text"
# zip the text against itself shifted by one to get overlapping pairs
bigrams = Counter(x + y for x, y in zip(*[text[i:] for i in range(2)]))
for bigram, count in bigrams.most_common():
    print(bigram, count)

Output:

is 2
s  2
me 1
om 1
te 1
 t 1
 i 1
e  1
 s 1
hi 1
so 1
ex 1
Th 1
xt 1
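
The zip trick generalizes to any n-gram size by zipping n shifted copies of the text; a sketch for trigrams (n = 3 here is an illustrative choice):

from collections import Counter

text = "This is some text"
n = 3
trigrams = Counter(''.join(chars) for chars in zip(*[text[i:] for i in range(n)]))
print(trigrams.most_common(3))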