Counting n-gram frequencies in Python NLTK

Rkz*_*Rkz 24 python nltk n-gram

I have the following code. I know that I can use the apply_freq_filter function to filter out collocations below a given frequency count. However, I don't know how to get the frequencies of all the n-gram tuples (bigrams, in my case) in a document before I decide what frequency threshold to set for filtering. As you can see, I am using the NLTK collocations class.

import nltk
from nltk.collocations import BigramCollocationFinder

with open('a_text_file', 'r') as open_file:
    line = open_file.read()
tokens = line.split()

bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(3)
print(finder.nbest(bigram_measures.pmi, 100))

Ram*_*han 35

NLTK has its own bigrams generator, as well as a convenient FreqDist() function.

import nltk

with open('a_text_file') as f:
    raw = f.read()

tokens = nltk.word_tokenize(raw)

# Create your bigrams
bgs = nltk.bigrams(tokens)

# Compute the frequency distribution for all the bigrams in the text
fdist = nltk.FreqDist(bgs)
for k, v in fdist.items():
    print(k, v)

Once you have access to the bigrams and their frequency distribution, you can filter them however you like.
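As a minimal sketch of that filtering step (my own illustration, not part of the answer; the tokens and `min_count` threshold are made up), you can keep only the bigrams whose count meets a chosen threshold:

```python
import nltk

tokens = "the cat sat on the mat the cat ran".split()
fdist = nltk.FreqDist(nltk.bigrams(tokens))

# Keep only bigrams occurring at least min_count times
min_count = 2
frequent = {bg: n for bg, n in fdist.items() if n >= min_count}
print(frequent)  # {('the', 'cat'): 2}
```

Inspecting `fdist` like this before calling apply_freq_filter lets you pick a threshold that actually matches your data.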

Hope that helps.


Rkz*_*Rkz 10

The finder.ngram_fd.viewitems() function works (on Python 3, use finder.ngram_fd.items() instead).
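To spell that out (a small sketch with made-up tokens): ngram_fd on a BigramCollocationFinder is a FreqDist of every bigram the finder saw, so you can read raw counts from it directly before applying any filter.

```python
import nltk
from nltk.collocations import BigramCollocationFinder

tokens = "the cat sat on the mat the cat ran".split()
finder = BigramCollocationFinder.from_words(tokens)

# finder.ngram_fd is a FreqDist keyed by bigram tuples
for bigram, count in finder.ngram_fd.items():
    print(bigram, count)
```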


小智 7

I tried all of the above and found a simpler solution. NLTK ships with a straightforward way to get the most common n-gram frequencies.

filtered_sentence is my list of word tokens.

import nltk

word_fd = nltk.FreqDist(filtered_sentence)
bigram_fd = nltk.FreqDist(nltk.bigrams(filtered_sentence))

bigram_fd.most_common()

The output should be:

[(('working', 'hours'), 31),
 (('9', 'hours'), 14),
 (('place', 'work'), 13),
 (('reduce', 'working'), 11),
 (('improve', 'experience'), 9)]


Vah*_*hab 6

from nltk import FreqDist
from nltk.util import ngrams

def compute_freq():
    bigramfdist = FreqDist()

    with open('corpus.txt', 'r') as textfile:
        for line in textfile:
            if len(line) > 1:
                tokens = line.strip().split(' ')

                bigrams = ngrams(tokens, 2)
                bigramfdist.update(bigrams)
    return bigramfdist

compute_freq()
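The same accumulation pattern can be sanity-checked without a corpus file (a sketch using hypothetical in-memory lines in place of corpus.txt):

```python
from nltk import FreqDist
from nltk.util import ngrams

lines = ["the cat sat", "the cat ran"]

# FreqDist.update accumulates counts across lines, just as in compute_freq
bigramfdist = FreqDist()
for line in lines:
    tokens = line.split(' ')
    bigramfdist.update(ngrams(tokens, 2))

print(bigramfdist.most_common(1))  # [(('the', 'cat'), 2)]
```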