What is the total number of bigrams returned by NLTK's BigramCollocationFinder?

use*_*766 2 python nltk n-gram

I am trying to reproduce some common NLP metrics with my own code, including Manning and Schütze's t-test and chi-square test for collocation significance.

I call nltk.bigrams() on the following list of 24 tokens:

tokens = ['she', 'knocked', 'on', 'his', 'door', 'she', 'knocked', 'at',
'the', 'door', '100', 'women', 'knocked', 'on', "Donaldson's", 'door', 'a',
'man', 'knocked', 'on', 'the', 'metal', 'front', 'door']

I get 23 bigrams:

[('she', 'knocked'), ('knocked', 'on'), ('on', 'his'), ('his', 'door'), ('door', 'she'),
('she', 'knocked'), ('knocked', 'at'), ('at', 'the'), ('the', 'door'), ('door', '100'),
('100', 'women'), ('women', 'knocked'), ('knocked', 'on'), ('on', "Donaldson's"),
("Donaldson's", 'door'), ('door', 'a'), ('a', 'man'), ('man', 'knocked'),
('knocked', 'on'), ('on', 'the'), ('the', 'metal'), ('metal', 'front'), ('front', 'door')]
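
For reference, this bigram list can be reproduced with a short snippet along these lines (a minimal sketch; nltk.bigrams() returns a generator, so it is wrapped in list() here):

import nltk

tokens = ['she', 'knocked', 'on', 'his', 'door', 'she', 'knocked', 'at',
'the', 'door', '100', 'women', 'knocked', 'on', "Donaldson's", 'door', 'a',
'man', 'knocked', 'on', 'the', 'metal', 'front', 'door']

bigrams = list(nltk.bigrams(tokens))
print(len(tokens))   # 24 tokens
print(len(bigrams))  # 23 bigrams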

If I want to determine the t-statistic for ('she', 'knocked'), I enter:

import math

#Total bigrams is 23
t = (2/23. - (2/23.)*(4/23.)) / math.sqrt((2/23.)/23)
#t = 1.16826337761

However, when I try:

finder = BigramCollocationFinder.from_words(tokens)
student_t = finder.score_ngrams(bigram_measures.student_t)
# the entry for ('she', 'knocked') is (('she', 'knocked'), 1.178511301977579)

When I change my bigram population size to 24 (the length of the original token list), I get the same answer as NLTK:

('she', 'knocked'): 1.17851130198

My question is simple: which population count should I use for these hypothesis tests? The length of the tokenized list, or the length of the bigram list? Or does the procedure count some terminal unit that nltk.bigrams() does not output?

alv*_*vas 5

First, let's dig score_ngram() out of nltk.collocations.BigramCollocationFinder. See https://github.com/nltk/nltk/blob/develop/nltk/collocations.py:

def score_ngram(self, score_fn, w1, w2):
    """Returns the score for a given bigram using the given scoring
    function.  Following Church and Hanks (1990), counts are scaled by
    a factor of 1/(window_size - 1).
    """
    n_all = self.word_fd.N()
    n_ii = self.ngram_fd[(w1, w2)] / (self.window_size - 1.0)
    if not n_ii:
        return
    n_ix = self.word_fd[w1]
    n_xi = self.word_fd[w2]
    return score_fn(n_ii, (n_ix, n_xi), n_all)
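
To make those variables concrete for the token list in the question, you can inspect the finder directly. This is just a quick check (it assumes the default window_size of 2, so the 1/(window_size - 1) scaling leaves the raw bigram count untouched):

from nltk.collocations import BigramCollocationFinder

# tokens is the 24-token list from the question
finder = BigramCollocationFinder.from_words(tokens)
print(finder.window_size)                   # 2, the default
print(finder.word_fd.N())                   # n_all = 24, the token count
print(finder.ngram_fd[('she', 'knocked')])  # n_ii = 2
print(finder.word_fd['she'])                # n_ix = 2
print(finder.word_fd['knocked'])            # n_xi = 4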

Next, let's look at student_t() in nltk.metrics.association, see https://github.com/nltk/nltk/blob/develop/nltk/metrics/association.py:

### Indices to marginals arguments:

NGRAM = 0
"""Marginals index for the ngram count"""

UNIGRAMS = -2
"""Marginals index for a tuple of each unigram count"""

TOTAL = -1
"""Marginals index for the number of words in the data"""

def student_t(cls, *marginals):
      """Scores ngrams using Student's t test with independence hypothesis
      for unigrams, as in Manning and Schutze 5.3.1.
      """
      return ((marginals[NGRAM] -
                _product(marginals[UNIGRAMS]) /
                float(marginals[TOTAL] ** (cls._n - 1))) /
              (marginals[NGRAM] + _SMALL) ** .5)

where _product() and _SMALL are:

_product = lambda s: reduce(lambda x, y: x * y, s)
_SMALL = 1e-20

So, back to your example:

from nltk.collocations import BigramCollocationFinder, BigramAssocMeasures

tokens = ['she', 'knocked', 'on', 'his', 'door', 'she', 'knocked', 'at', 
'the', 'door','100', 'women', 'knocked', 'on', "Donaldson's", 'door', 'a', 
'man', 'knocked', 'on', 'the', 'metal', 'front', 'door']

finder = BigramCollocationFinder.from_words(tokens)
bigram_measures = BigramAssocMeasures()
print finder.word_fd.N()

student_t = {k:v for k,v in finder.score_ngrams(bigram_measures.student_t)}
print student_t['she', 'knocked']

[OUT]:

24
1.17851130198

So NLTK takes the token count as the population size, i.e. 24. But I would say that is not usually how the student_t score is computed; I would go with #Ngrams rather than #Tokens, see nlp.stanford.edu/fsnlp/promo/colloc.pdf and www.cse.unt.edu/~rada/CSCE5290/Lectures/Collocations.ppt. That said, since the population size is a constant, and since #Tokens = #Ngrams + 1 for bigrams, I am not sure the size of the difference matters much once #Tokens gets large.
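
To put a number on that last point, here is the Manning and Schutze style calculation done both ways, once with the token count of 24 (what NLTK effectively uses) and once with the bigram count of 23. This is my own sketch, not NLTK code:

import math

def student_t_manual(ngram_freq, w1_freq, w2_freq, n):
    # t = (x_bar - mu) / sqrt(s**2 / n), with x_bar = ngram_freq / n,
    # mu = (w1_freq / n) * (w2_freq / n), and s**2 approximated by x_bar,
    # as in Manning and Schutze 5.3.1
    x_bar = ngram_freq / float(n)
    mu = (w1_freq / float(n)) * (w2_freq / float(n))
    return (x_bar - mu) / math.sqrt(x_bar / n)

print(student_t_manual(2, 2, 4, 24))  # 1.17851130198, same as NLTK
print(student_t_manual(2, 2, 4, 23))  # 1.16826337761, the hand calculation over 23 bigrams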

Let's keep digging into how NLTK computes student_t. If we strip student_t() out and just plug in the arguments, we get the same output:

import math

NGRAM = 0
"""Marginals index for the ngram count"""

UNIGRAMS = -2
"""Marginals index for a tuple of each unigram count"""

TOTAL = -1
"""Marginals index for the number of words in the data"""

_product = lambda s: reduce(lambda x, y: x * y, s)
_SMALL = 1e-20

def student_t(*marginals):
    """Scores ngrams using Student's t test with independence hypothesis
    for unigrams, as in Manning and Schutze 5.3.1.
    """
    _n = 2
    return ((marginals[NGRAM] -
                _product(marginals[UNIGRAMS]) /
                float(marginals[TOTAL] ** (_n - 1))) /
              (marginals[NGRAM] + _SMALL) ** .5)

ngram_freq = 2
w1_freq = 2
w2_freq = 4
total_num_words = 24

print student_t(ngram_freq, (w1_freq,w2_freq), total_num_words)

So we see that, for bigrams, the NLTK student_t score is computed as:

import math
(2 - 2*4/float(24)) / math.sqrt(2 + 1e-20)

Or, as a formula:

(ngram_freq - (w1_freq * w2_freq) / total_num_words) / sqrt(ngram_freq + 1e-20)
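
If you want that as a reusable check against NLTK's output, a small helper along these lines (the names are my own, not NLTK's) reproduces the score:

import math

def bigram_student_t(ngram_freq, w1_freq, w2_freq, total_num_words):
    # the bigram case of NLTK's student_t: the population size is the token count
    expected = w1_freq * w2_freq / float(total_num_words)
    return (ngram_freq - expected) / math.sqrt(ngram_freq + 1e-20)

print(bigram_student_t(2, 2, 4, 24))  # 1.17851130198..., matches finder.score_ngrams()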