How do I build a gensim dictionary that includes bigrams?

fra*_*ure 6 python nlp gensim

I am trying to build a TF-IDF model with gensim that scores both unigrams and bigrams. To do this, I build a gensim dictionary and then use that dictionary to create the bag-of-words representation of the corpus that the model is trained on.

The dictionary is built like this:

import gensim

dictionary = gensim.corpora.Dictionary(tokens)

Here tokens is a list of unigram and bigram tuples like this:

[('restore',),
 ('diversification',),
 ('made',),
 ('transport',),
 ('The',),
 ('grass',),
 ('But',),
 ('distinguished', 'newspaper'),
 ('came', 'well'),
 ('produced',),
 ('car',),
 ('decided',),
 ('sudden', 'movement'),
 ('looking', 'glasses'),
 ('shapes', 'replaced'),
 ('beauties',),
 ('put',),
 ('college', 'days'),
 ('January',),
 ('sometimes', 'gives')]
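For context, a mixed list of unigram and bigram tuples like the one above can be assembled with plain Python. The snippet below is only an illustrative sketch; the sentence and variable names are hypothetical and not taken from the original question:

# Hypothetical sketch: build unigram and bigram tuples from one tokenised sentence.
words = "The grass made a sudden movement".split()

unigrams = [(w,) for w in words]        # one-element tuples, e.g. ('grass',)
bigrams = list(zip(words, words[1:]))   # two-element tuples, e.g. ('sudden', 'movement')
tokens = unigrams + bigrams             # the mixed list passed to Dictionary above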

However, when I pass a list like this to gensim.corpora.Dictionary(), the bigram tuples get split back into individual unigrams, for example:

test = gensim.corpora.Dictionary([('happy', 'dog')])
[test[id] for id in test]
=> ['dog', 'happy']
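For reference, this happens because Dictionary treats each element of the outer list as a document and each element inside it as a separate token, so a two-word tuple is read as two unigram tokens. A minimal sketch (not from the original question) showing the difference:

import gensim

# A two-word tuple is read as a document containing two separate tokens...
print(gensim.corpora.Dictionary([('happy', 'dog')]).token2id)   # {'dog': 0, 'happy': 1}

# ...whereas a pre-joined string is kept as a single token.
print(gensim.corpora.Dictionary([['happy dog']]).token2id)      # {'happy dog': 0}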

Is there a way to generate a gensim dictionary that includes bigrams?

qai*_*ser 5

import gensim
from gensim import models
from gensim.models import Phrases
from gensim.models.phrases import Phraser

docs = ['new york is is united states',
        'new york is most populated city in the world',
        'i love to stay in new york']

# Tokenise each document, then train a Phrases model to detect frequent bigrams.
token_ = [doc.split(" ") for doc in docs]
bigram = Phrases(token_, min_count=1, threshold=2, delimiter=b' ')  # on gensim 4.x, pass delimiter=' ' (str) instead of bytes

# Freeze the Phrases model into a lighter Phraser for transforming sentences.
bigram_phraser = Phraser(bigram)

# Re-tokenise the documents, merging detected bigrams into single tokens.
bigram_token = []
for sent in token_:
    bigram_token.append(bigram_phraser[sent])

The output will be: [['new york', 'is', 'is', 'united', 'states'], ['new york', 'is', 'most', 'populated', 'city', 'in', 'the', 'world'], ['i', 'love', 'to', 'stay', 'in', 'new york']]

# Now you can build a dictionary from the bigram tokens.
dict_ = gensim.corpora.Dictionary(bigram_token)
print(dict_.token2id)

# Convert each document into a bag-of-words vector, then train the TF-IDF model.
corpus = [dict_.doc2bow(text) for text in bigram_token]
tfidf_model = models.TfidfModel(corpus)
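As a follow-up (a minimal usage sketch, not part of the original answer), the trained model can be applied to the bag-of-words corpus to inspect the TF-IDF weight assigned to each token:

# Apply the TF-IDF model to the bag-of-words corpus.
tfidf_corpus = tfidf_model[corpus]

# Print each token together with its TF-IDF weight, per document.
for doc in tfidf_corpus:
    print([(dict_[token_id], round(weight, 3)) for token_id, weight in doc])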