Radim (32 votes):
It depends on which similarity metric you want to use. Cosine similarity between the two topic distributions is the simplest option:
import gensim

# Cosine similarity between two sparse (topic_id, probability) vectors
sim = gensim.matutils.cossim(vec_lda1, vec_lda2)
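For context, here is a minimal sketch of how the two vectors above could be produced; the model name lda, the dictionary, and the raw texts are assumptions, not part of the original answer:

# Hypothetical setup: a trained LdaModel `lda` and its Dictionary `dictionary`
bow1 = dictionary.doc2bow("first document text".lower().split())
bow2 = dictionary.doc2bow("second document text".lower().split())
vec_lda1 = lda[bow1]  # sparse list of (topic_id, probability) pairs
vec_lda2 = lda[bow2]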
The Hellinger distance is useful for similarity between probability distributions (such as LDA topic vectors):
import numpy as np
import gensim

# Convert the sparse LDA vectors to dense arrays of length num_topics
dense1 = gensim.matutils.sparse2full(lda_vec1, lda.num_topics)
dense2 = gensim.matutils.sparse2full(lda_vec2, lda.num_topics)
# Hellinger distance between the two topic distributions
sim = np.sqrt(0.5 * ((np.sqrt(dense1) - np.sqrt(dense2))**2).sum())
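Newer gensim versions also ship a built-in helper for the same distance; a one-line alternative, assuming the same sparse vectors as above:

from gensim.matutils import hellinger

# Equivalent to the manual computation above; accepts sparse (topic_id, prob) vectors
sim = hellinger(lda_vec1, lda_vec2)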
Pal*_*and (23 votes):
Not sure whether this helps, but I managed to get successful results on document matching and similarity when using an actual document as the query.
from gensim import corpora, models, similarities

dictionary = corpora.Dictionary.load('dictionary.dict')
corpus = corpora.MmCorpus("corpus.mm")
lda = models.LdaModel.load("model.lda")  # result from running online LDA (training)

# Build a similarity index over the LDA representation of the whole corpus
index = similarities.MatrixSimilarity(lda[corpus])
index.save("simIndex.index")

docname = "docs/the_doc.txt"
doc = open(docname, 'r').read()
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lda = lda[vec_bow]

# Similarity of the query document against every document in the index
sims = index[vec_lda]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
print(sims)
The similarity score between each document in the corpus and the document used as the query is the second element of each entry in sims.
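For instance, to list only the best matches (a small sketch built on the sims list above):

# sims is a list of (document_index, similarity_score) pairs, best match first
for doc_idx, score in sims[:5]:
    print(doc_idx, score)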
The answers provided so far are good, but they are not very beginner-friendly. I want to start from training the LDA model and then compute cosine similarity.
The model training part:
docs = ["latent Dirichlet allocation (LDA) is a generative statistical model",
"each document is a mixture of a small number of topics",
"each document may be viewed as a mixture of various topics"]
# Convert document to tokens
docs = [doc.split() for doc in docs]
# A mapping from token to id in each document
from gensim.corpora import Dictionary
dictionary = Dictionary(docs)
# Representing the corpus as a bag of words
corpus = [dictionary.doc2bow(doc) for doc in docs]
# Training the model
from gensim.models import LdaModel
model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10)
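Before computing similarities it can help to sanity-check what the model learned; a quick look at the trained topics using the standard LdaModel API:

# Show each learned topic as a weighted list of words
for topic_id, topic in model.print_topics(num_topics=10, num_words=5):
    print(topic_id, topic)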
There are generally two ways to extract the probability of each topic assigned to a document. I provide both here:
# Preprocess the test documents the same way as the training documents
test_doc = ["LDA is an example of a topic model",
"topic modelling refers to the task of identifying topics"]
test_doc = [doc.split() for doc in test_doc]
test_corpus = [dictionary.doc2bow(doc) for doc in test_doc]
# Method 1
from gensim.matutils import cossim
doc1 = model.get_document_topics(test_corpus[0], minimum_probability=0)
doc2 = model.get_document_topics(test_corpus[1], minimum_probability=0)
print(cossim(doc1, doc2))
# Method 2
doc1 = model[test_corpus[0]]
doc2 = model[test_corpus[1]]
print(cossim(doc1, doc2))
Output:
#Method 1
0.8279631530869963
#Method 2
0.828066885140262
As you can see, the two methods give essentially the same result; the difference is that the probabilities returned by the second method sometimes do not sum to 1, as discussed here. For a large corpus, the topic vectors of all documents can be obtained by passing the whole corpus at once:
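A minimal sketch of that, using the same model and test_corpus as above (model[...] returns a lazily evaluated transformed corpus):

# Transform the whole corpus in one call; iterating yields one topic vector per document
all_topics = model[test_corpus]
for doc_topics in all_topics:
    print(doc_topics)

get_document_topics likewise accepts a whole corpus and returns a transformed corpus in the same way.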
Note: the probabilities assigned to the topics of a document may sum to slightly more than 1, or in some cases slightly less than 1. This is due to floating-point rounding errors.
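If exact sums matter downstream, the topic vector can simply be renormalized; a tiny sketch:

# Renormalize a (topic_id, probability) list so the probabilities sum to exactly 1
def renormalize(doc_topics):
    total = sum(prob for _, prob in doc_topics)
    return [(topic, prob / total) for topic, prob in doc_topics]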