Topic distribution: how do we see which document belongs to which topic after doing LDA in Python?

jxn*_*jxn 22 python nltk lda gensim

I am able to run the LDA code from gensim and get the top 10 topics with their respective keywords.

Now I would like to go a step further and check how accurate the LDA algorithm is by looking at which documents it clusters into each topic. Is this possible with gensim's LDA?

Basically I would like to do something like this, but in Python using gensim:

LDA with topicmodels, how can I see which topics different documents belong to?

alv*_*vas 27

Using the probabilities of the topics, you can try to set a threshold and use it as a clustering baseline, but I am sure there are better ways to do clustering than this "hacky" method.

from gensim import corpora, models, similarities
from itertools import chain

""" DEMO """
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]

# Create the dictionary.
id2word = corpora.Dictionary(texts)
# Create the bag-of-words corpus.
mm = [id2word.doc2bow(text) for text in texts]

# Train the LDA model.
lda = models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=3, \
                               update_every=1, chunksize=10000, passes=1)

# Print the topics.
for top in lda.print_topics():
    print(top)
print()

# Assigns the topics to the documents in corpus
lda_corpus = lda[mm]

# Find the threshold, let's set the threshold to be 1/#clusters,
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = list(chain(*[[score for topic_id,score in topic] \
                      for topic in [doc for doc in lda_corpus]]))
threshold = sum(scores)/len(scores)
print(threshold)
print()

cluster1 = [j for i,j in zip(lda_corpus,documents) if i[0][1] > threshold]
cluster2 = [j for i,j in zip(lda_corpus,documents) if i[1][1] > threshold]
cluster3 = [j for i,j in zip(lda_corpus,documents) if i[2][1] > threshold]

print(cluster1)
print(cluster2)
print(cluster3)

[out]:

0.131*trees + 0.121*graph + 0.119*system + 0.115*user + 0.098*survey + 0.082*interface + 0.080*eps + 0.064*minors + 0.056*response + 0.056*computer
0.171*time + 0.171*user + 0.170*response + 0.082*survey + 0.080*computer + 0.079*system + 0.050*trees + 0.042*graph + 0.040*minors + 0.040*human
0.155*system + 0.150*human + 0.110*graph + 0.107*minors + 0.094*trees + 0.090*eps + 0.088*computer + 0.087*interface + 0.040*survey + 0.028*user

0.333333333333

['The EPS user interface management system', 'The generation of random binary unordered trees', 'The intersection graph of paths in trees', 'Graph minors A survey']
['A survey of user opinion of computer system response time', 'Relation of user perceived response time to error measurement']
['Human machine interface for lab abc computer applications', 'System and human system engineering testing of EPS', 'Graph minors IV Widths of trees and well quasi ordering']

Just to make it clearer:

# Find the threshold, let's set the threshold to be 1/#clusters,
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = []
for doc in lda_corpus:
    for topic_id, score in doc:
        scores.append(score)
threshold = sum(scores)/len(scores)

The code above sums the scores of all topics across all documents and then divides by the number of scores. Since each document's topic probabilities sum to 1, this average works out to 1/#topics (0.333 for 3 topics), which is why it is a sane threshold.

  • Could you explain this line of code in more detail? `scores = list(chain(*[[score for topic_id,score in topic] for topic in [doc for doc in lda_corpus]])); threshold = sum(scores)/len(scores)` (3 upvotes)
  • I was also trying to re-implement Brown clustering (http://stackoverflow.com/questions/20998832/what-does-the-brown-clustering-algorithm-output-mean), but given the (topic, probability) tuples, you can try the script from http://stackoverflow.com/questions/20990538/how-can-i-cluster-a-list-of-a-list-of-tuple-tag-probability-python (2 upvotes)
  • I got better performance by removing the words that appear only once, as in [this question](http://stackoverflow.com/questions/21100903/improve-performance-remove-all-strings-in-a-big-list-appearing-only-once) (2 upvotes)

nos*_*nos 9

If you want to use the trick of

cluster1 = [j for i,j in zip(lda_corpus,documents) if i[0][1] > threshold]
cluster2 = [j for i,j in zip(lda_corpus,documents) if i[1][1] > threshold]
cluster3 = [j for i,j in zip(lda_corpus,documents) if i[2][1] > threshold]

from alvas' answer above, make sure to set minimum_probability=0 in LdaModel:

gensim.models.ldamodel.LdaModel(corpus,
            num_topics=num_topics, id2word=dictionary,
            passes=2, minimum_probability=0)

Otherwise the dimensions of lda_corpus and documents may not agree, since gensim suppresses any topic whose probability is lower than minimum_probability.
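
If retraining is not an option, here is a minimal sketch of an alternative (assuming the `lda` model and `mm` corpus from alvas' answer, and a reasonably recent gensim): get_document_topics accepts its own probability cutoff, so you can request the full distribution per document.

# Ask for every topic of each document, however small its probability,
# so that indexing i[0], i[1], i[2] into the result is always safe.
for doc_id, bow in enumerate(mm):
    doc_topics = lda.get_document_topics(bow, minimum_probability=0.0)
    print(doc_id, doc_topics)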

Another way to group documents into topics is to assign each document the topic with the maximal probability:

lda_corpus = [max(prob, key=lambda y: y[1])
              for prob in lda[mm]]
playlists = [[] for _ in range(lda.num_topics)]
for i, x in enumerate(lda_corpus):
    playlists[x[0]].append(documents[i])

Note that lda[mm] is, roughly speaking, a list of lists, or a 2D matrix. The number of rows is the number of documents and the number of columns is the number of topics. Each matrix element is a tuple of the form (3, 0.82), for example, where 3 is the topic index and 0.82 the corresponding probability of that topic. By default minimum_probability=0.01, and any tuple whose probability is less than 0.01 is omitted from lda[mm]. If you use the grouping method with maximal probability, you can set it to 1/#topics.
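
To make the "2D matrix" view literal, here is a minimal sketch (again assuming the `lda` and `mm` objects from above) that densifies the sparse tuples with gensim's matutils helper; omitted low-probability entries simply become 0.0:

from gensim.matutils import corpus2dense

# corpus2dense returns a (num_terms x num_docs) array, so transpose it
# to get a documents-by-topics matrix of probabilities.
doc_topic = corpus2dense(lda[mm], num_terms=lda.num_topics).T
print(doc_topic.shape)           # (number of documents, number of topics)
print(doc_topic.argmax(axis=1))  # maximal-probability topic per document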

