How does Sklearn's Latent Dirichlet Allocation actually work?

Sta*_*ack 2 nlp python-3.x latent-semantic-analysis scikit-learn

I have some texts and I'm using sklearn's LatentDirichletAllocation algorithm to extract topics from them.

I've already converted the texts to sequences using Keras, and I'm doing this:

from sklearn.decomposition import LatentDirichletAllocation

lda = LatentDirichletAllocation()
X_topics = lda.fit_transform(X)

X:

print(X)
#  array([[0, 988, 233, 21, 42, 5436, ...],
   [0, 43, 6526, 21, 566, 762, 12, ...]])

X_topics:

print(X_topics)
#  array([[1.24143852e-05, 1.23983890e-05, 1.24238815e-05, 2.08399432e-01,
    7.91563331e-01],
   [5.64976371e-01, 1.33304549e-05, 5.60003133e-03, 1.06638803e-01,
    3.22771464e-01]])

My question is: what exactly does fit_transform return? I know it should be the main topics detected in the texts, but I can't map these numbers back to an index, so I can't tell what these sequences mean. I've searched without success for an explanation of what's actually happening, so any suggestions would be greatly appreciated.

Jam*_*_SO 5

First, a general explanation - think of LDiA as a clustering algorithm that, by default, determines 10 centroids based on the frequencies of words in the texts, and that gives some of those words greater weight than others because of their proximity to a centroid. Each centroid represents a "topic" in this context, where the topic is unnamed but can be characterized by the words that are most dominant in forming each cluster.
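To make that concrete (a minimal sketch, not from the original answer - the toy docs are made up): after fitting, sklearn exposes the per-topic word weights in lda.components_, a (n_topics, n_words) array, so "the words that dominate a cluster" are simply the highest-weighted columns in that topic's row.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["cats and dogs are pets", "dogs chase cats",
        "stocks and bonds are investments", "bonds pay interest"]
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)            # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

print(lda.components_.shape)                       # (2, n_words): word weights per topic
vocab = vectorizer.get_feature_names_out()         # sklearn >= 1.0
top = lda.components_[0].argsort()[::-1][:3]       # 3 heaviest words for topic 0
print(vocab[top])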

Generally speaking, the things you do with LDA are:

  • Have it tell you what the 10 (or some other number of) topics of a given text are.
    Or
  • Have it tell you which centroid/topic some new text is closest to

For the second case, the expectation is that LDiA will output a "score" for the new text against each of the 10 clusters/topics. The index of the highest score is the index of the cluster/topic that the new text belongs to.
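In code, that last step is just an argmax over the scores (a minimal sketch; the scores array here is made up to mirror the transform output shown further down):

import numpy as np

# one row per document, one column per topic - each row sums to 1
scores = np.array([[0.55, 0.05, 0.05, 0.05, 0.05,
                    0.05, 0.05, 0.05, 0.05, 0.05]])
topic_idx = scores.argmax(axis=1)   # best-matching topic per document
print(topic_idx)                    # [0]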

I prefer gensim.models.LdaMulticore, but since you've used sklearn.decomposition.LatentDirichletAllocation I'll stick with that.
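(For comparison only - a minimal LdaMulticore sketch; tokenized_docs and its contents are made-up placeholders, and this is not part of the sklearn walkthrough below:)

from gensim.corpora import Dictionary
from gensim.models import LdaMulticore

tokenized_docs = [["cats", "dogs", "pets"], ["stocks", "bonds", "markets"]]
dictionary = Dictionary(tokenized_docs)                        # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]   # bag-of-words per doc
lda = LdaMulticore(corpus, num_topics=2, id2word=dictionary)
print(lda.print_topics())                                      # top words per topic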

Here's some example code that runs this process (taken from here):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups

n_samples = 2000
n_features = 1000
n_components = 10   # number of topics to find
n_top_words = 20

def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        message = "Topic #%d: " % topic_idx
        # argsort()[:-n_top_words - 1:-1] gives the indices of the
        # n_top_words largest weights, in descending order
        message += " ".join([feature_names[i]
                             for i in topic.argsort()[:-n_top_words - 1:-1]])
        print(message)
    print()

data, _ = fetch_20newsgroups(shuffle=True, random_state=1,
                             remove=('headers', 'footers', 'quotes'),
                             return_X_y=True)
X = data[:n_samples]
# create a count (bag-of-words) matrix using the sklearn CountVectorizer,
# which has some useful features (vocabulary pruning, stop words, ...)
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
                                max_features=n_features,
                                stop_words='english')
vectorizedX = tf_vectorizer.fit_transform(X)
lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)
lda.fit(vectorizedX)


Now let's try a new text:

testX = tf_vectorizer.transform(["I am educated about learned stuff"])
#get lda to score this text against each of the 10 topics
lda.transform(testX)

Out:
array([[0.54995409, 0.05001176, 0.05000163, 0.05000579, 0.05      ,
        0.05001033, 0.05000001, 0.05001449, 0.05000123, 0.05000066]])

#looks like the first topic has the high score - now what are the words that are most associated with each topic?
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names_out()  # use get_feature_names() on sklearn < 1.0
print_top_words(lda, tf_feature_names, n_top_words)

Out:
Topics in LDA model:
Topic #0: edu com mail send graphics ftp pub available contact university list faq ca information cs 1993 program sun uk mit
Topic #1: don like just know think ve way use right good going make sure ll point got need really time doesn
Topic #2: christian think atheism faith pittsburgh new bible radio games alt lot just religion like book read play time subject believe
Topic #3: drive disk windows thanks use card drives hard version pc software file using scsi help does new dos controller 16
Topic #4: hiv health aids disease april medical care research 1993 light information study national service test led 10 page new drug
Topic #5: god people does just good don jesus say israel way life know true fact time law want believe make think
Topic #6: 55 10 11 18 15 team game 19 period play 23 12 13 flyers 20 25 22 17 24 16
Topic #7: car year just cars new engine like bike good oil insurance better tires 000 thing speed model brake driving performance
Topic #8: people said did just didn know time like went think children came come don took years say dead told started
Topic #9: key space law government public use encryption earth section security moon probe enforcement keys states lunar military crime surface technology



Seems reasonable - the sample text is about education, and the word cloud of the first topic is education-related (edu, university, ...).

The images below are from a different dataset (ham vs. spam text messages, so there are only two possible topics), which I reduced to 3 dimensions with PCA; but if pictures help, these two (the same data from different angles) may give a general sense of what LDiA does. (The charts are from Linear Discriminant Analysis vs. LDiA, but the representation is still relevant.)

[images: the ham/spam data in 3 PCA dimensions, viewed from two different angles]

Although LDiA is an unsupervised method, to actually use it in a business setting you will probably want to intervene manually, at least to give the topics names that are meaningful in your context - e.g., assigning a subject area to stories on a news-aggregation site, choosing from ["Business", "Sports", "Entertainment", etc.].
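A minimal sketch of that mapping step (the topic_names list and the label_text helper are hypothetical, chosen here to roughly match the 10 topics printed above):

# hypothetical human-chosen names for the 10 discovered topics
topic_names = ["Tech/Academia", "Chatter", "Religion", "Hardware", "Health",
               "Faith/Politics", "Sports", "Autos", "Stories", "Crypto/Space"]

def label_text(text, lda, vectorizer, topic_names):
    # score a new text against the fitted topics, return a human-readable label
    scores = lda.transform(vectorizer.transform([text]))
    return topic_names[scores.argmax()]

print(label_text("I am educated about learned stuff",
                 lda, tf_vectorizer, topic_names))   # -> "Tech/Academia"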

For further study, maybe work through something like this: https://towardsdatascience.com/topic-modeling-and-latent-dirichlet-allocation-in-python-9bf156893c24

  • An addition on how to name the topics (analyze the top topic words?) would also be helpful (2 upvotes)
  • My apologies - while writing this answer I had to strip some code out of the examples - it should be good now (2 upvotes)