Tags: python, tf-idf, scikit-learn
I have some code that runs a basic TF-IDF vectorizer over a collection of documents and returns a D×F sparse matrix, where D is the number of documents and F is the number of terms. No problem there.
But how do I find the TF-IDF score of a specific term in a document? That is, is there some kind of dictionary between terms (in their textual representation) and their positions (columns) in the resulting sparse matrix?
Yes. See the .vocabulary_ attribute of the fitted/transformed TF-IDF vectorizer.
In [1]: from sklearn.datasets import fetch_20newsgroups
In [2]: data = fetch_20newsgroups(categories=['rec.autos'])
In [3]: from sklearn.feature_extraction.text import TfidfVectorizer
In [4]: cv = TfidfVectorizer()
In [5]: X = cv.fit_transform(data.data)
In [6]: cv.vocabulary_
It is a dictionary of the form:
{word : column index in array}
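For example, to read off the tf-idf score of one term in one document, you can look up the term's column index in vocabulary_ and index into the sparse matrix. A minimal sketch continuing the session above ('car' is just an example term assumed to be in the vocabulary, and document 0 is an arbitrary choice):
In [7]: col = cv.vocabulary_['car']   # column index of the term 'car'
In [8]: X[0, col]                     # tf-idf score of 'car' in document 0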
Here is another solution that uses CountVectorizer and TfidfTransformer to find the tf-idf score of a given word:
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# our corpus
data = ['I like dog', 'I love cat', 'I interested in cat']
cv = CountVectorizer()
# convert text data into term-frequency matrix
data = cv.fit_transform(data)
tfidf_transformer = TfidfTransformer()
# convert term-frequency matrix into tf-idf
tfidf_matrix = tfidf_transformer.fit_transform(data)
# create a dictionary mapping each word to its idf weight
word2tfidf = dict(zip(cv.get_feature_names(), tfidf_transformer.idf_))
for word, score in word2tfidf.items():
print(word, score)
Output:
(u'love', 1.6931471805599454)
(u'like', 1.6931471805599454)
(u'i', 1.0)
(u'dog', 1.6931471805599454)
(u'cat', 1.2876820724517808)
(u'interested', 1.6931471805599454)
(u'in', 1.6931471805599454)
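Note that tfidf_transformer.idf_ holds each word's global idf weight, not its tf-idf score within a particular document; the per-document tf-idf scores live in tfidf_matrix. A minimal sketch for reading them out of the matrix built above (document index 0 is just an example):
# tf-idf scores of each word appearing in the first document
feature_names = cv.get_feature_names()
doc = 0
for col in tfidf_matrix[doc].nonzero()[1]:
    print(feature_names[col], tfidf_matrix[doc, col])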