I am computing the TF-IDF values of the terms in a collection of documents with scikit-learn, using the following code:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(documents)
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf is a scipy sparse matrix.
Its shape is (2257, 35788). How can I get the TF-IDF values for a particular document? More specifically, how can I find the word with the largest TF-IDF value in a given document?
I think TfidfVectorizer is not computing the IDF factor correctly. For example, copying the tf-idf feature-weight code that uses sklearn.feature_extraction.text.TfidfVectorizer:
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["This is very strange",
          "This is very nice"]

vectorizer = TfidfVectorizer(
    use_idf=True,        # weight by idf, i.e. tf*idf
    norm=None,           # do not normalize the vectors
    smooth_idf=False,    # smoothing would add 1 to N and ni => idf = ln((N+1)/(ni+1))
    sublinear_tf=False,  # sublinear scaling would use tf = 1 + ln(tf)
    binary=False,
    min_df=1, max_df=1.0, max_features=None,
    strip_accents='unicode',  # strip accents
    ngram_range=(1, 1), preprocessor=None, stop_words=None,
    tokenizer=None, vocabulary=None,
)
X = vectorizer.fit_transform(corpus)
idf = vectorizer.idf_
print dict(zip(vectorizer.get_feature_names(), idf))
The output is:
{u'is': …

I have a scenario where I retrieve information/raw data from the Internet and save it into separate .json or .txt files.
From there, I want to use tf-idf to compute the frequency of every term in each document, and the cosine similarity between documents.
For example: there are 50 documents/text files of 5,000 words/strings each. I want to take the first word from the first document and compare it against all 250,000 words in total to find its frequency, then do the same for the second word, and so on across all 50 documents.
The expected output for each frequency is a value between 0 and 1.
How can I do this? I have been looking at the sklearn package, but most examples only compare a handful of strings.
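A sketch of that pipeline, assuming the 50 files have already been read into a list of strings (the contents below are placeholders): TfidfVectorizer vectorizes the whole collection in one pass, and cosine_similarity then returns every pairwise score in [0, 1] at once, so there is no need to loop word by word.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus; in practice, read each of the 50 .txt/.json files
# into one string per document.
docs = ["raw text of the first file",
        "raw text of the second file",
        "something entirely different"]

X = TfidfVectorizer().fit_transform(docs)  # one tf-idf row per document
sim = cosine_similarity(X)                 # (n_docs, n_docs), values in [0, 1]

print(sim[0, 1])  # similarity between document 0 and document 1
```

Because tf-idf weights are non-negative, every entry of `sim` falls in [0, 1], with 1.0 on the diagonal (each document compared with itself).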
information-retrieval nltk tf-idf python-2.7 cosine-similarity