I'm a bit confused about the use of TfidfTransformer versus TfidfVectorizer, because they look very similar. One transforms raw text into a matrix (TfidfVectorizer), while the other transforms a count matrix that has already been produced (by CountVectorizer).
Can anyone explain the difference between the two?
CountVectorizer + TfidfTransformer = TfidfVectorizer. That is the simple, practical way to understand it: TfidfVectorizer performs CountVectorizer and TfidfTransformer in a single step.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
# transformer a
a = Pipeline(steps=[
    ('count_vectorizer', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
])
# transformer b
b = TfidfVectorizer()
Transformer a and transformer b perform the same transformation.
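This equivalence can be checked on a small corpus (the example corpus below is made up for illustration): with default parameters, both pipelines produce the same TF-IDF matrix.

```python
import numpy as np
from sklearn.feature_extraction.text import (
    CountVectorizer, TfidfTransformer, TfidfVectorizer,
)
from sklearn.pipeline import Pipeline

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are friends",
]

# transformer a: two explicit steps
a = Pipeline(steps=[
    ("count_vectorizer", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
])
# transformer b: both steps fused into one estimator
b = TfidfVectorizer()

matrix_a = a.fit_transform(corpus).toarray()
matrix_b = b.fit_transform(corpus).toarray()

print(np.allclose(matrix_a, matrix_b))  # True: identical TF-IDF matrices
```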
If the preprocessing before feeding features to the model consists only of TF-IDF, then b is the best choice. But sometimes we want to split the preprocessing apart. For example, we may want to keep only the best terms before applying the inverse document frequency. In that case we choose a, because we can run CountVectorizer first and then perform extra preprocessing before the IDF step. For example:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegressionCV
# count terms, allowing at most 150k terms with 1-2 n-grams
# select the best 10k (reducing the size of our features)
# apply the IDF, then pass the result to our model
hisia = Pipeline(steps=[
    ('count_vectorizer', CountVectorizer(ngram_range=(1, 2),
                                         max_features=150000,
                                         )
     ),
('feature_selector', SelectKBest(chi2, k=10000)),
('tfidf', TfidfTransformer(sublinear_tf=True)),
('logistic_regression', LogisticRegressionCV(cv=5,
solver='saga',
scoring='accuracy',
max_iter=200,
n_jobs=-1,
random_state=42,
verbose=0))
])
In this example, we performed feature selection on the terms before passing them to the IDF step. This is possible because we split the process, running CountVectorizer first and TfidfTransformer afterwards.
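A minimal sketch of the same idea (the toy corpus, labels, and k value below are invented for illustration): chi2 feature selection is supervised, so the pipeline must be fitted with labels, and after SelectKBest only k columns reach the TF-IDF step.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline

# hypothetical toy corpus with sentiment labels
texts = [
    "great movie loved it",
    "terrible movie hated it",
    "wonderful acting great plot",
    "awful plot terrible acting",
]
labels = [1, 0, 1, 0]

preprocess = Pipeline(steps=[
    ("count_vectorizer", CountVectorizer()),          # 10 distinct terms here
    ("feature_selector", SelectKBest(chi2, k=4)),     # keep the 4 best terms
    ("tfidf", TfidfTransformer()),                    # IDF on the survivors only
])

# chi2 needs labels, so y is passed through the pipeline's fit
features = preprocess.fit_transform(texts, labels)
print(features.shape)  # (4, 4): 4 documents, 4 selected terms
```

With a plain TfidfVectorizer there is no seam between counting and IDF, so a selection step like this could not be slotted in between.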