Post by mri*_*ank

Should my model always give 100% accuracy on the training dataset?

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB # Multinomial Naive Bayes on lemmatized text

# Split the lemmatized documents and product labels into train/test sets
X_train, X_test, y_train, y_test = train_test_split(df['Rejoined_Lemmatize'], df['Product'], random_state=0)

# tfidf is a TfidfVectorizer defined earlier; fit it on the training text only
X_train_counts = tfidf.fit_transform(X_train)
clf = MultinomialNB().fit(X_train_counts, y_train)
# Predict on the same training data the model was fitted on
y_temp = clf.predict(tfidf.transform(X_train))

I am testing my model on the training dataset itself. It gives me the following results:

                          precision    recall  f1-score   support

               accuracy                           0.92    742500
              macro avg       0.93      0.92      0.92    742500
           weighted avg       0.93      0.92      0.92    742500

Is an accuracy of < 100% on the training dataset acceptable?
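
The usual way to judge this is to compare training and held-out performance side by side rather than looking at training accuracy alone. A minimal sketch, assuming the df columns and the tfidf, clf, X_train, X_test, y_train, y_test objects from the snippet above:

from sklearn.metrics import accuracy_score, classification_report

# Training accuracy: how well the model reproduces labels it has already seen
train_pred = clf.predict(tfidf.transform(X_train))
print("train accuracy:", accuracy_score(y_train, train_pred))

# Held-out accuracy: the number that actually matters for generalization
test_pred = clf.predict(tfidf.transform(X_test))
print("test accuracy:", accuracy_score(y_test, test_pred))
print(classification_report(y_test, test_pred))

If the two scores are close, the model is not simply memorizing the training data; a large gap between them is the usual symptom of overfitting.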

python machine-learning tf-idf scikit-learn naivebayes

2 votes · 1 answer · 10K views

How do I use Tf-idf features to train a model?

from sklearn.feature_extraction.text import TfidfVectorizer

# Unigrams and bigrams, log-scaled term frequencies, L2-normalised rows;
# terms in fewer than 5 documents and English stop words are dropped
tfidf = TfidfVectorizer(sublinear_tf=True,
                        min_df=5,
                        norm='l2',
                        ngram_range=(1, 2),
                        stop_words='english')

# Fit the vectorizer on the stemmed text and build the document-term matrix
feature1 = tfidf.fit_transform(df.Rejoined_Stem)
array_of_feature = feature1.toarray()

I used the above code to get the features for my text documents.
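
Purely for illustration, those features can be inspected on the fitted vectorizer. A small sketch, assuming the tfidf and feature1 objects from the snippet above (get_feature_names_out requires scikit-learn >= 1.0):

# Each row of feature1 is one document, each column one unigram/bigram kept by the vectorizer
print(feature1.shape)                      # (n_documents, n_terms)
print(tfidf.get_feature_names_out()[:10])  # first few terms in the learned vocabulary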

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB # Multinomial Naive Bayes on lemmatized text

# Split raw text and labels, then rebuild the TF-IDF matrix from the training split only
X_train, X_test, y_train, y_test = train_test_split(df['Rejoined_Lemmatize'], df['Product'], random_state=0)
X_train_counts = tfidf.fit_transform(X_train)
clf = MultinomialNB().fit(X_train_counts, y_train)
# Transform the test split with the already-fitted vectorizer before predicting
y_pred = clf.predict(tfidf.transform(X_test))

Then I used this code to train my model. Can someone explain how the features above are used while training the model, given that the feature1 variable is not used anywhere during training?
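
The training snippet never touches feature1 because it recomputes the TF-IDF matrix directly from the training split: tfidf.fit_transform(X_train) produces the same kind of document-term matrix that feature1 holds, and that matrix is what MultinomialNB is fitted on. One way to make this link explicit is to chain the vectorizer and the classifier in a Pipeline. A minimal sketch, assuming the same df columns as above:

from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

X_train, X_test, y_train, y_test = train_test_split(df['Rejoined_Lemmatize'], df['Product'], random_state=0)

# The pipeline vectorizes the text and feeds the resulting TF-IDF matrix to the classifier,
# so the features are created and consumed inside fit() without a separate feature1 variable
model = Pipeline([
    ('tfidf', TfidfVectorizer(sublinear_tf=True, min_df=5, ngram_range=(1, 2), stop_words='english')),
    ('clf', MultinomialNB()),
])
model.fit(X_train, y_train)
y_pred = model.predict(X_test)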

machine-learning scikit-learn text-classification naivebayes tfidfvectorizer

1 vote · 1 answer · 4,880 views