Tags: python, machine-learning, scikit-learn, cross-validation, logistic-regression
I am trying to use scikit-learn's Logistic Regression to predict a set of labels. My data is quite imbalanced (there are far more '0' than '1' labels), so I have to use the F1-score metric during the cross-validation step to "balance" the result.
[Input]
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score

# generate_datasets is my own helper that splits df_X / df_y into a 60/40 train/test split
X_training, y_training, X_test, y_test = generate_datasets(df_X, df_y, 0.6)

# Cross-validated logistic regression, selecting C by F1-score over 4 folds
logistic = LogisticRegressionCV(
    Cs=50,
    cv=4,
    penalty='l2',
    fit_intercept=True,
    scoring='f1'
)
logistic.fit(X_training, y_training)

print('Predicted: %s' % str(logistic.predict(X_test)))
print('F1-score: %f' % f1_score(y_test, logistic.predict(X_test)))
print('Accuracy score: %f' % logistic.score(X_test, y_test))
[Output]
>> Predicted: [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
>> Actual: [0 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 1]
>> F1-score: 0.285714
>> Accuracy score: 0.782609
>> C:\Anaconda3\lib\site-packages\sklearn\metrics\classification.py:958:
UndefinedMetricWarning:
F-score is ill-defined and being set to 0.0 due to no predicted samples.
Of course I know the problem is related to my dataset: it is too small (it is only a sample of the real one). But can anyone explain what the "UndefinedMetricWarning" I am seeing means? What is actually happening behind the curtain?
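For reference, the same warning can be reproduced in isolation with a toy call to f1_score (made-up labels, not my real data) whenever the classifier predicts no positive samples at all:

from sklearn.metrics import f1_score

# Toy example: ground truth contains positives, but nothing is predicted as '1'
y_true = [0, 0, 0, 1, 0, 1]
y_pred = [0, 0, 0, 0, 0, 0]

# Precision = TP / (TP + FP) has a zero denominator here, so the F1-score is
# undefined; scikit-learn sets it to 0.0 and emits the UndefinedMetricWarning
# "F-score is ill-defined and being set to 0.0 due to no predicted samples."
print(f1_score(y_true, y_pred))  # -> 0.0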