I am using TensorFlow Hub for an image retraining classification task. By default, the TensorFlow script retrain.py computes cross_entropy and accuracy:
train_accuracy, cross_entropy_value = sess.run(
    [evaluation_step, cross_entropy],
    feed_dict={bottleneck_input: train_bottlenecks,
               ground_truth_input: train_ground_truth})
I would like to obtain the F1 score, precision, recall, and the confusion matrix. How can I get these values with this script?
Below I describe a way to compute the required metrics using the scikit-learn package.
You can compute the F1 score, precision, and recall with the precision_recall_fscore_support method, and the confusion matrix with the confusion_matrix method:
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix
Both methods take two one-dimensional array-like objects that hold the ground-truth labels and the predicted labels, respectively.
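As a quick self-contained illustration of the expected inputs and outputs (the label arrays below are made up):

from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

y_true = [0, 1, 2, 2, 1, 0]   # ground-truth class indices
y_pred = [0, 2, 2, 2, 1, 1]   # predicted class indices

# With average='micro' the fourth return value (support) is None
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='micro')
cm = confusion_matrix(y_true, y_pred)  # cm[i, j]: true class i predicted as j
print(precision, recall, f1)
print(cm)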
In the provided code, the ground-truth labels for the training data are stored in the variable train_ground_truth, defined at lines 1054 and 1060, while validation_ground_truth holds the ground-truth labels for the validation data and is defined at line 1087.
The tensor that computes the predicted class labels is defined and returned by the add_evaluation_step function. You can modify line 1034 to capture that tensor object:
evaluation_step, prediction = add_evaluation_step(final_tensor, ground_truth_input)
# now prediction stores the tensor object that
# calculates predicted class labels
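For context, the prediction tensor returned by add_evaluation_step is simply the argmax over the class scores. A condensed sketch of that function, written from memory of the TF 1.x retrain.py (name scopes and summaries omitted; details may differ between versions):

import tensorflow as tf

def add_evaluation_step(result_tensor, ground_truth_tensor):
    # result_tensor: class scores; ground_truth_tensor: true class indices
    prediction = tf.argmax(result_tensor, 1)  # predicted class index
    correct_prediction = tf.equal(prediction, ground_truth_tensor)
    evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return evaluation_step, prediction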
Now you can update line 1076 so that prediction is also evaluated when sess.run() is called:
train_accuracy, cross_entropy_value, train_predictions = sess.run(
[evaluation_step, cross_entropy, prediction],
feed_dict={bottleneck_input: train_bottlenecks,
ground_truth_input: train_ground_truth})
# train_predictions now stores class labels predicted by model
# calculate precision, recall and F1 score
(train_precision,
train_recall,
train_f1_score, _) = precision_recall_fscore_support(y_true=train_ground_truth,
y_pred=train_predictions,
average='micro')
# calculate confusion matrix
train_confusion_matrix = confusion_matrix(y_true=train_ground_truth,
y_pred=train_predictions)
Similarly, you can compute the metrics for the validation subset by modifying line 1095:
validation_summary, validation_accuracy, validation_predictions = sess.run(
[merged, evaluation_step, prediction],
feed_dict={bottleneck_input: validation_bottlenecks,
ground_truth_input: validation_ground_truth})
# validation_predictions now stores class labels predicted by model
# calculate precision, recall and F1 score
(validation_precision,
validation_recall,
validation_f1_score, _) = precision_recall_fscore_support(y_true=validation_ground_truth,
y_pred=validation_predictions,
average='micro')
# calculate confusion matrix
validation_confusion_matrix = confusion_matrix(y_true=validation_ground_truth,
y_pred=validation_predictions)
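The resulting confusion matrix is just an integer array whose rows are true classes and whose columns are predicted classes, in sorted label order. A minimal sketch of printing it with readable headers, using made-up labels and hypothetical class names:

from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 2, 2, 2]            # e.g. validation_ground_truth
y_pred = [0, 1, 1, 2, 2, 0]            # e.g. validation_predictions
class_names = ['cat', 'dog', 'fox']    # hypothetical label names

cm = confusion_matrix(y_true, y_pred)  # row = true class, column = prediction
print(' ' * 10 + ''.join('%8s' % n for n in class_names))
for name, row in zip(class_names, cm):
    print('%-10s' % name + ''.join('%8d' % v for v in row))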
Finally, the code calls run_final_eval to evaluate the trained model on the test data. In this function, prediction and test_ground_truth are already defined, so you only need to add the code that computes the required metrics:
test_accuracy, predictions = eval_session.run(
[evaluation_step, prediction],
feed_dict={
bottleneck_input: test_bottlenecks,
ground_truth_input: test_ground_truth
})
# calculate precision, recall and F1 score
(test_precision,
test_recall,
test_f1_score, _) = precision_recall_fscore_support(y_true=test_ground_truth,
y_pred=predictions,
average='micro')
# calculate confusion matrix
test_confusion_matrix = confusion_matrix(y_true=test_ground_truth,
y_pred=predictions)
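On the test set you may also want per-class numbers rather than a single global value; passing average=None to precision_recall_fscore_support returns one entry per class. A self-contained sketch with made-up data and hypothetical class names:

from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1, 2, 2]                # e.g. test_ground_truth
y_pred = [0, 1, 1, 1, 2, 0]                # e.g. predictions
class_names = ['daisy', 'rose', 'tulip']   # hypothetical label names

# average=None returns one value per class instead of a single global number
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None, labels=[0, 1, 2])
for name, p, r, f, s in zip(class_names, precision, recall, f1, support):
    print('%-6s precision=%.2f recall=%.2f f1=%.2f support=%d'
          % (name, p, r, f, s))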
Note that the main snippets above compute a global F1 score by setting average='micro'. The different averaging methods supported by the scikit-learn package are described in its user guide.
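To see why the averaging mode matters, compare 'micro' and 'macro' on an imbalanced toy example; in single-label multiclass problems the micro-averaged scores all collapse to plain accuracy:

from sklearn.metrics import precision_recall_fscore_support

# Imbalanced toy labels: class 0 heavily dominates
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1]

for avg in ('micro', 'macro'):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg)
    print('%5s: precision=%.3f recall=%.3f f1=%.3f' % (avg, p, r, f1))
# micro: 0.889 for all three (equals plain accuracy here),
# macro: f1 = 0.800, because small class 1 gets the same weight as class 0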