Question by Ric*_*ruz (tags: python, neural-network, tensorflow)
I am trying to adapt this MNIST example to binary classification.
But when I change NLABELS from NLABELS=2 to NLABELS=1, the loss function always returns 0 (and the accuracy is 1).
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
# Import data
mnist = input_data.read_data_sets('data', one_hot=True)
NLABELS = 2
sess = tf.InteractiveSession()
# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
W = tf.Variable(tf.zeros([784, NLABELS]), name='weights')
b = tf.Variable(tf.zeros([NLABELS]), name='bias')
y = tf.nn.softmax(tf.matmul(x, W) + b)
# Add summary ops to collect data
_ = tf.histogram_summary('weights', W)
_ = tf.histogram_summary('biases', b)
_ = tf.histogram_summary('y', y)
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, NLABELS], name='y-input')
# More name scopes will clean up the graph representation
with tf.name_scope('cross_entropy'):
    cross_entropy = -tf.reduce_mean(y_ * tf.log(y))
    _ = tf.scalar_summary('cross entropy', cross_entropy)
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(10.).minimize(cross_entropy)
with tf.name_scope('test'):
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    _ = tf.scalar_summary('accuracy', accuracy)
# Merge all the summaries and write them out to /tmp/mnist_logs
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter('logs', sess.graph_def)
tf.initialize_all_variables().run()
# Train the model, and feed in test data and record summaries every 10 steps
for i in range(1000):
    if i % 10 == 0:  # Record summary data and the accuracy
        labels = mnist.test.labels[:, 0:NLABELS]
        feed = {x: mnist.test.images, y_: labels}
        result = sess.run([merged, accuracy, cross_entropy], feed_dict=feed)
        summary_str = result[0]
        acc = result[1]
        loss = result[2]
        writer.add_summary(summary_str, i)
        print('Accuracy at step %s: %s - loss: %f' % (i, acc, loss))
    else:
        batch_xs, batch_ys = mnist.train.next_batch(100)
        batch_ys = batch_ys[:, 0:NLABELS]
        feed = {x: batch_xs, y_: batch_ys}
        sess.run(train_step, feed_dict=feed)
I checked the dimensions of both batch_ys (fed into y_) and y_: with NLABELS=1 they are both 1xN matrices, so the problem seems to arise before that point. Maybe it has something to do with the matrix multiplication?
I actually ran into this same problem in a real project, so any help would be appreciated... Thanks!
Answer by mrr*_*rry
The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, and so on. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_.
If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op computes a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples.
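To make this failure mode concrete, here is a minimal standalone NumPy sketch (not part of the question's code; the names are illustrative) showing that a softmax over a single logit is always 1.0, so the question's cross-entropy term is 0.0 no matter what the labels are:

import numpy as np

def softmax(logits):
    # Shift by the row max for numerical stability, then normalize.
    exps = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return exps / np.sum(exps, axis=-1, keepdims=True)

single_logit = np.array([[3.7], [-5.2]])   # shape [batch, NLABELS=1]
probs = softmax(single_logit)
print(probs)                               # [[1.], [1.]] regardless of the logit values
labels = np.array([[1.0], [0.0]])          # sliced one-hot labels, as in the question
print(-np.mean(labels * np.log(probs)))    # 0.0: log(1.0) == 0, so the loss never moves

The same collapse explains the reported accuracy of 1: tf.argmax over a single column is always index 0 for both y and y_, so every prediction trivially "matches".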
There are (at least) two approaches you could try for binary classification:
The simplest would be to set NLABELS = 2 for the two possible classes, and to encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that.
You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer; a sketch of this approach follows below.
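Here is a rough sketch of the second approach, written against the same TF 0.x-era API the question uses (the variable names and the learning rate are illustrative, not part of the original answer):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load labels as plain integer class ids instead of one-hot vectors.
mnist = input_data.read_data_sets('data', one_hot=False)

x = tf.placeholder(tf.float32, [None, 784], name='x-input')
y_ = tf.placeholder(tf.int64, [None], name='y-input')  # integer class ids, shape [batch]

W = tf.Variable(tf.zeros([784, 2]), name='weights')
b = tf.Variable(tf.zeros([2]), name='bias')
logits = tf.matmul(x, W) + b  # raw logits: no tf.nn.softmax() here

# The op applies the softmax internally and compares against the integer labels,
# so the loss cannot collapse to 0.0 the way the single-output softmax did.
cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(logits, 1), y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

For a genuinely binary problem you would also restrict the dataset to examples whose label is 0 or 1: simply slicing the first two columns of the one-hot labels, as the question does, leaves digits 2 through 9 in the batches with all-zero target rows.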