Is there a way to change the default port ("6006") that TensorBoard uses, so that we can open multiple TensorBoard instances? Perhaps an option like --port="8008"?
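A minimal sketch of what that could look like on the command line, assuming TensorBoard's --port flag and using placeholder log directories (not from the question):

tensorboard --logdir=/tmp/run1 --port=6006
tensorboard --logdir=/tmp/run2 --port=8008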
I would like to know whether there is a way to take the different scoring functions from the scikit-learn package, such as the one below:
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)
and apply them to a TensorFlow model to obtain the different scores.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(1):
        avg_cost = 0.
        total_batch = len(train_arrays) / batch_size
        for batch in range(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost)
    print "Optimization Finished!"
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", batch, accuracy.eval({x: test_arrays, y: …
I am trying to learn a word representation for the IMDB dataset "from scratch" through the TensorFlow tf.nn.embedding_lookup() function. If I understand correctly, I have to set up an embedding layer before the other hidden layers, and then, when I perform gradient descent, that layer will "learn" a word representation in its weights. However, when I try this, I get a shape error between my embedding layer and the first fully-connected layer of the network.
def multilayer_perceptron(_X, _weights, _biases):
    with tf.device('/cpu:0'), tf.name_scope("embedding"):
        W = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), name="W")
        embedding_layer = tf.nn.embedding_lookup(W, _X)
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(embedding_layer, _weights['h1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    return tf.matmul(layer_2, _weights['out']) + _biases['out']

x = tf.placeholder(tf.int32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(cost)
init = tf.initialize_all_variables()
The error I get is:
ValueError: Shapes TensorShape([Dimension(None), Dimension(300), Dimension(128)])
and TensorShape([Dimension(None), Dimension(None)]) must have the same rank
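For context, the first shape in the error is what tf.nn.embedding_lookup returns for a [None, n_input] index tensor: a 3-D tensor of shape [batch, n_input, embedding_size], while tf.matmul expects a 2-D matrix. A minimal sketch of one common workaround, flattening the embeddings before the first dense layer; it assumes _weights['h1'] would be resized to [n_input * embedding_size, hidden units], which is not part of the original code:

# inside multilayer_perceptron, after the embedding lookup:
# [batch, n_input, embedding_size] -> [batch, n_input * embedding_size]
embedding_flat = tf.reshape(embedding_layer, [-1, n_input * embedding_size])
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(embedding_flat, _weights['h1']), _biases['b1']))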
I am trying to solve a simple text classification problem with TensorFlow. I built a model on the IMDB dataset to tell whether a review is positive or negative. The data were processed with word2vec, so now I have a bunch of vectors to classify. I think my problem comes from the shape of the y labels: they have a single dimension, whereas I want to classify over two output classes with TensorFlow, or maybe I am wrong about that. One last piece of information: the model trains "well", with an accuracy of 1.0, which is maybe too good! Thanks for your help!
X_train (train_vecs): shape (25000, 300), dtype float64
X_test (test_vecs): shape (25000, 300), dtype float64
y_test: shape (25000, 1), dtype int64
y_train: shape (25000, 1), dtype int64
x = tf.placeholder(tf.float32, shape = [None, 300])
y = tf.placeholder(tf.float32, shape = [None, 2])
# Input -> Layer 1
W1 = tf.Variable(tf.zeros([300, 2]))
b1 = tf.Variable(tf.zeros([2]))
#h1 = tf.nn.sigmoid(tf.matmul(x, W1) + b1)
# Calculating difference between label and output
pred = tf.nn.softmax(tf.matmul(x, W1) + b1)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred,y)) …
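Since the placeholder y is declared with shape [None, 2] while y_train and y_test are (25000, 1) integer arrays, the labels do need a two-column (one-hot) encoding before they can be fed in. A minimal sketch of that conversion, assuming the labels are 0/1 integers (the helper name to_one_hot is hypothetical, not from the question):

import numpy as np

def to_one_hot(labels, n_classes=2):
    # (N, 1) or (N,) integer class ids -> (N, n_classes) one-hot floats
    labels = np.asarray(labels).reshape(-1)
    one_hot = np.zeros((labels.shape[0], n_classes), dtype=np.float32)
    one_hot[np.arange(labels.shape[0]), labels] = 1.0
    return one_hot

y_train_oh = to_one_hot(y_train)  # shape (25000, 2)
y_test_oh = to_one_hot(y_test)    # shape (25000, 2)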