Saving or exporting weights and biases in TensorFlow for non-Python replication

sur*_*erf 5 python machine-learning neural-network tensorflow

I've built a neural network that performs reasonably well, and I'd like to replicate my model in a non-Python environment. My network is set up as follows:

sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 23])
y_ = tf.placeholder(tf.float32, shape=[None, 2])
W = tf.Variable(tf.zeros([23,2]))
b = tf.Variable(tf.zeros([2]))
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x,W) + b)

How can I get my weights and biases into an interpretable .csv or .txt file?

Edit: here is my full script:

import csv
import numpy
import tensorflow as tf

data = list(csv.reader(open("/Users/sjayaram/developer/TestApp/out/production/TestApp/data.csv")))
# convert the string fields to floats (the result must be assigned back)
data = [[float(j) for j in i] for i in data]
numpy.random.shuffle(data)
results=data

#delete results from data
data = numpy.delete(data, [23, 24], 1)
#delete data from results
results = numpy.delete(results, range(23), 1)

sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 23])
y_ = tf.placeholder(tf.float32, shape=[None, 2])
W = tf.Variable(tf.zeros([23,2]))
b = tf.Variable(tf.zeros([2]))
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

#train the model, saving 80 entries for testing
#batch-size: 40
for i in range(0, 3680, 40):
  train_step.run(feed_dict={x: data[i:i+40], y_: results[i:i+40]})

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: data[3680:], y_: results[3680:]}))

mrr*_*rry 5

You can fetch the variables as NumPy arrays and use numpy.savetxt() to write the contents out as text or CSV:

import numpy as np

W_val, b_val = sess.run([W, b])

np.savetxt("W.csv", W_val, delimiter=",")
np.savetxt("b.csv", b_val, delimiter=",")
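Once the CSVs are written, the model's forward pass is just a matrix multiply, a bias add, and a softmax, so it can be reproduced from those files alone. Here is a minimal sketch of that round trip in plain NumPy (the random W_val and b_val stand in for your trained values; in your script they would come from sess.run([W, b]) as above):

```python
import numpy as np

# Hypothetical stand-ins for the trained variables fetched via sess.run([W, b]).
W_val = np.random.rand(23, 2).astype(np.float32)
b_val = np.random.rand(2).astype(np.float32)

# Export exactly as in the answer above.
np.savetxt("W.csv", W_val, delimiter=",")
np.savetxt("b.csv", b_val, delimiter=",")

# Load the files back and replicate the network without TensorFlow.
W_loaded = np.loadtxt("W.csv", delimiter=",")
b_loaded = np.loadtxt("b.csv", delimiter=",")

def predict(x):
    """softmax(x @ W + b), mirroring y = tf.nn.softmax(tf.matmul(x, W) + b)."""
    logits = x @ W_loaded + b_loaded
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

# A batch of 5 feature vectors, matching the placeholder shape [None, 23].
probs = predict(np.random.rand(5, 23))
```

The same three steps (load two matrices, multiply-add, softmax) translate directly into whatever non-Python environment you are targeting.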

Note that this is unlikely to perform as well as TensorFlow's native replication mechanisms in the distributed runtime.