The following code snippet:
import tensorflow as tf
from tensorflow.contrib import rnn
hidden_size = 100
batch_size = 100
num_steps = 100
num_layers = 100
is_training = True
keep_prob = 0.4
input_data = tf.placeholder(tf.float32, [batch_size, num_steps])
lstm_cell = rnn.BasicLSTMCell(hidden_size, forget_bias=0.0, state_is_tuple=True)
if is_training and keep_prob < 1:
    lstm_cell = rnn.DropoutWrapper(lstm_cell)
cell = rnn.MultiRNNCell([lstm_cell for _ in range(num_layers)], state_is_tuple=True)
_initial_state = cell.zero_state(batch_size, tf.float32)
iw = tf.get_variable("input_w", [1, hidden_size])
ib = tf.get_variable("input_b", [hidden_size])
inputs = [tf.nn.xw_plus_b(i_, iw, ib) for i_ in tf.split(input_data, num_steps, 1)]
if is_training and keep_prob < 1:
    inputs = [tf.nn.dropout(input_, keep_prob) for input_ in inputs]
outputs, states = rnn.static_rnn(cell, inputs, initial_state=_initial_state)
produces the following error:
ValueError: Attempt at reusing RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.BasicLSTMCell object at 0x10210d5c0> with a different variable scope than its first use. First use of cell was with scope 'rnn/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with scope 'rnn/multi_rnn_cell/cell_1/basic_lstm_cell'. Please create a new instance of the cell if you would like it to use a different set of weights.
If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]).
If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for the forward, one for the reverse).
In May 2017, we will start transitioning this cell's behavior to use the existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)
How can I fix this?
My TensorFlow version is 1.0.
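At its core this is a plain Python object-sharing pitfall: the cell is constructed once, and the list comprehension [lstm_cell for _ in range(num_layers)] just repeats num_layers references to that one object. A minimal sketch of that pattern, with a hypothetical ToyCell class standing in for the LSTM cell (no TensorFlow required):

```python
# Stand-in for an RNN cell; the name ToyCell is hypothetical.
class ToyCell:
    pass

num_layers = 3

# Mirrors the question's code: the cell is built once, outside the list.
cell = ToyCell()
layers = [cell for _ in range(num_layers)]

# Every entry is the *same* object, so each "layer" of the stacked RNN
# would be asking one shared cell to build variables in a new scope.
print(all(layer is cell for layer in layers))  # True
print(len({id(layer) for layer in layers}))    # 1
```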
Answer by dv3 (10 votes):
As suggested in the comments, my solution was to change this:
cell = tf.contrib.rnn.LSTMCell(state_size, state_is_tuple=True)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
rnn_cells = tf.contrib.rnn.MultiRNNCell([cell for _ in range(num_layers)], state_is_tuple = True)
outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state, scope = "layer")
into this:
def lstm_cell():
    cell = tf.contrib.rnn.LSTMCell(state_size, reuse=tf.get_variable_scope().reuse)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple=True)
outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)
This seems to solve the reusability problem. I don't fully understand the underlying issue, but it fixed the error for me on TF 1.1rc2.
Cheers!
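The factory function is the key: each call to lstm_cell() constructs a fresh cell, so every layer of the MultiRNNCell gets its own instance (and therefore its own variables). The same idea in plain Python, again with a hypothetical ToyCell stand-in:

```python
# Stand-in for an RNN cell; the name ToyCell is hypothetical.
class ToyCell:
    pass

def make_cell():
    # A new instance on every call, mirroring the lstm_cell() factory.
    return ToyCell()

num_layers = 3
layers = [make_cell() for _ in range(num_layers)]

# All entries are now distinct objects, so no scope/reuse conflict.
print(len({id(layer) for layer in layers}))  # 3
```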