VM_*_*_AI 17 · time-series · tensorflow · recurrent-neural-network
Can someone clarify whether the initial state of the RNN in TF is reset for subsequent mini-batches, or whether the last state of the previous mini-batch is used, as mentioned in Ilya Sutskever et al., ICLR 2015?
dan*_*jar 20
The tf.nn.dynamic_rnn() or tf.nn.rnn() operations allow specifying the initial state of the RNN with the initial_state parameter. If you don't specify this parameter, the hidden states will be initialized to zero vectors at the beginning of each training batch.
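As a minimal sketch of carrying the state across batches by hand (not from the answer; the sizes and the batches iterable are made up for the example), you can feed the previous final state back in through a placeholder:

import numpy as np
import tensorflow as tf

batch_size, max_length, frame_size, num_units = 32, 50, 10, 256  # example sizes
data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))
cell = tf.nn.rnn_cell.GRUCell(num_units)

# Feed the initial state explicitly instead of relying on the zero default.
initial = tf.placeholder(tf.float32, (batch_size, num_units))
outputs, last_state = tf.nn.dynamic_rnn(cell, data, initial_state=initial)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Carry the final state of each batch over as the initial state of the next.
state = np.zeros((batch_size, num_units), np.float32)
for batch in batches:  # hypothetical iterable of (batch_size, max_length, frame_size) arrays
    out, state = sess.run([outputs, last_state], {data: batch, initial: state})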
In TensorFlow, you can wrap tensors in tf.Variable() to keep their values in the graph between session runs. Just make sure to mark them as non-trainable, because optimizers tune all trainable variables by default.
import tensorflow as tf

data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))
cell = tf.nn.rnn_cell.GRUCell(256)

# Non-trainable variable that keeps the last hidden state across session runs.
state = tf.Variable(cell.zero_state(batch_size, tf.float32), trainable=False)
output, new_state = tf.nn.dynamic_rnn(cell, data, initial_state=state)

# Write the new state back into the variable before returning the output.
with tf.control_dependencies([state.assign(new_state)]):
    output = tf.identity(output)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(output, {data: ...})
I haven't tested this code, but it should give you a hint in the right direction. There is also tf.nn.state_saving_rnn(), to which you can provide a state saver object, but I haven't used it yet.
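Not part of the answer, but one detail the snippet above leaves open is starting a fresh sequence. Since the state lives in a variable, a sketch of a reset is just an assign back to zeros (this assumes the state and sess names from the code above):

# Reset the persisted state to zeros, e.g. at the start of a new,
# unrelated sequence (sketch; assumes `state` and `sess` from above).
reset_state = state.assign(tf.zeros_like(state))
sess.run(reset_state)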
In addition to danijar's answer, here is the code for an LSTM whose state is a tuple (state_is_tuple=True). It also supports multiple layers.

We define two functions: one that gets the state variables with an initial zero state, and one that returns an operation which we can pass to session.run in order to update the state variables with the LSTM's last hidden state.
def get_state_variables(batch_size, cell):
    # For each layer, get the initial state and make a variable out of it
    # to enable updating its value.
    state_variables = []
    for state_c, state_h in cell.zero_state(batch_size, tf.float32):
        state_variables.append(tf.contrib.rnn.LSTMStateTuple(
            tf.Variable(state_c, trainable=False),
            tf.Variable(state_h, trainable=False)))
    # Return as a tuple, so that it can be fed to dynamic_rnn as an initial state
    return tuple(state_variables)

def get_state_update_op(state_variables, new_states):
    # Add an operation to update the train states with the last state tensors
    update_ops = []
    for state_variable, new_state in zip(state_variables, new_states):
        # Assign the new state to the state variables on this layer
        update_ops.extend([state_variable[0].assign(new_state[0]),
                           state_variable[1].assign(new_state[1])])
    # Return a tuple in order to combine all update_ops into a single operation.
    # The tuple's actual value should not be used.
    return tf.tuple(update_ops)
Similar to danijar's answer, we can use that to update the LSTM's state after each batch:
data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))
cells = [tf.contrib.rnn.LSTMCell(256) for _ in range(num_layers)]
cell = tf.contrib.rnn.MultiRNNCell(cells)

# For each layer, get the initial state. states will be a tuple of LSTMStateTuples.
states = get_state_variables(batch_size, cell)

# Unroll the LSTM
outputs, new_states = tf.nn.dynamic_rnn(cell, data, initial_state=states)

# Add an operation to update the train states with the last state tensors.
update_op = get_state_update_op(states, new_states)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run([outputs, update_op], {data: ...})
The main difference is that state_is_tuple=True makes the LSTM's state an LSTMStateTuple containing two variables (the cell state and the hidden state) instead of a single variable. Using multiple layers then makes the LSTM's state a tuple of LSTMStateTuples, one per layer.
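To round this off, a reset operation can be built the same way as the update operation. The following is a sketch under the assumption that get_state_update_op from above is in scope; the name get_state_reset_op is mine, not from the answer:

def get_state_reset_op(state_variables, cell, batch_size):
    # Set all the state variables back to zeros, e.g. before feeding
    # a new, unrelated sequence (sketch, not part of the original answer).
    zero_states = cell.zero_state(batch_size, tf.float32)
    return get_state_update_op(state_variables, zero_states)

After a sequence ends, sess.run(get_state_reset_op(states, cell, batch_size)) restores the zero initial state.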