I am using TensorFlow's bidirectional dynamic RNN for text tagging. After working out the input dimensions, I try to run a Session. Here is the BLSTM setup part:
fw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
bw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
(fw_outputs, bw_outputs), _ = bidirectional_dynamic_rnn(fw_lstm_cell,
                                                        bw_lstm_cell,
                                                        x_place,
                                                        sequence_length=SEQLEN,
                                                        dtype='float32')
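For reference, a minimal self-contained version of that setup (a sketch only, assuming TensorFlow 1.x, hypothetical sizes LSTM_DIMS/MAXLEN/EM_DIMS, and a per-example length vector seq_lens) would look like this:

import tensorflow as tf
from tensorflow.contrib.rnn import BasicLSTMCell

LSTM_DIMS, MAXLEN, EM_DIMS = 128, 50, 100   # hypothetical sizes, for illustration only

# inputs: [batch, time, embedding]; seq_lens holds the true length of each example
x_place = tf.placeholder(tf.float32, [None, MAXLEN, EM_DIMS])
seq_lens = tf.placeholder(tf.int32, [None])

fw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
bw_lstm_cell = BasicLSTMCell(LSTM_DIMS)

# sequence_length must be a 1-D int tensor with one entry per batch element
(fw_outputs, bw_outputs), _ = tf.nn.bidirectional_dynamic_rnn(
    fw_lstm_cell, bw_lstm_cell, x_place,
    sequence_length=seq_lens, dtype=tf.float32)

# concatenate forward and backward outputs for the tagging layer
outputs = tf.concat([fw_outputs, bw_outputs], axis=-1)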
Here is the part that runs the graph:
with tf.Graph().as_default():
    # Placeholder settings
    x_place, y_place = set_placeholder(BATCH_SIZE, EM_DIMS, MAXLEN)
    # BLSTM model building
    hlogits = tf_kcpt.build_blstm(x_place)
    # Compute loss
    loss = tf_kcpt.get_loss(log_likelihood)
    # Training
    train_op = tf_kcpt.training(loss)
    # Load eval method
    eval_correct = tf_kcpt.evaluation(logits, y_place)
    # Session setting & init
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    # Tensor summary setting
    summary = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter(LOG_DIR, sess.graph)
    # Save
    saver …
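The snippet cuts off at the saver, but for comparison, a typical way to drive such a graph afterwards (a sketch only, with hypothetical MAX_STEPS and next_batch helpers, still inside the same with block) is:

    saver = tf.train.Saver()
    for step in range(MAX_STEPS):
        feed_x, feed_y = next_batch(BATCH_SIZE)          # assumed batch helper
        _, loss_val = sess.run([train_op, loss],
                               feed_dict={x_place: feed_x, y_place: feed_y})
        if step % 100 == 0:
            summary_str = sess.run(summary, feed_dict={x_place: feed_x, y_place: feed_y})
            summary_writer.add_summary(summary_str, step)
            saver.save(sess, LOG_DIR + '/model.ckpt', global_step=step)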
I am applying transfer-learning on a pre-trained network using the GPU version of keras. I don't understand how to define the parameters max_queue_size, workers, and use_multiprocessing. If I change these parameters (primarily to speed up learning), I am unsure whether all data is still seen per epoch.
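For context, these three arguments belong to Keras 2's fit_generator; a minimal sketch of a call that sets them (assuming a hypothetical batch_generator and an already-compiled model) looks like this:

model.fit_generator(
    batch_generator(train_files, batch_size=32),     # assumed generator
    steps_per_epoch=len(train_files) // 32,          # batches that define one epoch
    epochs=10,
    max_queue_size=10,         # at most 10 ready batches waiting in the queue
    workers=4,                 # 4 workers filling that queue in parallel
    use_multiprocessing=True)  # workers run as processes instead of threads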
max_queue_size: maximum size of the internal training queue which is used to "precache" samples from the generator
Question: Does this refer to how many batches are prepared on CPU? How …