Posted by kbx*_*bxu

Confusing usage of dropout in mini-batch gradient descent

My question is at the end.

Here is an example CNN trained with mini-batch GD, using dropout on the last fully-connected layer (line 60):

fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)

At first I thought `tf.layers.dropout` or `tf.nn.dropout` randomly sets whole columns of neurons to zero, but I recently found that is not the case. The snippet below prints what dropout actually does. I use `fc0` as a 4-sample × 10-feature matrix and `fc` as its dropped-out version.

import tensorflow as tf
import numpy as np

# 4 samples x 10 features
fc0 = tf.random_normal([4, 10])
# keep_prob = 0.5: each element is kept with probability 0.5
fc = tf.nn.dropout(fc0, 0.5)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Evaluate both tensors in one run so fc is the dropped-out
# version of exactly the fc0 values being saved
a, b = sess.run([fc0, fc])
np.savetxt("oo.txt", np.vstack((a, b)), fmt="%.2f", delimiter=",")

In the output `oo.txt` (original matrix: rows 1–4, dropped matrix: rows 5–8):

0.10,1.69,0.36,-0.53,0.89,0.71,-0.84,0.24,-0.72,-0.44
0.88,0.32,0.58,-0.18,1.57,0.04,0.58,-0.56,-0.66,0.59
-1.65,-1.68,-0.26,-0.09,-1.35,-0.21,1.78,-1.69,-0.47,1.26
-1.52,0.52,-0.99,0.35,0.90,1.17,-0.92,-0.68,-0.27,0.68
0.20,0.00,0.71,-0.00,0.00,0.00,-0.00,0.47,-0.00,-0.87
0.00,0.00,0.00,-0.00,3.15,0.07,1.16,-0.00,-1.32,0.00
-0.00,-3.36,-0.00,-0.17,-0.00,-0.42,3.57,-3.37,-0.00,2.53
-0.00,1.05,-1.99,0.00,1.80,0.00,-0.00,-0.00,-0.55,1.35
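The pattern in the output above — individual entries zeroed independently (not whole columns), with the survivors doubled (e.g. 1.57 → 3.15) — matches element-wise "inverted" dropout, which divides kept entries by `keep_prob` so the expected value of each activation is unchanged. A minimal NumPy sketch of that behaviour (the helper name `dropout` is my own illustration, not a TensorFlow API):

```python
import numpy as np

def dropout(x, keep_prob, rng=None):
    # Element-wise inverted dropout: each entry is kept independently
    # with probability keep_prob; kept entries are divided by keep_prob
    # so that E[output] == E[input].
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((4, 10))
y = dropout(x, 0.5)
# Every entry of y is either 0.0 (dropped) or 2.0 (kept and scaled by 1/0.5),
# mirroring the doubling seen in rows 5-8 of oo.txt.
```

This also explains why a zeroed position in one row can be nonzero in another: the mask is drawn per element, not per feature column.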

Is my understanding correct? Is dropout applied per mini-batch …

machine-learning neural-network gradient-descent tensorflow mini-batch
