Why does conv2d in TensorFlow give an output with the same shape as the input?


According to this deep learning course, http://cs231n.github.io/convolutional-networks/#conv, an input x of shape [W, W] (where W = width = height) passed through a convolutional layer with filter shape [F, F] and stride S should return an output of shape [(W-F)/S + 1, (W-F)/S + 1].
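To make the formula concrete, here is a tiny sketch (conv_output_size is a hypothetical helper name of mine, not something from the course or from TensorFlow):

def conv_output_size(W, F, S):
    # output size along one spatial dimension, no padding
    return (W - F) // S + 1

print conv_output_size(28, 5, 1)  # 24 -- what I expected for a 28x28 input, 5x5 filter, stride 1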

However, when I follow the TensorFlow tutorial https://www.tensorflow.org/versions/r0.11/tutorials/mnist/pros/index.html, the function tf.nn.conv2d(inputs, filter, stride) seems to behave differently.

No matter how I change the filter size, conv2d keeps returning an output with the same shape as the input.

In my case I am using the MNIST dataset, where each image has size [28, 28] (ignoring channel_num = 1).

But after I define the first conv1 layer and inspect its output with conv1.get_shape(), it gives me [28, 28, num_of_filters].

Why is this? I thought the return value should follow the formula above.


Appendix: code snippet
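The helper functions weight_variable, bias_variable, conv2d and max_pool_2x2 used below are the ones defined in the tutorial; as far as I can tell they look like this (note that the tutorial's conv2d passes padding='SAME'):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # flattened 28*28 MNIST images, as in the tutorial

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)  # small random init
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)           # small positive bias
    return tf.Variable(initial)

def conv2d(x, W):
    # stride 1 in every dimension, zero padding ('SAME')
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')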

#reshape x from 2d to 4d

x_image = tf.reshape(x, [-1, 28, 28, 1]) #[num_samples, width, height, channel_num]

## define the shape of weights and bias
w_shape = [5, 5, 1, 32] #patch_w, patch_h, in_channel, output_num(out_channel)
b_shape =          [32] #bias only need to be consistent with output_num

## init weights of conv1 layers
W_conv1 = weight_variable(w_shape)
b_conv1 = bias_variable(b_shape)

## first layer x_image->conv1/relu->pool1

# Our convolutions use a stride of one
# and are zero padded
# so that the output is the same size as the input
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

print 'conv1.shape=',h_conv1.get_shape() 
## conv1.shape= (?, 28, 28, 32) 
## I thought conv1.shape should be (?, (28-5)/1+1, (28-5)/1+1, 32) = (?, 24, 24, 32)

h_pool1 = max_pool_2x2(h_conv1) # output still has 32 feature maps
print 'pool1.shape=',h_pool1.get_shape() ## pool1.shape= (?, 14, 14, 32)
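If I read the tutorial's comment about zero padding correctly, the padding argument is what controls this. Here is a minimal sketch I used to compare the two padding modes (assuming the r0.11 tf.nn.conv2d(input, filter, strides, padding) signature); the shapes in the comments are what the formula above predicts:

import tensorflow as tf

x_image = tf.placeholder(tf.float32, [None, 28, 28, 1])
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))

same_out  = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME')
valid_out = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='VALID')

print 'SAME :', same_out.get_shape()   # (?, 28, 28, 32) -- zero padded, shape preserved
print 'VALID:', valid_out.get_shape()  # (?, 24, 24, 32) -- matches (28-5)/1+1 = 24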