Keras model.summary() result - Understanding the number of parameters

use*_*476 42 python machine-learning neural-network theano keras

I have a simple NN model for detecting hand-written digits from 28x28px images, written in Python using Keras (Theano backend):

model0 = Sequential()

#number of epochs to train for
nb_epoch = 12
#amount of data each iteration in an epoch sees
batch_size = 128

model0.add(Flatten(input_shape=(1, img_rows, img_cols)))
model0.add(Dense(nb_classes))
model0.add(Activation('softmax'))
model0.compile(loss='categorical_crossentropy', 
         optimizer='sgd',
         metrics=['accuracy'])

model0.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      verbose=1, validation_data=(X_test, Y_test))

score = model0.evaluate(X_test, Y_test, verbose=0)

print('Test score:', score[0])
print('Test accuracy:', score[1])

It runs well and I get ~90% accuracy. I then perform the following command to get a summary of my network's structure by doing print(model0.summary()). This outputs the following:

Layer (type)          Output Shape    Param #    Connected to
=====================================================================
flatten_1 (Flatten)   (None, 784)     0          flatten_input_1[0][0]
dense_1 (Dense)       (None, 10)      7850       flatten_1[0][0]
activation_1          (None, 10)      0          dense_1[0][0]
=====================================================================
Total params: 7850

I don't understand how they arrive at 7850 total params, or what that actually means.

Mar*_*jko 31

The number of parameters is 7850 because with every hidden unit you have 784 input weights plus one bias weight. This means that every hidden unit gives you 785 parameters. You have 10 units, so it sums up to 7850.
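Spelled out as a quick sanity check, using only the numbers above:

```python
inputs = 28 * 28          # 784 pixels after the Flatten layer
units = 10                # one Dense unit per digit class

weights = inputs * units  # 7840 input weights
biases = units            # one bias per unit

assert weights + biases == 7850  # matches Total params in the summary
```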

Update:

The role of this additional bias term is really important. It significantly increases the capacity of your model. You can read the details here:

Role of Bias in Neural Networks

  • A quick note on the 784: 784 = 28x28, since the image dimensions are 28 * 28. (4 upvotes)

tau*_*Guy 16

I feed a 514-dimensional real-valued input to a Sequential model in Keras. My model is constructed in the following way:

predictivemodel = Sequential()
predictivemodel.add(Dense(514, input_dim=514, W_regularizer=WeightRegularizer(l1=0.000001, l2=0.000001), init='normal'))
predictivemodel.add(Dense(257, W_regularizer=WeightRegularizer(l1=0.000001, l2=0.000001), init='normal'))
predictivemodel.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

When I print model.summary() I get the following result:

Layer (type)    Output Shape  Param #     Connected to                   
================================================================
dense_1 (Dense) (None, 514)   264710      dense_input_1[0][0]              
________________________________________________________________
activation_1    (None, 514)   0           dense_1[0][0]                    
________________________________________________________________
dense_2 (Dense) (None, 257)   132355      activation_1[0][0]               
================================================================
Total params: 397065
________________________________________________________________ 

For the dense_1 layer, the number of params is 264710. This is obtained as: 514 (input values) * 514 (neurons in the first layer) + 514 (bias values).

For the dense_2 layer, the number of params is 132355. This is obtained as: 514 (input values, i.e. the output of the first layer) * 257 (neurons in the second layer) + 257 (bias values for the neurons in the second layer).
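The two calculations above, and the total, can be checked with a one-line helper:

```python
def dense_params(input_dim, units):
    # a Dense layer has input_dim weights per unit, plus one bias per unit
    return input_dim * units + units

assert dense_params(514, 514) == 264710   # dense_1
assert dense_params(514, 257) == 132355   # dense_2
assert 264710 + 132355 == 397065          # Total params in the summary
```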


Ash*_*ran 12

For a Dense layer:

output_size * (input_size + 1) == number_parameters 

For a Conv layer:

output_channels * (input_channels * window_size + 1) == number_parameters

Consider the following example:

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
    Conv2D(64, (3, 3), activation='relu'),
    Conv2D(128, (3, 3), activation='relu'),
    Dense(num_classes, activation='softmax')
])

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 222, 222, 32)      896       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 220, 220, 64)      18496     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 218, 218, 128)     73856     
_________________________________________________________________
dense_9 (Dense)              (None, 218, 218, 10)      1290      
=================================================================

Calculating the parameters:

assert 32 * (3 * (3*3) + 1) == 896
assert 64 * (32 * (3*3) + 1) == 18496
assert 128 * (64 * (3*3) + 1) == 73856
assert num_classes * (128 + 1) == 1290
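As a side note on the output shapes in the summary above: with 'valid' padding and stride 1, a Conv2D layer's spatial output size is input_size - kernel_size + 1. The 222x222 first-layer output therefore implies the input here was 224x224 (with 3 channels):

```python
def conv_out_size(in_size, kernel, stride=1, padding=0):
    # 'valid' padding in Keras corresponds to padding=0
    return (in_size + 2 * padding - kernel) // stride + 1

# 224x224 input is inferred from the 222x222 output shown above
assert conv_out_size(224, 3) == 222   # conv2d_1
assert conv_out_size(222, 3) == 220   # conv2d_2
assert conv_out_size(220, 3) == 218   # conv2d_3
```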

  • Note for anyone reading this answer: these layers are built for an input of size [28x28x3] (RGB input), not the OP's single-channel input. (4 upvotes)
  • @Andrew can you explain the calculation of the output shape values here? (2 upvotes)

Div*_*oML 7

"None" in the shape means it does not have a pre-defined number. For example, it can be the batch size you use during training: you keep it flexible by not assigning any value, so that you can change your batch size later. The model will infer the shape from the context of the layers.

To get the nodes connected to each layer, you can do the following:

for layer in model.layers:
    print(layer.name, layer.inbound_nodes, layer.outbound_nodes)