I built an image classification model in R using Keras for R.
It reaches about 98% accuracy, while the Python version performs poorly.
The Keras version in R is 2.1.3, and in Python it is 2.1.5.
Here is the R model code:
model = keras_model_sequential()
model = model %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), padding = 'same',
                input_shape = c(187, 256, 3), activation = 'elu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(.25) %>%
  layer_batch_normalization() %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), padding = 'same',
                activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(.25) %>%
  layer_batch_normalization() %>%
  layer_flatten() %>%
  layer_dense(128, activation = 'relu') %>%
  layer_dropout(.25) %>%
  layer_batch_normalization() %>%
  layer_dense(6, activation = 'softmax')

model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = 'adam',
  metrics = 'accuracy'
)
I tried to rebuild the same model in Python with the same input data.
However, the performance is completely different: accuracy is even below 30%.
Since Keras for R calls Python to run Keras, the two should achieve similar performance with the same model architecture.
I wonder whether the problem is caused by preprocessing, but here is my Python code anyway:
model=Sequential()
model.add(Conv2D(32,kernel_size=(3,3),activation='relu',input_shape=(187,256,3),padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), activation='relu',padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
…
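Since preprocessing is the suspected cause, one common mismatch between two pipelines like these is pixel rescaling: if one side divides by 255 and the other feeds raw uint8 values, accuracy can collapse in exactly this way. A minimal check, assuming the images arrive as uint8 NumPy arrays (the array names and shapes below are illustrative, not from the original code):

```python
import numpy as np

# Illustrative batch matching the model's input_shape of (187, 256, 3)
batch = np.random.randint(0, 256, size=(4, 187, 256, 3), dtype=np.uint8)

# Rescale to [0, 1], as an image generator with rescale=1/255 would
batch_scaled = batch.astype("float32") / 255.0

# Both pipelines should agree on dtype and value range before training
print(batch_scaled.dtype)                                    # float32
print(batch_scaled.min() >= 0.0 and batch_scaled.max() <= 1.0)  # True
```

Printing the dtype, minimum, and maximum of a batch from both the R and Python pipelines right before calling `fit` is a quick way to confirm the inputs really are identical.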