Low accuracy after loading a Keras model

Raj*_*ajV 8 keras tensorflow

I'm seeing a very strange situation. After training a convolutional network I get about 95% accuracy on the validation data. I save the model. Later I restore the model and run validation on the same validation dataset, and this time I barely get 10% accuracy. I've read the documentation, but nothing seems to help. Am I doing something wrong?

# Assumes tf.keras; the MNIST arrays (train_images, train_labels,
# test_images, test_labels) are loaded elsewhere.
from tensorflow import keras

def build_model_mnist(image_width, image_height, image_depth):
  model = keras.Sequential()
  model.add(keras.layers.Conv2D(5, (3, 3), activation='relu', input_shape=(image_width, image_height, image_depth)))
  model.add(keras.layers.MaxPooling2D((2, 2)))
  model.add(keras.layers.Conv2D(10, (3, 3), activation='relu'))
  model.add(keras.layers.MaxPooling2D((2, 2)))
  model.add(keras.layers.Conv2D(10, (3, 3), activation='relu'))

  model.add(keras.layers.Flatten())
  model.add(keras.layers.Dense(64, activation='relu'))
  model.add(keras.layers.Dense(10, activation='softmax'))

  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])
  
  return model

def train_mnist():
  model = build_model_mnist(image_width=train_images.shape[1], 
                    image_height=train_images.shape[2], 
                    image_depth=train_images.shape[3])
  # Start training              
  h = model.fit(train_images, train_labels, batch_size=500, epochs=5)

  model.save("minist")

  # Evaluate the model
  test_loss, test_acc = model.evaluate(test_images, test_labels)

  print("Accuracy:", test_acc)

train_mnist()

The code above reports about 95% accuracy, but the code below reports only about 10%.

def evaluate_mnist():
  # Load the model
  model = keras.models.load_model("minist")

  # Evaluate the model
  test_loss, test_acc = model.evaluate(test_images, test_labels)

  print("Accuracy:", test_acc)

evaluate_mnist()

If I save and restore only the weights, everything works fine. In the code below we save just the weights; later we re-create the model architecture in code and restore the weights into it. This approach produces the correct accuracy.

def train_mnist():
  #Create the network model
  model = build_model_mnist(image_width=train_images.shape[1], 
                    image_height=train_images.shape[2], 
                    image_depth=train_images.shape[3])
  # Start training              
  h = model.fit(train_images, train_labels, batch_size=500, epochs=5)

  # Evaluate the model
  test_loss, test_acc = model.evaluate(test_images, test_labels)

  print("Accuracy:", test_acc)

  model.save_weights("minist-weights")

train_mnist()

def evaluate_mnist():
  # Re-create the model architecture
  model = build_model_mnist(image_width=train_images.shape[1], 
                    image_height=train_images.shape[2], 
                    image_depth=train_images.shape[3])

  model.load_weights("minist-weights")
  
  # Evaluate the model
  test_loss, test_acc = model.evaluate(test_images, test_labels)

  print("Accuracy:", test_acc)

evaluate_mnist()

小智 3

I ran into a similar problem with tf 2.3.0.

The issue comes from how the generic "accuracy" metric is resolved when the loss is sparse_categorical_crossentropy. When the model is reloaded, Keras associates the wrong accuracy metric with it. The fix is to tell Keras explicitly which metric to use rather than letting it infer one (that inference is where the bug lies), i.e. compile with metrics=['sparse_categorical_accuracy'].
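In the question's build_model_mnist that amounts to changing only the compile call, for example (a minimal sketch; everything else stays as posted):

  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                # name the metric explicitly so load_model restores the right one
                metrics=['sparse_categorical_accuracy'])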

I had originally trained with metrics=['accuracy'] and found that only recompiling the model after reloading it restored the expected performance.
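If the model was already trained and saved with metrics=['accuracy'], a workaround along the same lines is to recompile right after loading. A sketch, reusing the question's "minist" save path and test arrays and assuming from tensorflow import keras; the function name evaluate_mnist_recompiled is just for illustration:

def evaluate_mnist_recompiled():
  # Load the saved model
  model = keras.models.load_model("minist")

  # Recompile with an explicit metric so evaluation reports the real accuracy
  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['sparse_categorical_accuracy'])

  test_loss, test_acc = model.evaluate(test_images, test_labels)
  print("Accuracy:", test_acc)

evaluate_mnist_recompiled()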