Ksh*_*rma 14 python keras tensorflow
Consider the following TensorFlow code:
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
mnist_dataset, mnist_info = tfds.load(name = 'mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = mnist_dataset['train'], mnist_dataset['test']
num_validation_samples = 0.1 * mnist_info.splits['train'].num_examples
num_validation_samples = tf.cast(num_validation_samples, tf.int64)
num_test_samples = mnist_info.splits['test'].num_examples
num_test_samples = tf.cast(num_test_samples, tf.int64)
def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255.
    return image, label
scaled_train_and_validation_data = mnist_train.map(scale)
test_data = mnist_test.map(scale)
BUFFER_SIZE = 10_000
shuffled_train_and_validation_data = scaled_train_and_validation_data.shuffle(BUFFER_SIZE)
validation_data = shuffled_train_and_validation_data.take(num_validation_samples)
train_data = shuffled_train_and_validation_data.skip(num_validation_samples)
BATCH_SIZE = 100
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(num_validation_samples) # Single batch, having size equal to number of validation samples
test_data = test_data.batch(num_test_samples)
validation_inputs, validation_targets = next(iter(validation_data))
input_size = 784 # One for each pixel of the 28 * 28 image
output_size = 10
hidden_layer_size = 50 # Arbitrarily chosen
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # First hidden layer
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
NUM_EPOCHS = 5
model.fit(train_data, epochs = NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
On running it, TF gives the error:
ValueError:
`batch_size` or `steps` is required for `Tensor` or `NumPy` input data.
When adding batch_size to the fit() call:
model.fit(train_data, batch_size = BATCH_SIZE, epochs = NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
it then complains:
ValueError:
The `batch_size` argument must not be specified for the given input type. Received input: , batch_size: 100
What is the mistake here?
rvi*_*nas 13
This error occurs because validation_data is passed to Model.fit as plain tensors, so Keras does not know how many steps to run for validation. To fix it, simply set the validation_steps argument (and drop batch_size, since train_data is already a batched tf.Dataset, which is exactly what the second error complains about). For example:
model.fit(train_data,
          epochs=NUM_EPOCHS,
          validation_data=(validation_inputs, validation_targets),
          validation_steps=10)
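As an aside (a sketch not taken from the original answer, so treat the approach as an assumption): validation_data in the question is already a batched tf.data.Dataset, and Keras can iterate a finite Dataset for validation on its own. Passing that Dataset directly, instead of extracting tensors with next(iter(...)), avoids the need for batch_size or validation_steps altogether:

# Sketch under the assumption above: reuse the already-batched validation Dataset.
# Keras infers the number of validation steps by running the finite Dataset
# to exhaustion, so no batch_size or validation_steps argument is required.
model.fit(train_data,
          epochs=NUM_EPOCHS,
          validation_data=validation_data,  # the batched Dataset, not (inputs, targets)
          verbose=2)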