Tags: python-3.x, deep-learning, tensorflow, tf.keras, tensorflow2.0
I was reading about building neural networks with TensorFlow 2.0 using the `GradientTape` API and came across the following code:
model = tf.keras.Sequential((
    tf.keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(10)))
model.build()
optimizer = tf.keras.optimizers.Adam()
In this code, what is the purpose of `model.build()`? Does it compile the designed neural network?
The rest of the code is:
compute_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
compute_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

def train_one_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        logits = model(x)
        loss = compute_loss(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    compute_accuracy(y, logits)
    return loss
@tf.function
def train(model, optimizer):
    train_ds = mnist_dataset()
    step = 0
    loss = 0.0
    accuracy = 0.0
    for x, y in train_ds:
        step += 1
        loss = train_one_step(model, optimizer, x, y)
        if step % 10 == 0:
            tf.print('Step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
    return step, loss, accuracy

step, loss, accuracy = train(model, optimizer)
print('Final step', step, ': loss', loss, '; accuracy', compute_accuracy.result())
This is referred to as the "deferred build pattern", where you can create a model without defining its input shape.
For example:
model = Sequential()
model.add(Dense(32))
model.add(Dense(32))
model.build((None, 500))
is equivalent to
model = Sequential()
model.add(Dense(32, input_shape=(500,)))
model.add(Dense(32))
In the second case, you need to know the input shape before defining the model's architecture. `model.build()` lets you define the model (i.e., its architecture) first and actually build it later (i.e., initialize its parameters, etc.).
Example taken from here.
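As a small sketch of this deferred behavior (assuming TensorFlow 2.x; the layer sizes simply mirror the example above), the model's `built` flag and its weight list show that parameters are only created once `build()` supplies an input shape:

```python
import tensorflow as tf

# Layers are declared, but no input shape is known yet,
# so no weights have been created.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(32))
print(model.built)  # False

# build() supplies the input shape and initializes the parameters.
model.build((None, 500))
print(model.built)  # True
print([w.shape for w in model.weights])  # kernels and biases of both Dense layers
```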