Keras vs. TensorFlow code comparison sources

Sea*_*ala 2 keras tensorflow

This isn't really a code-specific question, but I haven't been able to find any answers or resources.

I'm currently trying to teach myself some "pure" TensorFlow rather than just using Keras, and I think it would be very helpful to have some source code with TensorFlow code and the equivalent Keras code side by side for comparison.

Unfortunately, most of the results I find online either discuss performance differences or give very simple comparison examples (e.g. "this is why Keras is simpler to use"). I'm not as interested in those details as in the code itself.

Does anyone know of any resources that could help with this?

Rub*_*res 6

Here are two corresponding models, one in TensorFlow and one in Keras:

import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow.keras.datasets import mnist

# Load MNIST: x_* are 28x28 grayscale images, y_* are integer labels 0-9
(x_train, y_train), (x_test, y_test) = mnist.load_data()

TensorFlow

# Placeholders for the flattened input images and their one-hot labels
X = tf.placeholder(dtype=tf.float64)
Y = tf.placeholder(dtype=tf.float64)
num_hidden = 128

# Build a hidden layer
W_hidden = tf.Variable(np.random.randn(784, num_hidden))
b_hidden = tf.Variable(np.random.randn(num_hidden))
p_hidden = tf.nn.sigmoid(tf.add(tf.matmul(X, W_hidden), b_hidden))

# Build another hidden layer
W_hidden2 = tf.Variable(np.random.randn(num_hidden, num_hidden))
b_hidden2 = tf.Variable(np.random.randn(num_hidden))
p_hidden2 = tf.nn.sigmoid(tf.add(tf.matmul(p_hidden, W_hidden2), b_hidden2))

# Build the output layer
W_output = tf.Variable(np.random.randn(num_hidden, 10))
b_output = tf.Variable(np.random.randn(10))
p_output = tf.nn.softmax(tf.add(tf.matmul(p_hidden2, W_output), b_output))

loss = tf.reduce_mean(tf.losses.mean_squared_error(
    labels=Y, predictions=p_output))
# Rough proxy derived from the MSE loss, not a true classification accuracy
accuracy = 1 - tf.sqrt(loss)
minimization_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

feed_dict = {
    X: x_train.reshape(-1, 784),
    Y: pd.get_dummies(y_train)   # one-hot encode the integer labels
}

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    for step in range(10000):
        # Run one optimization step and fetch the metrics in a single pass
        _, J_value, acc = session.run([minimization_op, loss, accuracy], feed_dict)
        if step % 100 == 0:
            print("Step:", step, " Loss:", J_value, " Accuracy:", acc)

    # Softmax outputs for the test set
    pred00 = session.run(p_output, feed_dict={X: x_test.reshape(-1, 784)})
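
As a quick sanity check (not part of the original answer), the test predictions can be compared against y_test by taking the argmax over the softmax outputs. This is a minimal sketch assuming the session code above has run and pred00 holds the (10000, 10) prediction array:

# Convert softmax outputs to predicted class labels and compare with the true labels
predicted_labels = np.argmax(pred00, axis=1)
test_accuracy = np.mean(predicted_labels == y_test)
print("Test accuracy:", test_accuracy)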

Keras

import tensorflow as tf

l = tf.keras.layers

# The equivalent network built from Keras layers: two hidden layers of 128 units, 10-way softmax output
model = tf.keras.Sequential([
    l.Flatten(input_shape=(784,)),
    l.Dense(128, activation='relu'),
    l.Dense(128, activation='relu'),
    l.Dense(10, activation='softmax')
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.summary()

model.fit(x_train.reshape(-1, 784), pd.get_dummies(y_train), epochs=15, batch_size=128, verbose=1)
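
For a side-by-side comparison with the TensorFlow evaluation above, the Keras model can be scored on the same test split. This is a minimal sketch (not in the original answer) assuming the model.fit call above has finished and the shared data-loading block was run first:

# Evaluate on the held-out test split, using the same reshape/one-hot encoding as training
test_loss, test_acc = model.evaluate(x_test.reshape(-1, 784), pd.get_dummies(y_test), verbose=0)
print("Test loss:", test_loss, " Test accuracy:", test_acc)

# Class predictions, analogous to pred00 in the TensorFlow version
predicted_labels = np.argmax(model.predict(x_test.reshape(-1, 784)), axis=1)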