I am trying to improve the stability of my GAN model by appending a standard-deviation statistic to a layer's feature map. I am following the example set up in the GANs-in-Action git repo. The math itself makes sense to me, as do the mechanics of my model and why this addresses mode collapse. However, one shortcoming of the example is that it never actually shows how this code is executed.
def minibatch_std_layer(layer, group_size=4):
    group_size = keras.backend.minimum(group_size, tf.shape(layer)[0])
    shape = list(keras.backend.int_shape(input))
    shape[0] = tf.shape(input)[0]
    minibatch = keras.backend.reshape(layer, (group_size, -1, shape[1], shape[2], shape[3]))
    minibatch -= tf.reduce_mean(minibatch, axis=0, keepdims=True)
    minibatch = tf.reduce_mean(keras.backend.square(minibatch), axis=0)
    minibatch = keras.backend.square(minibatch + 1e8)
    minibatch = tf.reduce_mean(minibatch, axis=[1, 2, 4], keepdims=True)
    minibatch = keras.backend.tile(minibatch, [group_size, 1, shape[2], shape[3]])
    return keras.backend.concatenate([layer, minibatch], axis=1)
def build_discriminator():
    const = ClipConstraint(0.01)
    discriminator_input = Input(shape=(4000, 3), batch_size=BATCH_SIZE, name='discriminator_input')
    x = discriminator_input
    x = Conv1D(64, 3, strides=1, padding="same", kernel_constraint=const)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    x = Conv1D(128, 3, strides=2, padding="same", kernel_constraint=const)(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    x = Conv1D(256, 3, strides=3, padding="same", kernel_constraint=const)(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    # Trying to add it to the feature map here
    x = minibatch_std_layer(Conv1D(256, 3, strides=3, padding="same", kernel_constraint=const)(x))
    x = Flatten()(x)
    x = Dense(1000)(x)
    discriminator_output = Dense(1, activation='sigmoid')(x)
    return Model(discriminator_input, discriminator_output, name='discriminator_model')
d = build_discriminator()
No matter how I structure it, I cannot get the discriminator to build. It keeps returning different kinds of AttributeError, but I have not been able to work out what it wants. When searching for this problem, there are plenty of Medium posts showing a high-level overview of the role this plays in progressive GANs, but I could not find anything showing how it is actually applied.
Does anyone have any suggestions on how to add the code above to a layer?
Here is my suggestion...
The problem is related to the minibatch_std_layer function. First, your network deals with 3D data while the original minibatch_std_layer deals with 4D data, so you need to adapt it. Second, the input variable defined in this function is undefined (also in the source code you referenced), so I think the most obvious and logical solution is to consider it to be the layer variable (the input of minibatch_std_layer). With this in mind, the modified minibatch_std_layer becomes:
import tensorflow as tf
from tensorflow import keras
K = keras.backend  # imports assumed by this answer's code

def minibatch_std_layer(layer, group_size=4):
    # cap the group size at the actual batch size
    group_size = K.minimum(group_size, layer.shape[0])
    shape = layer.shape
    # split the batch into groups and compute per-group deviations from the mean
    minibatch = K.reshape(layer, (group_size, -1, shape[1], shape[2]))
    minibatch -= tf.reduce_mean(minibatch, axis=0, keepdims=True)
    minibatch = tf.reduce_mean(K.square(minibatch), axis=0)
    # epsilon=1e-8; note the reference ProGAN implementation applies sqrt here instead
    minibatch = K.square(minibatch + 1e-8)
    # average over the remaining dimensions, then broadcast back over the batch
    minibatch = tf.reduce_mean(minibatch, axis=[1, 2], keepdims=True)
    minibatch = K.tile(minibatch, [group_size, 1, shape[2]])
    # append the statistic to the feature map
    return K.concatenate([layer, minibatch], axis=1)
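As a quick sanity check of the reshaping (not part of the original answer), here is a minimal sketch, assuming TF 2.x eager mode and a hypothetical dummy feature map of shape (32, 112, 256):

import tensorflow as tf

x = tf.random.normal((32, 112, 256))  # dummy batch: 32 samples, 112 steps, 256 channels
y = minibatch_std_layer(x)
print(y.shape)  # (32, 113, 256)

Note that with axis=1 in the final concatenate, the statistic is appended as one extra step along axis 1 rather than as an extra channel, which is worth keeping in mind when interpreting the feature map.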
We can then put it into the model like this, wrapping the function in a Lambda layer so that Keras treats the raw backend ops as a proper layer:
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization, LeakyReLU,
                                     Dropout, Lambda, Flatten, Dense)
from tensorflow.keras.models import Model  # imports assumed by this answer's code

def build_discriminator():
    # const = ClipConstraint(0.01)
    discriminator_input = Input(shape=(4000, 3), batch_size=32, name='discriminator_input')
    x = discriminator_input
    x = Conv1D(64, 3, strides=1, padding="same")(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    x = Conv1D(128, 3, strides=2, padding="same")(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    x = Conv1D(256, 3, strides=3, padding="same")(x)
    x = LeakyReLU(0.3)(x)
    x = Dropout(0.25)(x)
    # adding it to the feature map here, via a Lambda layer
    x = Conv1D(256, 3, strides=3, padding="same")(x)
    x = Lambda(minibatch_std_layer)(x)
    x = Flatten()(x)
    x = Dense(1000)(x)
    discriminator_output = Dense(1, activation='sigmoid')(x)
    return Model(discriminator_input, discriminator_output, name='discriminator_model')
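To verify that the graph now builds end to end, a minimal usage sketch (the summary() call is just for inspection and not part of the original answer):

d = build_discriminator()
d.summary()

Given the strides above, the Lambda layer's output shape in the summary should read (32, 224, 256): the feature map entering it is (32, 223, 256), and minibatch_std_layer appends one extra step along axis 1 before Flatten.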
I don't know what ClipConstraint is, but it looks unproblematic. I ran the code with TF 2.2, but I also think it would be very easy to run it with TF 1 (if that is what you are using). Here is the running code: https://colab.research.google.com/drive/1A6UNYkveuHPF7r4-XAe8MuCHZJ-1vcpl?usp=sharing