I recently read this paper, which introduces a procedure called "warm-up" (WU): the KL-divergence term of the loss is multiplied by a variable whose value depends on the epoch number (it ramps linearly from 0 to 1).
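The schedule itself is just a clipped linear ramp of the epoch number. A minimal sketch (assuming a 10-epoch ramp, the length used in my code below):

    def kl_weight(epoch, warmup_epochs=10):
        # linear warm-up: goes from 0 to 1 over the first warmup_epochs,
        # then stays at 1
        return min(epoch / float(warmup_epochs), 1.0)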
I was wondering whether this is a good way to do it:
import tensorflow as tf
from keras import backend as K
from keras import objectives
from keras.callbacks import LambdaCallback

beta = K.variable(value=0.0)

def vae_loss(x, x_decoded_mean):
    # reconstruction term: categorical cross-entropy
    xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean))
    # KL divergence, estimated by Monte Carlo:
    #   KL(q || p) = E_q[log q(z) - log p(z)]
    #             ~= (1/n_sample) * sum_k [log q(z_k) - log p(z_k)]
    z = None
    loss = None
    for k in range(n_sample):
        epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
                                  std=1.0)  # reused for every z_i sample
        # sample each layer of latent variables (reparameterization trick)
        for mean, var in zip(means, variances):
            z_ = mean + K.exp(K.log(var) / 2) * epsilon
            # concatenate the per-layer samples into a single z
            z = z_ if z is None else tf.concat([z, z_], -1)
            # accumulate log q(z | x); log_normal2 and log_stdnormal are
            # Gaussian log-density helpers defined elsewhere in my code
            log_q = K.sum(log_normal2(z_, mean, K.log(var)), -1)
            loss = log_q if loss is None else loss + log_q
        print("z", z)
        # subtract log p(z) under the standard-normal prior
        loss -= K.sum(log_stdnormal(z), -1)
        z = None
    kl_loss = loss / n_sample
    print('kl loss:', kl_loss)
    # weighted sum: beta anneals the KL term
    return beta * kl_loss + xent_loss

# callback to change the value of beta at the end of each epoch
def warmup(epoch):
    value = (epoch / 10.0) * (epoch <= 10.0) + 1.0 * (epoch > 10.0)
    print("beta:", value)
    beta = K.variable(value=value)

wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))

# train the model
vae.fit(
    padded_X_train[:last_train, :, :],
    padded_X_train[:last_train, :, :],
    batch_size=batch_size,
    nb_epoch=nb_epoch,
    verbose=0,
    callbacks=[tb, wu_cb],
    validation_data=(padded_X_test[:last_test, :, :],
                     padded_X_test[:last_test, :, :]),
)
Answer (小智):
This won't work. I tested it to figure out why, and the key thing to remember is that Keras builds a static computation graph once, at the start of training.
Consequently, the vae_loss function is called only once, to create the loss tensor, so the reference to the beta variable captured inside it never changes. Your warmup function, however, rebinds the name beta to a new K.variable. The beta used to compute the loss is therefore a different variable from the one you update, and its value stays at 0.
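You can see the problem in isolation with a small sketch (assuming the TensorFlow backend; the variable names here are illustrative):

    from keras import backend as K

    beta = K.variable(0.0)
    loss = beta * 2.0        # the graph node keeps a reference to this variable

    beta = K.variable(1.0)   # rebinds the Python name only; the graph is unchanged
    print(K.eval(loss))      # -> 0.0, the loss still reads the old variable

    gamma = K.variable(0.0)
    loss2 = gamma * 2.0
    K.set_value(gamma, 1.0)  # updates the same variable in place
    print(K.eval(loss2))     # -> 2.0, the change propagates to the graph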
This is easy to fix. Just change this line in your warmup callback:
beta = K.variable(value=value)
to:
K.set_value(beta, value)
That way, the actual value of beta is updated in place rather than a new variable being created, and the loss is recomputed correctly.
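Putting it together, a minimal sketch of the corrected callback (reusing the beta variable and the 10-epoch ramp from your question):

    from keras import backend as K
    from keras.callbacks import LambdaCallback

    beta = K.variable(value=0.0)  # the same variable referenced inside vae_loss

    def warmup(epoch):
        value = min(epoch / 10.0, 1.0)
        print("beta:", value)
        K.set_value(beta, value)  # update in place; the loss graph sees the new value

    wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))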