Is it possible to train a generative model (i.e. a variational autoencoder with a custom loss computation) using TensorFlow's TPUEstimator?

A simplified version of my VAE:
def model_fn(features, labels, mode, params):
    # Encoder layers
    x = layers.Input()
    h = Conv1D()(x)

    # Bottleneck layer
    z_mean = Dense()(h)
    z_log_var = Dense()(h)

    def sampling(args):
        z_mean_, z_log_var_ = args
        epsilon = tf.random_normal()
        return z_mean_ + tf.exp(z_log_var_ / 2) * epsilon

    z = Lambda(sampling, name='lambda')([z_mean, z_log_var])

    # Decoder layers
    h = Dense()(z)
    x_decoded = TimeDistributed(Dense(activation='softmax'))(h)

    # VAE
    vae = tf.keras.models.Model(x, x_decoded)

    # VAE loss
    def vae_loss(x, x_decoded_mean):
        x = flatten(x)
        x_decoded_mean = flatten(x_decoded_mean)
        xent_loss = binary_crossentropy(x, x_decoded_mean)
        kl_loss = mean(1 + z_log_var - square(z_mean) - exp(z_log_var))
        return xent_loss + kl_loss

    optimizer = tf.train.AdamOptimizer()
    optimizer = tpu_optimizer.CrossShardOptimizer(optimizer)
    train_op = optimizer.minimize(vae_loss, global_step=tf.train.get_global_step())

    return tpu_estimator.TPUEstimatorSpec(mode=mode, loss=vae_loss, train_op=train_op)
The TPU configuration initializes and my input_fn loads the dataset correctly, but I get the following error, triggered by the custom loss function:
vae_loss() error:

File "TPUest.py", line 107, in model_fn
    train_op = optimizer.minimize(vae_loss, global_step=tf.train.get_global_step())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 414, in minimize
    grad_loss=grad_loss)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_optimizer.py", line 84, in compute_gradients
    loss *= scale
TypeError: unsupported operand type(s) for *=: 'function' and 'float'
Answer:
The call to Optimizer.minimize expects a loss Tensor, but you are passing it a Python function (one that, given the right inputs, would compute the value you want). See https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer#minimize

What you need to do is explicitly construct the vae_loss Tensor in the code above. During execution, data will flow from the input layer down through the graph to that loss computation.
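As a rough illustration, here is a minimal sketch of that change, assuming the TF 1.x contrib TPU APIs used in the question and that z_mean, z_log_var and x_decoded are produced by the encoder/decoder layers above; the exact reduction (mean vs. sum) and KL weighting are choices you will want to adapt:

    # Sketch only: build the loss as a scalar Tensor, then pass that Tensor to minimize().
    # ... encoder / bottleneck / decoder as in the question, producing
    #     z_mean, z_log_var and x_decoded from the input `features` ...

    # Reconstruction term as a Tensor (not a Python function).
    xent_loss = tf.reduce_mean(
        tf.keras.backend.binary_crossentropy(
            tf.layers.flatten(features), tf.layers.flatten(x_decoded)))

    # KL term as a Tensor (standard -0.5 factor shown here).
    kl_loss = -0.5 * tf.reduce_mean(
        1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))

    loss = xent_loss + kl_loss  # scalar Tensor

    optimizer = tpu_optimizer.CrossShardOptimizer(tf.train.AdamOptimizer())
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())

    return tpu_estimator.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

With a Tensor, CrossShardOptimizer can scale the loss across TPU shards (the `loss *= scale` line in the traceback), which is exactly what fails when it receives a function instead.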