tensorflow 2.0: an op outside of the function building code is being passed a Graph tensor

mat*_*ick 7 python tensorflow tensorflow2.0

I'm getting this error:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2

when using an NVP layer like the one below:

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
class NVPLayer(tf.keras.models.Model):

    def __init__(self, *, output_dim, num_masked, **kwargs):
        super().__init__(**kwargs)
        self.output_dim = output_dim
        self.num_masked = num_masked
        self.shift_and_log_scale_fn = tfb.real_nvp_default_template(
            hidden_layers=[2], # HERE HERE ADJUST THIS
            activation=None, # linear
            )
        self.loss = None

    def get_nvp(self):
        nvp = tfd.TransformedDistribution(
            distribution=tfd.MultivariateNormalDiag(loc=[0.] * self.output_dim),
            bijector=tfb.RealNVP(
                num_masked=self.num_masked,
                shift_and_log_scale_fn=self.shift_and_log_scale_fn)
            )
        return nvp

    def call(self, *inputs):
        nvp = self.get_nvp()
        self.loss = tf.reduce_mean(nvp.log_prob(*inputs)) # how else to do this?
        # return nvp.bijector.forward(*inputs)
        return nvp.bijector.inverse(*inputs)

I'm not calling tf.init_scope anywhere. Training a simpler version of a layer like this seems to work.

I'll try to get a more detailed trace, but I suspect this has something to do with non-eager-mode stuff.

Update: so this is definitely coming from self.loss being used inside some tf.GradientTape context. What is the right way to do this?
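
For reference, here is a minimal sketch of the kind of training step that triggers this for me; the layer sizes, batch shape, and optimizer are illustrative, not my real setup:

import tensorflow as tf

layer = NVPLayer(output_dim=2, num_masked=1)   # illustrative sizes
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        layer(batch)               # call() rebinds self.loss to a graph tensor
        nll = -layer.loss          # maximize the mean log-prob
    grads = tape.gradient(nll, layer.trainable_variables)
    optimizer.apply_gradients(zip(grads, layer.trainable_variables))
    return nll

train_step(tf.random.normal([32, 2]))
# Anything that touches layer.loss outside the traced function afterwards
# (e.g. layer.loss.numpy() for logging) hands a Graph tensor to eager code
# and raises the TypeError above.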

cur*_*s95 1

I just ran into the same problem a few minutes ago. In my case I wanted to modify state in my loss-function class; here is how I solved it for your example.

By the way, @simon gave me the inspiration for how to evaluate this properly, so upvote him!

It seems you should create a tf.Variable for any attribute you want to change during training. Note that you don't have any problem with the other attributes such as self.output_dim, self.num_masked, and so on.

Try this:

import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
class NVPLayer(tf.keras.models.Model):

    def __init__(self, *, output_dim, num_masked, **kwargs):
        super().__init__(**kwargs)
        self.output_dim = output_dim
        self.num_masked = num_masked
        self.shift_and_log_scale_fn = tfb.real_nvp_default_template(
            hidden_layers=[2], # HERE HERE ADJUST THIS
            activation=None, # linear
            )

        ### CHANGE HERE: a variable that can be assigned to inside a tf.function
        self.loss = tf.Variable(0.0)

    def get_nvp(self):
        nvp = tfd.TransformedDistribution(
            distribution=tfd.MultivariateNormalDiag(loc=[0.] * self.output_dim),
            bijector=tfb.RealNVP(
                num_masked=self.num_masked,
                shift_and_log_scale_fn=self.shift_and_log_scale_fn)
            )
        return nvp

    def call(self, *inputs):
        nvp = self.get_nvp()

        ### CHANGE HERE: assign in place instead of rebinding the attribute
        self.loss.assign(tf.reduce_mean(nvp.log_prob(*inputs)))
        # return nvp.bijector.forward(*inputs)
        return nvp.bijector.inverse(*inputs)
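
With the variable in place, the assignment happens inside the traced function without leaking a graph tensor, and you can read the value eagerly afterwards. A quick usage sketch (sizes and batch shape are again illustrative):

import tensorflow as tf

layer = NVPLayer(output_dim=2, num_masked=1)   # the tf.Variable version above

@tf.function
def step(batch):
    return layer(batch)    # call() now assigns the mean log-prob in place

step(tf.random.normal([32, 2]))
print(float(layer.loss))   # safe: reads the variable's current value eagerly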

Also take a look at this answer on the GitHub issue, it's a similar problem!