Writing a custom Python-based gradient function for an op? (without a C++ implementation)

njk*_*njk 5 python gradient-descent tensorflow

I'm trying to write a custom gradient function for 'my_op', which for the sake of the example consists only of a call to tf.identity() (ideally, it could be any graph).

import tensorflow as tf
from tensorflow.python.framework import function


def my_op_grad(x):
    return [tf.sigmoid(x)]


@function.Defun(a=tf.float32, python_grad_func=my_op_grad)
def my_op(a):
    return tf.identity(a)


a = tf.Variable(tf.constant([5., 4., 3., 2., 1.], dtype=tf.float32))

sess = tf.Session()
sess.run(tf.initialize_all_variables())

grad = tf.gradients(my_op(a), [a])[0]

result = sess.run(grad)

print(result)

sess.close()

Unfortunately I get the following error:

Traceback (most recent call last):
  File "custom_op.py", line 19, in <module>
    grad = tf.gradients(my_op(a), [a])[0]
  File "/Users/njk/tfm/lib/python3.5/site-packages/tensorflow/python/framework/function.py", line 528, in __call__
    return call_function(self._definition, *args, **kwargs)
  File "/Users/njk/tfm/lib/python3.5/site-packages/tensorflow/python/framework/function.py", line 267, in call_function
    compute_shapes=False)
  File "/Users/njk/tfm/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2285, in create_op
    raise TypeError("Input #%d is not a tensor: %s" % (idx, a))
TypeError: Input #0 is not a tensor: <tensorflow.python.ops.variables.Variable object at 0x1080d2710>

I know it is possible to create custom C++ ops, but in my case the custom gradient can easily be expressed in Python with standard TensorFlow ops, so I would like to avoid writing unnecessary C++ code.

Also, I'm using the upstream version of TensorFlow from GitHub.

Yao*_*ang 3

Note that python_grad_func needs the same interface as ops.RegisterGradient (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/function.py#L349).
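In other words, the gradient function receives the forward op and the incoming gradient, not the input tensor directly. For comparison, this is roughly what a gradient registered through ops.RegisterGradient looks like (a minimal sketch for illustration only; the op type name "MyIdentity" is made up):

import tensorflow as tf
from tensorflow.python.framework import ops

@ops.RegisterGradient("MyIdentity")
def _my_identity_grad(op, grad):
    # op.inputs[0] is the forward input, grad is the gradient flowing in
    return tf.sigmoid(op.inputs[0])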

Here is the modified code example:

import tensorflow as tf
from tensorflow.python.framework import function


def my_op_grad(op, grad):  # note the (op, grad) signature, instead of my_op_grad(x)
    return tf.sigmoid(op.inputs[0])


@function.Defun(a=tf.float32, python_grad_func=my_op_grad)
def my_op(a):
    return tf.identity(a)


def main(unused_argv):

  a = tf.Variable(tf.constant([-5., 4., -3., 2., 1.], dtype=tf.float32))
  sess = tf.Session()
  sess.run(tf.initialize_all_variables())

  a = tf.identity(a)  # workaround for bug github.com/tensorflow/tensorflow/issues/3710

  grad = tf.gradients(my_op(a), [a])[0]
  result = sess.run(grad)

  print(result)

  sess.close()
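
The main(unused_argv) signature suggests the script is launched through the usual tf.app.run() entry point (an assumption on my part; the original answer does not show it):

if __name__ == '__main__':
    tf.app.run()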

Output:

[ 0.00669286  0.98201376  0.04742587  0.88079709  0.7310586 ]
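
These values are simply the element-wise sigmoid of the inputs [-5., 4., -3., 2., 1.], which you can verify directly (a quick sanity check, not part of the original answer):

import tensorflow as tf

sess = tf.Session()
# sigmoid of the original inputs reproduces the gradient printed above
print(sess.run(tf.sigmoid([-5., 4., -3., 2., 1.])))
sess.close()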