How to apply Guided BackProp in Tensorflow 2.0?

Tai*_*ian 5 python backpropagation keras tensorflow tensorflow2.0

I am trying to implement Guided BackProp in Tensorflow 2.0 to display a saliency map. I start by computing the loss between y_pred and y_true of an image, and then find the gradients of all the layers with respect to this loss.

with tf.GradientTape() as tape:
    logits = model(tf.cast(image_batch_val, dtype=tf.float32))
    print('`logits` has type {0}'.format(type(logits)))
    # labels must be floats to match the logits' dtype
    labels = tf.cast(tf.one_hot(1 - label_batch_val, depth=2), dtype=tf.float32)
    xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    reduced = tf.reduce_mean(xentropy)

# call gradient() outside the tape's context
grads = tape.gradient(reduced, model.trainable_variables)

However, I don't know what to do with the gradients in order to obtain the Guided BackPropagation.

Here is my model. I created it using Keras layers:

import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, MaxPool2D, Flatten, Dense,
                                     Dropout)

image_input = Input((input_size, input_size, 3))

conv_0 = Conv2D(32, (3, 3), padding='SAME')(image_input)
conv_0_bn = BatchNormalization()(conv_0)
conv_0_act = Activation('relu')(conv_0_bn)
conv_0_pool = MaxPool2D((2, 2))(conv_0_act)

conv_1 = Conv2D(64, (3, 3), padding='SAME')(conv_0_pool)
conv_1_bn = BatchNormalization()(conv_1)
conv_1_act = Activation('relu')(conv_1_bn)
conv_1_pool = MaxPool2D((2, 2))(conv_1_act)

conv_2 = Conv2D(64, (3, 3), padding='SAME')(conv_1_pool)
conv_2_bn = BatchNormalization()(conv_2)
conv_2_act = Activation('relu')(conv_2_bn)
conv_2_pool = MaxPool2D((2, 2))(conv_2_act)

conv_3 = Conv2D(128, (3, 3), padding='SAME')(conv_2_pool)
conv_3_bn = BatchNormalization()(conv_3)
conv_3_act = Activation('relu')(conv_3_bn)

conv_4 = Conv2D(128, (3, 3), padding='SAME')(conv_3_act)
conv_4_bn = BatchNormalization()(conv_4)
conv_4_act = Activation('relu')(conv_4_bn)
conv_4_pool = MaxPool2D((2, 2))(conv_4_act)

conv_5 = Conv2D(128, (3, 3), padding='SAME')(conv_4_pool)
conv_5_bn = BatchNormalization()(conv_5)
conv_5_act = Activation('relu')(conv_5_bn)

conv_6 = Conv2D(128, (3, 3), padding='SAME')(conv_5_act)
conv_6_bn = BatchNormalization()(conv_6)
conv_6_act = Activation('relu')(conv_6_bn)

flat = Flatten()(conv_6_act)

fc_0 = Dense(64, activation='relu')(flat)
fc_0_bn = BatchNormalization()(fc_0)

fc_1 = Dense(32, activation='relu')(fc_0_bn)
fc_1_drop = Dropout(0.5)(fc_1)

output = Dense(2, activation='softmax')(fc_1_drop)

model = models.Model(inputs=image_input, outputs=output)

I'm happy to provide more code if needed.

Hoa*_*yen 6

I tried @tf.RegisterGradient and gradient_override_map as suggested by @Simdi, but it did not work for TF2. I am not sure whether I made a mistake at some step, but it seems Relu was never replaced by GuidedRelu. I think this is because "there is no built-in mechanism in TensorFlow 2.0 to override all of the gradients for a built-in operator within a scope", as mrry answered in this discussion: https://stackoverflow.com/a/55799378/11524628

Instead, I used @tf.custom_gradient as mrry suggested, and it worked perfectly for me:

import tensorflow as tf
from tensorflow.keras.models import Model

@tf.custom_gradient
def guidedRelu(x):
  # forward pass is a normal ReLU; the custom gradient only lets the signal
  # through where both the upstream gradient (dy) and the input (x) are positive
  def grad(dy):
    return tf.cast(dy > 0, "float32") * tf.cast(x > 0, "float32") * dy
  return tf.nn.relu(x), grad

model = tf.keras.applications.resnet50.ResNet50(weights='imagenet', include_top=True)
gb_model = Model(
    inputs=model.inputs,
    outputs=model.get_layer("conv5_block3_out").output
)
# replace every ReLU activation in the model with the guided version
layer_dict = [layer for layer in gb_model.layers[1:] if hasattr(layer, 'activation')]
for layer in layer_dict:
  if layer.activation == tf.keras.activations.relu:
    layer.activation = guidedRelu

with tf.GradientTape() as tape:
  inputs = tf.cast(preprocessed_input, tf.float32)
  tape.watch(inputs)  # watch the input tensor, not just trainable variables
  outputs = gb_model(inputs)

grads = tape.gradient(outputs, inputs)[0]
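As a follow-up (not part of the original answer), here is a minimal sketch of how grads could be turned into a displayable saliency map; the min-max normalization and the matplotlib calls are my own assumptions:

import matplotlib.pyplot as plt

# min-max normalize the guided gradients to [0, 1] so they render as an RGB image
sal = grads.numpy()
sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

plt.imshow(sal)
plt.axis('off')
plt.show()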

You can see the implementation of both approaches in this Google Colab notebook: https://colab.research.google.com/drive/17tAC7xx2IJxjK700bdaLatTVeDA02GJn?usp=sharing

  • @tf.custom_gradient worked.
  • @tf.RegisterGradient did not work, because Relu was never replaced by the registered GuidedRelu.


Sim*_*mdi 5

First of all, you have to change the computation of the gradient through a ReLU, i.e. the guided backpropagation formula:

    R_i^l = (f_i^l > 0) · (R_i^{l+1} > 0) · R_i^{l+1}

That is, the gradient only flows where both the forward activation f_i^l and the incoming backward signal R_i^{l+1} are positive.

Here is a graphic example from the paper (figure not reproduced here).

This formula can be implemented with the following code:

@tf.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    gate_f = tf.cast(op.outputs[0] > 0, "float32")  # f^l > 0
    gate_R = tf.cast(grad > 0, "float32")           # R^{l+1} > 0
    return gate_f * gate_R * grad

Now you have to override the original TF implementation of ReLU with:

with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
    # put the code for computing the gradient here
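Note that gradient_override_map only affects ops created in graph mode. As a hedged, TF1-style sketch (not from the original answer), the whole procedure could look like the following; build_model, class_index, and image_batch are hypothetical placeholders:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the override only applies in graph mode

g = tf.compat.v1.get_default_graph()
with g.gradient_override_map({'Relu': 'GuidedRelu'}):
    # the model's ops must be created inside this scope for the override to apply
    images = tf.compat.v1.placeholder(tf.float32, [None, 224, 224, 3])
    logits = build_model(images)         # hypothetical model-building function
    target = logits[:, class_index]      # score of the class to visualize
    saliency = tf.gradients(target, images)[0]

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sal_val = sess.run(saliency, feed_dict={images: image_batch})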

After computing the gradient, you can visualize the result. One last remark, though: you compute the visualization for a single class. This means that you take the activation of a chosen neuron as the input to Guided BackProp and set the activations of all other neurons to zero; a short sketch of this masking step follows.
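For illustration, here is a minimal TF2-style sketch of that masking step (my own sketch, not from the original answer). It assumes gb_model is a guided-ReLU model whose output layer produces the class scores, and that class_index and preprocessed_input are defined:

with tf.GradientTape() as tape:
    inputs = tf.cast(preprocessed_input, tf.float32)
    tape.watch(inputs)
    logits = gb_model(inputs)
    # keep only the chosen neuron's activation; all other activations are zeroed
    mask = tf.one_hot([class_index], depth=logits.shape[-1])
    target = tf.reduce_sum(logits * mask)

saliency = tape.gradient(target, inputs)[0]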