Keras, how do I get the output of each layer?

Goi*_*Way 115 python deep-learning keras tensorflow

I have trained a binary classification model with a CNN, and here is my code:

model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (16, 16, 32)
model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (8, 8, 64) = (2048)
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))  # define a binary classification problem
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          nb_epoch=nb_epoch,
          verbose=1,
          validation_data=(x_test, y_test))

Here, I want to get the output of each layer just like in TensorFlow. How can I do that?

ind*_*you 147

You can easily get the output of any layer by using: model.layers[index].output

To do this for all layers, use the following:

import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)

Note: to simulate Dropout, use 1. for learning_phase in layer_outs; otherwise use 0.

Edit: (based on comments)

K.function creates a Theano/TensorFlow tensor function which is later used to get the output from the symbolic graph given the input.

Now K.learning_phase() is required as an input, since many Keras layers like Dropout/BatchNormalization depend on it to change their behaviour between training and test time.
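
For example, a quick sketch that uses the functors and test input defined above to compare the two modes:

# learning_phase = 1. runs the graph in training mode (Dropout active),
# learning_phase = 0. runs it in test mode (Dropout disabled)
train_mode_out = functors[-1]([test, 1.])[0]
test_mode_out  = functors[-1]([test, 0.])[0]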

Hence, if you remove the dropout layer from your code you can simply use:

import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)

Edit 2: more optimized

I just realized that the previous answer is not optimized: for each function evaluation the data is transferred CPU -> GPU memory, and the tensor computations for the lower layers are repeated over and over.

Instead, this is a much better way, since you don't need multiple functions but a single function that gives you the list of all outputs:

import numpy as np
from keras import backend as K

inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers]          # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs )   # evaluation function

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 1.])
print(layer_outs)

  • Sir, your answer is great. What does `K.function([inp] + [K.learning_phase()], [out])` mean in your code? (2 upvotes)
  • @StavBodik The model builds the predict function using `K.function` [here](https://github.com/keras-team/keras/blob/master/keras/engine/training.py#L1007-L1011) and uses it in a loop over batches [here](https://github.com/keras-team/keras/blob/master/keras/engine/training.py#L1800-L1803). Predict loops over the batch size (which defaults to 32 if not set), but that is to mitigate GPU memory limits, so I'm not sure why you observe that `model.predict` is faster. (2 upvotes)
  • I get this: InvalidArgumentError: S_input_39:0 is both fed and fetched. ... Anyone with an idea? (2 upvotes)
  • Error: ValueError: Input tensors to a Functional must come from `tf.keras.Input`. Received: 0 (missing previous layer metadata). Simple model: inputs = tf.keras.layers.Input(shape=input_shape); x = tf.keras.layers.Dense(256, activation=None)(inputs); model = tf.keras.Model(inputs=inputs, outputs=x). TF version 2.5.0. Only the first method works. (2 upvotes)

blu*_*sky 102

From https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer

One simple way is to create a new Model that outputs the layers you are interested in:

from keras.models import Model

model = ...  # include here your original model

layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(data)

Alternatively, you can build a Keras function that will return the output of a certain layer given a certain input, for example:

from keras import backend as K

# with a Sequential model
get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([x])[0]
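
Since the question asks for the output of every layer, the same idea also works as a single multi-output model, for example (a sketch; depending on your Keras version you may need to skip the input layer in the outputs list):

from keras.models import Model

all_layers_model = Model(inputs=model.input,
                         outputs=[layer.output for layer in model.layers])
all_outputs = all_layers_model.predict(data)   # list of NumPy arrays, one per layer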


Phi*_*emy 12

Based on all the good answers in this thread, I wrote a library to fetch the output of each layer. It abstracts away all the complexity and is designed to be as easy to use as possible:

https://github.com/philipperemy/keract

It handles almost all the edge cases.
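
A minimal usage sketch (assuming the library's get_activations helper; check the repo for the current signature):

from keract import get_activations

# returns a dict mapping layer names to NumPy arrays of activations
activations = get_activations(model, x_test[:1])
for layer_name, layer_activation in activations.items():
    print(layer_name, layer_activation.shape)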

Hope it helps!


小智 9

The previous solutions did not work for me, so I handled it as shown below.

from keras.models import Model

layer_outputs = []
for i in range(1, len(model.layers)):
    tmp_model = Model(model.layers[0].input, model.layers[i].output)
    tmp_output = tmp_model.predict(img)[0]
    layer_outputs.append(tmp_output)
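
You can then inspect the collected outputs, for example:

# print each layer's name and output shape
# (`img` is the same single preprocessed input batch used above)
for i, out in enumerate(layer_outputs, start=1):
    print(model.layers[i].name, out.shape)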


use*_*501 7

This answer is based on: https://stackoverflow.com/a/59557567/2585501

To print the output of a single layer:

from tensorflow.keras import backend as K
layerIndex = 1
func = K.function([model.get_layer(index=0).input], model.get_layer(index=layerIndex).output)
layerOutput = func([input_data])  # input_data is a numpy array
print(layerOutput)

To print the output of every layer:

from tensorflow.keras import backend as K
for layerIndex, layer in enumerate(model.layers):
    func = K.function([model.get_layer(index=0).input], layer.output)
    layerOutput = func([input_data])  # input_data is a numpy array
    print(layerOutput)


Mil*_*uss 6

I wrote this function for myself (in Jupyter); it is inspired by indraforyou's answer. It automatically plots all the layer outputs. Your images must have a shape of (x, y, 1), where the 1 stands for a single channel. You just call plot_layer_outputs(...) to plot them.

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from keras import backend as K

def get_layer_outputs():
    test_image = YOUR IMAGE GOES HERE!!!
    outputs    = [layer.output for layer in model.layers]          # all layer outputs
    comp_graph = [K.function([model.input]+ [K.learning_phase()], [output]) for output in outputs]  # evaluation functions

    # Testing
    layer_outputs_list = [op([test_image, 1.]) for op in comp_graph]
    layer_outputs = []

    for layer_output in layer_outputs_list:
        print(layer_output[0][0].shape, end='\n-------------------\n')
        layer_outputs.append(layer_output[0][0])

    return layer_outputs

def plot_layer_outputs(layer_number):    
    layer_outputs = get_layer_outputs()

    x_max = layer_outputs[layer_number].shape[0]
    y_max = layer_outputs[layer_number].shape[1]
    n     = layer_outputs[layer_number].shape[2]

    L = []
    for i in range(n):
        L.append(np.zeros((x_max, y_max)))

    for i in range(n):
        for x in range(x_max):
            for y in range(y_max):
                L[i][x][y] = layer_outputs[layer_number][x][y][i]


    for img in L:
        plt.figure()
        plt.imshow(img, interpolation='nearest')
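
For example, to plot the feature maps of the layer at index 2 (assuming the model and test image above are already set up):

plot_layer_outputs(2)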


can*_*nin 6

From: https://github.com/philipperemy/keras-visualize-activations/blob/master/read_activations.py

import keras.backend as K

def get_activations(model, model_inputs, print_shape_only=False, layer_name=None):
    print('----- activations -----')
    activations = []
    inp = model.input

    model_multi_inputs_cond = True
    if not isinstance(inp, list):
        # only one input! let's wrap it in a list.
        inp = [inp]
        model_multi_inputs_cond = False

    outputs = [layer.output for layer in model.layers if
               layer.name == layer_name or layer_name is None]  # all layer outputs

    funcs = [K.function(inp + [K.learning_phase()], [out]) for out in outputs]  # evaluation functions

    if model_multi_inputs_cond:
        list_inputs = []
        list_inputs.extend(model_inputs)
        list_inputs.append(0.)
    else:
        list_inputs = [model_inputs, 0.]

    # Learning phase. 0 = Test mode (no dropout or batch normalization)
    # layer_outputs = [func([model_inputs, 0.])[0] for func in funcs]
    layer_outputs = [func(list_inputs)[0] for func in funcs]
    for layer_activations in layer_outputs:
        activations.append(layer_activations)
        if print_shape_only:
            print(layer_activations.shape)
        else:
            print(layer_activations)
    return activations
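
For example, to print only the shape of every layer's activations for a single test sample (using the model and x_test from the question):

activations = get_activations(model, x_test[:1], print_shape_only=True)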


dev*_*ail 5

The following looks simple enough to me:

model.layers[idx].output

The above is a tensor object, so you can modify it using operations that can be applied to tensor objects.

For example, to get the shape: model.layers[idx].output.get_shape()

idx is the index of the layer, which you can find from model.summary()
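
To turn that tensor into actual values for some input, one option (a small sketch, assuming a NumPy array data shaped like the model input) is to wrap it in a new Model:

from keras.models import Model

idx = 3  # pick the layer index from model.summary()
partial_model = Model(inputs=model.input, outputs=model.layers[idx].output)
layer_values = partial_model.predict(data)   # activations of layer idx for `data`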

  • It returns a tensor object, not a data frame. tf objects are weird to work with. (4 upvotes)
  • The poster said they want to get the output of every layer. Given some data, how can you get the layer output from `model.layers[idx].output`? (3 upvotes)

Kam*_*Kam 5

Wanted to add this as a comment (but don't have high enough rep) to @indraforyou's answer, to correct the issue mentioned in @mathtick's comment. To avoid the InvalidArgumentError: input_X:Y is both fed and fetched. exception, simply replace the line `outputs = [layer.output for layer in model.layers]` with `outputs = [layer.output for layer in model.layers][1:]`, i.e.

adapting indraforyou's minimal working example:

import numpy as np
from keras import backend as K
inp = model.input                                           # input placeholder
outputs = [layer.output for layer in model.layers][1:]        # all layer outputs except first (input) layer
functor = K.function([inp, K.learning_phase()], outputs )   # evaluation function

# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 1.])
print(layer_outs)

p.s. I did try things like outputs = [layer.output for layer in model.layers[1:]], but that did not work.