Combining Keras functional models

met*_*eto 4 python deep-learning keras

I am trying to follow this Keras blog post on fine-tuning image classifiers. I want to use the Inception V3 model from the fchollet repo.

Inception is a Model (functional API), so I can't just do model.add(top_model), which is reserved for Sequential models.

How do I combine two functional Models? Say I have

inputs = Input(shape=input_shape)
x = Flatten()(inputs)
predictions = Dense(4, name='final1')(x)

model1 = Model(input=inputs, output=predictions)

for the first model, and

inputs_2 = Input(shape=(4,))
y = Dense(5)(inputs_2)
y = Dense(2, name='final2')(y)
predictions_2 = Dense(29)(y)
model2 = Model(input=inputs_2, output=predictions_2)

for the second. I would now like to make this end-to-end, going from inputs to predictions_2 and linking predictions to inputs_2.

I tried using model1.get_layer('final1').output, but I got a type mismatch and could not make it work.

小智 7

I haven't tried this, but according to the documentation, functional models are callable, so you can do:

y = model2(model1(x))

where x is the data going into the inputs and y is the resulting predictions_2.
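
For example, to wrap that call into a single end-to-end model rather than applying it to data directly, a minimal sketch (using the model1 and model2 from the question, Keras 2 keyword arguments; the variable names are mine) could be:

# Hypothetical sketch: calling a Model on a tensor reuses its layers and weights
new_input = Input(shape=input_shape)        # same shape as model1's input
new_output = model2(model1(new_input))      # model1's 4-unit output feeds model2
full_model = Model(inputs=new_input, outputs=new_output)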


fil*_*chp 5

I ran into this problem while fine-tuning VGG16. This is what worked for me, and I imagine a similar approach can be taken for Inception V3. Tested on Keras 2.0.5 with the TensorFlow 1.2 backend.

from keras import applications, optimizers
from keras.layers import Dense, Flatten, Input
from keras.models import Model

# NOTE: define the following variables
#       top_model_weights_path
#       num_classes
#       dense_layer_1 = 4096
#       dense_layer_2 = 4096

vgg16 = applications.VGG16(
    include_top=False,
    weights='imagenet',
    input_shape=(224, 224, 3))

# Inspect the model
vgg16.summary()

# This shape has to match the last layer in VGG16 (without top)
dense_input  = Input(shape=(7, 7, 512))
dense_output = Flatten(name='flatten')(dense_input)
dense_output = Dense(dense_layer_1, activation='relu', name='fc1')(dense_output)
dense_output = Dense(dense_layer_2, activation='relu', name='fc2')(dense_output)
dense_output = Dense(num_classes, activation='softmax', name='predictions')(dense_output)

top_model = Model(inputs=dense_input, outputs=dense_output, name='top_model')

# from: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

block5_pool = vgg16.get_layer('block5_pool').output

# Now combine the two models
full_output = top_model(block5_pool)
full_model  = Model(inputs=vgg16.input, outputs=full_output)

# set the first 15 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
# WARNING: this may not be applicable for Inception V3
for layer in full_model.layers[:15]:
    layer.trainable = False

# Verify things look as expected
full_model.summary()

# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
full_model.compile(
    loss='binary_crossentropy',
    optimizer=optimizers.SGD(lr=5e-5, momentum=0.9),
    metrics=['accuracy'])

# Train the model...


bn2*_*nkm 4

I think there are two options depending on what you need:

(a) Both predictions_1 and predictions_2 matter to you. In that case you can train a network with two outputs. Here is an example derived from your post:

input_shape = [3, 20]
inputs = Input(shape=input_shape)
x = Flatten()(inputs)
predictions_1 = Dense(4, name='predictions_1')(x)

# here the predictions_1 just corresponds to your next layer's input
y = Dense(5)(predictions_1)
y = Dense(2)(y)
predictions_2 = Dense(29, name='predictions_2')(y)

# you specify here that you have 2 outputs
model = Model(input=inputs, output=[predictions_1, predictions_2])

For .fit and .predict you can find plenty of details in the "Multi-input and multi-output models" section of https://keras.io/getting-started/functional-api-guide/.
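
A rough sketch of what compiling and fitting the two-output model above could look like (the loss choices and the x_train, y1_train, y2_train names are placeholders of mine, not from the post; Keras 2 keyword arguments):

# Hypothetical sketch: address each output by its layer name
model.compile(optimizer='sgd',
              loss={'predictions_1': 'mse', 'predictions_2': 'mse'})

# y1_train must match predictions_1 (4 units), y2_train predictions_2 (29 units)
model.fit(x_train,
          {'predictions_1': y1_train, 'predictions_2': y2_train},
          batch_size=32, epochs=10)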

(b) You are only interested in predictions_2. In that case you can do the following:

input_shape = [3, 20]
inputs = Input(shape=input_shape)
x = Flatten()(inputs)
predictions_1 = Dense(4, name='predictions_1')(x)

# here the predictions_1 just corresponds to your next layer's input
y = Dense(5)(predictions_1)
y = Dense(2)(y)
predictions_2 = Dense(29, name='predictions_2')(y)

# you specify here that your only output is predictions_2
model = Model(input=inputs, output=predictions_2)

Now, about inception_v3. You can define the architecture yourself and modify its deep layers as you need (give those layers specific names to avoid Keras's automatic naming).

After that, compile the model and load the weights (as described at https://keras.io/models/about-keras-models/, see the function load_weights(..., by_name=True)):

# you can load weights for only the part that corresponds to the true
# inception_v3 architecture. The other part will be initialized
# randomly
model.load_weights("inception_v3.hdf5", by_name=True)

This should solve your problem. By the way, you can find extra information here: https://www.gradientzoo.com. The docs explain several save/load/fine-tune routines ;)

Update: if you don't want to redefine the model from scratch, you can do the following:

input_shape = [3, 20]

# define model1 and model2 as you want
inputs1 = Input(shape=input_shape)
x = Flatten()(inputs1)
predictions_1 = Dense(4, name='predictions_1')(x)
model1 = Model(input=inputs1, output=predictions_1)

inputs2 = Input(shape=(4,))
y = Dense(5)(inputs2)
y = Dense(2)(y)
predictions_2 = Dense(29, name='predictions_2')(y)
model2 = Model(input=inputs2, output=predictions_2)

# then define functions returning the image of an input through model1 or model2
def give_model1():
    def f(x):
        return model1(x)
    return f

def give_model2():
    def g(x):
        return model2(x)
    return g

# now you can create a global model as follows:
inputs = Input(shape=input_shape)
x = model1(inputs)
predictions = model2(x)
model = Model(input=inputs, output=predictions)
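
One optional addition for the fine-tuning use case from the question (my assumption, not part of the answer above): freeze the pretrained part before compiling, so that only model2's weights get updated.

# Hypothetical sketch: keep model1's weights fixed during training
for layer in model1.layers:
    layer.trainable = False

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])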