Basically, I am training an LSTM model with Keras, but when I save it, its size is 100 MB. The purpose of this model is to be deployed to a web server as an API, and my web server cannot run it because the model is too large. After analyzing all the parameters in my model, I found that it has 20,000,000 parameters, but 15,000,000 of them are untrained because they are word embeddings. Is there any way to minimize the size of the model by removing those 15,000,000 parameters while still preserving its performance? Here is my model code:
from keras.layers import Input, LSTM, Dropout, Dense, Activation
from keras.models import Model

def LSTModel(input_shape, word_to_vec_map, word_to_index):
    sentence_indices = Input(input_shape, dtype="int32")

    # The frozen pretrained word embeddings account for most of the model size
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
    embeddings = embedding_layer(sentence_indices)

    X = LSTM(256, return_sequences=True)(embeddings)
    X = Dropout(0.5)(X)
    X = LSTM(256, return_sequences=False)(X)
    X = Dropout(0.5)(X)
    X = Dense(NUM_OF_LABELS)(X)
    X = Activation("softmax")(X)

    model = Model(inputs=sentence_indices, outputs=X)
    return model
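For reference, the trainable/untrained split described above can be read directly off the model. A minimal sketch of that check (assuming the LSTModel above has been built):

model = LSTModel(input_shape, word_to_vec_map, word_to_index)
model.summary()  # prints "Trainable params" and "Non-trainable params" separately

# The same counts, computed directly:
from keras import backend as K
trainable = sum(K.count_params(w) for w in model.trainable_weights)
frozen = sum(K.count_params(w) for w in model.non_trainable_weights)
print(trainable, frozen)  # the frozen count should match the ~15,000,000 embedding weights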
Define the layers you want to save outside the function and give them names. Then create two functions, foo() and bar(). foo() will contain the original pipeline, including the embedding layer. bar() will contain only the part of the pipeline after the embedding layer; in its place, you define a new Input() layer whose shape matches the embedding dimensions:
# Shared layers, defined once and given explicit names so their weights
# can be matched up later with load_weights(..., by_name=True)
lstm1 = LSTM(256, return_sequences=True, name='lstm1')
lstm2 = LSTM(256, return_sequences=False, name='lstm2')
dense = Dense(NUM_OF_LABELS, name='Susie Dense')

def foo(...):  # original pipeline, embedding layer included
    sentence_indices = Input(input_shape, dtype="int32")
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
    embeddings = embedding_layer(sentence_indices)

    X = lstm1(embeddings)
    X = Dropout(0.5)(X)
    X = lstm2(X)
    X = Dropout(0.5)(X)
    X = dense(X)
    X = Activation("softmax")(X)
    return Model(inputs=sentence_indices, outputs=X)

def bar(...):  # reduced pipeline, takes precomputed embeddings as input
    embeddings = Input(embedding_shape, dtype="float32")
    X = lstm1(embeddings)
    X = Dropout(0.5)(X)
    X = lstm2(X)
    X = Dropout(0.5)(X)
    X = dense(X)
    X = Activation("softmax")(X)
    return Model(inputs=embeddings, outputs=X)

foo_model = foo(...)
bar_model = bar(...)

foo_model.fit(...)
bar_model.save_weights(...)
Now you train the original foo() model as usual. Afterwards, you can save the weights of the reduced bar() model; because the named layers are shared between the two models, bar_model already holds the trained weights. When loading the weights later, don't forget to pass the by_name=True argument:
bar_model.load_weights('bar_model.h5', by_name=True)
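On the server, the embedding lookup then has to happen outside the model: you convert each sentence's word indices to vectors yourself (for example from the same word_to_vec_map used in training) and feed the resulting matrix to bar_model. A minimal sketch, where sentence_to_embeddings() is a hypothetical helper, not part of the original code:

import numpy as np

def sentence_to_embeddings(sentence, word_to_vec_map, max_len):
    # Hypothetical helper: reproduces the frozen embedding lookup in plain numpy,
    # so the ~15M embedding weights never need to live inside the Keras model.
    emb_dim = len(next(iter(word_to_vec_map.values())))
    out = np.zeros((max_len, emb_dim))
    for i, word in enumerate(sentence.lower().split()[:max_len]):
        if word in word_to_vec_map:
            out[i] = word_to_vec_map[word]
    return out

bar_model = bar(...)                                  # rebuild the reduced graph (same placeholder args as above)
bar_model.load_weights('bar_model.h5', by_name=True)  # restore the trained layers

x = sentence_to_embeddings("this is a test", word_to_vec_map, max_len=50)
probs = bar_model.predict(x[np.newaxis, ...])         # add the batch dimension

This way the saved file contains only the LSTM and Dense weights (roughly the 5,000,000 trained parameters), while the embedding table stays on disk as a plain lookup structure.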