Bir*_*ish 8 autoencoder deep-learning lstm keras tensorflow
I'm trying to develop an encoder model in Keras for time series. The data has shape (5039, 28, 1), meaning my seq_len is 28 and I have one feature. For the first layer of the encoder I use 112 hunits; the second layer has 56, and to get back to the input shape for the decoder I had to add a third layer with 28 hunits (this autoencoder is supposed to reconstruct its input). But I don't know what the correct way to connect the LSTM layers together is. AFAIK, I can either add a RepeatVector or set return_seq=True. You can see both of my models in the code below. I'd like to know what the difference is and which approach is the correct one.
The first model, using return_sequences=True:
from tensorflow.keras.layers import Input, LSTM, RepeatVector, Reshape
from tensorflow.keras.models import Model

inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112, return_sequences=True)(inputEncoder)     # (None, 28, 112)
snd = LSTM(56, return_sequences=True)(firstEncLayer)               # (None, 28, 56)
outEncoder = LSTM(28)(snd)                                         # (None, 28)
context = RepeatVector(1)(outEncoder)                              # (None, 1, 28)
context_reshaped = Reshape((28, 1))(context)                       # (None, 28, 1)
encoder_model = Model(inputEncoder, outEncoder)
firstDecoder = LSTM(112, return_sequences=True)(context_reshaped)  # (None, 28, 112)
outDecoder = LSTM(1, return_sequences=True)(firstDecoder)          # (None, 28, 1)
autoencoder = Model(inputEncoder, outDecoder)
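A quick shape check on this first model, assuming a hypothetical batch x of four windows (names here are illustrative, not from the question):

import numpy as np
x = np.random.rand(4, 28, 1).astype('float32')  # hypothetical batch of 4 windows
print(encoder_model.predict(x).shape)           # (4, 28): the context vector
print(autoencoder.predict(x).shape)             # (4, 28, 1): the reconstruction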
The second model, using RepeatVector:
# (same imports as above)
inputEncoder = Input(shape=(28, 1))
firstEncLayer = LSTM(112)(inputEncoder)         # (None, 112), last state only
firstEncLayer = RepeatVector(1)(firstEncLayer)  # (None, 1, 112)
snd = LSTM(56)(firstEncLayer)                   # (None, 56), sees 1 timestep
snd = RepeatVector(1)(snd)                      # (None, 1, 56)
outEncoder = LSTM(28)(snd)                      # (None, 28)
encoder_model = Model(inputEncoder, outEncoder)
context = RepeatVector(1)(outEncoder)           # (None, 1, 28)
context_reshaped = Reshape((28, 1))(context)    # (None, 28, 1)
firstDecoder = LSTM(112)(context_reshaped)      # (None, 112)
firstDecoder = RepeatVector(1)(firstDecoder)    # (None, 1, 112)
sndDecoder = LSTM(28)(firstDecoder)             # (None, 28)
outDecoder = RepeatVector(1)(sndDecoder)        # (None, 1, 28)
outDecoder = Reshape((28, 1))(outDecoder)       # (None, 28, 1)
autoencoder = Model(inputEncoder, outDecoder)
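Either way, the model is trained to reproduce its own input. A minimal fitting sketch, assuming the (5039, 28, 1) array from the question lives in a hypothetical variable data:

autoencoder.compile(optimizer='adam', loss='mse')
# Input and target are the same array, since the autoencoder reconstructs its input.
autoencoder.fit(data, data, epochs=50, batch_size=32, validation_split=0.1)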
thu*_*v89 33
You may have to see for yourself which one works better, because that depends on the problem you're solving. But to give you the difference between the two approaches:
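A minimal sketch of that difference, assuming the TensorFlow 2.x Keras API (the shapes in the comments are what Keras reports):

from tensorflow.keras.layers import Input, LSTM, RepeatVector

x = Input(shape=(28, 1))

# return_sequences=True: the LSTM emits a hidden state for every timestep,
# so the next layer still receives the full 28-step sequence.
seq = LSTM(112, return_sequences=True)(x)  # (None, 28, 112)

# Without return_sequences, only the last hidden state comes out;
# RepeatVector(n) then tiles that single vector into an n-step "sequence".
last = LSTM(112)(x)                        # (None, 112)
tiled = RepeatVector(28)(last)             # (None, 28, 112)

In other words, return_sequences=True preserves the per-timestep information between layers, whereas RepeatVector(1) in your second model collapses the whole window into one summary vector, so every subsequent LSTM only ever processes a single timestep.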