I am trying to build a very simple LSTM autoencoder with PyTorch. I always train it on the same data:
x = torch.Tensor([[0.0], [0.1], [0.2], [0.3], [0.4]])
I built my model following this link:
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
My code runs without errors, but y_pred converges to:
tensor([[[0.2]],
        [[0.2]],
        [[0.2]],
        [[0.2]],
        [[0.2]]], grad_fn=<StackBackward>)
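Worth noting: 0.2 is exactly the mean of the input sequence, which is easy to check directly:

```python
import torch

# y_pred converges to a constant 0.2, which happens to be the mean of x:
x = torch.tensor([[0.0], [0.1], [0.2], [0.3], [0.4]])
print(x.mean())  # tensor(0.2000)
```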
Here is my code:
import torch
import torch.nn as nn
import torch.optim as optim


class LSTM(nn.Module):
    def __init__(self, input_dim, latent_dim, batch_size, num_layers):
        super(LSTM, self).__init__()
        self.input_dim = input_dim
        self.latent_dim = latent_dim
        self.batch_size = batch_size
        self.num_layers = num_layers

        self.encoder = nn.LSTM(self.input_dim, self.latent_dim, self.num_layers)
        …
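Since the class definition above is cut off, here is a minimal sketch of what the full autoencoder might look like in PyTorch, mirroring the Keras reference (encode the sequence into a latent vector, repeat it over the timesteps like `RepeatVector`, then decode with a second LSTM). The class name `LSTMAutoencoder` and the decoder wiring are my assumptions, not the original code:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Hypothetical completion of the truncated model above."""

    def __init__(self, input_dim, latent_dim, num_layers=1):
        super().__init__()
        # Encoder compresses the sequence into a latent state.
        self.encoder = nn.LSTM(input_dim, latent_dim, num_layers)
        # Decoder maps the repeated latent code back to input_dim.
        self.decoder = nn.LSTM(latent_dim, input_dim, num_layers)

    def forward(self, x):
        # x: (seq_len, batch, input_dim) -- PyTorch's default LSTM layout.
        _, (h_n, _) = self.encoder(x)
        latent = h_n[-1]                             # (batch, latent_dim)
        # RepeatVector equivalent: tile the code across every timestep.
        repeated = latent.unsqueeze(0).repeat(x.size(0), 1, 1)
        y_pred, _ = self.decoder(repeated)           # (seq_len, batch, input_dim)
        return y_pred

x = torch.tensor([[0.0], [0.1], [0.2], [0.3], [0.4]]).unsqueeze(1)  # (5, 1, 1)
model = LSTMAutoencoder(input_dim=1, latent_dim=20)
y_pred = model(x)
print(y_pred.shape)  # torch.Size([5, 1, 1])
```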