How to concatenate an embedding layer in PyTorch

muo*_*uon 2 concatenation embedding deep-learning pytorch

I am trying to concatenate an embedding layer with other features. It doesn't give me any errors, but it doesn't do any training either. Is there something wrong with this model definition, and how can I debug it?


Note: the last column (feature) in my X is a feature containing word2ix (a single word).
Note: the network works fine without the embedding feature/layer.


Originally posted on the PyTorch forums.

import torch
import torch.nn.functional as F


class Net(torch.nn.Module):
    def __init__(self, n_features, h_sizes, num_words, embed_dim, out_size, dropout=None):
        super().__init__()

        self.num_layers = len(h_sizes)  # hidden + input

        self.embedding = torch.nn.Embedding(num_words, embed_dim)
        self.hidden = torch.nn.ModuleList()
        self.bnorm = torch.nn.ModuleList()
        if dropout is not None:
            self.dropout = torch.nn.ModuleList()
        else:
            self.dropout = None
        for k in range(len(h_sizes)):
            if k == 0:
                self.hidden.append(torch.nn.Linear(n_features, h_sizes[0]))
                self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[0]))
                if self.dropout is not None:
                    self.dropout.append(torch.nn.Dropout(p=dropout))
            else:
                if k == 1:
                    # the embedding output is concatenated after the first hidden layer
                    input_dim = h_sizes[0] + embed_dim
                else:
                    input_dim = h_sizes[k - 1]

                self.hidden.append(torch.nn.Linear(input_dim, h_sizes[k]))
                self.bnorm.append(torch.nn.BatchNorm1d(h_sizes[k]))
                if self.dropout is not None:
                    self.dropout.append(torch.nn.Dropout(p=dropout))

        # Output layer
        self.out = torch.nn.Linear(h_sizes[-1], out_size)

    def forward(self, inputs):
        # Feedforward
        for l in range(self.num_layers):
            if l == 0:
                # all columns except the last are dense features
                x = self.hidden[l](inputs[:, :-1])
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)

                # the last column holds the word index
                embeds = self.embedding(inputs[:, -1])
                x = torch.cat((embeds, x), dim=1)
            else:
                x = self.hidden[l](x)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)
            x = F.relu(x)
        output = self.out(x)

        return output

muo*_*uon 5

There were a few problems. The key one was data types: I was mixing float features with int indices.
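For context (this illustration is mine, not part of the original answer): torch.nn.Embedding performs a table lookup and only accepts integer (long) index tensors, so packing the indices into a float feature matrix breaks the lookup:

import torch

emb = torch.nn.Embedding(num_embeddings=5, embedding_dim=2)

ix = torch.tensor([0, 3])          # int64 (long) indices
print(emb(ix).shape)               # torch.Size([2, 2])

# emb(torch.tensor([0.0, 3.0]))    # RuntimeError: indices must be an
#                                  # integer (Long/Int) tensor, not float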

Sample data and training before the fix:

import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

NUM_TARGETS = 4
NUM_FEATURES = 3
NUM_TEXT_FEATURES = 1

x = np.random.rand(5, NUM_FEATURES)
y = np.random.rand(5, NUM_TARGETS)

# append the word indices as the last column of the feature matrix
word_ix = np.arange(5).reshape(-1, 1).astype(int)
x_train = np.append(x, word_ix, axis=1)

# BUG: casting everything to float turns the word indices into floats too
x_train = torch.from_numpy(x_train).float().to(device)
y_train = torch.from_numpy(y).float().to(device)

h_sizes = [2, 2]

# the last column of x_train is the word index, not a dense feature
net = Net(x_train.shape[1] - 1, h_sizes=h_sizes, num_words=5, embed_dim=2,
          out_size=y_train.shape[1], dropout=.01)     # define the network
print(net)  # net architecture
net = net.float()
net.to(device)

optimizer = torch.optim.Adam(net.parameters(), lr=0.0001, weight_decay=.01)
loss_func = torch.nn.MSELoss()  # this is for regression mean squared loss

# one training loop
prediction = net(x_train)     # input x and predict based on x

loss = loss_func(prediction, y_train)     # must be (1. nn output, 2. target)

optimizer.zero_grad()   # clear gradients for next train
loss.backward()         # backpropagation, compute gradients
optimizer.step()        # apply gradients
# train_losses.append(loss.detach().to('cpu').numpy())
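To see where this goes wrong (a check I added, not in the original answer): after the cast, the index column of x_train is float32, so the embedding lookup inside forward no longer receives valid indices:

print(x_train[:, -1])
# tensor([0., 1., 2., 3., 4.]) -- the word indices are now float32,
# which torch.nn.Embedding cannot use as lookup indices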

To fix this, I separated the word index feature from x and removed net.float().

I changed the dtype conversions to:

x_train = torch.from_numpy(x).float().to(device)
y_train = torch.from_numpy(y).float().to(device) 

# NOTE: word index needs to be long
word_ix = torch.from_numpy(word_ix).to(torch.long).to(device) 
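As a quick sanity check (my addition), the dtypes now match what each layer expects:

print(x_train.dtype)   # torch.float32 -- input to the Linear layers
print(y_train.dtype)   # torch.float32 -- target for MSELoss
print(word_ix.dtype)   # torch.int64   -- indices for the Embedding lookup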

And changed the forward method to:


    def forward(self, inputs, word_ix):

        # Feedforward
        for l in range(self.num_layers):
            if l == 0:
                x = self.hidden[l](inputs)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)

                embeds = self.embedding(word_ix)
                # NOTE: embeds has a shape of (batch_size, 1, embed_dim);
                # in order to concatenate it with x, reshape it to
                # (batch_size, embed_dim)
                embeds = embeds.view(embeds.shape[0], embeds.shape[2])
                x = torch.cat((x, embeds), dim=1)
            else:
                x = self.hidden[l](x)
                x = self.bnorm[l](x)
                if self.dropout is not None:
                    x = self.dropout[l](x)
            x = F.relu(x)
        output = self.out(x)

        return output
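With the features and indices separated, the network is constructed and trained with two inputs. This call is not shown in the original answer, but it follows from the new forward signature:

net = Net(x_train.shape[1], h_sizes=h_sizes, num_words=5, embed_dim=2,
          out_size=y_train.shape[1], dropout=.01)
net.to(device)

optimizer = torch.optim.Adam(net.parameters(), lr=0.0001, weight_decay=.01)
loss_func = torch.nn.MSELoss()

prediction = net(x_train, word_ix)   # features and word indices passed separately
loss = loss_func(prediction, y_train)

optimizer.zero_grad()
loss.backward()
optimizer.step()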