I changed the expected object to scalar type Float, but still get Long in PyTorch

Ray*_*hiu 3 python pytorch

I am doing binary classification. I use binary cross entropy as the loss function (nn.BCELoss()), and the last layer has a single unit.

Before passing (input, target) to the loss function, I convert the target from Long to Float. The error only appears on the last step of the DataLoader enumeration, with the message: "RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'". The DataLoader is defined in the code below (I drop the last batch if its size does not match), and I am not sure whether that is related to the error.
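For context, the same dtype mismatch can be reproduced directly with nn.BCELoss(), independent of the DataLoader (a minimal sketch; the exact wording of the message can differ between PyTorch versions):

import torch
import torch.nn as nn

loss_func = nn.BCELoss()
output = torch.sigmoid(torch.randn(4, 1))   # float predictions in (0, 1)
target = torch.randint(0, 2, (4, 1))        # int64 ("Long") labels, as a typical Dataset returns them

# loss_func(output, target)                 # raises the scalar-type RuntimeError
loss = loss_func(output, target.float())    # works once the target is converted to float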

I tried printing the types of the target and of the input (the network's output), and both are Float. The type printout and the code are below.

trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
                                          shuffle=True, drop_last=True)
loss_func = nn.BCELoss() 

# training 
for epoch in range(EPOCH):
    test_loss = 0
    train_loss = 0

    for step, (b_x, b_y) in enumerate(trainloader):        # gives batch data
        b_x = b_x.view(-1, TIME_STEP, 1)              # reshape x to (batch, time_step, input_size)
        print("step: ", step)
        b_x = b_x.to(device) 
        print("BEFORE|b_y type: ",b_y.type())
        b_y = b_y.to(device, dtype=torch.float)
        print("AFTER|b_y type: ",b_y.type())
        output = rnn(b_x)                               # rnn output
        print("output type:", output.type())
        loss = loss_func(output, b_y)  # !!!error occurs when trainloader enumerate the final step!!!                 

        train_loss = train_loss + loss

        optimizer.zero_grad()                           
        loss.backward()                                 
        optimizer.step()  
#### type result and the error message ####
... 
step:  6
BEFORE|b_y type:  torch.LongTensor
AFTER|b_y type:  torch.cuda.FloatTensor
output type: torch.cuda.FloatTensor
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-18-e028fcb6b840> in <module>
     30         b_y = b_y.to(device)
     31         output = rnn(b_x)
---> 32         loss = loss_func(output, b_y)
     33         test_loss = test_loss + loss
     34         rnn.train()

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    502     @weak_script_method
    503     def forward(self, input, target):
--> 504         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    505 
    506 

~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   2025 
   2026     return torch._C._nn.binary_cross_entropy(
-> 2027         input, target, weight, reduction_enum)
   2028 
   2029 

RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'

mod*_*itt 5

It seems the type is being changed correctly, since you state that you observe the change when printing the types, and the PyTorch documentation for Tensor.to says:

Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

and other approaches, such as

b_y = b_y.to(device).float()

should not behave noticeably differently, since .float() is equivalent to .to(..., torch.float32), and torch.float is an alias for torch.float32. Could you verify the type of b_y right before the error is thrown and edit the question? (I would have left this as a comment, but I wanted to add more detail. I will try to help once that is provided.)
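For what it's worth, here is a quick standalone check (a minimal sketch; the tensor is illustrative and runs on CPU so it works anywhere) showing that both conversions produce the same floating-point type, which is why printing b_y.type() immediately before the loss_func call is the next thing to verify:

import torch

b_y = torch.randint(0, 2, (8,))                  # Long labels, as they come out of the DataLoader
print(b_y.to('cpu', dtype=torch.float).type())   # torch.FloatTensor
print(b_y.to('cpu').float().type())              # torch.FloatTensor as well; on your GPU both would be torch.cuda.FloatTensor

# placed right before loss_func(output, b_y), this would catch a stray Long target:
# assert b_y.dtype == torch.float32, b_y.type()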