Multivariate linear regression with PyTorch

joj*_*kim 5 pytorch

I am working on a linear regression problem with PyTorch.
I had success in the single-variable case, but when I run multivariate linear regression I get the error below. How can I do linear regression with multiple variables?

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      9     optimizer.zero_grad() #gradient
     10     outputs = model(inputs) #output
---> 11     loss = criterion(outputs, targets) #loss function
     12     loss.backward() #backward propagation
     13     optimizer.step() #1-step optimization (gradient descent)

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    204
    205     def __call__(self, *input, **kwargs):
--> 206         result = self.forward(*input, **kwargs)
    207         for hook in self._forward_hooks.values():
    208             hook_result = hook(self, input, result)

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
     22         _assert_no_grad(target)
     23         backend_fn = getattr(self._backend, type(self).__name__)
---> 24         return backend_fn(self.size_average)(input, target)
     25
     26

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py in forward(self, input, target)
     39         output = input.new(1)
     40         getattr(self._backend, update_output.name)(self._backend.library_state, input, target,
---> 41                                                    output, *self.additional_args)
     42         return output
     43

TypeError: FloatMSECriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.DoubleTensor, torch.FloatTensor, bool), but expected (int state, torch.FloatTensor input, torch.FloatTensor target, torch.FloatTensor output, bool sizeAverage)

Here is the code:

#import
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable

#input_size = 1
input_size = 3
output_size = 1
num_epochs = 300
learning_rate = 0.002

#Data set
#x_train = np.array([[1.564],[2.11],[3.3],[5.4]], dtype=np.float32)
x_train = np.array([[73.,80.,75.],[93.,88.,93.],[89.,91.,90.],[96.,98.,100.],[73.,63.,70.]],dtype=np.float32)
#y_train = np.array([[8.0],[19.0],[25.0],[34.45]], dtype= np.float32)
y_train = np.array([[152.],[185.],[180.],[196.],[142.]])
print('x_train:\n',x_train)
print('y_train:\n',y_train)

class LinearRegression(nn.Module):
    def __init__(self,input_size,output_size):
        super(LinearRegression,self).__init__()
        self.linear = nn.Linear(input_size,output_size)

    def forward(self,x):
        out = self.linear(x) #Forward propagation
        return out

model = LinearRegression(input_size,output_size)

#Loss and Optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate)

#train the Model
for epoch in range(num_epochs):
    #convert numpy array to torch Variable
    inputs = Variable(torch.from_numpy(x_train)) #convert numpy array to torch tensor
    #inputs = Variable(torch.Tensor(x_train))    
    targets = Variable(torch.from_numpy(y_train)) #convert numpy array to torch tensor

    #forward+ backward + optimize
    optimizer.zero_grad() #gradient
    outputs = model(inputs) #output
    loss = criterion(outputs,targets) #loss function
    loss.backward() #backward propagation
    optimizer.step() #1-step optimization (gradient descent)

    if(epoch+1) %5 ==0:
        print('epoch [%d/%d], Loss: %.4f' % (epoch +1, num_epochs, loss.data[0]))
        predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()
        plt.plot(x_train,y_train,'ro',label='Original Data')
        plt.plot(x_train,predicted,label='Fitted Line')
        plt.legend()
        plt.show()

Rog*_*llo 4

You need to make sure your data have the same type. In this case, x_train is a 32-bit float while y_train is a double. You have to use:

y_train = np.array([[152.],[185.],[180.],[196.],[142.]],dtype=np.float32)
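Equivalently, you could leave the NumPy array as float64 and cast the tensor when converting it. A minimal sketch of that alternative, reusing the variable names from the question:

# Inside the training loop, replace the targets conversion with a cast:
# torch.from_numpy preserves the NumPy dtype, so a float64 array becomes a
# torch.DoubleTensor; .float() turns it into the FloatTensor that MSELoss expects.
targets = Variable(torch.from_numpy(y_train).float())

Either way, both the model output and the target passed to MSELoss end up as FloatTensors, which is exactly what the error message is asking for.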