The migration guide recommends the following to make code agnostic to CPU vs. GPU:
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
I did this and ran the code on a CPU-only machine, but my model crashed when fed an input array, saying that a CPU tensor was expected, not a GPU tensor. Somehow my model was automatically converting the CPU input array to a GPU array. Eventually I traced it to this command in my code:
model = torch.nn.DataParallel(model).to(device)
Even though I convert the model to "cpu", nn.DataParallel overrides this. The best solution I came up with is a conditional:
if device.type == 'cpu':
    model = model.to(device)
else:
    model = torch.nn.DataParallel(model).to(device)
This doesn't look elegant. Is there a better way?
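The conditional above can be sketched end-to-end. This is only a minimal illustration, assuming a small stand-in model (an nn.Linear here, not the poster's MyModule): DataParallel is applied only when CUDA is available, so on a CPU-only machine the wrapper is skipped and inputs stay CPU tensors.

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in model for illustration
model = nn.Linear(4, 2)

# Wrap in DataParallel only when a GPU is present; on CPU the
# wrapper is skipped entirely, so it cannot override the device.
if torch.cuda.is_available():
    model = nn.DataParallel(model)
model = model.to(device)

x = torch.randn(3, 4).to(device)
out = model(x)
print(out.shape)  # torch.Size([3, 2])
```

The same if/else structure as the workaround, just keyed on `torch.cuda.is_available()` instead of `device.type`, which keeps a single `.to(device)` call at the end.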
The following code example works when run in plain Python, but fails in VSCode on Linux (it works in VSCode on Windows). I'd like to know whether there is a problem with my code, or a problem with VSCode under Linux?
# Test of PyTorch DataLoader and Visual Studio Code
from torch.utils.data import Dataset, DataLoader

class SimpleData(Dataset):
    """Very simple dataset"""
    def __init__(self):
        self.data = range(20)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

if __name__ == '__main__':
    # Initialize DataLoader with the above Dataset:
    dataloader = DataLoader(SimpleData(), batch_size=4, num_workers=1)
    print('Using DataLoader to show data in batches: ')
    for i, sample_batch in enumerate(dataloader):  # This fails in VSCode on Linux
        print('batch ', i, ':', sample_batch)
    print("--- Done ---")
The expected output is:
Using …
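As a diagnostic comparison (an assumption on my part, not something stated in the question): with `num_workers=0` the DataLoader loads batches in the main process instead of spawning worker subprocesses, which isolates whether the failure is tied to multiprocessing workers in the VSCode environment.

```python
from torch.utils.data import Dataset, DataLoader

class SimpleData(Dataset):
    """Very simple dataset over the integers 0..19"""
    def __init__(self):
        self.data = range(20)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

# num_workers=0 keeps loading in the main process: no worker
# subprocesses are spawned, and no __main__ guard is needed.
dataloader = DataLoader(SimpleData(), batch_size=4, num_workers=0)
for i, batch in enumerate(dataloader):
    print('batch', i, ':', batch)
```

If this version runs inside VSCode on Linux while the `num_workers=1` version does not, the problem lies with worker processes under the editor's launcher rather than with the Dataset code itself.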