I am trying to use PyTorch for generative AI, following this video: https://www.youtube.com/watch?v=_pIMdDWK5sc
I import everything as shown:
import os
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
import matplotlib.pyplot as plt
import pytorch_lightning as pl
random_seed = 42
torch.manual_seed(random_seed)
BATCH_SIZE = 128
AVAIL_GPUS = min(1, torch.cuda.device_count())
NUM_WORKERS = int(os.cpu_count() / 2)
But when I run the trainer:
trainer = pl.Trainer(max_epochs=20, gpus=AVAIL_GPUS)
trainer.fit(model, dm)
it shows me this error:
__init__() got an unexpected keyword argument 'gpus'
Answer:
See https://lightning.ai/docs/pytorch/stable/common/trainer.html. The gpus argument was deprecated in PyTorch Lightning 1.7 and removed in 2.0, which is why Trainer.__init__() no longer accepts it; hardware is now selected with the accelerator (and optionally devices) arguments. Use:
trainer = pl.Trainer(max_epochs=20, accelerator="auto")
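For completeness, here is a minimal sketch of the Lightning 2.x Trainer API end to end, using a hypothetical TinyModel and dummy data as stand-ins (the GAN model and MNIST DataModule from the video are not shown in the question). The devices argument replaces gpus, and since devices must be at least 1, the AVAIL_GPUS variable from the question falls back to "auto" on CPU-only machines:

import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

AVAIL_GPUS = min(1, torch.cuda.device_count())

# Hypothetical stand-in model, only to demonstrate the Trainer API.
class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self.layer(x.view(x.size(0), -1)), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Lightning 2.x: hardware selection moved from gpus= to accelerator=/devices=.
trainer = pl.Trainer(
    max_epochs=20,
    accelerator="auto",            # picks CUDA when available, otherwise CPU
    devices=AVAIL_GPUS or "auto",  # devices must be >= 1, so fall back to "auto" without a GPU
)

# Dummy data so trainer.fit() runs; substitute the MNIST DataModule from the video.
xs = torch.randn(256, 1, 28, 28)
ys = torch.randint(0, 10, (256,))
trainer.fit(TinyModel(), DataLoader(TensorDataset(xs, ys), batch_size=64))

If you want to pin a specific GPU count explicitly, accelerator="gpu" together with devices=1 is the equivalent of the old gpus=1.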