fast.ai does not use the GPU


When I run training with fast.ai, only the CPU is used, even though

import torch; print(torch.cuda.is_available())

shows that CUDA is available, and some GPU memory is occupied by my training process.
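For completeness, fastai's own idea of a default device can be inspected as well. This is a small check, assuming fastai 2.x exposes default_device in fastai.torch_core (which it should in 2.5.x):

import torch
from fastai.torch_core import default_device

print(torch.cuda.is_available())  # True on my machine
print(default_device())           # should report a cuda device when CUDA is usable

The full script I run is: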

from main import DefectsImagesDataset   # custom dataset defined in main.py
from fastai.vision.all import *
import numpy as np

NUM_ELEMENTS = 1e5                       # number of samples to load
CSV_FILES = {
    'events_path': './data/events.csv',
    'defects_path': './data/defects2020_all.csv',
}

defects_dataset = DefectsImagesDataset(CSV_FILES['defects_path'], CSV_FILES['events_path'],
                                       NUM_ELEMENTS, window_size=10000)
model = models.resnet34
BATCH_SIZE = 16
NUMBER_WORKERS = 8
# The same dataset is used for training and validation here, just to get training running.
dls = DataLoaders.from_dsets(defects_dataset, defects_dataset, bs=BATCH_SIZE, num_workers=NUMBER_WORKERS)

import torch; print(torch.cuda.is_available())   # prints True

loss_func = nn.CrossEntropyLoss()
learn = cnn_learner(dls, models.resnet34, metrics=error_rate, n_out=30, loss_func=loss_func)

learn.fit_one_cycle(1)                           # runs on the CPU only

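To see where things actually end up, a sanity check can be added after building the learner (a sketch using the learn object from the script above; next(learn.model.parameters()).device is plain PyTorch, learn.dls.device is the fastai DataLoaders attribute):

print(learn.dls.device)                        # device the DataLoaders will put batches on
print(next(learn.model.parameters()).device)   # device the model weights currently live on

If the DataLoaders were built without a device, I would expect this to show the CPU (or no device at all) rather than the GPU.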

CUDA version: 11.5

fastai version: 2.5.3

How do I get fast.ai to use the GPU?


I solved it: I had to specify the device when creating the DataLoaders. Instead of

dls = DataLoaders.from_dsets(
    defects_dataset, 
    defects_dataset, 
    bs=BATCH_SIZE, 
    num_workers=NUMBER_WORKERS)

I now use

dls = DataLoaders.from_dsets(
    defects_dataset, 
    defects_dataset, 
    bs=BATCH_SIZE, 
    num_workers=NUMBER_WORKERS, 
    device=torch.device('cuda'))
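A variant that I believe also works (untested sketch; DataLoaders.to and DataLoaders.cuda are fastai 2.x helpers, torch.device('cuda') is plain PyTorch) is to move an already-built DataLoaders object afterwards:

# Move an existing DataLoaders to the GPU after construction,
# then build the learner from it as before.
dls = dls.to(torch.device('cuda'))   # or: dls.cuda()

learn = cnn_learner(dls, models.resnet34, metrics=error_rate,
                    n_out=30, loss_func=loss_func)
learn.fit_one_cycle(1)               # batches and model should now end up on the GPU

Either way, the key point seems to be that the Learner takes its device from dls.device (as far as I can tell, fastai moves the model there right before fitting), so the DataLoaders must know about the GPU before training starts.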