How to free GPU memory in PyTorch

Pen*_*uin 27 python memory pytorch huggingface-transformers

I have a list of sentences, and I am trying to compute their perplexity with several models using the following code:

from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
import numpy as np
model_name = 'cointegrated/rubert-tiny'
model = AutoModelForMaskedLM.from_pretrained(model_name).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_name)

def score(model, tokenizer, sentence):
    # Tokenize once, then build one copy of the sentence per maskable token
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    repeat_input = tensor_input.repeat(tensor_input.size(-1) - 2, 1)
    # Diagonal mask: row i masks exactly one non-special token (position i + 1)
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id)
    # Only the masked positions contribute to the loss; label -100 is ignored
    labels = repeat_input.masked_fill(masked_input != tokenizer.mask_token_id, -100)
    with torch.inference_mode():
        loss = model(masked_input.cuda(), labels=labels.cuda()).loss
    return np.exp(loss.item())  # pseudo-perplexity of the sentence


print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer)) 
# 4.541251105675365

Most models run fine, but some sentences seem to throw an error:

RuntimeError: CUDA out of memory. Tried to allocate 10.34 GiB (GPU 0; 23.69 GiB total capacity; 10.97 GiB already allocated; 6.94 GiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This makes sense, because some of the sentences are very long. So what I did was add something like try, except RuntimeError, pass.
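Roughly, the loop now looks like this (the sentences list name is just illustrative):

for sentence in sentences:
    try:
        print(score(sentence=sentence, model=model, tokenizer=tokenizer))
    except RuntimeError:
        pass  # skip sentences that do not fit in GPU memory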

This seemed to work until around sentence 210, at which point it just outputs this error:

CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

I found this post, which has a lot of discussion and ideas, some of them about a potentially faulty GPU? But I know my GPU works, since this exact code works for other models. There is also a lot of discussion about batch size there, which is why I suspect the problem is related to freeing memory.

I tried running torch.cuda.empty_cache() every few iterations to free the memory, as suggested here, but it did not work (it raised the same error).

Update: filtering out the sentences with a length above 550 seems to get rid of this error: CUDA error: an illegal memory access was encountered. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
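For reference, the filtering step is just something along these lines (shown here by tokenized length, which is my own choice; the 550 threshold came from trial and error):

short_sentences = [
    s for s in sentences                # `sentences` is the full list from above
    if len(tokenizer.encode(s)) <= 550  # drop everything longer than 550 tokens
]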

Abh*_*25t 25

You need to call gc.collect() before torch.cuda.empty_cache(). I also move the model to the CPU and then delete the model and its checkpoint. Try what works for you:

import gc
import torch

model.cpu()               # move the weights off the GPU first
del model, checkpoint     # drop the Python references
gc.collect()              # let Python reclaim the objects
torch.cuda.empty_cache()  # then hand the cached blocks back to the driver
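The order matters: empty_cache() can only return blocks whose tensors are no longer referenced, so the del and gc.collect() have to come first. If you want to check that it actually worked, you can compare the memory counters before and after, e.g.:

print(torch.cuda.memory_allocated())  # bytes held by live tensors
print(torch.cuda.memory_reserved())   # bytes still cached by the allocator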


小智 11

I do not have an exact answer, but I can share some troubleshooting techniques I have adopted in similar situations... hopefully they may help.

  1. First of all, CUDA errors are unfortunately sometimes vague, so you should consider running your code on the CPU to see whether something else is actually going on (see here); there is a small CPU sketch after this list.

  2. If the problem really is memory related, here are two custom utilities that I use:

from torch import cuda


def get_less_used_gpu(gpus=None, debug=False):
    """Inspect cached/reserved and allocated memory on specified gpus and return the id of the less used device"""
    if gpus is None:
        warn = 'Falling back to default: all gpus'
        gpus = range(cuda.device_count())
    elif isinstance(gpus, str):
        gpus = [int(el) for el in gpus.split(',')]

    # check gpus arg VS available gpus
    sys_gpus = list(range(cuda.device_count()))
    if len(gpus) > len(sys_gpus):
        gpus = sys_gpus
        warn = f'WARNING: Specified {len(gpus)} gpus, but only {cuda.device_count()} available. Falling back to default: all gpus.\nIDs:\t{list(gpus)}'
    elif set(gpus).difference(sys_gpus):
        # take correctly specified and add as much bad specifications as unused system gpus
        available_gpus = set(gpus).intersection(sys_gpus)
        unavailable_gpus = set(gpus).difference(sys_gpus)
        unused_gpus = set(sys_gpus).difference(gpus)
        gpus = list(available_gpus) + list(unused_gpus)[:len(unavailable_gpus)]
        warn = f'GPU ids {unavailable_gpus} not available. Falling back to {len(gpus)} device(s).\nIDs:\t{list(gpus)}'

    cur_allocated_mem = {}
    cur_cached_mem = {}
    max_allocated_mem = {}
    max_cached_mem = {}
    for i in gpus:
        cur_allocated_mem[i] = cuda.memory_allocated(i)
        cur_cached_mem[i] = cuda.memory_reserved(i)
        max_allocated_mem[i] = cuda.max_memory_allocated(i)
        max_cached_mem[i] = cuda.max_memory_reserved(i)
    min_allocated = min(cur_allocated_mem, key=cur_allocated_mem.get)
    if debug:
        if warn:
            print(warn)
        print('Current allocated memory:', {f'cuda:{k}': v for k, v in cur_allocated_mem.items()})
        print('Current reserved memory:', {f'cuda:{k}': v for k, v in cur_cached_mem.items()})
        print('Maximum allocated memory:', {f'cuda:{k}': v for k, v in max_allocated_mem.items()})
        print('Maximum reserved memory:', {f'cuda:{k}': v for k, v in max_cached_mem.items()})
        print('Suggested GPU:', min_allocated)
    return min_allocated


def free_memory(to_delete: list, debug=False):
    import gc
    import inspect
    calling_namespace = inspect.currentframe().f_back
    if debug:
        print('Before:')
        get_less_used_gpu(debug=True)

    for _var in to_delete:
        calling_namespace.f_locals.pop(_var, None)
        gc.collect()
        cuda.empty_cache()
    if debug:
        print('After:')
        get_less_used_gpu(debug=True)

2.1 free_memory lets you combine gc.collect and cuda.empty_cache, deleting the objects you name from the namespace and freeing their memory (you pass a list of variable names as the to_delete argument). This is useful because you may have unused objects still occupying memory. Imagine, for example, that you loop over 3 models: when the second iteration starts, the first model may still be taking up GPU memory (I do not know why, but I have experienced this in notebooks, and the only solutions I could find were either restarting the notebook or freeing the memory explicitly). That said, this is not always practical, because you need to know which variables are holding GPU memory... and that is not always obvious, especially when there are a lot of gradients internally associated with the model (see here for more info). One thing you could also try is using with torch.no_grad(): instead of with torch.inference_mode():; they should be equivalent, but it may be worth a try... A small usage sketch of both utilities follows after point 2.2.

2.2 If you work in a multi-GPU environment, you could consider periodically switching to the less used GPU with the other utility, get_less_used_gpu.
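As mentioned in 2.1, here is a minimal usage sketch of the two utilities (the names model and loss are just placeholders for whatever is actually holding your GPU memory):

least_used = get_less_used_gpu(debug=True)   # id of the least loaded device
model = model.to(f'cuda:{least_used}')       # move the work there

# ... later, when you want the memory back:
free_memory(['model', 'loss'], debug=True)   # drops the names, then gc.collect + empty_cache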

  3. Additionally, you can try to track GPU usage to see when the error happens and debug from there. The best/easiest way I can suggest, if you are on a Linux platform, is to use nvtop.
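Regarding point 1, a quick way to reproduce the problem on CPU is to move the model there and run a plain forward pass on one of the offending inputs (a sketch that assumes the model, tokenizer and torch from the question; failing_sentence is a placeholder for one of the crashing sentences):

cpu_model = model.cpu()                                    # take the GPU out of the picture
inputs = tokenizer(failing_sentence, return_tensors='pt')  # placeholder input
with torch.inference_mode():
    out = cpu_model(**inputs, labels=inputs['input_ids'])  # any real error now gives a readable stack trace
print(out.loss)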

Hope something turns out to be useful :)