PyTorch tensor to numpy array

Duk*_*ver 13 python numpy pytorch

I have a PyTorch tensor of size torch.Size([4, 3, 966, 1296]).

I want to convert it to a numpy array using the following code:

imgs = imgs.numpy()[:, ::-1, :, :]

Can anyone explain what this code is doing?

She*_*zod 16

This worked for me:

np_arr = torch_tensor.cpu().detach().numpy()

  • I think the order matters; maybe this is better? `x.detach().cpu().numpy()` (6 upvotes)
  • What does cpu() do here? (4 upvotes)
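
A minimal sketch illustrating both comments (hypothetical tensor; falls back to CPU when no GPU is present): cpu() moves the data to host memory, which numpy() requires, and detaching first follows the ordering suggested above:

import torch

# hypothetical example; falls back to CPU when no GPU is present
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.ones((2, 3), device=device, requires_grad=True)

# detach() drops the autograd graph, cpu() copies the data to host
# memory if needed, and numpy() wraps that host memory without another copy
np_arr = x.detach().cpu().numpy()
print(np_arr.shape)  # (2, 3)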

Maa*_*usa 12

The tensor you want to convert has 4 dimensions.

[:, ::-1, :, :] 

: means the dimension is copied over as-is; this applies to the first, third, and fourth dimensions.

::-1 means that the second axis is reversed (see the sketch below).
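
A small sketch with a made-up 1×3×2×2 batch (N×C×H×W) showing the effect; reversing the channel axis is the usual RGB-to-BGR swap:

import torch

# made-up batch: 1 image, 3 channels, 2x2 pixels (N x C x H x W)
imgs = torch.arange(12).reshape(1, 3, 2, 2)
flipped = imgs.numpy()[:, ::-1, :, :]
print(imgs[0, :, 0, 0])     # tensor([0, 4, 8]) - channels 0, 1, 2
print(flipped[0, :, 0, 0])  # [8 4 0]           - channels 2, 1, 0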

  • The real answer: `x.detach().cpu().numpy()` (41 upvotes)
  • When converting to numpy, you should call detach before cpu to prevent a superfluous copy of the gradient. See https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489/5 (3 upvotes)

pro*_*sti 11

While other answers explained the question perfectly, I will add some real-life examples of converting tensors to numpy arrays:

Example: Shared storage

A PyTorch tensor residing on the CPU shares the same storage as the numpy array na:

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
tensor([[10.,  1.]])

Example: Eliminate effect of shared storage, copy numpy array first

To avoid the effect of shared storage, we need to copy() the numpy array na to a new numpy array nac. The numpy copy() method creates new, separate storage.

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
nac = na.copy()
nac[0][0]=10
print(nac)
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
[[1. 1.]]
tensor([[1., 1.]])

Now only the nac numpy array is altered by the line nac[0][0]=10; na and a remain as they were.

Example: CPU tensor with requires_grad=True

import torch
a = torch.ones((1,2), requires_grad=True)
print(a)
na = a.detach().numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], requires_grad=True)
[[10.  1.]]
tensor([[10.,  1.]], requires_grad=True)

Had we instead called:

na = a.numpy()

it would cause: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead., because tensors with requires_grad=True are recorded by PyTorch AD (automatic differentiation). Note that tensor.detach() is the new way of doing tensor.data.

This explains why we need to detach() them first before converting using numpy().
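
As a quick aside on tensor.detach() versus the older tensor.data: both return a tensor that shares storage with a but is excluded from gradient tracking. A minimal sketch:

import torch

a = torch.ones((1, 2), requires_grad=True)
d = a.detach()   # recommended: shares storage, not tracked by autograd
old = a.data     # legacy spelling of the same idea
print(d.requires_grad, old.requires_grad)  # False False
print(d.data_ptr() == a.data_ptr())        # True: same underlying storage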

Example: CUDA tensor with requires_grad=False

import torch
a = torch.ones((1,2), device='cuda')
print(a)
na = a.to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0')
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0')

Example: CUDA tensor with requires_grad=True

import torch
a = torch.ones((1,2), device='cuda', requires_grad=True)
print(a)
na = a.detach().to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0', requires_grad=True)
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0', requires_grad=True)

Without the detach() method, the error RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. will be raised.

Without the .to('cpu') method, TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. will be raised.

You could use cpu() instead of to('cpu'), but I prefer the newer to('cpu').
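
For what it's worth, both spellings behave the same here; a small sketch (hypothetical tensor, falling back to CPU when no GPU is available):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
a = torch.ones((1, 2), device=device)

na1 = a.cpu().numpy()      # older spelling
na2 = a.to('cpu').numpy()  # newer, more general spelling
print((na1 == na2).all())  # True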


Azi*_*bro 7

I believe you also have to use .detach(). I had to convert my Tensor to a numpy array on Colab with CUDA and a GPU. I did it as follows:

# this is just my embedding matrix which is a Torch tensor object
embedding = learn.model.u_weight

embedding_list = list(range(0, 64382))

input = torch.cuda.LongTensor(embedding_list)
tensor_array = embedding(input)
# the output of the line below is a numpy array
tensor_array.cpu().detach().numpy()

  • When converting to `numpy`, you should call `detach` before `cpu` to prevent a superfluous copy of the gradient. See https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489/5 (5 upvotes)
  • Of course you have to use `detach` here, since you originally created the PyTorch Tensor on the GPU. If it had been created on the CPU this would not apply, as in the original post. (2 upvotes)

Muh*_*lal 6

If your variable has gradients attached, you can use this syntax:

y=torch.Tensor.cpu(x).detach().numpy()[:,:,:,-1]

  • Compared to the other answers here, this adds nothing to the question. (2 upvotes)