Tag: pytorch

How do I separate the inputs and targets in PyTorch's FashionMNIST?

The Fashion-MNIST dataset is implemented rather oddly in PyTorch. I want to do something like:

X, y = FashionMNIST

In practice, though, it is a bit more complicated. Here is what I have:

from torchvision.datasets import FashionMNIST
train = FashionMNIST(root='.', download=True, train=True)
print(train)

Output:

Dataset FashionMNIST
    Number of datapoints: 60000
    Root location: c:/users/nicolas/documents/data/fashionmnist
    Split: Train

This is what a single observation looks like:

print(train[0])
(<PIL.Image.Image image mode=L size=28x28 at 0x20868074780>, 9)

I can only do this one observation at a time:

X, y = train[0]

So how can I separate the inputs and targets for the whole dataset?
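A minimal sketch of two ways to get all the inputs and targets at once, assuming the standard torchvision FashionMNIST class (the raw data/targets attributes hold unnormalized uint8 images, so going through a DataLoader with ToTensor() applied is usually the safer route):

from torchvision.datasets import FashionMNIST
from torchvision import transforms
from torch.utils.data import DataLoader

train = FashionMNIST(root='.', download=True, train=True, transform=transforms.ToTensor())

# Option 1: the dataset exposes the raw tensors directly
X_raw, y_raw = train.data, train.targets          # X_raw: uint8, shape (60000, 28, 28)

# Option 2: load everything as a single batch so the transform is applied
X, y = next(iter(DataLoader(train, batch_size=len(train))))
print(X.shape, y.shape)                           # torch.Size([60000, 1, 28, 28]) torch.Size([60000])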

python pytorch

1 vote · 1 answer · 361 views

PyTorch - how to undersample using WeightedRandomSampler

I have an imbalanced dataset and want to undersample the over-represented class. How can I do that? I would like to use WeightedRandomSampler, but I'm open to other suggestions as well.

So far I assume my code needs a structure like the following, but I don't know how to fill in the details.

trainset = datasets.ImageFolder(path_train, transform=transform)
...
sampler = data.WeightedRandomSampler(weights=..., num_samples=..., replacement=...)
...
trainloader = data.DataLoader(trainset, batch_size=batch_size, sampler=sampler)

I hope someone can help. Many thanks.
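A minimal sketch of one way to fill in the blanks, assuming path_train, transform and batch_size are defined as in the question. ImageFolder stores the class index of every sample in trainset.targets, so each sample can be weighted by the inverse of its class frequency; drawing fewer samples than the dataset size without replacement then undersamples the over-represented classes:

from collections import Counter
from torchvision import datasets
from torch.utils import data

trainset = datasets.ImageFolder(path_train, transform=transform)

# Inverse-frequency weight per sample: samples from rare classes are drawn more often
class_counts = Counter(trainset.targets)
sample_weights = [1.0 / class_counts[label] for label in trainset.targets]

# Draw roughly a balanced number of samples per class, without replacement
num_samples = min(class_counts.values()) * len(class_counts)
sampler = data.WeightedRandomSampler(weights=sample_weights,
                                     num_samples=num_samples,
                                     replacement=False)

trainloader = data.DataLoader(trainset, batch_size=batch_size, sampler=sampler)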

neural-network conv-neural-network pytorch imbalanced-data cnn

1 vote · 1 answer · 2726 views

A Flatten() implementation for torchvision.transforms

I have grayscale images, but I need a dataset of one-dimensional vectors. How can I do that? I can't find a suitable transform for it:

train_dataset = torchvision.datasets.ImageFolder(root='./data',train=True, transform=transforms.ToTensor())
test_dataset = torchvision.datasets.ImageFolder(root='./data',train=False, transform=transforms.ToTensor())

train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=4, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=4, shuffle=False)
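torchvision does not appear to ship a Flatten() transform, but transforms.Compose accepts any callable, so a Lambda wrapping torch.flatten can do the job. A minimal sketch, assuming the images should end up as single-channel 1D float vectors:

import torch
import torchvision
from torchvision import transforms

flatten_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # force a single channel
    transforms.ToTensor(),                        # -> float tensor of shape (1, H, W)
    transforms.Lambda(torch.flatten),             # -> 1D vector of length H*W
])

train_dataset = torchvision.datasets.ImageFolder(root='./data', transform=flatten_transform)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=4, shuffle=True)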

python flatten pytorch torchvision

1 vote · 1 answer · 1155 views

How can we feed the output of a Linear layer to a Conv2d in PyTorch?

I am building an autoencoder and I need to encode the images into a latent representation of length 100. My model uses the following architecture:

        self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size=3)
        self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size=3,stride=2)
        self.conv3 = nn.Conv2d(in_channels=64,out_channels=128,kernel_size=3,stride=2)

        self.linear = nn.Linear(in_features=128*30*30,out_features=100)

        self.conv1_transpose = nn.ConvTranspose2d(in_channels=128,out_channels=64,kernel_size=3,stride=2,output_padding=1)
        self.conv2_transpose = nn.ConvTranspose2d(in_channels=64,out_channels=32,kernel_size=3,stride=2,output_padding=1)
        self.conv3_transpose = nn.ConvTranspose2d(in_channels=32,out_channels=3,kernel_size=3,stride=1)  

Is there any way to feed the output of the Linear layer to a Conv2d or a ConvTranspose2d layer so that I can reconstruct my image? If I remove the Linear layer, the output is recovered. I want to know how to reconstruct my image while keeping the Linear layer.

Any help would be appreciated. Thanks!
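A minimal sketch of the usual pattern for the decoder side, assuming the encoder's feature map really is 128x30x30 as implied by the Linear layer in the question: map the length-100 code back up with a second Linear layer, reshape it into a (N, 128, 30, 30) feature map with view, and only then apply the transposed convolutions (the layer name linear_up and the ReLU activations are placeholders, not part of the original model):

import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_up = nn.Linear(in_features=100, out_features=128 * 30 * 30)
        self.conv1_transpose = nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, output_padding=1)
        self.conv2_transpose = nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, output_padding=1)
        self.conv3_transpose = nn.ConvTranspose2d(32, 3, kernel_size=3, stride=1)

    def forward(self, z):                       # z: (N, 100) latent code
        x = self.linear_up(z)                   # (N, 128*30*30)
        x = x.view(-1, 128, 30, 30)             # back to a 4D feature map for ConvTranspose2d
        x = torch.relu(self.conv1_transpose(x))
        x = torch.relu(self.conv2_transpose(x))
        return self.conv3_transpose(x)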

python autoencoder torch pytorch torchvision

1 vote · 1 answer · 284 views

Problem with DataLoader object not being subscriptable

I am currently running a Python program with PyTorch. I use my own dataset, not torch.data.dataset. I load the data from a pickle file produced by feature extraction. But the following error occurs:

Traceback (most recent call last):
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 326, in <module>
    fire.Fire(demo)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
    target=component.__name__)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 304, in demo
    train(model,train_set1, valid_set=valid_set, test_set=test1, save=save, n_epochs=n_epochs,batch_size=batch_size,seed=seed)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 172, in train
    n_epochs=n_epochs,
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demo-emotion.py", line 37, in train_epoch
    loader=np.asarray(list(loader))
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 345, in …
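Regarding the error in the title: a DataLoader does not implement __getitem__, so it cannot be subscripted like loader[0]; it has to be iterated, or the underlying dataset indexed instead. A minimal sketch with a dummy dataset (shapes made up for illustration):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))  # dummy features/labels
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Iterate the DataLoader instead of subscripting it
for features, labels in loader:
    print(features.shape, labels.shape)
    break

# Or index the underlying dataset directly
x0, y0 = loader.dataset[0]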

python tensorflow pytorch

1 vote · 1 answer · 7636 views

What is the point of multinomial versus argmax accuracy evaluation?

What is the purpose of evaluating prediction accuracy using multinomial instead of a straight argmax?

probs_Y = torch.softmax(model(test_batch, feature_1, feature_2), 1)

sampled_Y = torch.multinomial(probs_Y, 1)
argmax_Y = torch.max(probs_Y, 1)[1].view(-1, 1)

print('Accuracy of sampled predictions on the test set: {:.4f}%'.format(
    (test_Y == sampled_Y.float()).sum().item() / len(test_Y) * 100))
print('Accuracy of argmax predictions on the test set: {:4f}%'.format(
    (test_Y == argmax_Y.float()).sum().item() / len(test_Y) * 100))

Results:

Accuracy of sampled predictions on the test set: 88.8889%

Accuracy of argmax predictions on the test set: 97.777778%

Reading the PyTorch docs, it looks like multinomial samples from some distribution; I'm just not sure how that relates to evaluating accuracy.

I've noticed that multinomial is non-deterministic, meaning it outputs a different accuracy every time it runs, presumably because it draws different samples.
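A toy example (probabilities made up) of why the two numbers differ: argmax always picks the most probable class and is deterministic, while torch.multinomial samples a class in proportion to its predicted probability, so repeated runs can pick different, sometimes wrong, classes and the sampled accuracy is usually lower and fluctuates:

import torch

probs = torch.tensor([[0.1, 0.7, 0.2],
                      [0.5, 0.4, 0.1]])        # softmax outputs for 2 examples

# Deterministic: always the most likely class
print(probs.argmax(dim=1))                     # tensor([1, 0]) on every run

# Stochastic: a class drawn according to its probability
for _ in range(3):
    print(torch.multinomial(probs, 1).squeeze(1))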

machine-learning deep-learning pytorch

1 vote · 1 answer · 552 views

'Net' object has no attribute 'parameters'

I am fairly new to machine learning. I wrote this code by following a YouTube tutorial, but I keep getting this error:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/aniket/Desktop/DeepLearning/PythonLearningPyCharm/CatVsDogs.py", line 109, in <module>
    optimizer = optim.Adam(net.parameters(), lr=0.001) # tweaks the weights from what I understand
AttributeError: 'Net' object has no attribute 'parameters'

Here is the Net class:

class Net():
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1,32,5)
        self.conv2 = nn.Conv2d(32,64,5)
        self.conv3 = nn.Conv2d(64,128,5)
        self.to_linear = …
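A minimal sketch of the usual fix: parameters() is inherited from nn.Module, so the class has to subclass it and call super().__init__() so the layers get registered (the layer list here is trimmed for illustration):

import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):                 # subclass nn.Module, not a plain class
    def __init__(self):
        super().__init__()            # registers submodules so parameters() can find them
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.conv3 = nn.Conv2d(64, 128, 5)

net = Net()
optimizer = optim.Adam(net.parameters(), lr=0.001)   # no longer raises AttributeError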

neural-network deep-learning conv-neural-network pytorch

1 vote · 1 answer · 3371 views

Adding multiple tensors in place in PyTorch

I can add two tensors x and y in place like this:

x = x.add(y)

Given that all the tensors have the same dimensions, is there a way to do the same with three or more tensors?
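A minimal sketch of two options, using dummy tensors: chaining the in-place add_ variant (note the trailing underscore; plain add returns a new tensor), or summing a stacked list when a fresh result tensor is acceptable:

import torch

x, y, z, w = (torch.ones(2, 3) for _ in range(4))

# In place: each add_ modifies x directly
x.add_(y).add_(z).add_(w)

# Not in place: sum an arbitrary list of same-shaped tensors
total = torch.stack([x, y, z, w]).sum(dim=0)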

pytorch tensor

1 vote · 1 answer · 1976 views

How to use map_location='cpu' for "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False"

I am trying to download the following model from https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/

import torch
tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')

I got:

>>> import torch
>>> tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
Using cache found in .cache\torch\hub\nvidia_DeepLearningExamples_torchhub
...
  File "Anaconda3\envs\env3_pytorch\lib\site-packages\torch\serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

So I used the following with map_location='cpu', but I still get the same error:

>>> tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', …
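For context, map_location is an argument of torch.load itself, so passing it through torch.hub.load only helps if the hub entrypoint forwards it to torch.load. A generic, hedged workaround is to fetch the checkpoint file and deserialize it directly on the CPU (the path below is a placeholder, not the actual Tacotron 2 checkpoint name):

import torch

# map_location='cpu' remaps CUDA storages to the CPU during deserialization
checkpoint = torch.load('path/to/checkpoint.pth', map_location='cpu')   # placeholder path

# If the checkpoint stores a state_dict, it can then be loaded into a CPU-side model:
# model.load_state_dict(checkpoint['state_dict'])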

neural-network conda pytorch

1 vote · 1 answer · 1612 views

torch.optim returns "ValueError: can't optimize a non-leaf Tensor" for a multidimensional tensor

I am trying to use torch.optim.Adam. It is a piece of code from the redner tutorial series and works fine with the initial setup. It tries to optimize a scene by shifting all the vertices by the same value, called translation. Here is the original code:

vertices = []
for obj in base:
    vertices.append(obj.vertices.clone())

def model(translation):
    for obj, v in zip(base, vertices):
        obj.vertices = v + translation
    # Assemble the 3D scene.
    scene = pyredner.Scene(camera = camera, objects = objects)
    # Render the scene.
    img = pyredner.render_albedo(scene)
    return img

# Initial guess
# Set requires_grad=True since we want to optimize them later

translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)

init = model(translation)
# Visualize the initial …
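A self-contained illustration of the error class in the title, independent of the redner specifics: the optimizer only accepts leaf tensors, and any tensor produced by an operation on another tensor (scaling, indexing, moving devices) is non-leaf. The usual remedy is to rebuild the value as a fresh leaf with detach/clone before handing it to the optimizer:

import torch

translation = torch.tensor([10.0, -10.0, 10.0], requires_grad=True)   # leaf: fine to optimize
scaled = translation * 0.1                                            # non-leaf: result of an op

print(translation.is_leaf, scaled.is_leaf)     # True False
# torch.optim.Adam([scaled], lr=0.1)           # would raise "can't optimize a non-leaf Tensor"

scaled_leaf = scaled.detach().clone().requires_grad_(True)            # new leaf tensor
optimizer = torch.optim.Adam([scaled_leaf], lr=0.1)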

optimization pytorch tensor

1 vote · 1 answer · 3317 views