Tag: torch

Resizing a tensor in Torch

How can I resize a tensor in Torch? The methods documented at https://github.com/torch/torch7/blob/master/doc/tensor.md#resizing do not seem to work.

images = image.load('image.png',1,'float')
print(images:size()) 
-- result: 224x224 [torch.LongStorage of size 2] 

images.resize(torch.FloatTensor(224,224,1,1))
print(images:size()) 
-- result: 224x224 [torch.LongStorage of size 2] 
-- expected: 224x224x1x1 [torch.LongStorage of size 4]

Why does this approach not work?

lua resize machine-learning torch

Score: 7 · 1 answer · 5,897 views

nnGraph multi-GPU Torch

This question is about getting any nnGraph network to run on multiple GPUs; it is not specific to the particular network instance below.

I am trying to train a network built with nnGraph; the backward graph is attached. I am trying to run parallelModel (see the code, or node 9 in the figure) in a multi-GPU setting. If I attach the parallel model to an nn.Sequential container and then create a DataParallelTable, it works in a multi-GPU setting (without nnGraph). However, after attaching it to nnGraph I get an error. The backward pass works if I train on a single GPU (changing true to false in the if statement), but in the multi-GPU setting I get the error "gmodule.lua:418: attempt to index local 'gradInput' (a nil value)". I think node 9 of the backward pass should run on multiple GPUs, but that is not happening. Creating a DataParallelTable over the nnGraph did not work for me, but I thought that at least putting the internal sequential network inside a DataParallelTable would work. Is there any other way to split the initial data passed to nnGraph so that it runs on multiple GPUs?

require 'torch'
require 'nn'
require 'cudnn'
require 'cunn'
require 'cutorch'
require 'nngraph'

data1 = torch.ones(4,20):cuda()
data2 = torch.ones(4,10):cuda()

tmodel = nn.Sequential()
tmodel:add(nn.Linear(20,10))
tmodel:add(nn.Linear(10,10))
parallelModel = nn.ParallelTable()
parallelModel:add(tmodel)
parallelModel:add(nn.Identity())
parallelModel:add(nn.Identity())

model = parallelModel
if true then
  local function sharingKey(m)
     local key = torch.type(m)
     if m.__shareGradInputKey then
        key = key .. ':' .. m.__shareGradInputKey
     end
     return key
  end

  -- Share gradInput for memory efficient backprop
  local cache = {}
  model:apply(function(m)
     local moduleType = torch.type(m)
     if torch.isTensor(m.gradInput) and moduleType ~= 'nn.ConcatTable' …

multi-gpu deep-learning torch

Score: 7 · 0 answers · 707 views

Torch: why does my artificial neural network always predict zero?

I am using Torch7 on a Linux CentOS 7 machine. I am trying to apply an artificial neural network (ANN) to my dataset to solve a binary classification problem. I am using a simple multilayer perceptron.

I am using the following Torch packages: optim, torch.

The problem is that my perceptron always predicts zero (every element gets classified as zero), and I cannot understand why...

Here is my dataset ("dataset_file.csv"). It has 34 features and 1 target label (the last column, which can be 0 or 1):

0.55,1,0,1,0,0.29,1,0,1,0.46,1,1,0,0.67,1,0.37,0.41,1,0.08,0.47,0.23,0.13,0.82,0.46,0.25,0.04,0,0,0.52,1,0,0,0,0.33,0
0.65,1,0,1,0,0.64,1,0,0,0.02,1,1,1,1,0,0.52,0.32,0,0.18,0.67,0.47,0.2,0.64,0.38,0.23,1,0.24,0.18,0.04,1,1,1,1,0.41,0
0.34,1,0.13,1,0,0.33,0,0.5,0,0.02,0,0,0,0.67,1,0.25,0.55,1,0.06,0.23,0.18,0.15,0.82,0.51,0.22,0.06,0,0,0.6,1,0,0,0,0.42,1
0.46,1,0,1,0,0.14,1,0,0,0.06,0,1,1,0,1,0.37,0.64,1,0.14,0.22,0.17,0.1,0.94,0.65,0.22,0.06,0.75,0.64,0.3,1,1,0,0,0.2,0
0.55,1,0,1,0,0.14,1,0.5,1,0.03,1,1,0,1,1,0.42,0.18,0,0.16,0.55,0.16,0.12,0.73,0.55,0.2,0.03,0.54,0.44,0.35,1,1,0,0,0.11,0
0.67,1,0,1,0,0.71,0,0.5,0,0.46,1,0,1,1,1,0.74,0.41,0,0.1,0.6,0.15,0.15,0.69,0.42,0.27,0.04,0.61,0.48,0.54,1,1,0,0,0.22,1
0.52,1,0,1,0,0.21,1,0.5,0,0.01,1,1,1,0.67,0,0.27,0.64,0,0.08,0.34,0.14,0.21,0.85,0.51,0.2,0.05,0.51,0.36,0.36,1,1,0,0,0.23,0
0.58,1,0.38,1,0,0.36,1,0.5,1,0.02,0,1,0,1,1,0.38,0.55,1,0.13,0.57,0.21,0.23,0.73,0.52,0.19,0.03,0,0,0.6,1,0,0,0,0.42,0
0.66,1,0,1,0,0.07,1,0,0,0.06,1,0,0,1,1,0.24,0.32,1,0.06,0.45,0.16,0.13,0.92,0.57,0.27,0.06,0,0,0.55,1,0,0,0,0.33,0
0.39,1,0.5,1,0,0.29,1,0,1,0.06,0,0,0,1,1,0.34,0.45,1,0.1,0.31,0.12,0.16,0.81,0.54,0.21,0.02,0.51,0.27,0.5,1,1,0,0,0.32,0
0.26,0,0,1,0,0.21,1,0,0,0.02,1,1,1,0,1,0.17,0.36,0,0.19,0.41,0.24,0.26,0.73,0.55,0.22,0.41,0.46,0.43,0.42,1,1,0,0,0.52,0
0.96,0,0.63,1,0,0.86,1,0,1,0.06,1,1,1,0,0,0.41,0.5,1,0.08,0.64,0.23,0.19,0.69,0.45,0.23,0.06,0.72,0.43,0.45,1,1,0,0,0.53,0
0.58,0,0.25,1,0,0.29,1,0,1,0.04,1,0,0,0,1,0.4,0.27,1,0.09,0.65,0.21,0.16,0.8,0.57,0.24,0.02,0.51,0.28,0.5,1,1,1,0,0.63,0
0.6,1,0.5,1,0,0.73,1,0.5,1,0.04,1,0,1,0,1,0.85,0.64,1,0.16,0.71,0.24,0.21,0.72,0.45,0.23,0.1,0.63,0.57,0.13,1,1,1,1,0.65,0
0.72,1,0.25,1,0,0.29,1,0,0,0.06,1,0,0,1,1,0.31,0.41,1,0.17,0.78,0.24,0.16,0.75,0.54,0.27,0.09,0.78,0.68,0.19,1,1,1,1,0.75,0
0.56,0,0.13,1,0,0.4,1,0,0,0.23,1,0,0,1,1,0.42,1,0,0.03,0.14,0.15,0.13,0.85,0.52,0.24,0.06,0,0,0.56,1,0,0,0,0.33,0
0.67,0,0,1,0,0.57,1,0,1,0.02,0,0,0,1,1,0.38,0.36,0,0.08,0.12,0.11,0.14,0.8,0.49,0.22,0.05,0,0,0.6,1,0,0,0,0.22,0
0.67,0,0,1,0,0.36,1,0,0,0.23,0,1,0,0,0,0.32,0.73,0,0.25,0.86,0.26,0.16,0.62,0.35,0.25,0.02,0.46,0.43,0.45,1,1,1,0,0.76,0
0.55,1,0.5,1,0,0.57,0,0.5,1,0.12,1,1,1,0.67,1,1,0.45,0,0.19,0.94,0.19,0.22,0.88,0.41,0.35,0.15,0.47,0.4,0.05,1,1,1,0,0.56,1
0.61,0,0,1,0,0.43,1,0.5,1,0.04,1,0,1,0,0,0.68,0.23,1,0.12,0.68,0.25,0.29,0.68,0.45,0.29,0.13,0.58,0.41,0.11,1,1,1,1,0.74,0
0.59,1,0.25,1,0,0.23,1,0.5,0,0.02,1,1,1,0,1,0.57,0.41,1,0.08,0.05,0.16,0.15,0.87,0.61,0.25,0.04,0.67,0.61,0.45,1,1,0,0,0.65,0
0.74,1,0.5,1,0,0.26,1,0,1,0.01,1,1,1,1,0,0.76,0.36,0,0.14,0.72,0.12,0.13,0.68,0.54,0.54,0.17,0.93,0.82,0.12,1,1,0,0,0.18,0
0.64,0,0,1,0,0.29,0,0,1,0.15,0,0,1,0,1,0.33,0.45,0,0.11,0.55,0.25,0.15,0.75,0.54,0.27,0.05,0.61,0.64,0.43,1,1,0,0,0.23,1
0.36,0,0.38,1,0,0.14,0,0.5,0,0.02,1,1,1,0.33,1,0.18,0.36,0,0.17,0.79,0.21,0.12,0.75,0.54,0.24,0.05,0,0,0.52,1,0,0,0,0.44,1
0.52,0,0.75,1,0,0.14,1,0.5,0,0.04,1,1,1,0,1,0.36,0.68,1,0.08,0.34,0.12,0.13,0.79,0.59,0.22,0.02,0,0,0.5,1,0,0,0,0.23,0
0.59,0,0.75,1,0,0.29,1,0,0,0.06,1,1,0,0,1,0.24,0.27,0,0.12,0.7,0.2,0.16,0.74,0.45,0.26,0.02,0.46,0.32,0.52,1,0,0,0,0.33,0
0.72,1,0.38,1,0,0.43,0,0.5,0,0.06,1,0,1,0.67,1,0.53,0.32,0,0.2,0.68,0.16,0.13,0.79,0.45,0.25,0.09,0.61,0.57,0.15,1,1,0,0,0.22,1

Here is my Torch Lua code:

-- add comma to separate thousands
function comma_value(amount)
  local formatted = amount
  while true do  
    formatted, k = string.gsub(formatted, "^(-?%d+)(%d%d%d)", '%1,%2')
    if (k==0) then
      break
    end
  end
  return formatted
end

-- function that computes the confusion matrix …

lua neural-network torch

Score: 7 · 1 answer · 1,716 views

Training a CNN-LSTM end-to-end?

There have been many papers (especially on image captioning) that use CNN and LSTM architectures jointly for prediction and generation tasks. However, they all seem to train the CNN independently of the LSTM. I have been looking through Torch and TensorFlow (with Keras) and cannot see why end-to-end training would not be possible, at least from an architecture-design standpoint, yet there does not seem to be any documentation for such a model.

So, can it be done? Do Torch or TensorFlow (or even Theano or Caffe) support jointly training a CNN-LSTM neural network end-to-end? If so, is it simply a matter of chaining the CNN's output to the LSTM's input and running SGD, or is it more complicated than that?
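
For what it's worth, here is a minimal PyTorch-style sketch of the "chain the CNN's output to the LSTM's input and run SGD" idea; the CNNLSTM class, layer sizes, and data shapes below are purely illustrative and not taken from any particular paper:

import torch
import torch.nn as nn

# Illustrative sketch: a CNN encoder feeding an LSTM, trained end-to-end because
# a single optimizer owns the parameters of both sub-modules.
class CNNLSTM(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.lstm = nn.LSTM(input_size=16 * 4 * 4, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, frames):                   # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))   # run the CNN on every frame
        feats = feats.flatten(1).view(b, t, -1)  # back to (batch, time, features)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])             # predict from the last timestep

model = CNNLSTM()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # one optimizer over CNN and LSTM
loss = nn.CrossEntropyLoss()(model(torch.randn(2, 8, 3, 32, 32)), torch.tensor([1, 3]))
loss.backward()   # gradients flow through the LSTM back into the CNN
optimizer.step()

Because both sub-modules sit in one parameter list, backpropagation through the LSTM continues into the CNN, which is essentially all "end-to-end" means here.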

neural-network deep-learning torch tensorflow

Score: 7 · 1 answer · 1,249 views

Getting a subset of a PyTorch dataset

I have a network that I want to train on some dataset (say, CIFAR10). I can create a data loader object via

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

My question is the following: suppose I want to run several different training iterations. Say I first want to train the network on all images in odd positions, then on all images in even positions, and so on. To do that, I need to be able to access those images. Unfortunately, trainset does not seem to allow that kind of access; that is, trying trainset[:1000] or, more generally, trainset[mask] throws an error.

I can do

trainset.train_data=trainset.train_data[mask]
trainset.train_labels=trainset.train_labels[mask]

and then

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)

However, this forces me to create a new copy of the full dataset in every iteration (since I have already changed trainset.train_data, I would need to redefine trainset). Is there any way to avoid that?

Ideally, I would like to have something "equivalent" to

trainloader = torch.utils.data.DataLoader(trainset[mask], batch_size=4,
                                              shuffle=True, num_workers=2)
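
One possible way to avoid the copy, sketched below under the assumption that wrapping the dataset is acceptable, is torch.utils.data.Subset, which pairs the trainset object defined above with a list of indices instead of slicing its data (the odd/even index lists here are just for illustration):

import torch
from torch.utils.data import Subset

# Index lists for the two hypothetical training phases (odd and even positions).
odd_indices = list(range(1, len(trainset), 2))
even_indices = list(range(0, len(trainset), 2))

# Subset keeps a reference to trainset plus the indices; no data is copied.
odd_loader = torch.utils.data.DataLoader(Subset(trainset, odd_indices),
                                         batch_size=4, shuffle=True, num_workers=2)
even_loader = torch.utils.data.DataLoader(Subset(trainset, even_indices),
                                          batch_size=4, shuffle=True, num_workers=2)

Since Subset only stores indices, each training phase sees a different view of the same underlying dataset without any duplication.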

python machine-learning neural-network torch pytorch

Score: 7 · 2 answers · 9,886 views

PyTorch: error message "torch has no member"

Good evening. I have just installed PyTorch 0.4.0 and I am trying to work through the first tutorial, "What is PyTorch?". I wrote a Tutorial.py file and am trying to execute it with Visual Studio Code.

Here is the code:

from __future__ import print_function
import torch

print (torch.__version__)

x = torch.rand(5, 3)
print(x)

Unfortunately, when I try to debug it, I get the error message: "torch has no rand member".

The same happens for every torch function I try to use.

Can somebody help me?

torch pytorch

Score: 7 · 2 answers · 6,088 views

BertTokenizer - extra spaces when encoding and decoding sequences

When using HuggingFace's Transformers, I ran into a problem with the encode and decode methods.

I have the following string:

test_string = 'text with percentage%'

Then I run the following code:

import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

test_string = 'text with percentage%'

# encode Converts a string in a sequence of ids (integer), using the tokenizer and vocabulary.
input_ids = tokenizer.encode(test_string)
output = tokenizer.decode(input_ids)

The output looks like this:

'text with percentage %'

There is an extra space before the %. I have already tried the clean_up_tokenization_spaces argument, but that addresses something different.

How should I encode and decode so that I get back exactly the same text as before? This also happens with other special symbols.
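
A sketch of one possible workaround, assuming the fast tokenizer variant (BertTokenizerFast) is an option: recover the surface text from character offsets instead of from decode, since WordPiece decoding is not guaranteed to reproduce the original spacing around punctuation:

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
test_string = 'text with percentage%'

# Fast tokenizers can return character offsets into the original string.
encoding = tokenizer(test_string, return_offsets_mapping=True)

# Special tokens ([CLS]/[SEP]) carry the dummy offset (0, 0); skip them.
spans = [(s, e) for s, e in encoding['offset_mapping'] if (s, e) != (0, 0)]
recovered = test_string[spans[0][0]:spans[-1][1]]
print(recovered)  # 'text with percentage%'

This sidesteps decode entirely, so it only helps when the original string is still available; it does not make decode itself lossless.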

python tokenize torch pytorch bert-language-model

Score: 7 · 1 answer · 4,265 views

"TorchScript requires source access in order to carry out compilation" error after converting a script to an exe

I am trying to convert a script to an exe using pyinstaller. The script uses the Inception ResNet V1 model from facenet_pytorch by Tim Esler, found here.

After running the converted exe I get the following error:

Traceback (most recent call last):
  File "site-packages\torch\_utils_internal.py", line 46, in get_source_lines_and_file
  File "inspect.py", line 955, in getsourcelines
  File "inspect.py", line 786, in findsource
OSError: could not get source code

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Fcenet-Pytorch\Test Rec2.py", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "c:\users\jorda\appdata\local\programs\python\python37\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module
    exec(bytecode, module.__dict__)
  File "site-packages\facenet_pytorch\__init__.py", line 1, in <module>
    # -*- coding: utf-8 -*-
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in …

python pyinstaller torch pytorch torchvision

Score: 7 · 0 answers · 824 views

with torch.no_grad: AttributeError: __enter__

with torch.no_grad:AttributeError: __enter__

I am getting this error when running PyTorch code.

I have torch==0.4.1 and torchvision==0.3.0, and I am running the code in Google Colab.
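
For reference, a minimal sketch of the usual cause with these versions: torch.no_grad is a class, so the with statement needs an instance, and using the bare class name is what raises AttributeError: __enter__.

import torch

x = torch.ones(3, requires_grad=True)

# with torch.no_grad:    # bare class has no __enter__, hence the AttributeError
with torch.no_grad():    # calling it returns a usable context manager
    y = x * 2

print(y.requires_grad)   # False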

machine-learning deep-learning torch pytorch

Score: 7 · 1 answer · 5,164 views

What is the difference between torch.nn.Softmax, torch.nn.functional.softmax, torch.softmax and torch.nn.functional.log_softmax?

I tried looking it up in the documentation, but I could not find anything about torch.softmax.

What are the differences between torch.nn.Softmax, torch.nn.functional.softmax, torch.softmax, and torch.nn.functional.log_softmax?

Examples would be appreciated.
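
As a small sketch of how these relate (the tensor x below is arbitrary): nn.Softmax is a Module wrapper around the same operation, F.softmax and torch.softmax are the functional forms, and log_softmax computes log(softmax(x)) in a numerically safer way.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 5)

softmax_module = nn.Softmax(dim=1)  # a Module, so it can live inside nn.Sequential
print(torch.allclose(softmax_module(x), F.softmax(x, dim=1)))        # True
print(torch.allclose(F.softmax(x, dim=1), torch.softmax(x, dim=1)))  # True

# log_softmax matches log(softmax(x)) but is computed in a more numerically stable way.
print(torch.allclose(F.log_softmax(x, dim=1), torch.log(F.softmax(x, dim=1))))  # True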

python torch softmax pytorch

Score: 7 · 2 answers · 10k views