Tag: torch

Trying to understand the code that computes the gradient of the input to LogSoftMax in Torch

The code comes from: https://github.com/torch/nn/blob/master/lib/THNN/generic/LogSoftMax.c

I don't see how this code computes the gradient of the input to the LogSoftMax module. What confuses me is what the two for loops are doing.

for (t = 0; t < nframe; t++)
{
  sum = 0;
  gradInput_data  = gradInput_data0  + dim*t;
  output_data     = output_data0     + dim*t;
  gradOutput_data = gradOutput_data0 + dim*t;

  /* first loop: accumulate the incoming gradient over the class dimension */
  for (d = 0; d < dim; d++)
    sum += gradOutput_data[d];

  /* second loop: gradInput = gradOutput - exp(output) * sum,
     where exp(output) = exp(log_softmax(input)) = softmax(input) */
  for (d = 0; d < dim; d++)
    gradInput_data[d] = gradOutput_data[d] - exp(output_data[d])*sum;
}
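
The two loops implement the log-softmax backward rule: since output already holds log_softmax(input), exp(output) is the softmax, and gradInput[d] = gradOutput[d] - exp(output[d]) * sum_k gradOutput[k]. Below is a small numerical check of that formula, a hedged sketch that uses PyTorch purely for verification (the C code above belongs to the older Lua Torch backend):

import torch

x = torch.randn(3, 5, requires_grad=True)        # nframe = 3, dim = 5
out = torch.log_softmax(x, dim=1)                 # "output" in the C code
grad_out = torch.randn(3, 5)                      # "gradOutput" in the C code

# Same as the two loops: per frame, sum the incoming gradient, then
# gradInput = gradOutput - exp(output) * sum.
sum_go = grad_out.sum(dim=1, keepdim=True)
grad_in_manual = grad_out - torch.exp(out) * sum_go

out.backward(grad_out)                            # let autograd compute the gradient w.r.t. x
print(torch.allclose(x.grad, grad_in_manual))     # True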

mathematical-optimization gradient-descent torch softmax

0 votes · 1 answer · 1549 views

Get the activations of an intermediate layer for a given input example

Suppose I have defined my sequential model as follows:

require 'nn'
net = nn.Sequential()
net:add(nn.SpatialConvolution(1, 6, 5, 5)) -- 1 input image channel, 6 output channels, 5x5 convolution kernel
net:add(nn.ReLU())                       -- non-linearity 
net:add(nn.SpatialMaxPooling(2,2,2,2))     -- A max-pooling operation that looks at 2x2 windows and finds the max.
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.ReLU())                       -- non-linearity 
net:add(nn.SpatialMaxPooling(2,2,2,2))
net:add(nn.View(16*5*5))                    -- reshapes from a 3D tensor of 16x5x5 into 1D tensor of 16*5*5
net:add(nn.Linear(16*5*5, 120))             -- fully connected layer (matrix multiplication between input and weights)
net:add(nn.ReLU())                       -- non-linearity 
net:add(nn.Linear(120, 84))
net:add(nn.ReLU())                       -- non-linearity …
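
In Lua Torch every nn module caches its most recent output in module.output, so after net:forward(input) the activation of layer i can be read as net:get(i).output. For comparison, here is a hedged sketch of the same idea in PyTorch (a hypothetical analogue of the model above with the same layer sizes, reading an intermediate activation through a forward hook):

import torch
import torch.nn as nn

# Hypothetical PyTorch analogue of the Lua model above.
net = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
)

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Record the output of the first ReLU (index 1 in the Sequential).
net[1].register_forward_hook(save_activation("relu1"))

x = torch.randn(1, 1, 32, 32)          # LeNet-style 32x32 input
net(x)
print(activations["relu1"].shape)       # torch.Size([1, 6, 28, 28])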

machine-learning neural-network torch conv-neural-network

0 votes · 1 answer · 670 views

Packages in iTorch

When I run a piece of code containing require 'nn' in iTorch, I get the following error in Jupyter:

[string "require 'nn'..."]:1: module 'nn' not found:
no field package.preload['nn']
no file '/usr/local/share/lua/5.2/nn.lua'
no file '/usr/local/share/lua/5.2/nn/init.lua'
no file '/usr/local/lib/lua/5.2/nn.lua'
no file '/usr/local/lib/lua/5.2/nn/init.lua'
no file './nn.lua'
no file '/usr/local/lib/lua/5.2/nn.so'
no file '/usr/local/lib/lua/5.2/loadall.so'
no file './nn.so'
stack traceback:
/usr/local/share/lua/5.2/itorch/main.lua:166: in function        </usr/local/share/lua/5.2/itorch/main.lua:159>
[C]: in function 'require'
[string "require 'nn'..."]:1: in main chunk
[C]: in function 'xpcall'
/usr/local/share/lua/5.2/itorch/main.lua:209: in function     </usr/local/share/lua/5.2/itorch/main.lua:173>
(...tail calls...)
/usr/local/share/lua/5.2/lzmq/poller.lua:75: in function 'poll'
/usr/local/share/lua/5.2/lzmq/impl/loop.lua:307: in function 'poll'
/usr/local/share/lua/5.2/lzmq/impl/loop.lua:325: in function 'sleep_ex'
/usr/local/share/lua/5.2/lzmq/impl/loop.lua:370: in …

lua torch jupyter jupyter-notebook

0 votes · 1 answer · 2625 views

How do I get the index of an element in a PyTorch Tensor?

I have 2 tensors named x and list, defined as follows:

x = torch.tensor(3)
list = torch.tensor([1,2,3,4,5])

Now I want to get the index of the element x in list. The expected output is an integer:

2

How can I do this in a simple way?
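
A minimal sketch of one common way to do this: compare element-wise and take the index of the match with nonzero (the tensor is renamed lst here to avoid shadowing Python's built-in list):

import torch

x = torch.tensor(3)
lst = torch.tensor([1, 2, 3, 4, 5])

# (lst == x) is a boolean mask; nonzero() gives the positions where it is True.
idx = (lst == x).nonzero(as_tuple=True)[0]   # tensor([2])
print(idx.item())                            # 2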

python torch pytorch tensor

0 votes · 1 answer · 4952 views

I defined a loss function, but backward() raises an error; could someone tell me how to fix it?

import numpy as np
from torch.autograd import Function

class loss(Function):
    @staticmethod
    def forward(ctx,x,INPUT):

        batch_size = x.shape[0]
        X = x.detach().numpy()
        input = INPUT.detach().numpy()
        Loss = 0
        for i in range(batch_size):
            t_R_r = input[i,0:4]
            R_r = t_R_r[np.newaxis,:]
            t_R_i = input[i,4:8]
            R_i = t_R_i[np.newaxis,:]
            t_H_r = input[i,8:12]
            H_r = t_H_r[np.newaxis,:]
            t_H_i = input[i,12:16]
            H_i = t_H_i[np.newaxis,:]

            t_T_r = input[i, 16:32]
            T_r = t_T_r.reshape(4,4)
            t_T_i = input[i, 32:48]
            T_i = t_T_i.reshape(4,4)

            R = np.concatenate((R_r, R_i), axis=1)
            H = np.concatenate((H_r, H_i), axis=1)


            temp_t1 = np.concatenate((T_r,T_i),axis=1)
            temp_t2 = np.concatenate((-T_i,T_r),axis=1)
            T = np.concatenate((temp_t1,temp_t2),axis=0)
            phi_r = …
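
The full code is cut off, but the usual reason backward() fails with a custom Function is that forward() leaves the autograd graph via .detach().numpy() while no backward() is defined, so autograd has nothing to call. Here is a hedged sketch, with a made-up squared-error loss rather than the question's loss, of the shape a working custom Function takes: backward() must be a @staticmethod returning one gradient per forward() argument.

import torch
from torch.autograd import Function

class MyLoss(Function):
    @staticmethod
    def forward(ctx, x, target):
        diff = (x - target).detach()          # leaving the graph is fine here...
        ctx.save_for_backward(diff)           # ...because backward() is supplied below
        return (diff ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        (diff,) = ctx.saved_tensors
        grad_x = grad_output * 2.0 * diff / diff.numel()
        return grad_x, None                   # no gradient needed for `target`

x = torch.randn(4, 3, requires_grad=True)
target = torch.randn(4, 3)
MyLoss.apply(x, target).backward()
print(x.grad.shape)                           # torch.Size([4, 3])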

torch pytorch

0 votes · 1 answer · 2828 views

Concatenating two torch tensors of different shapes in PyTorch

I have two torch tensors. One has shape [64, 4, 300] and the other has shape [64, 300]. How can I concatenate these two tensors to get a combined tensor of shape [64, 5, 300]? I know the tensor.cat function is used for this, but to use it I need to reshape the second tensor to match the number of dimensions of the first. I've heard that tensors shouldn't be reshaped because it can mess up the data in them. How do I perform this concatenation?

I tried reshaping, but the following made me even more suspicious of this reshape:

a = torch.rand(64,300)

a1 = a.reshape(64,1,300)

list(a1[0]) == list(a)
Out[32]: False
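
A hedged sketch of the usual approach: unsqueeze adds a size-1 dimension as a view (no data is moved), after which torch.cat can join the tensors along dim 1. The False above comes from comparing two Python lists of tensors that don't even have the same length (list(a1[0]) has 1 element, list(a) has 64), not from the data being scrambled; torch.equal confirms the values survive the reshape:

import torch

t1 = torch.rand(64, 4, 300)
t2 = torch.rand(64, 300)

c = torch.cat([t1, t2.unsqueeze(1)], dim=1)   # [64, 300] -> [64, 1, 300], then join on dim 1
print(c.shape)                                # torch.Size([64, 5, 300])

a = torch.rand(64, 300)                        # same experiment as in the question
a1 = a.reshape(64, 1, 300)
print(torch.equal(a1[:, 0, :], a))             # True: reshape kept every value in place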

concat concatenation torch pytorch tensor

0 votes · 1 answer · 2111 views

lua:15: unexpected symbol near '['

I'm trying to write a function that creates a CNN model. Whenever I run the script, I get the following error:

lua:15: unexpected symbol near '['

require('torch')

require('nn')

function CeateNvidiaModel()

    --The Nvidia model
    --Input dimensions
    local n_channels = 3
    local height = 66
    local width = 200
    local nvidia_model = nn.Sequential();
    --nvida_model:add(nn.Normalize()
    --Convolutional Layers
    nvidia_model:add(nn.SpatialConvolution(n_channels, 24, 5, 5, [2], [2]))
    nvidia_model:add(nn.ELU(true))
    nvidia_model:add(nn.SpatialConvolution(24, 36, 5, 5, [2], [2]))
    nvidia_model:add(nn.ELU(true))
    nvidia_model:add(nn.SpatialConvolution(36, 48, 5, 5, [2], [2]))
    nvidia_model:add(nn.ELU(true))
    nvidia_model:add(nn.SpatialConvolution(48, 64, 3, 3))
    nvidia_model:add(nn.ELU(true))
    nvidia_model:add(nn.SpatialConvolution(64, 64, 3, 3))
    nvidia_model:add(nn.ELU(true))
    -- Flatten Layer
    nvidia_model:add(nn.Reshape(1164))
    -- FC Layers
    nvida_model:add(nn.Linear(1164, 100))
    nvidia_model:add(nn.ELU(true))
    nvida_model:add(nn.Linear(100, 50))
    nvidia_model:add(nn.ELU(true))
    nvida_model:add(nn.Linear(50, 10)) …

lua torch

0 votes · 1 answer · 3796 views

Undefined reference errors despite specifying the library in CMake (a problem linking against libtorch (C++11 ABI?))

I'm trying to build a test executable for libraries I made. Let's call them lib1 and lib2. lib1 builds and tests just fine. lib2 also builds without any problems. However, whenever I try to link lib2 against its test executable (i.e. an example program that uses lib2), I get the following error:

usr/bin/ld: CMakeFiles/Lib2_Test.dir/Lib2_Test.cpp.o: in function `main':
Lib2_Test.cpp:(.text+0xf3): undefined reference to `Lib2::Lib2(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, int)'
/usr/bin/ld: Lib2_Test.cpp:(.text+0x3f5): undefined reference to `Lib2::Evaluate(bool&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, float&, cv::Mat&, cv::Mat&, bool)'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/Lib2_Test.dir/build.make:130: Lib2_Test] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/Lib2_Test.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

I tried inspecting the headers with the readelf -d and ldd commands, and both libraries seem to have all the necessary references. Yet lib1 causes no problems, while lib2 produces the undefined-reference errors when linked into the executable that uses it.

Below are the CMakeLists I made for them; further down I have also included the readelf …

c++ linux cmake torch libtorch

0 votes · 1 answer · 1767 views

How to make sure .nonzero() returns a one-element tensor?

[Edited to include the original source code]

I tried running the code found here: https://colab.research.google.com/drive/1roZqqhsdpCXZr8kgV_Bx_ABVBPgea3lX?usp=sharing (linked from: https://www.youtube.com/watch?v=-lz30by8-sU)

!pip install transformers diffusers lpips accelerate
from huggingface_hub import notebook_login
notebook_login()

import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler
from tqdm.auto import tqdm
from torch import autocast
from PIL import Image
from matplotlib import pyplot as plt
import numpy
from torchvision import transforms as tfms

# For video display:
from IPython.display import HTML
from base64 import b64encode

# Set device
torch_device = "cuda" if torch.cuda.is_available() else "cpu" …
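
The notebook code is truncated above, but the usual pattern behind this question is indexing with the result of (tensor == value).nonzero(), which returns a 2-D tensor with one row per match. A hedged sketch of collapsing that to a single index, under the assumption that exactly one match is expected:

import torch

timesteps = torch.tensor([999, 799, 599, 399, 199])
t = 599

matches = (timesteps == t).nonzero()   # shape [num_matches, 1]
print(matches.shape)                   # torch.Size([1, 1])

# .item() only succeeds on a one-element tensor, so it both extracts the
# integer and asserts that there was exactly one match.
index = matches.item()
print(index)                           # 2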

python torch stable-diffusion

0 votes · 1 answer · 319 views

In Colab, CUDA cannot be used with torch

The error message is as follows:

RuntimeError                              Traceback (most recent call last)
<ipython-input-24-06e96beb03a5> in <module>()
     11 
     12 x_test = np.array(test_features)
---> 13 x_test_cuda = torch.tensor(x_test, dtype=torch.float).cuda()
     14 test = torch.utils.data.TensorDataset(x_test_cuda)
     15 test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
    160 class CudaError(RuntimeError):
    161     def __init__(self, code):
--> 162         msg = cudart().cudaGetErrorString(code).decode('utf-8')
    163         super(CudaError, self).__init__('{0} ({1})'.format(msg, code))
    164 

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51
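
The traceback says the runtime simply has no visible GPU; in Colab that usually means a GPU hardware accelerator has not been enabled under Runtime > Change runtime type. A defensive sketch (with placeholder data standing in for test_features) that selects the device instead of calling .cuda() unconditionally:

import numpy as np
import torch

# Fall back to CPU when no CUDA device is detected.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

x_test = np.random.rand(32, 10).astype(np.float32)           # stand-in for np.array(test_features)
x_test_t = torch.tensor(x_test, dtype=torch.float).to(device)
test = torch.utils.data.TensorDataset(x_test_t)
test_loader = torch.utils.data.DataLoader(test, batch_size=8, shuffle=False)
print(len(test_loader))                                       # 4 batches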

torch pytorch google-colaboratory

-2 votes · 1 answer · 2052 views