CUBLAS matrix multiplication with row-major data, without transposing

Asked by Ere*_*rel · Tags: c++, cuda, cublas

I am currently trying to implement matrix multiplication on my GPU with CUBLAS.

It works for square matrices and for certain input sizes, but for others the last row is not returned (it contains 0 instead, because that is how I initialized the result vector).

I assume it is a problem with either the allocation or the syntax of the cublasSgemm call, but I cannot find where.

Note, if you are unfamiliar with CUBLAS: it is column-major, which is why the operations look as if they are performed the other way around.

Any help would be appreciated.


The code:

Please note that gpuErrchk and cublasErrchk are, of course, irrelevant here.
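For reference, here is a minimal sketch of what such error-checking macros typically look like (this is my assumption about their shape, since they were omitted from the post; they belong after the cuda_runtime.h and cublas_v2.h includes):

#include <cstdio>
#include <cstdlib>

// Abort with a diagnostic if a CUDA runtime call fails.
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line){
    if(code != cudaSuccess){
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        exit(code);
    }
}

// Abort with a diagnostic if a CUBLAS call fails.
#define cublasErrchk(ans) { cublasAssert((ans), __FILE__, __LINE__); }
inline void cublasAssert(cublasStatus_t code, const char *file, int line){
    if(code != CUBLAS_STATUS_SUCCESS){
        fprintf(stderr, "CUBLASassert: status %d %s %d\n", (int)code, file, line);
        exit((int)code);
    }
}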

#include <cuda.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#include <vector>
#include <cstdint> // for uint64_t

std::vector<float> CUDA_mult_MAT(const std::vector<float> &data_1 , const uint64_t data_1_rows, const uint64_t data_1_columns,
                                 const std::vector<float> &data_2 , const uint64_t data_2_rows, const uint64_t data_2_columns){

    cublasHandle_t handle;

    cublasErrchk(cublasCreate(&handle));

    std::vector<float> result(data_1_rows * data_2_columns); //Vector holding the result of the multiplication

    /*----------------------------------------------------------------------------------------------*/

    float* GPU_data_1 = NULL;
    gpuErrchk(cudaMalloc((void**)&GPU_data_1 , data_1.size()*sizeof(float))); //Allocate memory on the GPU
    gpuErrchk(cudaMemcpy(GPU_data_1, data_1.data(), data_1.size()*sizeof(float), cudaMemcpyHostToDevice)); //Copy data from data_1 to GPU_data_1

    float* GPU_data_2 = NULL;
    gpuErrchk(cudaMalloc((void**)&GPU_data_2 ,data_2.size()*sizeof(float))); //Allocate memory on the GPU
    gpuErrchk(cudaMemcpy(GPU_data_2, data_2.data(), data_2.size()*sizeof(float), cudaMemcpyHostToDevice));//Copy data from data_2 to GPU_data_2

    float* GPU_result = NULL;
    gpuErrchk(cudaMalloc((void**)&GPU_result , result.size()*sizeof(float))); //Allocate memory on the GPU

    /*----------------------------------------------------------------------------------------------*/


    const float alpha = 1.f; 
    const float beta = 0.f;

    cublasErrchk(
               cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                           data_2_columns , data_2_rows ,data_1_columns,
                           &alpha , GPU_data_2 , data_2_columns,
                           GPU_data_1 , data_1_columns,
                           &beta , GPU_result , data_1_rows)
             ); //Perform multiplication 



    gpuErrchk(cudaMemcpy(result.data() , GPU_result , result.size() * sizeof(float) , cudaMemcpyDeviceToHost)); //Copy back to the vector 'result'

    gpuErrchk(cudaFree(GPU_data_1)); //Free GPU memory
    gpuErrchk(cudaFree(GPU_data_2)); //Free GPU memory
    gpuErrchk(cudaFree(GPU_result)); //Free GPU memory

    cublasErrchk(cublasDestroy_v2(handle)); 


    return result;


}


Input:


#include <iostream>

#include <vector>

int main(){

    const std::vector<float> r1 =  CUDA_mult_MAT({1 , 2 , 3 , 4 , 5 , 6} , 2 , 3 ,
                                           {7 , 8 , 9 , 10 , 11 , 12} , 3 , 2);
/*
Product :
         7  8
1 2 3    9  10
4 5 6    11 12

*/

    for(auto & value: r1){std::cout << value << " " ;}
    std::cout << std::endl;

    const std::vector<float> r2 =  CUDA_mult_MAT({7 , 8 , 9 , 10 , 11 , 12} , 3 , 2 ,
                                           {1 , 2 , 3 , 4 , 5 , 6} , 2 , 3);
/*
Product :
7  8   
9  10   1  2  3
11 12   4  5  6
*/


    for(auto & value: r2){std::cout << value << " " ;}
    std::cout << std::endl;

    return 0;
}
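For reference, this kind of example builds with something along these lines (assuming the two snippets are combined into a single t23.cu file; the file name is just my choice, matching the ./t23 seen in the answer below):

nvcc -o t23 t23.cu -lcublas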

Output:

Printed by the program:

58 64 139 154 
39 54 69 49 68 87 0 0 0
                  ^~~~~~~

Expected:

58 64 139 154 
39 54 69 49 68 87 59 82 105
                  ^~~~~~~

Answered by Rob*_*lla

We can observe the problem with your CUBLAS usage in several different ways.

First, studying the CUBLAS Sgemm documentation, we see that the three parameters m, n, and k appear, in that order, right after the transpose specifiers:

cublasStatus_t cublasSgemm(cublasHandle_t handle,
                       cublasOperation_t transa, cublasOperation_t transb,
                       int m, int n, int k, 
                           ^      ^      ^

We also observe that the matrix dimensions are given by:

A, B and C are matrices stored in column-major format with dimensions op(A) m × k, op(B) k × n and C m × n,

So the first input matrix is of dimensions m x k, the second input matrix is of dimensions k x n, and the output matrix is of dimensions m x n.

Let's just focus on the output matrix for a moment. Given that its dimensions are specified by the m and n parameters, it cannot possibly be correct (in the non-square case, at least) to pass the dimensions of data_2 for those:

           cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                       data_2_columns , data_2_rows ,data_1_columns,
                       ^^^^^^^^^^^^^^   ^^^^^^^^^^^
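To make this concrete with the second test case from main() (my arithmetic, not part of the original answer): data_1 is 3 x 2 and data_2 is 2 x 3, so the product should be 3 x 3, i.e. 9 elements. The call above passes m = data_2_columns = 3 and n = data_2_rows = 2, so CUBLAS computes and writes only a 3 x 2 output, i.e. 6 values, and never touches the last 3 entries of the 9-element result buffer. That matches the observed failure exactly: six correct values followed by zeros.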

Second, from an error-checking standpoint, you can get a quick indication that something is wrong with the CUBLAS call by running your code with cuda-memcheck. The first error reported is as follows:

$ cuda-memcheck ./t23
========= CUDA-MEMCHECK
========= Invalid __global__ read of size 4
=========     at 0x000006f0 in void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>)
=========     by thread (64,0,0) in block (0,0,0)
=========     Address 0x7f9c30a2061c is out of bounds
=========     Device Frame:void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>) (void gemmSN_NN_kernel<float, int=256, int=4, int=2, int=8, int=3, int=4, bool=0, cublasGemvTensorStridedBatched<float const >, cublasGemvTensorStridedBatched<float>>(cublasGemmSmallNParams<float const , cublasGemvTensorStridedBatched<float const >, float>) : 0x6f0)
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib/x86_64-linux-gnu/libcuda.so.1 (cuLaunchKernel + 0x2b8) [0x1e5cc8]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x1063c8b]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x10a9965]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x6bfacc]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x5fc7af]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 [0x436c35]
=========     Host Frame:/usr/local/cuda/lib64/libcublasLt.so.11 (cublasLtMatmul + 0x60f) [0x43484f]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 [0x9ef6db]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 [0x50e4f0]
=========     Host Frame:/usr/local/cuda/lib64/libcublas.so.11 (cublasSgemm_v2 + 0x1ee) [0x50f29e]
=========     Host Frame:./t23 [0x7986]
=========     Host Frame:./t23 [0x7b4c]
=========     Host Frame:/lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main + 0xe7) [0x21b97]
=========     Host Frame:./t23 [0x744a]
=========

Of course, one possible solution is to transpose the input matrices so that they are in column-major order, and CUBLAS provides options in Sgemm to do that (see above). However, it appears to me that what you are trying to do is C-style row-major multiplication without transposing the input arrays. There is an article here which gives a description of how to do that.
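The idea behind that heuristic, in brief (my summary of the linked approach): a row-major M x N array has exactly the same memory layout as a column-major N x M array, i.e. its transpose. So instead of the row-major product C = A * B, we ask CUBLAS for the column-major product C^T = B^T * A^T, which touches the same bytes:

row-major:    C   (M x N) = A   (M x K) * B   (K x N)
column-major: C^T (N x M) = B^T (N x K) * A^T (K x M)

cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
            N /*m*/, M /*n*/, K /*k*/,
            &alpha, B, N,    // lda: row length of row-major B
            A, K,            // ldb: row length of row-major A
            &beta, C, N);    // ldc: row length of row-major C

The N x M column-major result then reads back as the desired M x N row-major C, with no transposes actually performed.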

When I apply that heuristic to your cublasSgemm call, I get the following:

           cublasSgemm(handle , CUBLAS_OP_N , CUBLAS_OP_N,
                       data_2_columns , data_1_rows ,data_1_columns,
                       &alpha , GPU_data_2 , data_2_columns,
                       GPU_data_1 , data_1_columns,
                       &beta , GPU_result , data_2_columns)
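If this pattern recurs, it can be worth wrapping the dimension juggling once. A minimal sketch (the helper name and signature are my own, not part of the original code):

// Row-major C (rows_A x cols_B) = A (rows_A x cols_A) * B (cols_A x cols_B),
// computed via the column-major identity C^T = B^T * A^T.
void sgemm_row_major(cublasHandle_t handle,
                     const float *A, const float *B, float *C,
                     int rows_A, int cols_A, int cols_B){
    const float alpha = 1.f;
    const float beta  = 0.f;
    cublasErrchk(cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                             cols_B,              // m: columns of C
                             rows_A,              // n: rows of C
                             cols_A,              // k: shared inner dimension
                             &alpha, B, cols_B,   // leading dim = row length of B
                             A, cols_A,           // leading dim = row length of A
                             &beta, C, cols_B));  // leading dim = row length of C
}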

When I compile and run your code with those changes, I get the following:

$ cuda-memcheck ./t23
========= CUDA-MEMCHECK
58 64 139 154
39 54 69 49 68 87 59 82 105
========= ERROR SUMMARY: 0 errors
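As a final sanity check, the GPU result is easy to compare against a naive CPU reference (a small sketch I would use for testing; it is not part of the original answer):

#include <vector>
#include <cstdint>

// Naive row-major reference multiplication, for validating CUDA_mult_MAT.
std::vector<float> CPU_mult_MAT(const std::vector<float> &data_1, uint64_t rows_1, uint64_t cols_1,
                                const std::vector<float> &data_2, uint64_t cols_2){
    std::vector<float> result(rows_1 * cols_2, 0.f);
    for(uint64_t i = 0; i < rows_1; ++i)
        for(uint64_t j = 0; j < cols_2; ++j)
            for(uint64_t k = 0; k < cols_1; ++k)
                result[i * cols_2 + j] += data_1[i * cols_1 + k] * data_2[k * cols_2 + j];
    return result;
}

Comparing its output element by element against CUDA_mult_MAT catches dimension mistakes like the one diagnosed above.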