Armadillo + NVBLAS into RcppArmadillo + NVBLAS

Foo*_*ant 4 c++ cuda armadillo rcpp

TL;DR, for those who want to skip the whole story: is there a way to use RcppArmadillo with NVBLAS so that it runs on the GPU, in the same way I can interface Armadillo with NVBLAS from plain C++ code rather than from R?

I am trying to use the NVBLAS library (http://docs.nvidia.com/cuda/nvblas/) to speed up the linear algebra parts of my projects (computational statistics, mostly: MCMC, particle filters and all that good stuff) by offloading some computations to the GPU.

I mostly work with C++ code, in particular the Armadillo library for matrix computations, and from its FAQ I know I can use NVBLAS by linking Armadillo the right way (http://arma.sourceforge.net/faq.html).
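For reference, NVBLAS reads its settings from an nvblas.conf file at run time; a minimal one might look like the sketch below (the CPU BLAS path is an assumption, adjust it to your system):

```
# CPU BLAS that NVBLAS falls back to for routines it does not offload
NVBLAS_CPU_BLAS_LIB /usr/lib/libblas.so
# use every visible GPU for the intercepted Level-3 BLAS calls
NVBLAS_GPU_LIST ALL
# where NVBLAS writes its log messages
NVBLAS_LOGFILE nvblas.log
```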

So I set up my library installation and wrote the following dummy program:

#include <armadillo>

int main(){
    // dense matrices large enough for BLAS offloading to pay off
    arma::mat A = arma::randn<arma::mat>(3000, 2000);
    arma::mat B = cov(A);
    arma::vec V = arma::randn(2000);
    arma::mat C;
    arma::mat D;

    // repeat a few heavy operations so the profiler has something to see
    for(int i = 0; i < 20; ++i){
        C = solve(B, V);   // solve the linear system B*C = V
        D = inv(B);
    }

    return 0;
}

which I compiled with

g++ arma_try.cpp -o arma_try.so -larmadillo
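For comparison, the link line the Armadillo FAQ points towards for NVBLAS puts libnvblas ahead of the CPU BLAS/LAPACK so it can intercept the GEMM-class calls. A sketch, not verified on this machine (CUDA_HOME is an assumption):

```shell
# assumed default install location; override via the environment
CUDA_HOME="${CUDA_HOME:-/opt/cuda}"
# -lnvblas must precede any CPU BLAS so its symbols win at link time
link_line="g++ arma_try.cpp -o arma_try -O2 -L$CUDA_HOME/lib64 -lnvblas -llapack"
echo "$link_line"
```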

and profiled it with

nvprof ./arma_try.so

The profiler output shows:

==11798== Profiling application: ./arma_try.so
==11798== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 72.15%  4.41253s       580  7.6078ms  1.0360ms  14.673ms  void magma_lds128_dgemm_kernel<bool=0, bool=0, int=5, int=5, int=3, int=3, int=3>(int, int, int, double const *, int, double const *, int, double*, int, int, int, double const *, double const *, double, double, int)
 20.75%  1.26902s      1983  639.95us  1.3440us  2.9929ms  [CUDA memcpy HtoD]
  4.06%  248.17ms         1  248.17ms  248.17ms  248.17ms  void fermiDsyrk_v2_kernel_core<bool=1, bool=1, bool=0, bool=1>(double*, int, int, int, int, int, int, double const *, double const *, double, double, int)
  1.81%  110.54ms         1  110.54ms  110.54ms  110.54ms  void fermiDsyrk_v2_kernel_core<bool=0, bool=1, bool=0, bool=1>(double*, int, int, int, int, int, int, double const *, double const *, double, double, int)
  1.05%  64.023ms       581  110.19us  82.913us  12.211ms  [CUDA memcpy DtoH]
  0.11%  6.9438ms         1  6.9438ms  6.9438ms  6.9438ms  void gemm_kernel2x2_tile_multiple_core<double, bool=1, bool=0, bool=0, bool=1, bool=0>(double*, double const *, double const *, int, int, int, int, int, int, double*, double*, double, double, int)
  0.06%  3.3712ms         1  3.3712ms  3.3712ms  3.3712ms  void gemm_kernel2x2_core<double, bool=0, bool=0, bool=0, bool=1, bool=0>(double*, double const *, double const *, int, int, int, int, int, int, double*, double*, double, double, int)
  0.02%  1.3192ms         1  1.3192ms  1.3192ms  1.3192ms  void syherk_kernel_core<double, double, int=256, int=4, bool=1, bool=0, bool=0, bool=1, bool=0, bool=1>(cublasSyherkParams<double, double>)
  0.00%  236.03us         1  236.03us  236.03us  236.03us  void syherk_kernel_core<double, double, int=256, int=4, bool=0, bool=0, bool=0, bool=1, bool=0, bool=1>(cublasSyherkParams<double, double>)

I recognise dgemm and the others... so it is working! Wonderful.

Now I want to run the same code, but interfaced with R, since I sometimes need to do input/output and plotting. RcppArmadillo has always worked wonders for me, providing, together with Rcpp, all the tools I need. I wrote the following cpp file:

#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
int arma_call(){

  // same workload as the standalone program above
  arma::mat A = arma::randn<arma::mat>(3000, 2000);
  arma::mat B = cov(A);
  arma::vec V = arma::randn(2000);
  arma::mat C;
  arma::mat D;

  for(int i = 0; i < 20; ++i){
    C = solve(B, V);   // solve B*C = V
    D = inv(B);
  }

  return 0;
}

and the R script:

Rcpp::sourceCpp('arma_try_R.cpp')
arma_call()

and tried to run it from the console with

nvprof R CMD BATCH arma_try_R.R 

(EDIT: note that using Rscript instead of R CMD BATCH produces the same results) but I get

[NVBLAS] Cannot open default config file 'nvblas.conf'
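As an aside: NVBLAS looks for nvblas.conf in the current directory by default, but its documented NVBLAS_CONFIG_FILE environment variable can point it at an absolute path instead, which avoids copying the file around (the path below is an assumption):

```shell
# tell NVBLAS where its config lives, regardless of the working directory
export NVBLAS_CONFIG_FILE="$HOME/nvblas.conf"
echo "$NVBLAS_CONFIG_FILE"
```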

Strange... maybe R cannot access the file for some reason, so I copied it to the working directory and re-ran the code:

==12662== NVPROF is profiling process 12662, command: /bin/sh /usr/bin/R CMD BATCH arma_try_R.R
==12662== Profiling application: /bin/sh /usr/bin/R CMD BATCH arma_try_R.R
==12662== Profiling result: No kernels were profiled.

I have no idea what is causing this. I am on a Linux system with Bumblebee installed, so as a last resort I tried

nvprof optirun R CMD BATCH arma_try_R.R 

to sort of force R to run with the Nvidia card, and this time the output is

==10900== Profiling application: optirun R CMD BATCH arma_try_R.R
==10900== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
100.00%  1.3760us         1  1.3760us  1.3760us  1.3760us  [CUDA memcpy HtoD]

So, judging from the profiler, the CUDA libraries are not being called at all and no computation is delegated to the GPU. Now, there are actually several questions here rather than just one:

  • Is this just a profiler problem, i.e. nvprof failing to trace calls made from inside R? (I doubt it.)
  • Is it because of the way the code is compiled within R? Verbose mode shows

    /usr/lib64/R/bin/R CMD SHLIB -o 'sourceCpp_27457.so' --preclean 'arma_try_R.cpp'

    g++ -I/usr/include/R/ -DNDEBUG -D_FORTIFY_SOURCE=2 -I"/home/marco/R/x86_64-unknown-linux-gnu-library/3.2/Rcpp/include" -I"/home/marco/R/x86_64-unknown-linux-gnu-library/3.2/RcppArmadillo/include" -I"/home/marco/prova_cuda" -fpic -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 -c arma_try_R.cpp -o arma_try_R.o

    g++ -shared -L/usr/lib64/R/lib -Wl,-O1,--sort-common,--as-needed,-z,relro -lblas -llapack -o sourceCpp_27457.so arma_try_R.o -llapack -lblas -lgfortran -lm -lquadmath -L/usr/lib64/R/lib -lR

Nothing changes even if I force -larmadillo in place of the -lblas flag (via the PKG_LIBS environment variable).

  • Is there a way to make it work? Am I missing something?
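As a side note on the link flags mentioned above: PKG_LIBS can in principle point the sourceCpp build directly at libnvblas rather than at libarmadillo; a sketch, where the CUDA library path is an assumption:

```shell
# make R's SHLIB step link the sourceCpp module against NVBLAS
export PKG_LIBS="-L/opt/cuda/lib64 -lnvblas"
echo "$PKG_LIBS"
# then, in the same shell: Rscript -e 'Rcpp::sourceCpp("arma_try_R.cpp")'
```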

I can provide more output if needed. Thanks for reading this far!

EDIT:

ldd /usr/lib/R/lib/libR.so 
[NVBLAS] Using devices :0 
    linux-vdso.so.1 (0x00007ffdb5bd6000)
    /opt/cuda/lib64/libnvblas.so (0x00007f4afaccd000)
    libblas.so => /usr/lib/libblas.so (0x00007f4afa6ea000)
    libm.so.6 => /usr/lib/libm.so.6 (0x00007f4afa3ec000)
    libreadline.so.6 => /usr/lib/libreadline.so.6 (0x00007f4afa1a1000)
    libpcre.so.1 => /usr/lib/libpcre.so.1 (0x00007f4af9f31000)
    liblzma.so.5 => /usr/lib/liblzma.so.5 (0x00007f4af9d0b000)
    libbz2.so.1.0 => /usr/lib/libbz2.so.1.0 (0x00007f4af9afa000)
    libz.so.1 => /usr/lib/libz.so.1 (0x00007f4af98e4000)
    librt.so.1 => /usr/lib/librt.so.1 (0x00007f4af96dc000)
    libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f4af94d7000)
    libgomp.so.1 => /usr/lib/libgomp.so.1 (0x00007f4af92b5000)
    libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f4af9098000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007f4af8cf3000)
    /usr/lib64/ld-linux-x86-64.so.2 (0x0000556509792000)
    libcublas.so.7.5 => /opt/cuda/lib64/libcublas.so.7.5 (0x00007f4af7414000)
    libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f4af7092000)
    libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f4af6e7b000)
    libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007f4af6c0e000)

So, apart from the odd [NVBLAS] Using devices :0 line, it seems that at least R is aware of the CUDA NVBLAS library...

Foo*_*ant 5

Answering my own question: yes, it can be done. Just point R to the right (NV)BLAS library and RcppArmadillo will pick up the routines in the right place (you might want to read Dirk Eddelbuettel's comments on the question to see why).
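Concretely, the usual way to point R at NVBLAS without relinking anything is to preload libnvblas when launching R, so every BLAS call R and RcppArmadillo make goes through the GPU wrapper. A sketch, with an assumed CUDA install path and nvblas.conf in the working directory (the default lookup location):

```shell
# assumed location of the NVBLAS shim; adjust to your CUDA install
NVBLAS_LIB=/opt/cuda/lib64/libnvblas.so
# the command to launch the script with NVBLAS interposed
launch="LD_PRELOAD=$NVBLAS_LIB Rscript arma_try_R.R"
echo "$launch"
```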


Now, about the specifics of my problem and the reason for the self-answer: it turns out the issue was not what I thought it was.

For instance, when I run nvidia-smi in a terminal other than the one running Rscript arma_try_R.R, I get

+------------------------------------------------------+                       
| NVIDIA-SMI 352.41     Driver Version: 352.41         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 860M    Off  | 0000:01:00.0     Off |                  N/A |
| N/A   64C    P0    N/A /  N/A |    945MiB /  2047MiB |     21%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     20962    C   /usr/lib64/R/bin/exec/R                         46MiB |
|    0     21598    C   nvidia-smi                                      45MiB |
+-----------------------------------------------------------------------------+

which means the GPU is indeed doing the work!

So the problem lies with the nvprof routine, which fails to detect the GPU activity and sometimes even freezes my Rscript. But that is another, completely unrelated, problem.
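One thing worth trying on the profiler side: R CMD BATCH is a shell wrapper that forks, and by default nvprof only traces the process it launched, so the kernels run in a child it never sees. nvprof's --profile-child-processes option makes it follow the forks as well (sketch):

```shell
# follow forked children so kernels launched from inside R show up
profile_cmd="nvprof --profile-child-processes Rscript arma_try_R.R"
echo "$profile_cmd"
```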

(I will hold off on accepting this as the answer, to see whether someone smarter comes along and sorts it out...)