Which utility/binary can I call to determine an nVIDIA GPU's compute capability?

ein*_*ica 5 cuda utility compute-capability

Assume I have a system with a single GPU installed, plus a recent version of CUDA.

I want to determine the GPU's compute capability. If I were allowed to compile code, that would be easy:

#include <stdio.h>
#include <cuda_runtime_api.h>  /* for cudaDeviceProp / cudaGetDeviceProperties */

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%d\n", prop.major * 10 + prop.minor);
}

But suppose I want to do this without compiling anything. Can I? I thought nvidia-smi might help, since it lets you query all sorts of information about the device, but it doesn't seem to give you the compute capability. Is there perhaps something else I can do? Maybe something visible through /proc, or in the system logs?

Edit: This is intended to run, before a build, on systems I don't control. So it must have minimal dependencies, be runnable from the command line, and not require root privileges.

idy*_*002 33

We can use

$ nvidia-smi --query-gpu=compute_cap --format=csv

to get the compute capability:

compute_cap
8.6

This works with CUDA toolkit 11.6.
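If you want the value in the two-digit integer form that the question's C snippet prints (e.g. 86 rather than 8.6), a little shell post-processing does it. A sketch, assuming an nvidia-smi new enough to support the compute_cap query (CUDA 11.6 and later):

```shell
# --format=csv,noheader drops the "compute_cap" header line;
# tr then removes the dot (and any stray spaces), turning "8.6" into "86".
cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1 | tr -d '. ')
echo "$cap"
```

`head -n1` keeps only the first GPU's line on multi-GPU systems.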


ein*_*ica 9

Unfortunately, the answer currently seems to be "no", and you need to either compile a program or use a binary compiled elsewhere.

Edit: I've adapted a workaround for this problem: a self-contained bash script, which compiles a small inlined C program that determines the compute capability. (It's particularly useful to invoke from CMake, but works stand-alone too.)

Also, I've filed a feature-request bug report about this with nVIDIA.

Here's the script, assuming an nvcc is on your path:

//usr/bin/env nvcc --run "$0" ${1:+--run-args "${@:1}"} ; exit $?
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime_api.h>

int main(int argc, char *argv[])
{
    cudaDeviceProp prop;
    cudaError_t status;
    int device_count;
    int device_index = 0;
    if (argc > 1) {
        device_index = atoi(argv[1]);
    }

    status = cudaGetDeviceCount(&device_count);
    if (status != cudaSuccess) {
        fprintf(stderr,"cudaGetDeviceCount() failed: %s\n", cudaGetErrorString(status));
        return -1;
    }
    if (device_index >= device_count) {
        fprintf(stderr, "Specified device index %d exceeds the maximum (the device count on this system is %d)\n", device_index, device_count);
        return -1;
    }
    status = cudaGetDeviceProperties(&prop, device_index);
    if (status != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties() for device %d failed: %s\n", device_index, cudaGetErrorString(status));
        return -1;
    }
    int v = prop.major * 10 + prop.minor;
    printf("%d\n", v);
}
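The script's first line is a shell/C polyglot, which is what lets a single file be both "the script" and "the program". A minimal sketch of why it works (this explanation is mine, not part of the original answer):

```shell
# The script's first line is:
#
#   //usr/bin/env nvcc --run "$0" ${1:+--run-args "${@:1}"} ; exit $?
#
# - To nvcc, "//..." is a C++-style line comment, so it is ignored.
# - To the shell, a leading "//" in a path resolves the same as "/",
#   so the line runs /usr/bin/env, which hands the file to nvcc --run.
# - "exit $?" stops the shell before it tries to read the C code below.
# Demonstration that a double-slash path resolves like a single-slash one:
//usr/bin/env echo "double-slash paths resolve like single-slash paths"
```

So you can run the file with `bash script.cu` (or mark it executable): the shell executes only line 1, nvcc compiles and runs the whole file, and the shell then exits with nvcc's status.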

  • Does anyone know whether this (`--query-gpu=compute_capability`) was ever implemented? (3 upvotes)
  • I suggest filing an RFE with NVIDIA to add compute-capability reporting to `nvidia-smi`. `--query-gpu` can report numerous device properties, but not the compute capability, which seems like an oversight. They should support `--query-gpu=compute_capability`, which would make your scripted task trivial. (2 upvotes)

Hon*_*oog 6

You can use the deviceQuery utility included in the CUDA installation:

# change cwd into the utility's source directory
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery

# build deviceQuery utility with make as root
$ sudo make

# run deviceQuery
$ ./deviceQuery  | grep Capability
  CUDA Capability Major/Minor version number:    7.5

# optionally copy deviceQuery in ~/bin for future use
$ cp ./deviceQuery ~/bin
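If only the number itself is wanted, the grep above can be replaced with a field extraction. A sketch, assuming deviceQuery's output format shown below:

```shell
# Print the last whitespace-separated field of the capability line, e.g. "7.5".
./deviceQuery | awk '/CUDA Capability/ {print $NF}'
```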

The full deviceQuery output for an RTX 2080 Ti follows:

 $ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce RTX 2080 Ti"
  CUDA Driver Version / Runtime Version          11.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 11016 MBytes (11551440896 bytes)
  (68) Multiprocessors, ( 64) CUDA Cores/MP:     4352 CUDA Cores
  GPU Max Clock rate:                            1770 MHz (1.77 GHz)
  Memory Clock rate:                             7000 Mhz
  Memory Bus Width:                              352-bit
  L2 Cache Size:                                 5767168 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

Thanks.