Each CUDA device has multiple streaming multiprocessors (SMs). Each SM can have multiple warp schedulers and multiple execution units. CUDA cores are execution units, not "cores", so I will avoid the term for the rest of the discussion.
The NVIDIA profiling tools support collecting the duration of CUDA grid launches as well as PM counters; a subset of the PM counters can be collected per SM.
I have provided the nvprof command lines for collecting both pieces of information. Both examples run a debug build of the matrixMul sample on a GTX 480 with 15 SMs.
Collecting grid execution time
Each of these tools has a simplified mode for collecting the execution duration of each kernel grid launch. The graphical tools can display the durations on a timeline or in a table.
nvprof --print-gpu-trace matrixMul.exe
======== NVPROF is profiling matrixMul.exe...
======== Command: matrixMul.exe
[Matrix Multiply Using CUDA] - Starting...
GPU Device 0: "GeForce GTX 480" with compute capability 2.0
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 39.40 GFlop/s, Time= 3.327 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: OK
Note: For peak performance, please refer to the matrixMulCUBLAS example.
======== Profiling result:
Start Duration Grid Size Block Size Regs* SSMem* DSMem* Size Throughput Device Context Stream Name
267.83ms 71.30us - - - - - 409.60KB 5.74GB/s 0 1 2 [CUDA memcpy HtoD]
272.72ms 139.20us - - - - - 819.20KB 5.88GB/s 0 1 2 [CUDA memcpy HtoD]
272.86ms 3.33ms (20 10 1) (32 32 1) 20 8.19KB 0B - - 0 1 2 void matrixMulCUDA<int=32>(float*, float*, float*, int, int)
277.29ms 3.33ms (20 10 1) (32 32 1) 20 8.19KB 0B - - 0 1 2 void matrixMulCUDA<int=32>(float*, float*, float*, int, int)
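If you also want the per-launch duration from inside the application rather than from a profiler (this sketch is my addition, not part of the original workflow), CUDA events bracketing the launch report the same quantity. A minimal sketch, using a hypothetical kernel myKernel in place of matrixMulCUDA:

// Sketch only: time one grid launch with CUDA events.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for matrixMulCUDA.
__global__ void myKernel(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i * 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *d_out;
    cudaMalloc(&d_out, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Bracket the grid launch with events on the same (default) stream.
    cudaEventRecord(start);
    myKernel<<<(n + 255) / 256, 256>>>(d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // duration of the grid launch
    printf("grid duration: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}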
The same per-launch durations can also be collected with the other tools through their timeline/trace views.
Collecting SM activity
Your question indicates that you need the execution time per GPU core. This can mean per GPU (see above) or per SM. SM execution time can be collected with the SM PM counter active_cycles, which counts the number of cycles during which the SM has at least one active warp.
For each kernel launch in the output there will be 15 values (one per SM).
nvprof --events active_cycles --aggregate-mode-off matrixMul.exe
======== NVPROF is profiling matrixMul.exe...
======== Command: matrixMul.exe
[Matrix Multiply Using CUDA] - Starting...
GPU Device 0: "GeForce GTX 480" with compute capability 2.0
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 12.07 GFlop/s, Time= 10.860 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: OK
Note: For peak performance, please refer to the matrixMulCUBLAS example.
======== Profiling result:
Device Context Stream, Event Name, Kernel, Values
0 1 2, active_cycles, void matrixMulCUDA<int=32>(float*, float*, float*, int, int), 2001108 2001177 2000099 2002857 2152562 2153254 2001086 2153043 2001015 2001192 2000065 2154293 2000071 2000238 2154905
0 1 2, active_cycles, void matrixMulCUDA<int=32>(float*, float*, float*, int, int), 2155340 2002145 2155289 2002374 2003336 2002498 2001865 2155503 2156271 2156429 2002108 2002836 2002461 2002695 2002098
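To turn these raw counts into time (my own addition, not something nvprof prints), divide active_cycles by the SM clock frequency. A minimal sketch, assuming the SMs ran at the core clock reported by cudaGetDeviceProperties and using the first value from the output above; whether clockRate matches the counter's clock domain is an assumption, so treat the result as a rough estimate:

// Sketch only: convert an active_cycles count into approximate SM busy time.
// The hard-coded count is SM 0's value from the nvprof run above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    const long long activeCycles = 2001108;               // cycles for SM 0
    double seconds = (double)activeCycles /
                     ((double)prop.clockRate * 1000.0);    // clockRate is in kHz

    printf("SM 0 busy for ~%.3f ms (device reports %d SMs)\n",
           seconds * 1e3, prop.multiProcessorCount);
    return 0;
}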