How do I maximally utilize the GPU without blocks waiting to be scheduled?


deviceQuery on my Titan Xp reports 30 multiprocessors with a maximum of 2048 threads per multiprocessor. Is it correct to assume that the maximum number of threads that can physically execute on the hardware simultaneously is 30 * 2048? That is, would a kernel configuration like the following exploit this?

kernel<<<60, 1024>>>(...);

I really want the maximum number of blocks physically executing at once, while avoiding any blocks sitting idle waiting to be scheduled. Here is the full deviceQuery output:

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "TITAN Xp"
  CUDA Driver Version / Runtime Version          9.0 / 9.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 12190 MBytes (12781682688 bytes)
  (30) Multiprocessors, (128) CUDA Cores/MP:     3840 CUDA Cores
  GPU Max Clock rate:                            1582 MHz (1.58 GHz)
  Memory Clock rate:                             5705 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 3145728 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 4 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 1, Device0 = TITAN Xp
Result = PASS
Tags: cuda, gpu

1 Answer

Score: 3

Yes, your conclusion is correct. For all GPUs supported by CUDA 9 or CUDA 9.1, the maximum number of threads that can be "in flight" is 2048 per SM. (Fermi GPUs, supported through CUDA 8, are slightly lower at 1536 threads per SM.)

That is an upper bound; the specifics of your kernel (its resource utilization) may mean that fewer threads than this can actually be "resident" or "in flight". This is the general topic of GPU occupancy. CUDA includes an occupancy calculator spreadsheet and a programmatic occupancy API to help determine this for your particular kernel.
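A minimal sketch of querying the occupancy API for a given kernel and block size (the kernel `kernel` here is a placeholder standing in for yours; `cudaOccupancyMaxActiveBlocksPerMultiprocessor` is the runtime call in question):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel; substitute your own to measure its occupancy.
__global__ void kernel(float *data) { /* ... */ }

int main() {
    int blockSize = 1024;
    int numBlocksPerSm = 0;
    // Ask the runtime how many blocks of this kernel, at this block size
    // and with no dynamic shared memory, can be resident on one SM.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &numBlocksPerSm, kernel, blockSize, /*dynamicSMemSize=*/0);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Total resident ("in flight") threads = blocks/SM * blockSize * SM count.
    printf("blocks/SM: %d, in-flight threads: %d\n",
           numBlocksPerSm,
           numBlocksPerSm * blockSize * prop.multiProcessorCount);
    return 0;
}
```

If your kernel's register or shared-memory usage limits occupancy, `numBlocksPerSm * blockSize` will come out below the 2048-per-SM hardware ceiling.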

The usual kernel strategy for handling an arbitrary data set size with a limited number of threads (e.g., 60 * 1024 in your case) is to use some form of the construct known as a grid-stride loop.
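The grid-stride idiom might look like this (a sketch; the kernel name `scale` and its parameters are illustrative):

```cuda
#include <cuda_runtime.h>

// Grid-stride loop: each thread starts at its global index and then
// strides by the total number of threads in the grid, so a fixed-size
// launch such as <<<60, 1024>>> covers any n.
__global__ void scale(float *data, int n, float factor) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x) {
        data[i] *= factor;
    }
}

// Launch with a grid sized to fill the device, independent of n:
//   scale<<<60, 1024>>>(d_data, n, 2.0f);
```

With this pattern the grid dimensions can be chosen to exactly saturate the device (e.g., blocks/SM from the occupancy API times the SM count) rather than being derived from the problem size.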
