What utility/binary can I call to determine an nVIDIA GPU's Compute Capability?

Suppose I have a system with a single GPU installed, and suppose I also have the latest version of CUDA installed.

I want to determine what my GPU's compute capability is. If I could compile code, that would be easy:

#include <stdio.h>
#include <cuda_runtime_api.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // prints e.g. 75 for a compute capability 7.5 device
    printf("%d\n", prop.major * 10 + prop.minor);
}
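
(For reference: saved as, say, compute_cap.cu - any file name will do - this could be built and run with nvcc compute_cap.cu -o compute_cap && ./compute_cap, assuming nvcc is on the PATH.)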

But - suppose I want to do this without compiling. Can I? I thought nvidia-smi might be of help, since it lets you query all sorts of information about the device, but it doesn't seem to let you get the compute capability. Is there perhaps something else I could do? Maybe something visible through /proc or the system logs?

Edit: This is intended to run before a build, on systems I don't control, so it must have minimal dependencies, run on the command line, and not require root privileges.

Unfortunately, the answer at the moment seems to be "no" - you have to either compile a program or use a binary compiled somewhere else.
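
Since some compilation appears to be unavoidable, here is a minimal sketch of what such a helper could look like if written against the CUDA driver API instead of the runtime API (this is not one of the solutions below, and the file name cc_query.cpp is just a placeholder). The advantage is that the resulting binary only links against libcuda, which is installed with the driver, rather than against the CUDA runtime library:

// cc_query.cpp - print the compute capability of device 0 via the CUDA driver API.
// Build with something like: g++ cc_query.cpp -o cc_query -lcuda
#include <cstdio>
#include <cuda.h>

int main() {
    CUdevice dev;
    int major = 0, minor = 0;
    // Initialize the driver API and get a handle to the first device
    if (cuInit(0) != CUDA_SUCCESS || cuDeviceGet(&dev, 0) != CUDA_SUCCESS) {
        fprintf(stderr, "Failed to initialize the CUDA driver API\n");
        return -1;
    }
    cuDeviceGetAttribute(&major, CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, dev);
    cuDeviceGetAttribute(&minor, CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, dev);
    printf("%d\n", major * 10 + minor);  // e.g. 75 for compute capability 7.5
    return 0;
}

Compiling it still requires cuda.h from the toolkit, but the binary can then be copied to machines that only have the driver installed.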

Edit: I've adopted a workaround for this problem - a self-contained bash script which compiles a small embedded C program to determine the compute capability. (It's especially useful to invoke from CMake, but it can also just be run on its own.)

Also, I've filed a feature-request bug report with nVIDIA about this.

Here is the script, assuming nvcc is on your PATH:

//usr/bin/env nvcc --run "$0" ${1:+--run-args "${@:1}"} ; exit $?
// ^ When this file is executed as a shell script, the line above re-invokes it
//   via nvcc --run (forwarding any arguments); to the C++ compiler it is just a comment.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime_api.h>

int main(int argc, char *argv[])
{
    cudaDeviceProp prop;
    cudaError_t status;
    int device_count;
    int device_index = 0;
    if (argc > 1) {
        device_index = atoi(argv[1]);
    }

    status = cudaGetDeviceCount(&device_count);
    if (status != cudaSuccess) {
        fprintf(stderr,"cudaGetDeviceCount() failed: %s\n", cudaGetErrorString(status));
        return -1;
    }
    if (device_index >= device_count) {
        fprintf(stderr, "Specified device index %d exceeds the maximum (the device count on this system is %d)\n", device_index, device_count);
        return -1;
    }
    status = cudaGetDeviceProperties(&prop, device_index);
    if (status != cudaSuccess) {
        fprintf(stderr,"cudaGetDeviceProperties() for device device_index failed: %s\n", cudaGetErrorString(status));
        return -1;
    }
    int v = prop.major * 10 + prop.minor;
    printf("%d\n", v);
    return EXIT_SUCCESS;
}
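
To use it, save it under any name (e.g. get-cuda-cc.sh, the name is arbitrary), mark it executable, and run it, optionally passing a device index as the first argument; it prints the packed value on stdout, e.g. 75 for a compute capability 7.5 device.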

You can use the deviceQuery utility included in the CUDA installation:

# change cwd into the utility's source directory
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery

# build deviceQuery utility with make as root
$ sudo make

# run deviceQuery
$ ./deviceQuery  | grep Capability
  CUDA Capability Major/Minor version number:    7.5

# optionally copy deviceQuery to ~/bin for future use
$ cp ./deviceQuery ~/bin
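
(Note: on recent CUDA versions the samples are no longer bundled under /usr/local/cuda/samples; they are distributed separately via NVIDIA's cuda-samples repository on GitHub, so deviceQuery may have to be fetched and built from there instead.)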

The full output of deviceQuery for an RTX 2080 Ti is as follows:

 $ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce RTX 2080 Ti"
  CUDA Driver Version / Runtime Version          11.2 / 10.2
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 11016 MBytes (11551440896 bytes)
  (68) Multiprocessors, ( 64) CUDA Cores/MP:     4352 CUDA Cores
  GPU Max Clock rate:                            1770 MHz (1.77 GHz)
  Memory Clock rate:                             7000 Mhz
  Memory Bus Width:                              352-bit
  L2 Cache Size:                                 5767168 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

Thanks.

We can use nvidia-smi --query-gpu=compute_cap --format=csv to get the compute capability.

Sample output:

compute_cap
8.6

This is available as of CUDA Toolkit 11.6.
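
For use in scripts, the header row can be suppressed with --format=csv,noheader, leaving only the bare value (e.g. 8.6) on stdout.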