CUDA profiling tools "no kernels were profiled"
I cannot get the CUDA profiling tools to work. My ASUS laptop has two video cards: an integrated Intel GPU and an Nvidia GTX 960M.
I suspected the Visual Profiler was using the integrated card, so I changed the default graphics card for this particular application to the "High-performance NVIDIA processor" in the "Nvidia Control Panel", under "Manage 3D Settings -> Program Settings".
Nothing changed. Running the Visual Profiler, the "Overall GPU usage" tab reports "No GPU devices in Session", which, as far as I can tell, means the GPU is not being used.
I also noticed that the Nvidia display icon in the notification area does not report any application using the video card.
What seems to be the problem here? How can I enable the Nvidia GPU for both the Visual Profiler and the command-line nvprof.exe application? Nsight does not seem to work for me either.
The code I am testing is the following:
#include <stdio.h>
#include <iostream>
#include <stdlib.h>
#include <string.h>

#define NUM_THREADS 256
#define IMG_SIZE 1048576

struct Coefficients_SOA {
    int r;
    int b;
    int g;
    int hue;
    int saturation;
    int maxVal;
    int minVal;
    int finalVal;
};

__global__
void complicatedCalculation(Coefficients_SOA* data)
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    int grayscale = (data[i].r + data[i].g + data[i].b)/data[i].maxVal;
    int hue_sat = data[i].hue * data[i].saturation / data[i].minVal;
    data[i].finalVal = grayscale*hue_sat;
}

void complicatedCalculation()
{
    Coefficients_SOA* d_x;
    cudaMalloc(&d_x, IMG_SIZE*sizeof(Coefficients_SOA));
    int num_blocks = IMG_SIZE/NUM_THREADS;
    complicatedCalculation<<<num_blocks,NUM_THREADS>>>(d_x);
    cudaFree(d_x);
}

int main(int argc, char* argv[])
{
    complicatedCalculation();
    return 0;
}
Regards,
PS: I am running Win10/64-bit with CUDA Version 11 installed.
I also verified the CUDA installation following https://docs.nvidia.com/cuda/pdf/CUDA_Installation_Guide_Windows.pdf
For convenience, I am attaching the output of the deviceQuery and bandwidthTest CUDA sample programs.
deviceQuery sample report:
D:\Program Files\nVidia\CUDA Samples\v11.0\bin\win64\Release>deviceQuery
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 960M"
CUDA Driver Version / Runtime Version 11.0 / 11.0
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 4096 MBytes (4294967296 bytes)
( 5) Multiprocessors, (128) CUDA Cores/MP: 640 CUDA Cores
GPU Max Clock rate: 1176 MHz (1.18 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 4 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.0, CUDA Runtime Version = 11.0, NumDevs = 1
Result = PASS
bandwidthTest sample report:
D:\Program Files\nVidia\CUDA Samples\v11.0\bin\win64\Release>bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce GTX 960M
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 12.2
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 11.8
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 68.9
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
The problem is solved. Being a beginner in the CUDA world, I did not know that I had to add the -gencode argument when compiling my CUDA file on the command line (the Visual Studio projects of the CUDA SDK samples already carry these arguments, which is why they did show GPU activity).
So, for my Maxwell architecture with CUDA Capability Major/Minor version number 5.0, the full command line should look like this:
nvcc -run -m64 -gencode arch=compute_50,code=sm_50 -o aos_soa.exe aos_soa.cu
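(As a side note: a minimal sketch, assuming nothing beyond the standard CUDA runtime API, of how the compute capability can also be read programmatically with cudaGetDeviceProperties instead of running deviceQuery; the file name query_cc.cu is just an illustration, not part of the original code.)
// query_cc.cu -- illustrative helper.
// Prints the compute capability so the matching -gencode value can be chosen.
#include <cstdio>
#include <cuda_runtime.h>
int main()
{
    int dev = 0;
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess) {
        printf("cudaGetDeviceProperties failed\n");
        return 1;
    }
    // A GTX 960M reports 5.0, which corresponds to -gencode arch=compute_50,code=sm_50.
    printf("Device %d: %s, compute capability %d.%d\n",
           dev, prop.name, prop.major, prop.minor);
    return 0;
}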
Unfortunately, my first book, "Learn CUDA Programming" from Packt Publishing, says on page 49 that I should compile with only the arguments below; the source files do come with a "Makefile" that contains all of the arguments above, but it is Linux-only, so I had ignored it.
$ nvcc -o aos_soa ./aos_soa.cu
Now I can see my GPU statistics under nvprof:
nvprof aos_soa.exe
==18308== NVPROF is profiling process 18308, command: aos_soa.exe
==18308== Profiling application: aos_soa.exe
==18308== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 100.00% 1.1421ms 1 1.1421ms 1.1421ms 1.1421ms complicatedCalculation(Coefficients_SOA*)
API calls: 83.40% 226.57ms 1 226.57ms 226.57ms 226.57ms cudaMalloc
15.90% 43.183ms 1 43.183ms 43.183ms 43.183ms cuDevicePrimaryCtxRelease
0.58% 1.5790ms 1 1.5790ms 1.5790ms 1.5790ms cudaFree
0.07% 198.40us 1 198.40us 198.40us 198.40us cuModuleUnload
0.03% 70.100us 1 70.100us 70.100us 70.100us cudaLaunchKernel
0.01% 26.800us 1 26.800us 26.800us 26.800us cuDeviceTotalMem
0.01% 20.200us 101 200ns 100ns 3.3000us cuDeviceGetAttribute
0.00% 11.600us 1 11.600us 11.600us 11.600us cuDeviceGetPCIBusId
0.00% 1.4000us 3 466ns 200ns 700ns cuDeviceGetCount
0.00% 1.4000us 2 700ns 200ns 1.2000us cuDeviceGet
0.00% 600ns 1 600ns 600ns 600ns cuDeviceGetName
0.00% 400ns 1 400ns 400ns 400ns cuDeviceGetLuid
0.00% 300ns 1 300ns 300ns 300ns cuDeviceGetUuid
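One more note, with a sketch of my own that was not part of the original code: if the kernel launch in complicatedCalculation() is followed by runtime error checks, a build without the right -gencode flags should report a launch error (typically "no kernel image is available for execution on the device") instead of silently showing no GPU activity:
void complicatedCalculation()
{
    Coefficients_SOA* d_x;
    cudaMalloc(&d_x, IMG_SIZE*sizeof(Coefficients_SOA));
    int num_blocks = IMG_SIZE/NUM_THREADS;
    complicatedCalculation<<<num_blocks,NUM_THREADS>>>(d_x);
    // Check the launch itself, then wait for the kernel to finish,
    // so that any failure is reported instead of passing unnoticed.
    cudaError_t err = cudaGetLastError();
    if (err == cudaSuccess)
        err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        fprintf(stderr, "complicatedCalculation failed: %s\n", cudaGetErrorString(err));
    cudaFree(d_x);
}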