Can different threads set different GPUs as their current CUDA device?
Say I have 2 GPUs and 2 host threads. I can't test this myself because the multi-GPU PC is far away from me. I want the first host thread to work with the first GPU and the second host thread to work with the second GPU. All host threads contain many cuBLAS calls. For example, the first host thread would call cudaSetDevice(0) and the second host thread would call cudaSetDevice(1).
So is it possible to choose the first GPU from the first host thread and the second GPU from the second host thread with a cudaSetDevice() call?
Yes, it is possible. An example of this kind of usage is given in the cudaOpenMP sample code (excerpt):
....
omp_set_num_threads(num_gpus);  // create as many CPU threads as there are CUDA devices
//omp_set_num_threads(2*num_gpus);// create twice as many CPU threads as there are CUDA devices
#pragma omp parallel
{
    unsigned int cpu_thread_id = omp_get_thread_num();
    unsigned int num_cpu_threads = omp_get_num_threads();

    // set and check the CUDA device for this CPU thread
    int gpu_id = -1;
--> checkCudaErrors(cudaSetDevice(cpu_thread_id % num_gpus));  // "% num_gpus" allows more CPU threads than GPU devices
    ...
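Since all of your host threads make cuBLAS calls, one extra point worth noting: a cuBLAS handle is associated with the device that was current when cublasCreate() was called, so create (and use) the handle after cudaSetDevice() in each thread. Below is a minimal sketch of that pattern using two explicit host threads instead of OpenMP; the worker() helper is just illustrative, it assumes two visible devices (0 and 1), and error checking is omitted for brevity:

#include <thread>
#include <cuda_runtime.h>
#include <cublas_v2.h>

void worker(int gpu_id)
{
    cudaSetDevice(gpu_id);      // make this GPU current for this host thread
    cublasHandle_t handle;
    cublasCreate(&handle);      // handle is bound to the device selected above
    // ... cudaMalloc device buffers and issue cuBLAS calls here ...
    cublasDestroy(handle);
}

int main()
{
    std::thread t0(worker, 0);  // first host thread -> GPU 0
    std::thread t1(worker, 1);  // second host thread -> GPU 1
    t0.join();
    t1.join();
    return 0;
}

Compiled with nvcc and linked against -lcublas, this behaves the same way as the OpenMP excerpt above: each host thread selects its own current device and then works with its own handle on that device.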