OpenCL Long Overflowing

Before I start: I'm a C beginner attempting some OpenCL work, so this may well be a mistake on my part. Here is my kernel code:

__kernel void collatz(__global int* in, __global int* out)
{
    uint id = get_global_id(0);
    unsigned long n = (unsigned long)id;
    uint count = 0;

    while (n > 1) {
        if (n % 2 == 0) {
            n = n / 2;
        } else {
            if (n == 1572066143) {
                unsigned long test = n;
                printf("BEFORE - %lu\n", n);
                test = (3 * test) + 1;
                printf("AFTER  - %lu\n", test);

                n = (3 * n) + 1;
            } else {
                n = (3 * n) + 1;
            }
        }

        count = count + 1;
    }

    out[id] = count;
}

And the output:

BEFORE - 1572066143
AFTER  - 421231134

To me it looks like n is overflowing, but I can't see why it would.

Interestingly, if I create a new variable to hold the same value as n, it seems to work correctly:

unsigned long test = 1572066143;
printf("BEFORE - %lu\n", test);
test = (3 * test) + 1; 
printf("AFTER  - %lu\n", test);

Output:

BEFORE - 1572066143
AFTER  - 4716198430

As I say, I'm a C beginner, so I may be doing something very stupid! Any help would be appreciated, as I've been tearing my hair out over this for hours!

Thanks, Stephen

Update:

Here is my host code, in case I'm doing something stupid on that side:

int _tmain(int argc, _TCHAR* argv[])
{
    /*Step 1: Get the platforms and choose an available one.*/
    cl_uint numPlatforms;   //the NO. of platforms
    cl_platform_id platform = NULL; //the chosen platform
    cl_int  status = clGetPlatformIDs(0, NULL, &numPlatforms);

    cl_platform_id* platforms = (cl_platform_id*)malloc(numPlatforms * sizeof(cl_platform_id));
    status = clGetPlatformIDs(numPlatforms, platforms, NULL);
    platform = platforms[0];
    free(platforms);

    /*Step 2: Query the platform and choose the first GPU device, if there is one.*/
    cl_device_id        *devices;
    devices = (cl_device_id*)malloc(1 * sizeof(cl_device_id));
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, devices, NULL);

    /*Step 3: Create context.*/
    cl_context context = clCreateContext(NULL, 1, devices, NULL, NULL, NULL);

    /*Step 4: Create a command queue associated with the context.*/
    cl_command_queue commandQueue = clCreateCommandQueue(context, devices[0], 0, NULL);

    /*Step 5: Create program object */
    const char *filename = "HelloWorld_Kernel.cl";
    std::string sourceStr;
    status = convertToString(filename, sourceStr);
    const char *source = sourceStr.c_str();
    size_t sourceSize[] = { strlen(source) };
    cl_program program = clCreateProgramWithSource(context, 1, &source, sourceSize, NULL);

    status = clBuildProgram(program, 1, devices, NULL, NULL, NULL);

    /*Step 7: Initialize host input/output and create memory objects for the kernel.*/
    cl_ulong max = 2000000;
    cl_ulong *numbers = NULL;
    numbers = new cl_ulong[max];
    for (cl_ulong i = 0; i < max; i++) {
        numbers[i] = i;
    }

    int *output = (int*)malloc(sizeof(cl_ulong) * max);

    cl_mem inputBuffer = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, max * sizeof(cl_ulong), (void *)numbers, NULL);
    cl_mem outputBuffer = clCreateBuffer(context, CL_MEM_WRITE_ONLY, max * sizeof(cl_ulong), NULL, NULL);

    /*Step 8: Create kernel object */
    cl_kernel kernel = clCreateKernel(program, "collatz", NULL);

    /*Step 9: Sets Kernel arguments.*/
    status = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&inputBuffer);


    // Determine the size of the log
    size_t log_size;
    clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);

    // Allocate memory for the log
    char *log = (char *)malloc(log_size);

    // Get the log
    clGetProgramBuildInfo(program, devices[0], CL_PROGRAM_BUILD_LOG, log_size, log, NULL);

    // Print the log
    printf("%s\n", log);


    status = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&outputBuffer);

    /*Step 10: Running the kernel.*/
    size_t global_work_size[] = { max };
    status = clEnqueueNDRangeKernel(commandQueue, kernel, 1, NULL, global_work_size, NULL, 0, NULL, NULL);

    /*Step 11: Read the data back to host memory.*/
    status = clEnqueueReadBuffer(commandQueue, outputBuffer, CL_TRUE, 0, max * sizeof(cl_ulong), output, 0, NULL, NULL);

    return SUCCESS;
}

Host-side and device-side types can have different sizes. On the host, long can be anywhere from 32 to 64 bits, depending on the platform. On the device, long always means 64 bits.

In printf() as defined by C, %ld prints a (host-side) long. You are using printf in the kernel, so... a C-like parser may be in use, and the variable may therefore be printed as a 32-bit long.

Could you try printing it as %lld, or as a float?

I finally got to the bottom of the issue.

I was running the code on my Intel HD Graphics 4600 chip, which produced the strange behaviour shown in the original question. I switched to my AMD card instead, and it started working correctly!

Very strange. Thanks everyone for the help!