Pycuda Vector arithmetic - Id inside Kernel

I am trying to write a simple program with PyCUDA to test it and then compare it against my OpenCL implementation. However, I ran into a problem when adding two one-dimensional arrays: I can't seem to work out the correct id for each element.

My code is fairly simple:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np

#Host variables
a = np.array([[1.0, 2,0 , 3.0]], dtype=np.float32)
b = np.array([[4.0, 5,0 , 6.0]], dtype=np.float32)
k = np.float32(2.0)

#Device Variables
a_d = cuda.mem_alloc(a.nbytes)
b_d = cuda.mem_alloc(b.nbytes)
cuda.memcpy_htod(a_d, a)
cuda.memcpy_htod(b_d, b)
s_d = cuda.mem_alloc(a.nbytes)
m_d = cuda.mem_alloc(a.nbytes)

#Device Source
mod = SourceModule("""
    __global__ void S(float *s, float *a, float *b)
    {
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int row = by * blockDim.y + ty;
        int col = bx * blockDim.x + tx;
        int dim = gridDim.x * blockDim.x;
        const int id = row * dim + col;
        s[id] = a[id] + b[id];
    }

    __global__ void M(float *m, float *a, float k)
    {
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int row = by * blockDim.y + ty;
        int col = bx * blockDim.x + tx;
        int dim = gridDim.x * blockDim.x;
        const int id = row * dim + col;
        m[id] = k * a[id];
    }
""")

#Vector addition
func = mod.get_function("S")
func(s_d, a_d, b_d, block=(1,3,1))
s = np.empty_like(a)
cuda.memcpy_dtoh(s, s_d)

#Vector multiplication by constant
func = mod.get_function("M")
func(m_d, a_d, k, block=(1,3,1))
m = np.empty_like(a)
cuda.memcpy_dtoh(m, m_d)

print "Vector Addition"
print "Expected: " + str(a+b)
print "Result: " + str(s) + "\n"
print "Vector Multiplication"
print "Expected: " + str(k*a)
print "Result: " + str(m)

My output is:

Vector Addition
Expected: [[ 5.  7.  0.  9.]]
Result: [[ 5.  7.  0.  6.]]

Vector Multiplication
Expected: [[ 2.  4.  0.  6.]]
Result: [[ 2.  4.  0.  6.]]

I really don't understand how this indexing works in CUDA. I have found some documentation online that gives me an idea of how grids, blocks and threads work, but I still can't get it to behave correctly. I must be missing something. Any information is greatly appreciated.

Your indexing looks fine, even if it is a bit over-engineered for such a small example (treating it as one-dimensional would be enough).
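For illustration, a purely one-dimensional version of the addition kernel could look like the following sketch (assuming the launch is changed to block=(3,1,1) so that the elements lie along x, and reusing the buffers already allocated above):

mod_1d = SourceModule("""
    __global__ void S(float *s, float *a, float *b)
    {
        // one thread per element, indexed along x only
        const int id = blockIdx.x * blockDim.x + threadIdx.x;
        s[id] = a[id] + b[id];
    }
""")
func = mod_1d.get_function("S")
func(s_d, a_d, b_d, block=(3, 1, 1))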

The problem is that your arrays a and b each have 4 elements: the comma in 2,0 turns what was presumably meant to be 2.0 into the two separate elements 2 and 0. Your kernel functions, however, only operate on the first 3 elements, which is why the result for the 4th element is not what you expected.
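As a quick check (assuming the default grid of (1,1), since no grid argument is passed): with block=(1,3,1) you get blockDim.x = 1, blockDim.y = 3 and gridDim.x = 1, so col = 0, row = ty, dim = 1, and therefore id = ty, which only takes the values 0, 1 and 2. The fourth element of s_d and m_d is never written and simply keeps whatever happened to be in the freshly allocated buffer.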

Did you mean the following?

a = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
b = np.array([[4.0, 5.0, 6.0]], dtype=np.float32)
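
More generally, if the array length might not always match the launch configuration, a common CUDA pattern (not part of the question's code, sketched here with a hypothetical element-count parameter n) is to launch at least as many threads as there are elements and guard the access inside the kernel:

mod_guarded = SourceModule("""
    __global__ void S(float *s, float *a, float *b, int n)
    {
        int id = blockIdx.x * blockDim.x + threadIdx.x;
        if (id < n)            // threads past the end of the array do nothing
            s[id] = a[id] + b[id];
    }
""")
func = mod_guarded.get_function("S")
n = np.int32(a.size)
threads = 128
blocks = (a.size + threads - 1) // threads
func(s_d, a_d, b_d, n, block=(threads, 1, 1), grid=(blocks, 1))

This way the kernel works for any array size without having to hand-tune the block dimensions.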