How to calculate a logarithm on the GPU (Python 3.5 + Numba + CUDA 8.0)

I use math.log to compute a logarithm inside a GPU kernel; math.log is listed among the Supported Python features in CUDA Python. But it fails.

My code:

import math

import numpy as np
from numba import cuda

bpg = (3, 1)   # blocks per grid
tpb = (2, 3)   # threads per block

@cuda.jit
def calcu_T(D, T):
    bx = cuda.blockIdx.x

    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y

    c_num = D.shape[1]

    # this is the call that fails to compile: math.log with an explicit base
    ml = math.log(D[tx, ty], 2)

D = np.array([[ 0.42487645,0.41607881,0.42027071,0.43751907,0.43512794,0.43656972,0.43940639,0.43864551,0.43447691,0.43120232],
              [2.989578,2.834707,2.942902,3.294948,2.868170,2.975180,3.066900,2.712719,2.835360,2.607334]], dtype=np.float32)
T = np.empty([1,1])

dD = cuda.to_device(D)
dT = cuda.device_array_like(T)
calcu_T[bpg, tpb](dD,dT)

The error report:

Traceback (most recent call last):
  File "G:\myworkspace\python3.5\forte\forte170327\test7.py", line 104, in <module>
    calcu_T[bpg, tpb](dD,dT)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 701, in __call__
    kernel = self.specialize(*args)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 712, in specialize
    kernel = self.compile(argtypes)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 727, in compile
    **self.targetoptions)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 36, in core
    return fn(*args, **kwargs)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 75, in compile_kernel
    cres = compile_cuda(pyfunc, types.void, args, debug=debug, inline=inline)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 36, in core
    return fn(*args, **kwargs)
  File "D:\python3.5.3\lib\site-packages\numba\cuda\compiler.py", line 64, in compile_cuda
    locals={})
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 699, in compile_extra
    return pipeline.compile_extra(func)
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 352, in compile_extra
    return self._compile_bytecode()
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 660, in _compile_bytecode
    return self._compile_core()
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 647, in _compile_core
    res = pm.run(self.status)
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 238, in run
    raise patched_exception
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 230, in run
    stage()
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 444, in stage_nopython_frontend
    self.locals)
  File "D:\python3.5.3\lib\site-packages\numba\compiler.py", line 800, in type_inference_stage
    infer.propagate()
  File "D:\python3.5.3\lib\site-packages\numba\typeinfer.py", line 767, in propagate
    raise errors[0]
  File "D:\python3.5.3\lib\site-packages\numba\typeinfer.py", line 128, in propagate
    constraint(typeinfer)
  File "D:\python3.5.3\lib\site-packages\numba\typeinfer.py", line 379, in __call__
    self.resolve(typeinfer, typevars, fnty)
  File "D:\python3.5.3\lib\site-packages\numba\typeinfer.py", line 401, in resolve
    raise TypingError(msg, loc=self.loc)
numba.errors.TypingError: Failed at nopython (nopython frontend)
Invalid usage of Function(<built-in function log>) with parameters (float32, int64)
Known signatures:
 * (int64,) -> float64
 * (uint64,) -> float64
 * (float32,) -> float32
 * (float64,) -> float64
File "G:\myworkspace\python3.5\forte\forte170327\test7.py", line 28
[1] During: resolving callee type: Function(<built-in function log>)
[2] During: typing of call at G:\myworkspace\python3.5\forte\forte170327\test7.py (28)

Is this a typing error? How do I correct it?

When I run this code with the CUDA simulator (explained in detail here), there is no error. Why?
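(For reference, the simulator run mentioned above is typically switched on with Numba's NUMBA_ENABLE_CUDASIM environment variable; this sketch assumes that is what the linked instructions describe.)

import os

# must be set before numba.cuda is imported, otherwise the real driver is used
os.environ["NUMBA_ENABLE_CUDASIM"] = "1"

from numba import cuda  # cuda.jit kernels now run in the pure-Python simulator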

The Numba runtime is telling you what the problem is:

Invalid usage of Function(<built-in function log>) with parameters (float32, int64)
Known signatures:
 * (int64,) -> float64
 * (uint64,) -> float64
 * (float32,) -> float32
 * (float64,) -> float64

That is, the only available signatures take a single argument; the optional base argument is not implemented. If you look at the source here, you can see that math.log appears to be bound directly to the CUDA log function, which computes only the natural logarithm.
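If what you actually need is a base-2 logarithm inside the kernel, the usual workaround is the change-of-base identity applied with the supported single-argument math.log. A minimal sketch (the kernel name calcu_log2, the float64 output array and the 2x3 example data are my own placeholders, not your original code):

import math

import numpy as np
from numba import cuda

@cuda.jit
def calcu_log2(D, T):
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    # guard against threads that fall outside the data
    if tx < D.shape[0] and ty < D.shape[1]:
        # change of base: log2(x) = ln(x) / ln(2)
        T[tx, ty] = math.log(D[tx, ty]) / math.log(2.0)

D = np.array([[0.42487645, 0.41607881, 0.42027071],
              [2.989578,   2.834707,   2.942902]], dtype=np.float32)

dD = cuda.to_device(D)
dT = cuda.device_array(D.shape, dtype=np.float64)   # one output per input element
calcu_log2[(1, 1), (2, 3)](dD, dT)                   # grid (1,1), block (2,3)
print(dT.copy_to_host())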

My guess is that this is a documentation error in Numba. If it bothers you, I suggest reporting it.
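If you go with a change-of-base workaround like the sketch above, a quick host-side sanity check (assuming NumPy) is to compare the kernel output with np.log2:

import numpy as np

# CPU reference values; should match the kernel output element-wise,
# up to float32 rounding
print(np.log2(D))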