Error when using Numba and jit to run Python on my GPU

This code is from GeeksforGeeks and is normally used to compare run times (the GPU time being shorter):

from numba import jit, cuda, errors
import numpy as np
# to measure exec time
from timeit import default_timer as timer   

  
# normal function to run on cpu
def func(a):                                
    for i in range(10000000):
        a[i]+= 1      
  
# function optimized to run on gpu 
@jit(target ="cuda")                         
def func2(a):
    for i in range(10000000):
        a[i]+= 1
if __name__=="__main__":
    n = 10000000                            
    a = np.ones(n, dtype = np.float64)
    b = np.ones(n, dtype = np.float32)
      
    start = timer()
    func(a)
    print("without GPU:", timer()-start)    
      
    start = timer()
    func2(a)
    print("with GPU:", timer()-start)

But I get an error message at the 'def func2(a)' line:

__init__() got an unexpected keyword argument 'locals'

The error in the terminal is:

C:\Users\user\AppData\Local\Programs\Python\Python38\lib\site-packages\numba\core\decorators.py:153: NumbaDeprecationWarning: The 'target' keyword argument is deprecated.
  warnings.warn("The 'target' keyword argument is deprecated.", NumbaDeprecationWarning)

Why is this happening, and how can I fix it?

I have an Intel i7 10750H and a 1650 Ti graphics card.

To get rid of the deprecation warning: https://numba.pydata.org/numba-doc/dev/reference/deprecation.html#deprecation-of-the-target-kwarg
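
Per that deprecation notice, the replacement for @jit(target="cuda") is the cuda.jit decorator, and a cuda.jit kernel has to be launched with an explicit grid configuration. A minimal sketch applied to the question's function (the threads_per_block value and the per-thread indexing are illustrative choices, not requirements):

from numba import cuda
import numpy as np

# CUDA kernel: each thread increments the single element at its own index
@cuda.jit
def func2(a):
    i = cuda.grid(1)                  # absolute thread index in the 1-D grid
    if i < a.size:
        a[i] += 1

n = 10000000
a = np.ones(n, dtype=np.float32)
threads_per_block = 256               # illustrative choice
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
func2[blocks_per_grid, threads_per_block](a)   # kernels need a launch configuration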

This is a hack, but first try updating your CUDA version and drivers and then rerun the code. If that has no effect, try this trick as a last resort:

from numba import cuda
import code

# drop into an interactive console with the current namespace (the "hack")
code.interact(local=locals())

# function optimized to run on gpu: cuda.jit takes no 'target' keyword
@cuda.jit
def func2(a):
    # note: every GPU thread would execute this whole loop; a real kernel
    # indexes by cuda.grid(1) instead (see the sketch above)
    for i in range(10000000):
        a[i] += 1
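
If the goal is still the CPU-vs-GPU timing comparison from the question, two details matter: the first call to a cuda.jit kernel includes compilation time, and kernel launches are asynchronous, so the timer should only be read after cuda.synchronize(). A sketch under those assumptions, reusing func2, a, blocks_per_grid, and threads_per_block from the sketch above (the explicit to_device/copy_to_host transfers are added here to keep the copy cost out of the timed region):

from numba import cuda
from timeit import default_timer as timer

d_a = cuda.to_device(a)                            # explicit host-to-device copy
func2[blocks_per_grid, threads_per_block](d_a)     # warm-up call: triggers JIT compilation
cuda.synchronize()

start = timer()
func2[blocks_per_grid, threads_per_block](d_a)
cuda.synchronize()                                 # wait for the kernel to finish
print("with GPU:", timer() - start)

a = d_a.copy_to_host()                             # copy the result back to the host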