I have a question about the Python CUDA libraries in Continuum's Accelerate and numba packages. Is using the decorator @jit with target = gpu the same as using @cuda.jit?

I am trying to get into GPU processing with numba. I have this MWE:
import numpy as np
import numba

@numba.njit
def function():
    ar = np.zeros((3, 3))
    for i in range(3):
        ar[i] = (1, 2, 3)
    return ar

ar = function()
print(ar)
Output:
[[1. 2. 3.]
 [1. 2. 3.]
 [1. 2. 3.]]
Now I want to run this function on my GPU. I tried the following decorators:
@numba.njit(target='cuda')
@numba.njit(target='gpu')
@numba.cuda.jit
None of these work. Here is the error message I get with the decorators above:
Traceback (most recent call last):
File "/home/amu/Desktop/RL_framework/help_functions/test.py", line 4, in <module>
@numba.jit(target='cuda')
File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/core/decorators.py", line 171, in jit
targetoptions=options, **dispatcher_args)
File "/home/amu/anaconda3/lib/python3.7/site-packages/numba/core/decorators.py", line …