Speeding up element-wise array multiplication in Python

JEq*_*hua 13 performance numpy matrix-multiplication python-2.7 numba

I've been playing around with numba and numexpr, trying to speed up a simple element-wise matrix multiplication. I haven't been able to get better results; both are basically (speed-wise) equivalent to numpy's multiply function. Has anyone had any luck in this area? Am I using numba and numexpr wrong (I'm quite new to this), or is this just a bad approach to trying to speed things up? Here is a reproducible example, and thanks in advance:

import numpy as np
from numba import autojit
import numexpr as ne

a=np.random.rand(10,5000000)

# numpy
multiplication1 = np.multiply(a,a)

# numba
def multiplix(X,Y):
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, N), dtype=np.float)
    for i in range(M):
        for j in range(N):
            D[i,j] = X[i, j] * Y[i, j]
    return D

mul = autojit(multiplix)
multiplication2 = mul(a,a)

# numexpr
def numexprmult(X,Y):
    M = X.shape[0]
    N = X.shape[1]
    return ne.evaluate("X * Y")

multiplication3 = numexprmult(a,a) 

Ale*_*ogt 11

What about using Fortran via ctypes?

elementwise.F90:

subroutine elementwise( a, b, c, M, N ) bind(c, name='elementwise')
  use iso_c_binding, only: c_float, c_int

  integer(c_int),intent(in) :: M, N
  real(c_float), intent(in) :: a(M, N), b(M, N)
  real(c_float), intent(out):: c(M, N)

  integer :: i,j

  forall (i=1:M,j=1:N)
    c(i,j) = a(i,j) * b(i,j)
  end forall

end subroutine 

elementwise.py:

from ctypes import CDLL, POINTER, c_int, c_float
import numpy as np
import time

fortran = CDLL('./elementwise.so')
fortran.elementwise.argtypes = [ POINTER(c_float), 
                                 POINTER(c_float), 
                                 POINTER(c_float),
                                 POINTER(c_int),
                                 POINTER(c_int) ]

# Setup    
M=10
N=5000000

a = np.empty((M,N), dtype=c_float)
b = np.empty((M,N), dtype=c_float)
c = np.empty((M,N), dtype=c_float)

a[:] = np.random.rand(M,N)
b[:] = np.random.rand(M,N)


# Fortran call
start = time.time()
fortran.elementwise( a.ctypes.data_as(POINTER(c_float)), 
                     b.ctypes.data_as(POINTER(c_float)), 
                     c.ctypes.data_as(POINTER(c_float)), 
                     c_int(M), c_int(N) )
stop = time.time()
print 'Fortran took ',stop - start,'seconds'

# Numpy
start = time.time()
c = np.multiply(a,b)
stop = time.time()
print 'Numpy took ',stop - start,'seconds'

I compiled the Fortran file with:

gfortran -O3 -funroll-loops -ffast-math -floop-strip-mine -shared -fPIC \
         -o elementwise.so elementwise.F90

The output shows a speedup of roughly 10%:

 $ python elementwise.py 
Fortran took  0.213667869568 seconds
Numpy took  0.230120897293 seconds
 $ python elementwise.py 
Fortran took  0.209784984589 seconds
Numpy took  0.231616973877 seconds
 $ python elementwise.py 
Fortran took  0.214708089828 seconds
Numpy took  0.25369310379 seconds

  • Lovely answer, as JEquihua said. However, to get accurate timings, the first Fortran call has to be made just to initialize the shared library; the second call then gives the most accurate figure. The speedup should be around 50%. Another way to get an accurate measurement is to use a loop (say, 100 calls to the same function) and take the average time, as in the sketch below. (2 upvotes)
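
Following up on that comment, a minimal sketch of how the timing in elementwise.py above could be adjusted: one throwaway warm-up call so that loading and initializing the shared library is not timed, then the average over a number of repetitions (the count of 100 is an arbitrary choice; fortran, a, b, c, M and N are reused from elementwise.py):

import time

n_calls = 100
args = ( a.ctypes.data_as(POINTER(c_float)),
         b.ctypes.data_as(POINTER(c_float)),
         c.ctypes.data_as(POINTER(c_float)),
         c_int(M), c_int(N) )

# warm-up call: the cost of initializing the shared library is paid here
fortran.elementwise(*args)

# time n_calls identical calls and report the average
start = time.time()
for _ in range(n_calls):
    fortran.elementwise(*args)
stop = time.time()
print('Fortran took %g seconds per call (average over %d calls)'
      % ((stop - start) / n_calls, n_calls))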

Jen*_*man 6

How are you doing your timing?

The creation of the random array accounts for most of the computation; if you include it in the timing you will hardly see any real difference between the results. However, if you create it in advance, you can actually compare the methods.
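
The same approach works outside IPython as well; here is a minimal sketch using the standard timeit module, where the array is created once up front and only the multiplication itself is timed (the repeat and loop counts are arbitrary choices):

import timeit
import numpy as np

a = np.random.rand(10, 5000000)   # created once, outside the timed code

# time only np.multiply, best of 3 runs of 10 loops each
runs = timeit.repeat(lambda: np.multiply(a, a), repeat=3, number=10)
print('10 loops, best of 3: %.0f ms per loop' % (min(runs) / 10 * 1e3))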

Here are my results, and I keep seeing what you are seeing: numpy and numba give about the same result (with numba being a little bit faster).

(I don't have numexpr available.)

In [1]: import numpy as np
In [2]: from numba import autojit
In [3]: a=np.random.rand(10,5000000)

In [4]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 90 ms per loop

In [5]: # numba

In [6]: def multiplix(X,Y):
   ...:         M = X.shape[0]
   ...:         N = X.shape[1]
   ...:         D = np.empty((M, N), dtype=np.float)
   ...:         for i in range(M):
   ...:                 for j in range(N):
   ...:                         D[i,j] = X[i, j] * Y[i, j]
   ...:         return D
   ...:         

In [7]: mul = autojit(multiplix)

In [26]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 182 ms per loop

In [27]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 185 ms per loop

In [28]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 181 ms per loop

In [29]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 179 ms per loop

In [30]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 180 ms per loop

In [31]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 178 ms per loop

Update: I used the latest numba version, just compiled from source: '0.11.0-3-gea20d11-dirty'.

I tested this with both the default numpy in Fedora 19 ('1.7.1') and numpy '1.6.1' compiled from source, linked to:

Update3: My earlier results were of course incorrect; I had the return D inside the inner loop, so 90% of the computation was skipped.

This gives more evidence for ali_m's hypothesis that it really is hard to do better than the already very optimized C code.
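
For illustration only, here is the mistake described above in sketch form (the code in the question already has the return in the right place):

import numpy as np

def multiplix_buggy(X, Y):
    # returning from inside the loop nest ends the function early,
    # so most of the elements of D are never computed
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, N), dtype=np.float64)
    for i in range(M):
        for j in range(N):
            D[i, j] = X[i, j] * Y[i, j]
            return D

def multiplix_fixed(X, Y):
    # return only after the full M x N loop has finished, as in the question
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, N), dtype=np.float64)
    for i in range(M):
        for j in range(N):
            D[i, j] = X[i, j] * Y[i, j]
    return D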

However, if you want to do something more complicated, e.g.

np.sqrt(((X[:, None, :] - X) ** 2).sum(-1))

I can reproduce the numbers Jake Vanderplas got:

In [14]: %timeit pairwise_numba(X)
10000 loops, best of 3: 92.6 us per loop

In [15]: %timeit pairwise_numpy(X)
1000 loops, best of 3: 662 us per loop
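For reference, pairwise_numpy and pairwise_numba are not defined in this answer; the following is a minimal sketch of what they could look like, modelled on Jake Vanderplas' benchmark (the input size is an assumption here, not necessarily the one behind the timings above):

import numpy as np
from numba import autojit

def pairwise_numpy(X):
    # broadcasted pairwise Euclidean distances, i.e. the expression shown above
    return np.sqrt(((X[:, None, :] - X) ** 2).sum(-1))

def pairwise_python(X):
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, M), dtype=np.float64)
    for i in range(M):
        for j in range(M):
            d = 0.0
            for k in range(N):
                tmp = X[i, k] - X[j, k]
                d += tmp * tmp
            D[i, j] = np.sqrt(d)
    return D

# JIT-compile the explicit loops, as with multiplix earlier in the thread
pairwise_numba = autojit(pairwise_python)

X = np.random.random((1000, 3))   # example input; the size is an assumption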

So it seems that what you are doing is already so well optimized by numpy that it is hard to do any better.