I found this useful tutorial on using low-level BLAS functions (implemented via Cython) to get big speed-ups over the standard numpy linear algebra routines in Python. I've now successfully gotten the vector-vector product working. First, I save the following as linalg.pyx:
import cython
import numpy as np
cimport numpy as np
from libc.math cimport exp
from libc.string cimport memset
from scipy.linalg.blas import fblas
REAL = np.float64
ctypedef np.float64_t REAL_t
cdef extern from "/home/jlorince/flda/voidptr.h":
void* PyCObject_AsVoidPtr(object obj)
ctypedef double (*ddot_ptr) (const int *N, const double *X, const int *incX, const double *Y, const int *incY) nogil
cdef ddot_ptr ddot=<ddot_ptr>PyCObject_AsVoidPtr(fblas.ddot._cpointer) # vector-vector multiplication
cdef int ONE = 1
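# vec_vec: dot product of two 1-D float64 numpy arrays of length `size`, via the BLAS ddot routine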
def vec_vec(syn0, syn1, size):
    cdef int lSize = size
    f = <REAL_t>ddot(&lSize, <REAL_t *>(np.PyArray_DATA(syn0)), &ONE, <REAL_t *>(np.PyArray_DATA(syn1)), &ONE)
    return f
(The source code for voidptr.h is provided here.)
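For reference, a setup.py along the lines of the standard Cython + numpy recipe below should be enough to build this (the module and file names here just mirror linalg.pyx above; adjust for your own layout), followed by python setup.py build_ext --inplace:

# setup.py -- standard Cython + numpy build script for linalg.pyx
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np

extensions = [
    Extension(
        "linalg",
        ["linalg.pyx"],
        include_dirs=[np.get_include()],  # needed so "cimport numpy" finds the headers
    )
]

setup(ext_modules=cythonize(extensions))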
Once I compile it, it works fine, and it's definitely faster than np.inner:
In [1]: import linalg
In [2]: import numpy as np
In [3]: x = np.random.random(100)
In [4]: %timeit np.inner(x,x)
1000000 loops, best of 3: 1.61 µs per loop
In [5]: %timeit linalg.vec_vec(x,x,100)
1000000 loops, best of 3: 483 ns per loop
In [8]: np.all(np.inner(x,x)==linalg.vec_vec(x,x,100))
Out[8]: True
Now, this is fine, but it only works for computing the dot/inner product of two vectors (which are equivalent in this case). What I need to do now is implement a similar function (which I hope will give a similar speed-up) for vector-matrix inner products. That is, I want to replicate the behavior of np.inner when it is passed a 1D array and a 2D matrix:
In [4]: x = np.random.random(5)
In [5]: y = np.random.random((5,5))
In [6]: np.inner(x,y)
Out[6]: array([ 1.42116225, 1.13242989, 1.95690196, 1.87691992, 0.93967486])
This is equivalent to computing the inner/dot product of the 1D array with each row of the matrix (again, dot and inner products are equivalent for 1D arrays):
In [32]: [np.inner(x,row) for row in y]
Out[32]:
[1.4211622497461549, 1.1324298918119025, 1.9569019618096966, 1.8769199192990056, 0.93967485730285505]
From what I can see in the BLAS documentation, I think I need to start with something like this (using dgemv):
ctypedef double (*dgemv_ptr) (const char *TRANS, const int *M, const int *N, const double *ALPHA, const double *A, const int *LDA, const double *X, const int *incX, const double *BETA, const double *Y, const int *incY)
cdef dgemv_ptr dgemv=<dgemv_ptr>PyCObject_AsVoidPtr(fblas.dgemv._cpointer) # matrix-vector multiplication
But I need help with (a) defining the actual function that I can call from Python (i.e. a vec-matrix function analogous to vec_vec above), and (b) knowing how to use it to correctly replicate the behavior of np.inner, which is the model for what I'm implementing.
Edit: Here is a link to the relevant BLAS documentation for dgemv, which is what I need to use, as confirmed here:
In [13]: np.allclose(scipy.linalg.blas.fblas.dgemv(1.0,y,x), np.inner(x,y))
Out[13]: True
But used out of the box like this, it's actually slower than plain np.inner, so I still need help with the Cython implementation.
Edit 2: Here is my latest attempt, which compiles fine, but crashes Python with a segmentation fault whenever I try to run it:
cdef int ONE = 1
cdef char tr = 'n'
cdef REAL_t ZEROF = <REAL_t>0.0
cdef REAL_t ONEF = <REAL_t>1.0
def mat_vec(mat,vec,mat_rows,mat_cols):
    cdef int m = mat_rows
    cdef int n = mat_cols
    out = <REAL_t>dgemv(&tr, &m, &n, &ONEF, <REAL_t *>(np.PyArray_DATA(mat)), &m, <REAL_t *>(np.PyArray_DATA(vec)), &ONE, &ZEROF, NULL, &ONE)
    return out
After compiling, I try running linalg.mat_vec(y,x,5,5) (using the same x and y as above), but it just crashes. I think I'm close, but I don't know what else to change...
From @Pietro Saccardi:
int dgemv_(char *trans, integer *m, integer *n, doublereal *
alpha, doublereal *a, integer *lda, doublereal *x, integer *incx,
doublereal *beta, doublereal *y, integer *incy)
...
Y - DOUBLE PRECISION array of DIMENSION at least
( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n'
and at least
( 1 + ( n - 1 )*abs( INCY ) ) otherwise.
Before entry with BETA non-zero, the incremented array Y
must contain the vector y. On exit, Y is overwritten by the
updated vector y.
I doubt you can use NULL for Y in the call.
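Based on that hint, my best guess at a fixed version is something like the sketch below (not yet verified): allocate a real output array and pass its data pointer as Y instead of NULL, and, since numpy arrays are C/row-major while BLAS assumes Fortran/column-major storage, call dgemv with TRANS='T' and the dimensions swapped so that it effectively computes mat @ vec. It reuses ONE, ONEF, ZEROF, REAL/REAL_t and the dgemv pointer from the snippets above:

cdef char trans = b'T'  # BLAS sees the row-major buffer as the transposed matrix

def mat_vec(mat, vec, mat_rows, mat_cols):
    cdef int rows = mat_rows   # length of the result, i.e. len(np.inner(vec, mat))
    cdef int cols = mat_cols   # must equal len(vec)
    # allocate the output vector instead of passing NULL for Y
    cdef np.ndarray[REAL_t, ndim=1] out = np.zeros(rows, dtype=REAL)
    # The memory of `mat` (rows x cols, C order) looks to Fortran like a
    # cols x rows matrix (i.e. mat.T), so with TRANS='T', M=cols, N=rows and
    # LDA=cols, dgemv computes (mat.T).T @ vec = mat @ vec and writes it to out.
    dgemv(&trans, &cols, &rows, &ONEF,
          <REAL_t *>(np.PyArray_DATA(mat)), &cols,
          <REAL_t *>(np.PyArray_DATA(vec)), &ONE,
          &ZEROF,
          <REAL_t *>(np.PyArray_DATA(out)), &ONE)
    return out

If that's right, linalg.mat_vec(y, x, 5, 5) should return the same values as np.inner(x, y) for the example above (both arrays need to be contiguous float64).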