NumPy "vectorized" row-wise dot product runs slower than a for loop

Asked by php*_*kie (score 1) · tags: python, arrays, numpy, matrix, vectorization

Given a matrix A of shape (n,k) and a vector s of size n, I want to compute a matrix G of shape (k,k) as follows:

For all i in {0, ..., n-1}: G += s[i] * A[i].T * A[i]
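Each update adds the outer product of row A[i] with itself, weighted by s[i]. As a minimal sketch of the quantity being computed (np.outer and np.einsum are my additions here, not part of the code below), the whole sum can also be collapsed into a single einsum call:

```python
import numpy as np

n, k = 100, 8
A = np.random.rand(n, k)
s = np.random.rand(n)

# Loop form: G[j, l] = sum_i s[i] * A[i, j] * A[i, l]
G_loop = np.zeros((k, k))
for i in range(n):
    G_loop += s[i] * np.outer(A[i], A[i])

# Same quantity, one vectorized call
G_einsum = np.einsum('i,ij,il->jl', s, A, A)

print(np.allclose(G_loop, G_einsum))
```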

I tried implementing it with a for loop (method 1) and with a vectorized approach (method 2), but for large values of k (in particular when k > 500) the for-loop implementation is faster.

The code is as follows:

import numpy as np
k = 200
n = 50000
A = np.random.randint(0, 1000, (n,k)) # generates random data for the matrix A (n,k)
G1 = np.zeros((k,k)) # initialize G1 as a (k,k) matrix
s = np.random.randint(0, 1000, n) * 1.0 # initialize a random vector of size n

# METHOD 1
for i in xrange(n):
    G1 += s[i] * np.dot(np.array([A[i]]).T, np.array([A[i]]))

# METHOD 2
G2 = np.dot(A[:,np.newaxis].T, s[:,np.newaxis]*A)
G2 = np.squeeze(G2) # reduces dimension from (k,1,k) to (k,k)

The matrices G1 and G2 are identical (both equal G); the only difference is how they are computed. Is there a smarter, more efficient way to compute this?

Finally, these are the timings I got with random sizes of k and n:

Test #: 1
k,n: (866, 45761)
Method1: 337.457569838s
Method2: 386.290487051s
--------------------
Test #: 2
k,n: (690, 48011)
Method1: 152.999140978s
Method2: 226.080267191s
--------------------
Test #: 3
k,n: (390, 5317)
Method1: 5.28722500801s
Method2: 4.86999702454s
--------------------
Test #: 4
k,n: (222, 5009)
Method1: 1.73456382751s
Method2: 0.929286956787s
--------------------
Test #: 5
k,n: (915, 16561)
Method1: 101.782826185s
Method2: 159.167108059s
--------------------
Test #: 6
k,n: (130, 11283)
Method1: 1.53138184547s
Method2: 0.64450097084s
--------------------
Test #: 7
k,n: (57, 37863)
Method1: 1.44776391983s
Method2: 0.494270086288s
--------------------
Test #: 8
k,n: (110, 34599)
Method1: 3.51851701736s
Method2: 1.61688089371s

Answered by Div*_*kar (score 5)

Two further-improved versions would be:

(A.T*s).dot(A)
(A.T).dot(A*s[:,None])
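These work because multiplying A.T (shape (k,n)) by s broadcasts s along the last axis, so row i of A is scaled by s[i] before a single (k,n) x (n,k) matrix product. A small sketch of that broadcasting step (the tiny A and s values are my own, chosen only for illustration):

```python
import numpy as np

A = np.arange(6.0).reshape(3, 2)   # n=3, k=2
s = np.array([2.0, 3.0, 4.0])

# A.T has shape (k, n); s broadcasts along the last axis,
# so column i of A.T (= row i of A) is scaled by s[i]
scaled = A.T * s                   # shape (k, n)
G = scaled.dot(A)                  # (k, n) @ (n, k) -> (k, k)

# Reference: explicit weighted sum of outer products
ref = sum(s[i] * np.outer(A[i], A[i]) for i in range(3))
print(np.allclose(G, ref))
```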

The problem with method2:

With method2, we are creating A[:,np.newaxis].T, which has shape (k,1,n), i.e. a 3D array. With 3D arrays, I think np.dot falls into some kind of loop and is not truly vectorized (the source code might show more information).

For this kind of 3D tensor multiplication, it's better to use the tensor equivalent: np.tensordot. So, the improved version of method2 becomes:

G2 = np.tensordot(A[:,np.newaxis].T, s[:,np.newaxis]*A, axes=((2),(0)))
G2 = np.squeeze(G2)

Since we are sum-reducing just one axis from each input to np.tensordot, we don't really need tensordot here; simply using np.dot on the squeezed-in versions would do. That leads us back to method4.
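To make the shapes concrete (the small n and k below are values I picked for illustration), note how np.newaxis makes the first operand 3D, so np.dot returns a (k,1,k) result that still needs a squeeze:

```python
import numpy as np

n, k = 5, 3
A = np.random.rand(n, k)
s = np.random.rand(n)

x = A[:, np.newaxis].T   # shape (k, 1, n): 3D, so np.dot no longer maps to one 2D BLAS call
y = s[:, np.newaxis] * A # shape (n, k)

# np.dot sums over the last axis of x and the first axis of y -> (k, 1, k)
out = np.dot(x, y)
print(out.shape)

# Squeezing recovers the same (k, k) matrix as method3/method4
print(np.allclose(out.squeeze(), (A.T * s).dot(A)))
```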

Runtime test

Approaches:

def method1(A, s):
    G1 = np.zeros((k,k)) # initialize G1 as a (k,k) matrix
    for i in xrange(n):
        G1 += s[i] * np.dot(np.array([A[i]]).T, np.array([A[i]]))
    return G1

def method2(A, s):
    G2 = np.dot(A[:,np.newaxis].T, s[:,np.newaxis]*A)
    G2 = np.squeeze(G2) # reduces dimension from (k,1,k) to (k,k)
    return G2

def method3(A, s):
    return (A.T*s).dot(A)

def method4(A, s):
    return (A.T).dot(A*s[:,None])

def method2_improved(A, s):
    G2 = np.tensordot(A[:,np.newaxis].T, s[:,np.newaxis]*A, axes=((2),(0)))
    G2 = np.squeeze(G2)
    return G2

Timings and verification:

In [56]: k = 200
    ...: n = 5000
    ...: A = np.random.randint(0, 1000, (n,k))
    ...: s = np.random.randint(0, 1000, n) * 1.0
    ...: 

In [72]: print np.allclose(method1(A, s), method2(A, s))
    ...: print np.allclose(method1(A, s), method3(A, s))
    ...: print np.allclose(method1(A, s), method4(A, s))
    ...: print np.allclose(method1(A, s), method2_improved(A, s))
    ...: 
True
True
True
True

In [73]: %timeit method1(A, s)
    ...: %timeit method2(A, s)
    ...: %timeit method3(A, s)
    ...: %timeit method4(A, s)
    ...: %timeit method2_improved(A, s)
    ...: 
1 loops, best of 3: 1.12 s per loop
1 loops, best of 3: 693 ms per loop
100 loops, best of 3: 8.12 ms per loop
100 loops, best of 3: 8.17 ms per loop
100 loops, best of 3: 8.28 ms per loop