So I'm trying to write a Python function that returns a metric called the Mielke-Berry R value. The metric is calculated as follows:

R = 1 - (n^2 * MAE) / (sum_i sum_j |f_j - o_i|)

where f is the forecasted array, o is the observed array, and n is their common length.

The current code I have written works, but because of the sum of sums in the equation, the only way I could think of to compute it in Python was with a nested for loop, which is very slow...
Here is my code:
import numpy as np

def mb_r(forecasted_array, observed_array):
    """Returns the Mielke-Berry R value."""
    assert len(observed_array) == len(forecasted_array)
    y = forecasted_array.tolist()
    x = observed_array.tolist()
    total = 0
    for i in range(len(y)):
        for j in range(len(y)):
            total = total + abs(y[j] - x[i])
    total = np.array([total])
    return 1 - (mae(forecasted_array, observed_array) * forecasted_array.size ** 2 / total[0])
The reason I convert the input arrays to lists is that I have heard (haven't tested it yet) that indexing a numpy array with a Python for loop is very slow.
I feel like there may be some kind of numpy function that could solve this much faster; does anyone know of one?
Here's one vectorized way leveraging broadcasting to get total -
np.abs(forecasted_array[:,None] - observed_array).sum()
To accept both lists and arrays alike, we can use NumPy's built-in outer subtraction, like so -
np.abs(np.subtract.outer(forecasted_array, observed_array)).sum()
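For completeness, here is a minimal sketch of how that total can be plugged back into the R formula (the name mb_r_vectorized is hypothetical, and the MAE is computed inline here because the original mae helper was not shown):

import numpy as np

def mb_r_vectorized(forecasted_array, observed_array):
    """Mielke-Berry R via a vectorized pairwise absolute-difference sum."""
    forecasted_array = np.asarray(forecasted_array, dtype=float)
    observed_array = np.asarray(observed_array, dtype=float)
    # Sum of |f_j - o_i| over all pairs; allocates an (N, N) intermediate array
    total = np.abs(np.subtract.outer(forecasted_array, observed_array)).sum()
    # Mean absolute error computed inline (assumed equivalent to the undisclosed mae helper)
    mae_value = np.mean(np.abs(forecasted_array - observed_array))
    return 1 - mae_value * forecasted_array.size ** 2 / total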
We can also leverage the numexpr module for a faster absolute-value computation and perform the summation-reduction in a single numexpr evaluate call, making it more memory efficient, like so -
import numexpr as ne
forecasted_array2D = forecasted_array[:,None]
total = ne.evaluate('sum(abs(forecasted_array2D - observed_array))')
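Note that for a full reduction ne.evaluate returns a 0-d array; if a plain Python scalar is needed, something along these lines should work (a small usage sketch):

total = ne.evaluate('sum(abs(forecasted_array2D - observed_array))').item()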
If you are not memory constrained, the first step in optimizing nested loops with numpy is to use broadcasting and perform the operation in a vectorized way:
import numpy as np

def mb_r(forecasted_array, observed_array):
    """Returns the Mielke-Berry R value."""
    assert len(observed_array) == len(forecasted_array)
    total = np.abs(forecasted_array[:, np.newaxis] - observed_array).sum()  # Broadcasting
    return 1 - (mae(forecasted_array, observed_array) * forecasted_array.size ** 2 / total)
However, while the loops in this case happen in C rather than Python, this approach involves the allocation of an array of size (N, N).
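For instance, with N = 10,000 float64 values the intermediate (N, N) array alone occupies 10,000^2 * 8 bytes, about 800 MB.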
As mentioned above, broadcasting implies a huge memory overhead, so it should be used with care; it is not always the right way. You may have the first impression to use it everywhere, but do not. Not so long ago I was also confused by this fact; see my question Numpy ufuncs speed vs for loop speed. Not to be too verbose, I will show this on your example:
import numpy as np

# Broadcast version
def mb_r_bcast(forecasted_array, observed_array):
    return np.abs(forecasted_array[:, np.newaxis] - observed_array).sum()

# Inner loop unrolled version
def mb_r_unroll(forecasted_array, observed_array):
    size = len(observed_array)
    total = 0.
    for i in range(size):  # There is only one loop
        total += np.abs(forecasted_array - observed_array[i]).sum()
    return total
Small-size arrays (broadcasting is faster)
forecasted = np.random.rand(100)
observed = np.random.rand(100)
%timeit mb_r_bcast(forecasted, observed)
57.5 µs ± 359 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit mb_r_unroll(forecasted, observed)
1.17 ms ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Medium-size arrays (about equal)
forecasted = np.random.rand(1000)
observed = np.random.rand(1000)
%timeit mb_r_bcast(forecasted, observed)
15.6 ms ± 208 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit mb_r_unroll(forecasted, observed)
16.4 ms ± 13.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Large-size arrays (broadcasting is slower)
forecasted = np.random.rand(10000)
observed = np.random.rand(10000)
%timeit mb_r_bcast(forecasted, observed)
1.51 s ± 18 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit mb_r_unroll(forecasted, observed)
377 ms ± 994 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
As you can see, for small arrays the broadcast version is 20x faster than the unrolled one, for medium-size arrays they are roughly equal, but for large arrays it is 4x slower, because the memory overhead is paying its own costly price.
Another approach is to use numba and its magically powerful @jit decorator. In this case, only a slight modification of your initial code is necessary. Also, to make the loops parallel you should change range to prange and supply the parallel=True keyword argument. In the snippet below I use the @njit decorator, which is the same as @jit(nopython=True):
from numba import njit, prange

@njit(parallel=True)
def mb_r_njit(forecasted_array, observed_array):
    """Returns the Mielke-Berry R value."""
    assert len(observed_array) == len(forecasted_array)
    total = 0.
    size = len(forecasted_array)
    for i in prange(size):
        observed = observed_array[i]
        for j in prange(size):
            total += abs(forecasted_array[j] - observed)
    return 1 - (mae(forecasted_array, observed_array) * size ** 2 / total)
You did not provide the mae function, but to run the code in njit mode you must also decorate the mae function, or, if it is just a number, pass it as an argument to the jitted function.
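For illustration, a minimal jitted MAE helper could look like the following (this is a hypothetical implementation, assuming mae computes the mean absolute error between the two arrays):

from numba import njit

@njit
def mae(forecasted_array, observed_array):
    # Mean absolute error; decorated with @njit so it can be called
    # from inside the jitted mb_r_njit function
    total = 0.
    for i in range(len(forecasted_array)):
        total += abs(forecasted_array[i] - observed_array[i])
    return total / len(forecasted_array)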
Python's scientific ecosystem is huge; I'll just mention a few other equivalent options for speeding things up: Cython, Nuitka, Pythran, bottleneck and many others. Perhaps you are interested in gpu computing, but that is really another story.
On my computer, unfortunately an old one, the timings are:
import numpy as np
import numexpr as ne
forecasted_array = np.random.rand(10000)
observed_array = np.random.rand(10000)
Initial version
%timeit mb_r(forecasted_array, observed_array)
23.4 s ± 430 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numexpr
%%timeit
forecasted_array2d = forecasted_array[:, np.newaxis]
ne.evaluate('sum(abs(forecasted_array2d - observed_array))')[()]
784 ms ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Broadcast version
%timeit mb_r_bcast(forecasted, observed)
1.47 s ± 4.13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Inner loop unrolled version
%timeit mb_r_unroll(forecasted, observed)
389 ms ± 11.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numba njit(parallel=True) version
%timeit mb_r_njit(forecasted_array, observed_array)
32 ms ± 4.05 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
As you can see, the njit approach is 730x faster than your initial solution, and it is also 24.5x faster than the numexpr solution (maybe you need Intel's Vector Math Library to accelerate it). The simple approach with the inner loop unrolled also gives you a 60x speedup over your initial version. My specs are:
Intel(R) Core(TM)2 Quad CPU Q9550 2.83GHz
Python 3.6.3
numpy 1.13.3
numba 0.36.1
numexpr 2.6.4
I was surprised by your phrase "I have heard (haven't tested yet) that indexing a numpy array with a python for loop is very slow." So I tested it:
arr = np.arange(1000)
ls = arr.tolist()
%timeit for i in arr: pass
69.5 µs ± 282 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit for i in ls: pass
13.3 µs ± 81.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit for i in range(len(arr)): arr[i]
167 µs ± 997 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit for i in range(len(ls)): ls[i]
90.8 µs ± 1.07 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
It turns out you were right: iterating over a list is 2-5x faster. Of course, these results should be taken with a certain amount of irony :)