Iterating quickly over numpy arrays

Ste*_*van 7 python arrays filtering signal-processing numpy

I'm new to Python and I'm trying to do some basic signal-processing work, but I'm running into serious performance problems. Is there a Python trick for doing this in a vectorized way? Basically I'm trying to implement a first-order filter, but the filter characteristics can change from one sample to the next. If it were just a single filter I would use numpy.signal.lfilter(), but this is a bit trickier. Here is the snippet of code that is very slow:

#filter state
state = 0

#perform filtering
for sample in amplitude:
    if sample == 1.0:  # attack filter
        sample = (1.0 - att_coeff) * sample + att_coeff * state
    else:              # release filter
        sample = (1.0 - rel_coeff) * sample + rel_coeff * state

    state = sample

ser*_*lle 7

You could consider using one of the Python-to-native-code compilers, such as Cython, Numba, or Pythran.

For example, timing your original code with timeit gives me:

$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)'
10 loops, best of 3: 120 msec per loop

Annotating it for Pythran, like this:

#pythran export co(float[], float, float)
def co(amplitude, att_coeff, rel_coeff):
    # filter state
    state = 0

    # perform filtering
    for sample in amplitude:
        if sample == 1.0: # attack filter
            state = (1.0 - att_coeff) * sample + att_coeff * state
        else:             # release filter
            state = (1.0 - rel_coeff) * sample + rel_coeff * state
    return state

and compiling it with

$ pythran co.py

gives me:

$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)' 
1000 loops, best of 3: 253 usec per loop

That's roughly a 470x speedup! I would expect Numba and Cython to give similar speedups.
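
For reference, here is a minimal Numba sketch of the same loop (my own addition, not timed above; the name co_numba is illustrative). Numba's @njit decorator JIT-compiles the loop to native code on first call:

import numpy as np
from numba import njit

@njit
def co_numba(amplitude, att_coeff, rel_coeff):
    # filter state
    state = 0.0

    # perform filtering
    for sample in amplitude:
        if sample == 1.0:  # attack filter
            state = (1.0 - att_coeff) * sample + att_coeff * state
        else:              # release filter
            state = (1.0 - rel_coeff) * sample + rel_coeff * state
    return state

# the first call triggers compilation; later calls run at native speed
co_numba(np.random.random(100000), .5, .7)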


Cha*_*ley 0

Each entry depends on the previous one, and the previous entry has to be computed before the current entry can be. So every entry must be computed serially, and the computation cannot be done in a vectorized (i.e. mapped, parallel) way.
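
To make that dependency explicit, here is a minimal sketch of the recurrence (the output array y and the function name are illustrative additions, not from the question): each output needs the state produced from the previous sample, so the loop cannot be replaced by a single elementwise NumPy expression.

import numpy as np

def recursive_filter(amplitude, att_coeff, rel_coeff):
    y = np.empty_like(amplitude)
    state = 0.0
    for n, sample in enumerate(amplitude):
        coeff = att_coeff if sample == 1.0 else rel_coeff
        # y[n] depends on state, which was computed from sample n-1
        state = (1.0 - coeff) * sample + coeff * state
        y[n] = state
    return y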