What is the difference between np.sum and np.add.reduce?
Although the documentation is quite explicit:
For example, add.reduce() is equivalent to sum().
the two seem to perform quite differently: for relatively small array sizes, add.reduce is about twice as fast.
$ python -mtimeit -s"import numpy as np; a = np.random.rand(100); summ=np.sum" "summ(a)"
100000 loops, best of 3: 2.11 usec per loop
$ python -mtimeit -s"import numpy as np; a = np.random.rand(100); summ=np.add.reduce" "summ(a)"
1000000 loops, best of 3: 0.81 usec per loop
$ python -mtimeit -s"import numpy as np; a = np.random.rand(1000); summ=np.sum" "summ(a)"
100000 loops, best of 3: 2.78 usec per loop
$ python -mtimeit -s"import numpy as …Run Code Online (Sandbox Code Playgroud) 这是一个软问题,但我怀疑理解这一点将帮助我(也希望其他人)更好地理解numpy(我最近从 MATLAB 迁移)的哲学。
This is a soft question, but I suspect that understanding it will help me (and hopefully others) get a better feel for the philosophy of numpy (I recently migrated from MATLAB).

Some functions, such as sum, max, transpose and conjugate, are methods of the ndarray class, so you can call arr.sum(), arr.sum(axis=1), and so on.

Most functions, however, live in the numpy module, so you have to call them as numpy.count_nonzero(arr), numpy.roll(arr), etc. Many of them take only a single ndarray object as input, so design-wise they could just as well have been attributes of the array itself.

What is the logic behind this design choice?
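To make the asymmetry concrete, here is a small illustration of my own (not from the original post):

import numpy as np

arr = np.arange(12).reshape(3, 4)

# Some operations exist both as ndarray methods and as module functions:
print(arr.sum(axis=1))      # method form
print(np.sum(arr, axis=1))  # function form, same result

# Others exist only at module level; there is no arr.count_nonzero():
print(np.count_nonzero(arr))
print(np.roll(arr, 1, axis=0))

# One practical difference: the function forms also accept anything
# array-like, not just ndarrays:
print(np.sum([[1, 2], [3, 4]]))  # a plain nested list works here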
For a one-dimensional numpy array a, I thought np.sum(a) and a.sum() were equivalent functions, but I just ran a simple experiment and the latter always seems to be slightly faster:
In [1]: import numpy as np
In [2]: a = np.arange(10000)
In [3]: %timeit np.sum(a)
The slowest run took 16.85 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 6.46 µs per loop
In [4]: %timeit a.sum()
The slowest run took 19.80 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best …
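A sketch of what the timings suggest (my illustration, assuming the usual explanation that np.sum for an ndarray forwards to the same underlying reduction as the method): the difference is a small constant dispatch cost, not a different algorithm.

import numpy as np

a = np.arange(10000)

# For an ndarray, np.sum(a) is a thin Python wrapper that ends up in the
# same C reduction as a.sum(), so the results are identical.
assert np.sum(a) == a.sum()

# Calling the method directly skips the wrapper's argument handling,
# which accounts for the small constant per-call gap in the timings
# above. The wrapper exists partly so that array-likes without a .sum()
# method still work:
print(np.sum([1, 2, 3]))   # works on a plain list
# [1, 2, 3].sum()          # would raise AttributeError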