Tags: python, performance, dictionary, cython
I'm currently using dicts in a Python modeling environment to share the connection properties of connected parts. The way I'm doing it right now takes about 15-20% of my total program runtime, which is quite a lot over several million iterations...
So I've been looking into how to speed up updating multiple values in a dict and getting multiple values from a dict.
My example dict looks like this (the number of key-value pairs is expected to stay in the current range of 300 to 1000, hence I pad it to this size):
import numpy as np

val_dict = {'a': 5.0, 'b': 18.8, 'c': -55/2}
for i in range(200):
    val_dict[str(i)] = i
    val_dict[i] = i**2

keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10)
length = new_values.shape[0]
While the shapes of keys and new_values and the number of key-value pairs in val_dict will always stay the same, the values in new_values change on every iteration and therefore have to be written to val_dict on every iteration (and also retrieved on every iteration from another part of my code).
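To make the access pattern concrete, the hot loop looks roughly like this (just a sketch; compute_new_values is a hypothetical stand-in for the part of my model that produces each iteration's values):

for step in range(2000000):
    new_values = compute_new_values(step)   # hypothetical; same shape every iteration
    for i in range(length):
        val_dict[keys[i]] = new_values[i]   # update the shared dict ...
    current = [val_dict[k] for k in keys]   # ... and read it back elsewhere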
I timed several approaches, and using itemgetter from the operator module seems to be the fastest way to get multiple values out of a dict. I can define the getter before the iteration starts, since the required keys are constant:
from operator import itemgetter

getter = itemgetter(*keys)

%timeit getter(val_dict)
The slowest run took 10.45 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 140 ns per loop
I guess this is fine, or is there anything faster?
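For comparison, two obvious alternatives I could time against the prebuilt getter (neither timed above):

%timeit [val_dict[k] for k in keys]              # plain comprehension, rebuilt per call
%timeit list(map(val_dict.__getitem__, keys))    # bound-method variant

I'd expect the prebuilt itemgetter to stay ahead for a handful of constant keys, since it iterates the keys in C without rebuilding anything per call.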
But assigning these values to a numpy array through an index array slows things down:
result = np.ones(25)
idx = np.array((0, 5, 8, -1))

def getter_fun(result, idx, getter, val_dict):
    result[idx] = getter(val_dict)

%timeit getter_fun(result, idx, getter, val_dict)
The slowest run took 11.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.77 µs per loop
Is there any way to improve this? I guess the tuple unpacking is the worst part here...
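To see where the time actually goes, the call and the assignment can be timed separately:

vals = getter(val_dict)      # the dict lookups alone were timed above (~140 ns)
%timeit result[idx] = vals   # the fancy-indexed NumPy assignment alone

If this line accounts for most of the 2.77 µs, the overhead is NumPy's per-call fancy-indexing machinery rather than the dict access or the tuple unpacking.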
For setting multiple values I timed several approaches: a function that unpacks the values, a function that updates with the given key-value pairs, a function with a for loop, a dict comprehension, and a generator function.
def unpack_putter(val_dict, keys, new_values):
    (val_dict[keys[0]],
     val_dict[keys[1]],
     val_dict[keys[2]],
     val_dict[keys[3]]) = new_values

%timeit unpack_putter(val_dict, keys, new_values)
The slowest run took 8.85 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.29 µs per loop
def upd_putter(val_dict, keys, new_values):
    val_dict.update({keys[0]: new_values[0],
                     keys[1]: new_values[1],
                     keys[2]: new_values[2],
                     keys[3]: new_values[3]})

%timeit upd_putter(val_dict, keys, new_values)
The slowest run took 15.22 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 963 ns per loop
def for_putter(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

%timeit for_putter(val_dict, keys, new_values, length)
The slowest run took 12.31 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.14 µs per loop
def dictcomp_putter(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})

%timeit dictcomp_putter(val_dict, keys, new_values, length)
The slowest run took 7.13 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 1.69 µs per loop
def gen_putter(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))

%timeit gen_putter(val_dict, keys, new_values, length)
The slowest run took 10.03 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.54 µs per loop
upd_putter would be the fastest, but can I even use it with varying shapes of keys and new_values? (They will still be constant during the iterations, but each considered part has a different number of keys to update, which is determined by user input.) Interestingly, the for loop doesn't seem too bad to me. So I guess I'm doing it wrong and there has to be a faster way.
One last thing to consider: I will most likely be using Cython soon, so I suppose that would make the for loop more favorable? Or I could use joblib to parallelize the for loop. I also thought about using numba, but then I would have to get rid of all the dicts...
I hope you can help me with this problem.
Edit for MSeifert (although I'm not sure if this is what you meant):
tuplelist = list()
for i in range(200):
    tuplelist.append(i)
    tuplelist.append(str(i))
keys_long = tuple(tuplelist)
new_values_long = np.arange(0, 400)

%timeit for_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 73.5 µs per loop
%timeit dictcomp_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 96.4 µs per loop
%timeit gen_putter(val_dict, keys_long, new_values_long, 400)
10000 loops, best of 3: 129 µs per loop
MSeifert's answer:

First, let's focus on two very important things that are not about performance: maintainability and scalability.
The first two approaches, which use manual indexing:
(val_dict[keys[0]],
 val_dict[keys[1]],
 val_dict[keys[2]],
 val_dict[keys[3]]) = new_values
and
val_dict.update({keys[0]: new_values[0],
                 keys[1]: new_values[1],
                 keys[2]: new_values[2],
                 keys[3]: new_values[3]})
hard-code (a maintenance nightmare) the number of elements being inserted, so these approaches don't scale well. I therefore won't include them in the rest of the answer. I'm not saying they're bad - they just don't scale, and it's hard to compare the timings of functions that only work for a specific number of entries.
First, let me introduce two more approaches based on zip (use itertools.izip if you're on python-2.x):
def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val
This would be the "most pythonic" way to solve this (at least in my opinion).
I also changed new_values to a list, because iterating over a NumPy array is worse than converting the array to a list once and then iterating over the list. If you're interested in the details, I elaborated on that part in another answer.
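If you want to verify that claim on your machine, a quick check (exact numbers will vary):

arr = np.arange(1000)

%timeit for x in arr: pass           # yields boxed NumPy scalars one at a time
%timeit for x in arr.tolist(): pass  # one bulk conversion, then plain Python ints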
Let's see how these approaches perform:
import numpy as np

def old_for(val_dict, keys, new_values, length):
    for i in range(length):
        val_dict[keys[i]] = new_values[i]

def old_update_comp(val_dict, keys, new_values, length):
    val_dict.update({keys[i]: new_values[i] for i in range(length)})

def old_update_gen(val_dict, keys, new_values, length):
    gen = ((keys[i], new_values[i]) for i in range(length))
    val_dict.update(dict(gen))

def new1(val_dict, keys, new_values, length):
    val_dict.update(zip(keys, new_values))

def new2(val_dict, keys, new_values, length):
    for key, val in zip(keys, new_values):
        val_dict[key] = val

val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = ('b', 123, '89', 'c')
new_values = np.arange(10, 41, 10).tolist()
length = len(new_values)

%timeit old_for(val_dict, keys, new_values, length)
# 4.1 µs ± 183 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 9.56 µs ± 180 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 17 µs ± 332 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 5.92 µs ± 123 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 3.23 µs ± 84.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And with more keys and values:
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = range(1000)
new_values = range(1000)
length = len(new_values)
%timeit old_for(val_dict, keys, new_values, length)
# 1.08 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_comp(val_dict, keys, new_values, length)
# 1.08 ms ± 13.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit old_update_gen(val_dict, keys, new_values, length)
# 1.44 ms ± 31.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new1(val_dict, keys, new_values, length)
# 242 µs ± 3.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 346 µs ± 8.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
So for bigger inputs my approaches seem to be much faster (2-5x) than yours.
You could try to improve your approaches with Cython; unfortunately Cython doesn't support comprehensions inside cdef or cpdef functions, so I only cythonized the other approaches:
%load_ext cython

%%cython
cpdef new1_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    val_dict.update(zip(keys, new_values.tolist()))

cpdef new2_cy(dict val_dict, tuple keys, new_values, Py_ssize_t length):
    for key, val in zip(keys, new_values.tolist()):
        val_dict[key] = val

cpdef new3_cy(dict val_dict, tuple keys, int[:] new_values, Py_ssize_t length):
    cdef Py_ssize_t i
    for i in range(length):
        val_dict[keys[i]] = new_values[i]
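If you don't work in IPython/Jupyter, the same functions can be compiled ahead of time. A minimal setup.py sketch, assuming the cpdef functions above are saved as dict_update.pyx (the file name is just a placeholder):

# setup.py - build with: python setup.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("dict_update.pyx"))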
This time I made keys a tuple and new_values a NumPy array so that they work with the Cython functions as defined:
import numpy as np
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(4))
new_values = np.arange(4)
length = len(new_values)
%timeit new1(val_dict, keys, new_values, length)
# 7.88 µs ± 317 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2(val_dict, keys, new_values, length)
# 4.4 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 5.51 µs ± 56.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
val_dict = {'a': 1, 'b': 2, 'c': 3}
keys = tuple(range(1000))
new_values = np.arange(1000)
length = len(new_values)
%timeit new1_cy(val_dict, keys, new_values, length)
# 208 µs ± 9.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new2_cy(val_dict, keys, new_values, length)
# 231 µs ± 13.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit new3_cy(val_dict, keys, new_values, length)
# 156 µs ± 4.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Run Code Online (Sandbox Code Playgroud)
So if you have a tuple and a numpy array, you can achieve an almost 2x speedup with new3_cy, the function that uses plain indexing and a memoryview - at least if you have lots of key-value pairs that need to be inserted.
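One caveat: the int[:] memoryview in new3_cy only accepts arrays whose dtype matches C int, so on platforms where np.arange defaults to 64-bit integers you need to convert first:

new_values = np.arange(1000, dtype=np.intc)   # np.intc is NumPy's C-int dtype
new3_cy(val_dict, keys, new_values, len(new_values))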
Note that I didn't address getting multiple values out of the dict, because operator.itemgetter is probably the best way to do that.
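Putting both halves together, one iteration of your original problem could then look like this (a sketch; the loop bound and value source are hypothetical):

from operator import itemgetter

getter = itemgetter(*keys)                       # built once - the keys never change

for step in range(n_iterations):                 # n_iterations: hypothetical
    new_values = compute_new_values(step)        # hypothetical model step
    new3_cy(val_dict, keys, new_values, length)  # write this step's values
    current = getter(val_dict)                   # read them back as a tuple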