Ash*_*ary 34 python list-comprehension list generator-expression timeit
While answering this question I preferred the generator expression here and used it, thinking it would be faster because a generator doesn't need to build the whole list first:
>>> lis=[['a','b','c'],['d','e','f']]
>>> 'd' in (y for x in lis for y in x)
True
Levon used a list comprehension in his solution,
>>> lis = [['a','b','c'],['d','e','f']]
>>> 'd' in [j for i in lis for j in i]
True
But when I timed these, the LC was faster than the generator:
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f']]" "'d' in (y for x in lis for y in x)"
100000 loops, best of 3: 2.36 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f']]" "'d' in [y for x in lis for y in x]"
100000 loops, best of 3: 1.51 usec per loop
Then I increased the size of the list and timed it again:
lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]
This time, while searching for 'd' the generator was faster than the LC, but when I searched for a middle element (11) and for the last element (18), the LC again beat the generator expression, and I can't understand why.
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in (y for x in lis for y in x)"
100000 loops, best of 3: 2.96 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "'d' in [y for x in lis for y in x]"
100000 loops, best of 3: 7.4 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "11 in [y for x in lis for y in x]"
100000 loops, best of 3: 5.61 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "11 in (y for x in lis for y in x)"
100000 loops, best of 3: 9.76 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "18 in (y for x in lis for y in x)"
100000 loops, best of 3: 8.94 usec per loop
~$ python -m timeit -s "lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],[7,8,9],[10,11,12],[13,14,15],[16,17,18]]" "18 in [y for x in lis for y in x]"
100000 loops, best of 3: 7.13 usec per loop
Run Code Online (Sandbox Code Playgroud)
sen*_*rle 33
Expanding on Paulo's answer: generator expressions are often slower than list comprehensions because of the overhead of function calls. In this case, the short-circuiting behavior of the in operator offsets that slowness if the item is found fairly early, but otherwise the pattern holds.
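To make the short-circuiting concrete, here is a small illustrative sketch (the record helper below is made up for this demonstration, not something from the question); it logs every element the membership test actually touches:

lis = [['a', 'b', 'c'], ['d', 'e', 'f']]
seen = []

def record(y):
    # keep track of which elements the membership test consumed
    seen.append(y)
    return y

'd' in (record(y) for x in lis for y in x)
print(seen)   # ['a', 'b', 'c', 'd'] -- the generator stopped at the match

seen = []
'd' in [record(y) for x in lis for y in x]
print(seen)   # ['a', 'b', 'c', 'd', 'e', 'f'] -- the list was fully built first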
I ran a simple script through the profiler for a more detailed analysis. Here is the script:
lis=[['a','b','c'],['d','e','f'],[1,2,3],[4,5,6],
     [7,8,9],[10,11,12],[13,14,15],[16,17,18]]
def ge_d():
    return 'd' in (y for x in lis for y in x)
def lc_d():
    return 'd' in [y for x in lis for y in x]

def ge_11():
    return 11 in (y for x in lis for y in x)
def lc_11():
    return 11 in [y for x in lis for y in x]

def ge_18():
    return 18 in (y for x in lis for y in x)
def lc_18():
    return 18 in [y for x in lis for y in x]

for i in xrange(100000):
    ge_d()
    lc_d()
    ge_11()
    lc_11()
    ge_18()
    lc_18()
Here are the relevant results, reordered to make the patterns clearer:
5400002 function calls in 2.830 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
100000 0.158 0.000 0.251 0.000 fop.py:3(ge_d)
500000 0.092 0.000 0.092 0.000 fop.py:4(<genexpr>)
100000 0.285 0.000 0.285 0.000 fop.py:5(lc_d)
100000 0.356 0.000 0.634 0.000 fop.py:8(ge_11)
1800000 0.278 0.000 0.278 0.000 fop.py:9(<genexpr>)
100000 0.333 0.000 0.333 0.000 fop.py:10(lc_11)
100000 0.435 0.000 0.806 0.000 fop.py:13(ge_18)
2500000 0.371 0.000 0.371 0.000 fop.py:14(<genexpr>)
100000 0.344 0.000 0.344 0.000 fop.py:15(lc_18)
Creating a generator expression is equivalent to creating a generator function and calling it. That accounts for one call to <genexpr>. Then, in the first case, next is called 4 times, until 'd' is reached, for a total of 5 calls (times 100000 iterations = ncalls = 500000). In the second case, it is called 17 times, for a total of 18 calls; and in the third, 24 times, for a total of 25 calls.
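As a rough sketch of that equivalence (the genexpr_equivalent function below is just an illustration, not part of the profiled script, and it reuses the lis defined in the script above), the generator expression behaves like a hand-written generator function that costs one call to create and then one next() call per element consumed:

def genexpr_equivalent(seq):
    # hand-written equivalent of (y for x in seq for y in x)
    for x in seq:
        for y in x:
            yield y

gen = genexpr_equivalent(lis)   # one call to create the generator
print('d' in gen)               # True -- next() was called only until 'd' appeared
print(list(gen))                # ['e', 'f', 1, 2, ...] -- the rest was never consumed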
The genexpr outperforms the list comprehension in the first case, but the extra calls to next account for most of the difference between the speed of the list comprehension and the speed of the generator expression in the second and third cases:
>>> .634 - .278 - .333
0.023
>>> .806 - .371 - .344
0.091
I'm not sure what accounts for the remaining time; it seems that generator expressions would be a hair slower even without the additional function calls. I suppose this confirms inspectorG4dget's assertion that "creating a generator comprehension has more native overhead than does a list comprehension." But in any case, this shows pretty clearly that generator expressions are slower mostly because of calls to next.
I'll add that when short-circuiting doesn't help, list comprehensions are still faster, even for very large lists. For example:
>>> import itertools
>>> counter = itertools.count()
>>> lol = [[counter.next(), counter.next(), counter.next()]
...        for _ in range(1000000)]
>>> 2999999 in (i for sublist in lol for i in sublist)
True
>>> 3000000 in (i for sublist in lol for i in sublist)
False
>>> %timeit 2999999 in [i for sublist in lol for i in sublist]
1 loops, best of 3: 312 ms per loop
>>> %timeit 2999999 in (i for sublist in lol for i in sublist)
1 loops, best of 3: 351 ms per loop
>>> %timeit any([2999999 in sublist for sublist in lol])
10 loops, best of 3: 161 ms per loop
>>> %timeit any(2999999 in sublist for sublist in lol)
10 loops, best of 3: 163 ms per loop
>>> %timeit for i in [2999999 in sublist for sublist in lol]: pass
1 loops, best of 3: 171 ms per loop
>>> %timeit for i in (2999999 in sublist for sublist in lol): pass
1 loops, best of 3: 183 ms per loop
As you can see, when short-circuiting is irrelevant, list comprehensions are consistently faster, even for a million-item-long list of lists. Obviously, for actual uses of in at these scales, generators will be faster because of short-circuiting. But for other kinds of iterative tasks that are truly linear in the number of items, list comprehensions are pretty much always faster. This is especially true if you need to perform multiple tests on the same list; you can iterate over an already-built list comprehension very quickly:
>>> incache = [2999999 in sublist for sublist in lol]
>>> get_list = lambda: incache
>>> get_gen = lambda: (2999999 in sublist for sublist in lol)
>>> %timeit for i in get_list(): pass
100 loops, best of 3: 18.6 ms per loop
>>> %timeit for i in get_gen(): pass
1 loops, best of 3: 187 ms per loop
In this case, the list comprehension is an order of magnitude faster!
Of course, that only remains true until you run out of memory, which brings me to my final point. There are two main reasons to use a generator: to take advantage of short-circuiting and to save memory. For very large sequences/iterables, generators are the obvious way to go, because they save memory. But if short-circuiting is not an option, you pretty much never choose generators over lists for speed. You choose them to save memory, and it's always a trade-off.
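A rough way to see the memory side of that trade-off is to compare container sizes directly; this is only a sketch, and the exact numbers vary by Python version and platform:

import sys

big_list = [n * n for n in range(1000000)]   # materializes every element up front
big_gen = (n * n for n in range(1000000))    # stores only the iteration state
print(sys.getsizeof(big_list))   # several megabytes just for the list's pointer array
print(sys.getsizeof(big_gen))    # a small, constant number of bytes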
the*_*olf 13
It entirely depends on the data.

Generators have a fixed setup time that must be amortized over however many items are retrieved; list comprehensions are faster initially, but will slow substantially as more memory is used with larger data sets.

Recall that CPython lists are over-allocated as they grow, with a resize pattern of 4, 8, 16, 25, 35, 46, 58, 72, 88, .... For larger list comprehensions, Python may be allocating up to about 4x more memory than the size of your data. Once you hit virtual memory, things get really slow! But, as stated, list comprehensions are faster than generators for small data sets.
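A quick sketch to watch that over-allocation happen (the exact step sizes depend on the CPython version): print the size of a list only when an append() triggers a resize.

import sys

lst = []
last_size = sys.getsizeof(lst)
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        # a resize happened: capacity jumps in steps, not one slot at a time
        print("%d items -> %d bytes" % (len(lst), size))
        last_size = size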
Consider case 1, a 2 x 26 list of lists:
import string
import cmpthese   # Perl Benchmark-style comparison helper used here (not in the stdlib)

LoL=[[c1,c2] for c1,c2 in zip(string.ascii_lowercase,string.ascii_uppercase)]

def lc_d(item='d'):
    return item in [i for sub in LoL for i in sub]

def ge_d(item='d'):
    return item in (y for x in LoL for y in x)

def any_lc_d(item='d'):
    return any(item in x for x in LoL)

def any_gc_d(item='d'):
    return any([item in x for x in LoL])

def lc_z(item='z'):
    return item in [i for sub in LoL for i in sub]

def ge_z(item='z'):
    return item in (y for x in LoL for y in x)

def any_lc_z(item='z'):
    return any(item in x for x in LoL)

def any_gc_z(item='z'):
    return any([item in x for x in LoL])

cmpthese.cmpthese([lc_d,ge_d,any_gc_d,any_gc_z,any_lc_d,any_lc_z, lc_z, ge_z])
Results in these timings:
rate/sec ge_z lc_z lc_d any_lc_z any_gc_z any_gc_d ge_d any_lc_d
ge_z 124,652 -- -10.1% -16.6% -44.3% -46.5% -48.5% -76.9% -80.7%
lc_z 138,678 11.3% -- -7.2% -38.0% -40.4% -42.7% -74.3% -78.6%
lc_d 149,407 19.9% 7.7% -- -33.3% -35.8% -38.2% -72.3% -76.9%
any_lc_z 223,845 79.6% 61.4% 49.8% -- -3.9% -7.5% -58.5% -65.4%
any_gc_z 232,847 86.8% 67.9% 55.8% 4.0% -- -3.7% -56.9% -64.0%
any_gc_d 241,890 94.1% 74.4% 61.9% 8.1% 3.9% -- -55.2% -62.6%
ge_d 539,654 332.9% 289.1% 261.2% 141.1% 131.8% 123.1% -- -16.6%
any_lc_d 647,089 419.1% 366.6% 333.1% 189.1% 177.9% 167.5% 19.9% --
Now consider case 2, which shows a wide disparity between the LC and the gen. In this case, we are looking for one element in a 100 x 97 x 97 list-of-lists structure:
LoL=[[str(a),str(b),str(c)]
     for a in range(100) for b in range(97) for c in range(97)]

def lc_10(item='10'):
    return item in [i for sub in LoL for i in sub]

def ge_10(item='10'):
    return item in (y for x in LoL for y in x)

def any_lc_10(item='10'):
    return any([item in x for x in LoL])

def any_gc_10(item='10'):
    return any(item in x for x in LoL)

def lc_99(item='99'):
    return item in [i for sub in LoL for i in sub]

def ge_99(item='99'):
    return item in (y for x in LoL for y in x)

def any_lc_99(item='99'):
    return any(item in x for x in LoL)

def any_gc_99(item='99'):
    return any([item in x for x in LoL])

cmpthese.cmpthese([lc_10,ge_10,any_lc_10,any_gc_10,lc_99,ge_99,any_lc_99,any_gc_99],c=10,micro=True)
Results in these timings:
rate/sec usec/pass ge_99 lc_99 lc_10 any_lc_99 any_gc_99 any_lc_10 ge_10 any_gc_10
ge_99 3 354545.903 -- -20.6% -30.6% -60.8% -61.7% -63.5% -100.0% -100.0%
lc_99 4 281678.295 25.9% -- -12.6% -50.6% -51.8% -54.1% -100.0% -100.0%
lc_10 4 246073.484 44.1% 14.5% -- -43.5% -44.8% -47.4% -100.0% -100.0%
any_lc_99 7 139067.292 154.9% 102.5% 76.9% -- -2.4% -7.0% -100.0% -100.0%
any_gc_99 7 135748.100 161.2% 107.5% 81.3% 2.4% -- -4.7% -100.0% -100.0%
any_lc_10 8 129331.803 174.1% 117.8% 90.3% 7.5% 5.0% -- -100.0% -100.0%
ge_10 175,494 5.698 6221964.0% 4943182.0% 4318339.3% 2440446.0% 2382196.2% 2269594.1% -- -38.5%
any_gc_10 285,327 3.505 10116044.9% 8036936.7% 7021036.1% 3967862.6% 3873157.1% 3690083.0% 62.6% --
As you can see, it depends; it's a trade-off...
Pau*_*ine 10
Contrary to popular belief, list comprehensions are pretty good for moderate ranges. The iterator protocol implies calls to iterator.next(), and function calls in Python are expensive.

Of course, at some point the generator's memory/CPU trade-off will start to pay off, but for small sets list comprehensions are very efficient.
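A minimal timeit sketch of that point (timings are machine-dependent and only indicative): build the same small result with a list comprehension and by draining a generator expression through the iterator protocol.

import timeit

setup = "data = range(1000)"
print(timeit.timeit("[x * 2 for x in data]", setup=setup, number=10000))
print(timeit.timeit("list(x * 2 for x in data)", setup=setup, number=10000))
# the second form is usually a bit slower: every element is pulled
# through an extra next() call on the generator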