I'm having some trouble understanding (and ultimately solving) why having a large dictionary in memory makes creating other dictionaries take longer.

Here is the test code I'm using:
import time

def create_dict():
    # return {x: [x] * 125 for x in xrange(0, 100000)}
    return {x: (x,) * 125 for x in xrange(0, 100000)}  # UPDATED: to use tuples instead of lists of values

class Foo(object):
    @staticmethod
    def dict_init():
        start = time.clock()
        Foo.sample_dict = create_dict()
        print "dict_init in Foo took {0} sec".format(time.clock() - start)

if __name__ == '__main__':
    Foo.dict_init()
    for x in xrange(0, 10):
        start = time.clock()
        create_dict()
        print "Run {0} took {1} seconds".format(x, time.clock() - start)
If I run the code as-is (first initializing sample_dict in Foo, then creating the same dictionary 10 more times in a loop), I get the following results:
dict_init in Foo took 0.385263764287 …
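A likely culprit for this kind of slowdown is CPython's cyclic garbage collector: keeping a huge dictionary of container objects alive means each collection pass has far more objects to traverse, and collections are triggered repeatedly while the later dictionaries are being built. The sketch below (an assumption about the cause, not part of the original question) times `create_dict` with the collector enabled and then disabled; it is ported to Python 3, so `xrange` becomes `range` and the deprecated `time.clock` becomes `time.perf_counter`:

```python
import gc
import time

def create_dict():
    # 100,000 entries, each a 125-element tuple: many tracked container objects
    return {x: (x,) * 125 for x in range(100000)}

def timed_create():
    start = time.perf_counter()
    create_dict()
    return time.perf_counter() - start

# Keep a large dict alive, mimicking Foo.sample_dict in the question
sample = create_dict()

with_gc = timed_create()      # collector may run many times mid-allocation

gc.disable()                  # hypothesis: the GC passes are the overhead
without_gc = timed_create()
gc.enable()

print("with gc:    %.4f s" % with_gc)
print("without gc: %.4f s" % without_gc)
```

If the garbage collector is indeed the cause, the second timing should be noticeably smaller; `gc.set_threshold` with larger generation thresholds is a less drastic alternative to disabling collection outright.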