Why does a numpy ndarray read from a file consume so much memory?

祝方泽 6 python arrays file-io numpy

The file contains 2,000,000 lines; each line has 208 columns, separated by commas, like this:

0.0863314058048,0.0208767447842,0.03358010485,0.0,1.0,0.0,0.314285714286,0.336293217457,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0

The program reads this file into a numpy ndarray, and I expected it to consume roughly (2000000 * 208 * 8 B) = 3.2 GB of memory. However, while reading the file the program actually consumes about 20 GB of memory.
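For reference, a quick back-of-the-envelope check of the expected footprint (a minimal sketch; the row and column counts are the ones stated above):

import numpy as np

rows, cols = 2000000, 208
itemsize = np.dtype(np.float64).itemsize  # 8 bytes per float64 element
# ~3.33e9 bytes, i.e. about 3.1 GiB (~3.3 GB), matching the rough 3.2 GB estimate
print(rows * cols * itemsize / 1024**3)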

I am confused: why does my program consume so much more memory than expected?

Sau*_*tro 2

Using Numpy 1.9.0, the memory inefficiency of np.loadtxt() and np.genfromtxt() seems to be directly related to the fact that they store the parsed data in temporary lists (see the sketch after the references below):

  • see here for np.loadtxt()
  • and here for np.genfromtxt()
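To get a feel for the overhead, here is a minimal sketch (assuming 64-bit CPython object sizes) of what one parsed value costs while it sits in a temporary list, compared with its footprint inside the final ndarray:

import sys
import numpy as np

value = 0.0863314058048
# Each parsed field first lives as a full Python float object,
# plus an 8-byte pointer slot in the list that references it.
print(sys.getsizeof(value))           # 24 bytes on 64-bit CPython
print(np.dtype(np.float64).itemsize)  # 8 bytes once packed in the array
# >= 32 bytes per value in the temporary list vs. 8 bytes in the final
# array, before even counting the list's own over-allocation.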

If you know the shape of the array beforehand, you can write a file reader that consumes an amount of memory very close to the theoretical amount (3.2 GB in this case), because it stores the data directly with the corresponding dtype:

import numpy as np

def read_large_txt(path, delimiter=None, dtype=None):
    with open(path) as f:
        # First pass: count the rows.
        nrows = sum(1 for line in f)
        f.seek(0)
        # Column count from the first line; next(f) replaces the
        # Python 2-only f.next().
        ncols = len(next(f).split(delimiter))
        # Allocate the full array up front with the target dtype.
        out = np.empty((nrows, ncols), dtype=dtype)
        f.seek(0)
        # Second pass: parse each line directly into its row;
        # numpy converts the string fields to the array's dtype.
        for i, line in enumerate(f):
            out[i] = line.split(delimiter)
    return out
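A hypothetical call for the file described in the question might look like this (the file name is illustrative):

data = read_large_txt('data.csv', delimiter=',', dtype=np.float64)
print(data.shape)  # expected: (2000000, 208)

The two passes over the file trade a little extra I/O time for a peak memory footprint that is essentially just the final array.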