Python: unpredictable memory errors when downloading large files

Tim*_* R. · 5 · python, download, out-of-memory

I wrote a Python script to download a large number of video files (50-400 MB each) from an HTTP server. It has worked well across a long download list so far, but for some reason it occasionally hits a memory error.

The machine has about 1 GB of RAM free, and I don't think it ever runs out of RAM while this script is running.

I have monitored memory usage in Task Manager and perfmon, and it always looks the same: a slow increase during the download, then a return to the normal level once the download completes (no small leaks creeping up, or anything like that).

The way the download behaves is that the file is created and stays at 0 KB until the download finishes (or the program crashes), then the whole file is written at once and closed.

import os
import time
import urllib2

# (urls, filenames, headers, domain, and folderName are defined earlier in the script)

for i in range(len(urls)):
    if os.path.exists(folderName + '/' + filenames[i] + '.mov'):
        print 'File exists, continuing.'
        continue

    # Request the download page
    req = urllib2.Request(urls[i], headers=headers)

    sock = urllib2.urlopen(req)
    responseHeaders = sock.headers
    body = sock.read()
    sock.close()

    # Search the page for the download URL
    tmp = body.find('/getfile/')
    downloadSuffix = body[tmp:body.find('"', tmp)]
    downloadUrl = domain + downloadSuffix

    req = urllib2.Request(downloadUrl, headers=headers)

    print '%s Downloading %s, file %i of %i' \
        % (time.ctime(), filenames[i], i+1, len(urls))

    f = urllib2.urlopen(req)

    # Open our local file for writing, 'b' for binary file mode
    video_file = open(folderName + '/' + filenames[i] + '.mov', 'wb')

    # Write the downloaded data to the local file
    video_file.write(f.read()) ##### MemoryError: out of memory #####
    video_file.close()

    print '%s Download complete!' % (time.ctime())

    # Free up memory, in hopes of preventing memory errors
    del f
    del video_file

Here is the stack trace:

  File "downloadVideos.py", line 159, in <module>
    main()
  File "downloadVideos.py", line 136, in main
    video_file.write(f.read())
  File "c:\python27\lib\socket.py", line 358, in read
    buf.write(data)
MemoryError: out of memory

bra*_*ers · 9

Your problem is here: f.read(). That call tries to pull the entire file into memory at once. Instead, read it in chunks (chunk = f.read(4096)) and write each piece out to the file as you go.
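
A minimal sketch of that chunked loop, reusing the downloadUrl, headers, folderName, filenames, and i already set up in the question's loop (the 4096-byte chunk size is just a common choice):

import urllib2

req = urllib2.Request(downloadUrl, headers=headers)
f = urllib2.urlopen(req)
video_file = open(folderName + '/' + filenames[i] + '.mov', 'wb')

while True:
    chunk = f.read(4096)     # read at most 4 KB per iteration
    if not chunk:            # an empty string signals end of stream
        break
    video_file.write(chunk)  # write each piece out immediately

video_file.close()
f.close()

This keeps memory usage flat regardless of file size, since at most one chunk is held in memory at a time.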

  • You should look at the `Content-Length` header, e.g. to verify the download completed (see the sketch below). (2 upvotes)
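
As a hedged illustration of that comment: urllib2 exposes the response headers through info(), so after the chunked download you can compare the file's size on disk against `Content-Length` (some servers omit it, hence the None check). The names reused here (downloadUrl, headers, folderName, filenames, i) come from the question:

import os
import urllib2

f = urllib2.urlopen(urllib2.Request(downloadUrl, headers=headers))
expected = f.info().getheader('Content-Length')  # may be None if the server omits it

# ... chunked download loop as in the answer above ...

if expected is not None:
    actual = os.path.getsize(folderName + '/' + filenames[i] + '.mov')
    if actual != int(expected):
        print 'Size mismatch: expected %s bytes, got %i' % (expected, actual)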