ConnectionResetError: An existing connection was forcibly closed by the remote host

Ken*_*nny · python, python-3.x

I'm writing a script to download a set of files. I got this working successfully. Now I'm trying to add a dynamic printout of the download progress.

For small downloads (.mp4 files, by the way) of around 5 MB, the progress indicator works great and the file closes successfully, yielding a complete and valid .mp4. For larger files, 250 MB and up, it does not work: I get the following error:

[screenshot of the ConnectionResetError traceback]

Here is my code:

import urllib.request
import shutil
import os
import sys
import io

script_dir = os.path.dirname('C:/Users/Kenny/Desktop/')
rel_path = 'stupid_folder/video.mp4'
abs_file_path = os.path.join(script_dir, rel_path)
url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
# Download the file from `url` and save it locally under `file_name`:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    resp = urllib.request.urlopen(url)
    length = resp.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up

    # print(length, blocksize)

    buf = io.BytesIO()
    size = 0
    while True:
        buf1 = resp.read(blocksize)
        if not buf1:
            break
        buf.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

    shutil.copyfileobj(response, out_file)

This works for small files, but with larger files I get the error. Now, if I comment out the progress-indicator code, I don't get the error with larger files:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    # eventID = 123456
    # 
    # resp = urllib.request.urlopen(url)
    # length = resp.getheader('content-length')
    # if length:
    #     length = int(length)
    #     blocksize = max(4096, length//100)
    # else:
    #     blocksize = 1000000 # just made something up
    # 
    # # print(length, blocksize)
    # 
    # buf = io.BytesIO()
    # size = 0
    # while True:
    #     buf1 = resp.read(blocksize)
    #     if not buf1:
    #         break
    #     buf.write(buf1)
    #     size += len(buf1)
    #     if length:
    #     print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    # print()

    shutil.copyfileobj(response, out_file)

Does anyone have any ideas? This is the last part of my project and I'd really like to be able to see the progress. Once again, this is Python 3.5. Thanks for any help!

Jea*_*bre

You are opening your URL twice, once as response and once as resp. With your progress-bar code you are consuming the data, so when the file is then copied with copyfileobj, the data is exhausted (maybe that's not entirely accurate, since it works for small files, but you are doing the work twice here, and it is probably the source of your problem).
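The "exhausted stream" half of that explanation can be shown without any network at all. The sketch below uses an in-memory io.BytesIO as a stand-in for an HTTP response: once one consumer has read it to the end, a second consumer such as shutil.copyfileobj gets nothing.

```python
import io
import shutil

# Stand-in for an HTTP response body: a 1024-byte binary stream.
stream = io.BytesIO(b"x" * 1024)

# A progress loop would read the stream to the end...
data = stream.read()
print(len(data))                 # 1024

# ...so a later copyfileobj starts at EOF and copies nothing.
out = io.BytesIO()
shutil.copyfileobj(stream, out)
print(len(out.getvalue()))       # 0 -> the "downloaded" file is empty
```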

To get both the progress bar and a valid file, do this:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    length = response.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up


    size = 0
    while True:
        buf1 = response.read(blocksize)
        if not buf1:
            break
        out_file.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

Simplifications made to your code:

  • only one urlopen, as response
  • no BytesIO buffer: write directly to out_file
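If the same download-with-progress logic is needed in more than one place, it can be wrapped in a small helper. This is only a sketch: download_with_progress is a hypothetical name, and it accepts any file-like object so it can be exercised here with an in-memory stream instead of a live urlopen(url) response.

```python
import io

def download_with_progress(response, out_file, length=None, label=''):
    """Copy `response` to `out_file` in blocks, printing percent done.

    `response` is any file-like object open for binary reading;
    `length` is the expected total size in bytes, if known.
    """
    blocksize = max(4096, length // 100) if length else 1000000
    size = 0
    while True:
        block = response.read(blocksize)
        if not block:
            break
        out_file.write(block)
        size += len(block)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size / length * 100, label), end='')
    print()
    return size

# Usage with an in-memory stream standing in for urlopen(url):
src = io.BytesIO(b'a' * 10000)
dst = io.BytesIO()
written = download_with_progress(src, dst, length=10000, label='demo')
print(written)                          # 10000
print(dst.getvalue() == b'a' * 10000)   # True
```

With a real download you would pass the objects from the with statement above, e.g. download_with_progress(response, out_file, length, eventID).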