Different results when running an uploader script

aws*_*ice 7 python amazon-s3 boto

I've put together a script that uploads data to S3. If a file is smaller than 5MB it is uploaded as a single chunk; larger files get a multipart upload. I know the threshold is small for now; I'm just testing the script at the moment. If I run the script from Python by importing each function and calling it that way, everything works as expected. I know the code needs cleanup, since it isn't finished yet. However, when I run the script from the command line, I get this error:

Traceback (most recent call last):
  File "upload_files_to_s3.py", line 106, in <module>
    main()
  File "upload_files_to_s3.py", line 103, in main
    check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory)
  File "upload_files_to_s3.py", line 71, in check_if_mp_needed
    multipart_upload(conn, input_file, mb, bucket_name, sub_directory)
  File "upload_files_to_s3.py", line 65, in multipart_upload
    mp.complete_upload()
  File "/usr/local/lib/python2.7/site-packages/boto/s3/multipart.py", line 304, in complete_upload
    self.id, xml)
  File "/usr/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1571, in complete_multipart_upload
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request

>The XML you provided was not well-formed or did not validate against our published schema

Here is the code:

import sys
import os
import math

import boto
from boto.s3.key import Key
from filechunkio import FileChunkIO


KEY = os.environ['AWS_ACCESS_KEY_ID']
SECRET = os.environ['AWS_SECRET_ACCESS_KEY']

def start_connection():
    return boto.connect_s3(KEY, SECRET)

def get_bucket_key(conn, bucket_name):
    bucket = conn.get_bucket(bucket_name)
    k = Key(bucket)
    return k

def get_key_name(sub_directory, input_file):
    full_key_name = os.path.join(sub_directory, os.path.basename(input_file))
    return full_key_name

def get_file_info(input_file):
    # Size of the source file in bytes.
    source_size = os.stat(input_file).st_size
    return source_size

def multipart_request(conn, input_file, bucket_name, sub_directory):
    bucket = conn.get_bucket(bucket_name)
    mp = bucket.initiate_multipart_upload(get_key_name(sub_directory, input_file))
    return mp

def get_chunk_size(mb):
    # mb must be an int: multiplying a string from sys.argv by 1048576
    # repeats the string instead of computing a byte count.
    chunk_size = mb * 1048576
    return chunk_size

def get_chunk_count(input_file, mb):
    chunk_count = int(math.ceil(get_file_info(input_file) / float(get_chunk_size(mb))))
    return chunk_count

def regular_upload(conn, input_file, bucket_name, sub_directory):
    # Single-request upload for small files.
    k = get_bucket_key(conn, bucket_name)
    k.key = get_key_name(sub_directory, input_file)
    k.set_contents_from_filename(input_file)


def multipart_upload(conn, input_file, mb, bucket_name, sub_directory):
    chunk_size = get_chunk_size(mb)
    chunks = get_chunk_count(input_file, mb)
    source_size = get_file_info(input_file)
    mp = multipart_request(conn, input_file, bucket_name, sub_directory)
    for i in range(chunks):
        offset = chunk_size * i
        # The last part may be smaller than chunk_size.
        b = min(chunk_size, source_size - offset)
        with FileChunkIO(input_file, 'r', offset=offset, bytes=b) as fp:
            mp.upload_part_from_file(fp, part_num=i + 1)
    mp.complete_upload()

def check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory):
    # Files of 5MB or less go up in one request; anything larger goes multipart.
    if get_file_info(input_file) <= 5242880:
        regular_upload(conn, input_file, bucket_name, sub_directory)
    else:
        multipart_upload(conn, input_file, mb, bucket_name, sub_directory)

def main():
    input_file = sys.argv[1]
    mb = int(sys.argv[2])  # sys.argv values are strings; convert before doing arithmetic
    bucket_name = sys.argv[3]
    sub_directory = sys.argv[4]
    conn = start_connection()
    check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory)

if __name__ == '__main__':
    main()
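For reference, a quick worked example of the chunking arithmetic above, run in a Python 2 shell (the 12MB file size is hypothetical):

>>> import math
>>> mb = 5                                    # what int(sys.argv[2]) should produce
>>> chunk_size = mb * 1048576                 # 5242880 bytes, i.e. 5MB
>>> source_size = 12 * 1048576                # a hypothetical 12MB file
>>> int(math.ceil(source_size / float(chunk_size)))
3
>>> '5' * 3                                   # but if mb were still the raw argv string...
'555'

The last line is why main() converts the argument with int(): a raw "5" from sys.argv multiplied by 1048576 gives a string of 1,048,576 characters rather than a byte count.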

Thanks!

Pet*_*ain 0

There is a version mismatch between your two cases. When you run with the old version of boto, it uses the wrong AWS schema, so you see the error.

In more detail: when running in IPython (using a virtualenv) you have version 2.45.0, and when running from the command line you have boto 2.8.0. Given that version 2.8.0 dates back to 2013, a schema error is not surprising.
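A minimal way to confirm which boto each environment is importing (run this both inside the virtualenv and from the plain command line):

python -c "import boto; print(boto.__version__); print(boto.__file__)"

If the two printed versions or paths differ, you have the mismatch described above.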

The fix is either to upgrade the system copy of boto (the one your script is currently picking up) by running pip install -U boto, or to convert the script to use a virtual environment. For advice on the latter, see this other answer on SO: Running a python script from inside a virtualenv bin is not working
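As a sketch of the virtualenv route (assuming virtualenv is installed; the script name and arguments are just the ones from the question above):

virtualenv venv
. venv/bin/activate
pip install -U boto filechunkio
python upload_files_to_s3.py <input_file> <mb> <bucket_name> <sub_directory>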