tah*_*qui 4 python amazon-s3 boto3
I have more than 500,000 objects on S3. I am trying to get the size of each object, using the following Python code:
import boto3
bucket = 'bucket'
prefix = 'prefix'
contents = boto3.client('s3').list_objects_v2(Bucket=bucket, MaxKeys=1000, Prefix=prefix)["Contents"]
for c in contents:
    print(c["Size"])
But it only gave me the sizes of the first 1000 objects. Based on the documentation, we can't get more than 1000 per call. Is there any way I can get more than that?
小智 86
The built-in boto3 Paginator class is the easiest way to get around the 1000-record limit of list_objects_v2. It can be used as follows:
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')
for page in pages:
    for obj in page['Contents']:
        print(obj['Size'])
More details: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Paginator.ListObjectsV2
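Since the question is ultimately about object sizes, here is a minimal sketch of totalling size and object count across all pages with the same paginator; total_size, total_count and the placeholder bucket/prefix names are illustrative additions, not part of the answer above.
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

total_size = 0
total_count = 0
for page in paginator.paginate(Bucket='bucket', Prefix='prefix'):
    # 'Contents' is missing from a page with no matching objects
    for obj in page.get('Contents', []):
        total_size += obj['Size']
        total_count += 1
print(total_count, 'objects,', total_size, 'bytes')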
AKX*_*AKX 19
Use the ContinuationToken returned in the response as a parameter in subsequent calls, until the IsTruncated value in the response is false.
This can be wrapped up in a neat generator function:
def get_all_s3_objects(s3, **base_kwargs):
    continuation_token = None
    while True:
        list_kwargs = dict(MaxKeys=1000, **base_kwargs)
        if continuation_token:
            list_kwargs['ContinuationToken'] = continuation_token
        response = s3.list_objects_v2(**list_kwargs)
        yield from response.get('Contents', [])
        if not response.get('IsTruncated'):  # At the end of the list?
            break
        continuation_token = response.get('NextContinuationToken')
for file in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix):
    print(file['Size'])
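As a usage sketch for the generator above, the sizes can also be aggregated lazily without building a full list; total_bytes is an illustrative name, and bucket/prefix are the same placeholders used in the question.
s3 = boto3.client('s3')
# get_all_s3_objects yields one 'Contents' dict per object, so sizes can be summed directly
total_bytes = sum(obj['Size'] for obj in get_all_s3_objects(s3, Bucket=bucket, Prefix=prefix))
print(total_bytes)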
see*_*spi 16
If you don't need to use boto3.client, you can use boto3.resource instead to get the complete list of files:
s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')
files_in_bucket = list(bucket.objects.all())
Then to get the sizes:
sizes = [f.size for f in files_in_bucket]
Depending on the size of your bucket, this might take a minute.
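If you only need the objects under a prefix, as in the question, the resource collection can be filtered instead of listing the whole bucket; a small sketch, with 'bucket_name' and 'prefix' as placeholder values:
import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')
# objects.filter paginates behind the scenes, just like objects.all()
sizes = [obj.size for obj in bucket.objects.filter(Prefix='prefix')]
print(len(sizes), sum(sizes))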