Ame*_*ina 146 python amazon-s3 boto boto3
How can I see what is inside a bucket in S3 with boto3? (i.e. do an "ls")?
Doing the following:
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
returns:
s3.Bucket(name='some/path/')
How do I see its contents?
gar*_*aat 189
One way to see the contents is:
for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)
cgs*_*ler 85
This is similar to an 'ls', but it does not take the prefix-folder convention into account and will list all objects in the bucket. It is left to the reader to filter out prefixes that are part of the key name.

In Python 2:
from boto.s3.connection import S3Connection

conn = S3Connection()  # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)
In Python 3:
from boto3 import client

conn = client('s3')  # assumes credentials are configured (e.g. ~/.aws/credentials), assumes AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
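As noted above, the flat listing returns keys under every prefix, so narrowing to one "folder" is a client-side string operation. A minimal sketch of that filtering, using hypothetical key names:

```python
# Hypothetical keys as returned by a flat bucket listing.
keys = [
    "logs/2024/app.log",
    "logs/2024/db.log",
    "images/cat.png",
    "report.pdf",
]

# Keep only keys under the "logs/" prefix and strip the prefix itself,
# mimicking what an "ls logs/" would show.
prefix = "logs/"
under_logs = [k[len(prefix):] for k in keys if k.startswith(prefix)]
print(under_logs)  # ['2024/app.log', '2024/db.log']
```

In practice you would pass `Prefix='logs/'` to the API call instead, so the filtering happens server-side.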
Tus*_*ras 35
I'm assuming you have configured authentication separately.
import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('bucket_name')

for file in my_bucket.objects.all():
    print(file.key)
小智 26
If you want to pass the ACCESS and SECRET keys:
from boto3.session import Session

ACCESS_KEY = 'your_access_key'
SECRET_KEY = 'your_secret_key'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
your_bucket = s3.Bucket('your_bucket')

for s3_file in your_bucket.objects.all():
    print(s3_file.key)
Hep*_*tus 18
In order to handle large key listings (i.e. when the directory list is greater than 1000 items), I used the following code to accumulate key values (i.e. filenames) across multiple listings (thanks to Amelio above for the first line). Code is for Python 3:
from boto3 import client

bucket_name = "my_bucket"
prefix = "my_key/sub_key/lots_o_files"

def list_bucket_keys(bucket_name, prefix):
    s3_conn = client('s3')  # again assumes credentials are configured, assumes AWS S3
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

    if 'Contents' not in s3_result:
        # print(s3_result)
        return []

    file_list = []
    for key in s3_result['Contents']:
        file_list.append(key['Key'])
    print(f"List count = {len(file_list)}")

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix,
                                            Delimiter="/", ContinuationToken=continuation_key)
        for key in s3_result['Contents']:
            file_list.append(key['Key'])
        print(f"List count = {len(file_list)}")

    return file_list
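The continuation loop above can be exercised without touching S3 by faking the paged responses. A small sketch (entirely hypothetical data) of the same accumulate-while-truncated pattern:

```python
# Fake pages imitating list_objects_v2 responses (hypothetical data).
pages = [
    {"Contents": [{"Key": "a"}, {"Key": "b"}],
     "IsTruncated": True, "NextContinuationToken": "t1"},
    {"Contents": [{"Key": "c"}], "IsTruncated": False},
]

def fetch_page(token=None):
    # Stand-in for s3_conn.list_objects_v2(..., ContinuationToken=token).
    return pages[0] if token is None else pages[1]

result = fetch_page()
file_list = [obj["Key"] for obj in result["Contents"]]
while result["IsTruncated"]:
    result = fetch_page(result["NextContinuationToken"])
    file_list.extend(obj["Key"] for obj in result["Contents"])

print(file_list)  # ['a', 'b', 'c']
```

The real code differs only in where the pages come from; the token-driven loop is identical.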
ise*_*lim 17
# To print all filenames in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket):
    """Get a list of keys in an S3 bucket."""
    keys = []
    resp = s3.list_objects_v2(Bucket=bucket)
    for obj in resp['Contents']:
        keys.append(obj['Key'])
    return keys

filenames = get_s3_keys('your_bucket_name')
print(filenames)

# To print all filenames in a certain directory in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket, prefix):
    """Get a list of keys in an S3 bucket under a prefix."""
    keys = []
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp['Contents']:
        keys.append(obj['Key'])
    return keys

filenames = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')
print(filenames)
Update: the simplest way is to use awswrangler:
import awswrangler as wr
wr.s3.list_objects('s3://bucket_name')
vj *_*san 12
import boto3

s3 = boto3.resource('s3')

## Bucket to use
my_bucket = s3.Bucket('city-bucket')

## List objects within a given prefix
for obj in my_bucket.objects.filter(Delimiter='/', Prefix='city/'):
    print(obj.key)
Output:
city/pune.csv
city/goa.csv
My s3 keys utility function is essentially an optimized version of @Hephaestus's answer:
import boto3

s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')

def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
    for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
        for content in page.get('Contents', ()):
            yield content['Key']
In my tests (boto3 1.9.84), it was significantly faster than the equivalent (but simpler) code:
import boto3

def keys(bucket_name, prefix='/', delimiter='/'):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return (_.key for _ in bucket.objects.filter(Prefix=prefix))
As S3 guarantees UTF-8 binary sorted results, a start_after optimization has been added to the first function.
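To see why that start_after shortcut is safe: S3 orders keys by the byte values of their UTF-8 encoding, and the same ordering can be reproduced locally. A quick sketch with made-up keys:

```python
# Hypothetical keys; S3 would list these in UTF-8 binary order.
keys = ["a0", "a/b", "a-"]

# '-' is 0x2D, '/' is 0x2F, '0' is 0x30, so the byte order is: a-, a/b, a0.
s3_order = sorted(keys, key=lambda k: k.encode("utf-8"))
print(s3_order)  # ['a-', 'a/b', 'a0']
```

Any key that sorts before the prefix in this ordering can therefore be skipped with StartAfter without missing results.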
So you're asking for the boto3 equivalent of aws s3 ls, which would list all the top-level folders and files. This is the closest I could get; it only lists all the top-level folders. Surprising how difficult such a simple operation is.
import boto3

def s3_ls():
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('example-bucket')
    result = bucket.meta.client.list_objects(Bucket=bucket.name,
                                             Delimiter='/')
    for o in result.get('CommonPrefixes'):
        print(o.get('Prefix'))
ObjectSummary:

There are two identifiers attached to an ObjectSummary: bucket_name and key.

More on object keys from the AWS S3 documentation:

Object keys:
When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects1.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf
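The "folders" the console shows for those four keys are just the distinct first prefix segments. A small local sketch of how that inference works (pure string handling, no S3 call):

```python
# The four example object keys from the AWS docs.
keys = [
    "Development/Projects1.xls",
    "Finance/statement1.pdf",
    "Private/taxdocument.pdf",
    "s3-dg.pdf",
]

folders = set()
files = []
for key in keys:
    if "/" in key:
        # Everything up to the first delimiter acts as a "folder".
        folders.add(key.split("/", 1)[0] + "/")
    else:
        files.append(key)

print(sorted(folders))  # ['Development/', 'Finance/', 'Private/']
print(files)            # ['s3-dg.pdf']
```

This mirrors what list_objects with Delimiter='/' returns as CommonPrefixes versus Contents.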
Reference: the Amazon S3 documentation on object keys.
Here is some example code that demonstrates how to get the bucket name and the object key.

Example:
import boto3

def main():

    def enumerate_s3():
        s3 = boto3.resource('s3')
        for bucket in s3.buckets.all():
            print("Name: {}".format(bucket.name))
            print("Creation Date: {}".format(bucket.creation_date))
            for obj in bucket.objects.all():
                print("Object: {}".format(obj))
                print("Object bucket_name: {}".format(obj.bucket_name))
                print("Object key: {}".format(obj.key))

    enumerate_s3()


if __name__ == '__main__':
    main()
A more parsimonious way, rather than iterating through a for loop, is to just print the original object containing all the files inside your S3 bucket:
from boto3.session import Session

session = Session(aws_access_key_id=aws_access_key_id,
                  aws_secret_access_key=aws_secret_access_key)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket_name')

files_in_s3 = bucket.objects.all()
# you can print this iterable with print(list(files_in_s3))
小智 5
Here is the solution:
import boto3

s3 = boto3.resource('s3')
BUCKET_NAME = 'Your S3 Bucket Name'
allFiles = s3.Bucket(BUCKET_NAME).objects.all()
for file in allFiles:
    print(file.key)
Here is a simple function that returns the filenames of all files, or only the files with certain types such as "json" or "jpg".
def get_file_list_s3(bucket, prefix="", file_extension=None):
    """Return the list of all file paths (prefix + file name) with a certain type, or all.

    Parameters
    ----------
    bucket: str
        The name of the bucket. For example, if your bucket is "s3://my_bucket" then it should be "my_bucket"
    prefix: str
        The full path to the 'folder' of the files (objects). For example, if your files are in
        s3://my_bucket/recipes/desserts then it should be "recipes/desserts". Default: ""
    file_extension: str
        The type of the files. If you want all, just leave it None. If you only want "json" files then it
        should be "json". Default: None

    Return
    ------
    file_names: list
        The list of file names including the prefix
    """
    import boto3

    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket(bucket)
    file_objs = my_bucket.objects.filter(Prefix=prefix).all()
    file_names = [file_obj.key for file_obj in file_objs
                  if file_extension is None or file_obj.key.split(".")[-1] == file_extension]
    return file_names
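The extension check in the function above reduces to comparing the last dot-separated segment of each key; that predicate can be sanity-checked locally with hypothetical key names:

```python
def has_extension(key, file_extension=None):
    # None means "accept everything"; otherwise match the last dot-separated segment.
    return file_extension is None or key.split(".")[-1] == file_extension

keys = ["recipes/desserts/cake.json", "recipes/desserts/pie.jpg", "notes"]

print([k for k in keys if has_extension(k, "json")])  # ['recipes/desserts/cake.json']
print([k for k in keys if has_extension(k)])          # all three keys
```

Note that a key without a dot, like "notes", yields itself from split(".")[-1], so it only matches when no extension is requested.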
小智 5
One way that I used to do this:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket("bucket_name")
contents = [_.key for _ in bucket.objects.all() if "subfolders/ifany/" in _.key]
Views: 201828