Ami*_*mit · python amazon-s3 aws-sdk boto3
Here is the code I'm using to read a gz file:
import json
import boto3
from io import BytesIO
import gzip

def lambda_handler(event, context):
    try:
        s3 = boto3.resource('s3')
        key = 'test.gz'
        obj = s3.Object('athenaamit', key)
        n = obj.get()['Body'].read()
        #print(n)
        gzip = BytesIO(n)
        gzipfile = gzip.GzipFile(fileobj=gzip)
        content = gzipfile.read()
        print(content)
        return 'dddd'
    except Exception as e:
        print(e)
        raise e

But I got the following error:
"errorMessage": "'_io.BytesIO' object has no attribute 'GzipFile'",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 20, in lambda_handler\n raise e\n",
" File \"/var/task/lambda_function.py\", line 14, in lambda_handler\n gzipfile = gzip.GzipFile(fileobj=gzip)\n"
Python version: 3.7
I also tried to implement the suggestion from /sf/ask/2295638621/ gzipfile-and-write-to-gzipfile, but that didn't work for me either. Please suggest how I can read the file's contents.
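The traceback points at the real problem, which has nothing to do with S3: the assignment gzip = BytesIO(n) rebinds the name gzip from the imported module to a BytesIO instance, so the next line looks up GzipFile on that buffer instead of on the module. A minimal sketch that reproduces the same AttributeError without any AWS calls:

import gzip
from io import BytesIO

gzip = BytesIO(b'')          # rebinds the module name to a BytesIO instance
gzip.GzipFile(fileobj=gzip)  # AttributeError: '_io.BytesIO' object has no attribute 'GzipFile'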
Tidying this up into a proper answer. The working code is:
import io
import gzip
import boto3

s3 = boto3.resource('s3')
obj = s3.Object('my-bucket-name', 'path/to/file.gz')
buf = io.BytesIO(obj.get()["Body"].read())  # reads the whole gz file into memory
for line in gzip.GzipFile(fileobj=buf):
    ...  # do something with line (bytes, including the trailing b'\n')
I was a bit worried about the memory footprint, but it seems that only the gz file is kept in memory (the io.BytesIO(...) line above); the for line loop then holds only each line in its decompressed form.
For a 38M gz file, my memory footprint was 47M (virtual memory, VIRT in htop). The decompressed file is 308M.
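If the footprint matters, the buffering step can be skipped entirely. In read mode, GzipFile only needs a read() method on its fileobj, and the StreamingBody returned by obj.get()['Body'] provides one, so the object can be decompressed as it streams from S3. A minimal sketch under that assumption, reusing the same hypothetical bucket and key:

import gzip
import boto3

s3 = boto3.resource('s3')
obj = s3.Object('my-bucket-name', 'path/to/file.gz')

# GzipFile pulls compressed bytes from the streaming response on demand,
# so the whole .gz never has to sit in memory at once.
with gzip.GzipFile(fileobj=obj.get()['Body']) as gz:
    for line in gz:
        ...  # do something with each decompressed line (bytes)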