Pandas to_csv to GzipFile in Python 3 does not work

Wai*_*ski 6 python pandas

In Python 2.7 (Pandas 0.22.0), saving a Pandas dataframe to an in-memory gzipped CSV works like this:

from io import BytesIO
import gzip
import pandas as pd
df = pd.DataFrame.from_dict({'a': ['a', 'b', 'c']})
s = BytesIO()
f = gzip.GzipFile(fileobj=s, mode='wb', filename='file.csv')
df.to_csv(f)
s.seek(0)
content = s.getvalue()

However, in Python 3.6 (Pandas 0.22.0), the same code throws an error when to_csv is called:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "lib/python3.6/site-packages/pandas/core/frame.py", line 1524, in to_csv
    formatter.save()
  File "lib/python3.6/site-packages/pandas/io/formats/format.py", line 1652, in save
    self._save()
  File "lib/python3.6/site-packages/pandas/io/formats/format.py", line 1740, in _save
    self._save_header()
  File "lib/python3.6/site-packages/pandas/io/formats/format.py", line 1708, in _save_header
    writer.writerow(encoded_labels)
  File "miniconda3/lib/python3.6/gzip.py", line 260, in write
    data = memoryview(data)
TypeError: memoryview: a bytes-like object is required, not 'str'

How should I fix this? Do I need to change the GzipFile object somehow so that to_csv can write to it correctly?

To clarify: I want to build the gzipped file in memory (the content variable) so that I can later save it to Amazon S3 using Boto 3's put_object.
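
The upload step I have in mind would look roughly like this (the bucket name and object key below are just placeholders):

import boto3

s3 = boto3.client('s3')
s3.put_object(
    Bucket='my-bucket',        # placeholder bucket name
    Key='data/file.csv.gz',    # placeholder object key
    Body=content,              # the gzipped bytes built above
    ContentType='text/csv',
    ContentEncoding='gzip',
)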

Rol*_*kas 1

You can make use of StringIO:

from io import BytesIO, StringIO
import gzip

# df is the frame from the question; s is the BytesIO buffer that will hold the gzipped bytes
s = BytesIO()
buf = StringIO()
df.to_csv(buf)                       # to_csv writes text into the StringIO buffer
f = gzip.GzipFile(fileobj=s, mode='wb', filename='file.csv')
f.write(buf.getvalue().encode())     # encode the CSV text to bytes before gzipping
f.flush()

Also note the added f.flush(): in my experience, without this line GzipFile can in some cases fail to flush its data, producing a corrupted archive. Calling f.close() once you are finished also writes the gzip trailer (CRC and length), which is required for a valid archive.
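
Alternatively, here is a sketch that avoids the intermediate StringIO buffer entirely: wrap the GzipFile in an io.TextIOWrapper so that to_csv can write text straight into the compressed stream, with the wrapper handling the str-to-bytes encoding:

import io
import gzip

s = io.BytesIO()
f = gzip.GzipFile(fileobj=s, mode='wb', filename='file.csv')
with io.TextIOWrapper(f, encoding='utf-8') as wrapper:
    df.to_csv(wrapper)   # to_csv writes str; the wrapper encodes it to bytes for GzipFile
# closing the wrapper also closes f and writes the gzip trailer; s itself stays open
content = s.getvalue()

Closing the wrapper flushes and closes the GzipFile underneath, so no explicit flush() or close() calls are needed.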

Or, as a complete example of the StringIO approach based on your code:

from io import BytesIO, StringIO
import gzip
import pandas as pd

df = pd.DataFrame.from_dict({'a': ['a', 'b', 'c']})
s = BytesIO()      # holds the gzipped bytes
buf = StringIO()   # holds the plain CSV text
df.to_csv(buf)
f = gzip.GzipFile(fileobj=s, mode='wb', filename='file.csv')
f.write(buf.getvalue().encode())
f.flush()
f.close()          # close() writes the gzip trailer (CRC and length); without it the archive is incomplete
content = s.getvalue()
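
As a quick sanity check, you can decompress the in-memory archive and read it back into a frame:

import gzip
import pandas as pd
from io import BytesIO

restored = pd.read_csv(BytesIO(gzip.decompress(content)), index_col=0)
print(restored)   # the original frame: column 'a' with values a, b, c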