Rag*_*ath 5 python pandas google-cloud-storage google-cloud-platform
I want to save a Pandas DataFrame directly to Google Cloud Storage. I tried the different approaches from write-a-pandas-dataframe-to-google-cloud-storage-or-bigquery, but I was unable to save it.
Note: I can only use the google.cloud package.
Below is the code I tried:
from google.cloud import storage
import pandas as pd
input_dict = [{'Name': 'A', 'Id': 100}, {'Name': 'B', 'Id': 110}, {'Name': 'C', 'Id': 120}]
df = pd.DataFrame(input_dict)
Attempt 1:
destination = f'gs://bucket_name/test.csv'
df.to_csv(destination)
Attempt 2:
storage_client = storage.Client(project='project')
bucket = storage_client.get_bucket('bucket_name')
gs_file = bucket.blob('test.csv')
df.to_csv(gs_file)
I get the following errors:
For attempt 1: No such file or directory: 'gs://bucket_name/test.csv' (pandas can only write gs:// paths when the optional gcsfs dependency is installed)
For attempt 2: 'Blob' object has no attribute 'close' (to_csv expects a file-like object, which a Blob is not)
Thanks,
Raghunath.
Ali*_*sro 14
from google.cloud import storage
import os
import pandas as pd
from io import StringIO  # only needed for the in-memory variant below

# Tell the client where your service-account key lives;
# storage.Client() picks these credentials up automatically.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path/to/your-google-cloud-private-key.json'
gcs = storage.Client()

df = pd.DataFrame([{'Name': 'A', 'Id': 100}, {'Name': 'B', 'Id': 110}])
Either write it to a local CSV file on your machine first and then upload that file:
df.to_csv('local_file.csv')
gcs.get_bucket('BUCKET_NAME').blob('FILE_NAME.csv').upload_from_filename('local_file.csv', content_type='text/csv')
Or, if you don't want to create a temporary CSV file, use StringIO:
f = StringIO()
df.to_csv(f)
f.seek(0)
gcs.get_bucket('BUCKET_NAME').blob('FILE_NAME.csv').upload_from_file(f, content_type='text/csv')
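An even shorter variant of the in-memory approach, a minimal sketch assuming the same gcs client as above: df.to_csv() called with no path returns the CSV as a string, and Blob.upload_from_string can send it directly:

# upload the CSV text with no intermediate file or buffer at all
gcs.get_bucket('BUCKET_NAME').blob('FILE_NAME.csv').upload_from_string(df.to_csv(), content_type='text/csv')

For binary formats such as Parquet, the same pattern works with io.BytesIO in place of StringIO and df.to_parquet(buffer) in place of df.to_csv(f).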
Ayo*_*che -1
Maybe this post can help you:
from datalab.context import Context
import google.datalab.storage as storage
import google.datalab.bigquery as bq
import pandas as pd
# Dataframe to write
simple_dataframe = pd.DataFrame(data=[[1, 2, 3], [4, 5, 6]], columns=['a', 'b', 'c'])
sample_bucket_name = Context.default().project_id + '-datalab-example'
sample_bucket_path = 'gs://' + sample_bucket_name
sample_bucket_object = sample_bucket_path + '/Hello.txt'
bigquery_dataset_name = 'TestDataSet'
bigquery_table_name = 'TestTable'
# Define storage bucket
sample_bucket = storage.Bucket(sample_bucket_name)
# Create storage bucket if it does not exist
if not sample_bucket.exists():
    sample_bucket.create()
# Define BigQuery dataset and table
dataset = bq.Dataset(bigquery_dataset_name)
table = bq.Table(bigquery_dataset_name + '.' + bigquery_table_name)
# Create BigQuery dataset
if not dataset.exists():
    dataset.create()
# Create or overwrite the existing table if it exists
table_schema = bq.Schema.from_data(simple_dataframe)
table.create(schema = table_schema, overwrite = True)
# Write the DataFrame to GCS (Google Cloud Storage)
%storage write --variable simple_dataframe --object $sample_bucket_object
# Write the DataFrame to a BigQuery table
table.insert(simple_dataframe)
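Note that this relies on the Datalab-specific datalab and google.datalab packages. If you are limited to google.cloud, as the question requires, the BigQuery half can be sketched with the google-cloud-bigquery client instead (reusing the dataset and table names from the code above; load_table_from_dataframe requires pyarrow to be installed):

from google.cloud import bigquery

client = bigquery.Client()
# Schema is inferred from the DataFrame; the dataset must already exist.
job = client.load_table_from_dataframe(simple_dataframe, 'TestDataSet.TestTable')
job.result()  # block until the load job completes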
Write a Pandas DataFrame to Google Cloud Storage or BigQuery