What is the best way to handle data stored in different locations in Google BigQuery?

fso*_*ety 5 google-cloud-storage google-bigquery google-cloud-datastore

My current workflow in BigQuery is as follows:

(1) query data in a public repository (stored in the US), (2) write the results to a table in my own dataset, (3) export that table as a CSV to a Cloud Storage bucket, (4) download the CSV to the server I work on, and (5) work with it there.
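In code, that workflow looks roughly like the sketch below (a minimal sketch using the google-cloud-bigquery and google-cloud-storage Python clients; the project, dataset, table, and bucket names are placeholders, not my real ones):

from google.cloud import bigquery, storage

# Placeholder names -- adjust to your own project, dataset, table and bucket.
PROJECT_ID = 'my-project'
DATASET = 'my_dataset'
TABLE = 'my_results'
US_BUCKET = 'my-us-bucket'

bq = bigquery.Client(project=PROJECT_ID)
gcs = storage.Client(project=PROJECT_ID)

# (1) + (2): query the public data and write the result to a table in my dataset.
job_config = bigquery.QueryJobConfig()
job_config.destination = bq.dataset(DATASET).table(TABLE)
bq.query(
    'SELECT * FROM `bigquery-public-data.samples.shakespeare`',
    job_config=job_config).result()  # Waits for the query to finish.

# (3): export the table as a CSV to a Cloud Storage bucket (also in the US).
bq.extract_table(
    bq.dataset(DATASET).table(TABLE),
    'gs://{}/{}.csv'.format(US_BUCKET, TABLE)).result()

# (4): download the CSV to the local (EU) server -- this is the US -> EU egress I pay for.
gcs.bucket(US_BUCKET).blob('{}.csv'.format(TABLE)).download_to_filename(
    '{}.csv'.format(TABLE))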

The problem I have now is that the server I work on is located in the EU, so I pay quite a lot to move data between the US bucket and the EU server. I could of course create my own bucket in the EU, but that still leaves the problem of transferring the data from BigQuery (US) to the bucket (EU). I could also create my BigQuery dataset in the EU, but then I would no longer be able to run any queries, because the data in the public repository is located in the US and queries across different locations are not allowed.

Does anyone know how to approach this?

Tim*_*ast 6

One way to copy a BigQuery dataset from one region to another is to take advantage of the Storage Data Transfer Service. It doesn't get around the fact that you still have to pay for bucket-to-bucket network traffic, but it may save you some CPU time copying the data to a server in the EU.

The flow would be:

  1. Extract all of the BigQuery tables into a bucket in the same region as the tables. (Avro format is recommended for the best fidelity of data types and the fastest loading speed.)
  2. Run a Storage Transfer job to copy the extracted files from the starting-location bucket to a bucket in the destination location.
  3. Load all of the files into a BigQuery dataset located in the destination location.

Python example:

# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime
import json
import sys
import time

import googleapiclient.discovery
import pytz
from google.cloud import bigquery


PROJECT_ID = 'swast-scratch'  # TODO: set this to your project name
FROM_LOCATION = 'US'  # TODO: set this to the BigQuery location
FROM_DATASET = 'workflow_test_us'  # TODO: set to BQ dataset name
FROM_BUCKET = 'swast-scratch-us'  # TODO: set to bucket name in same location
TO_LOCATION = 'EU'  # TODO: set this to the destination BigQuery location
TO_DATASET = 'workflow_test_eu'  # TODO: set to destination dataset name
TO_BUCKET = 'swast-scratch-eu'  # TODO: set to bucket name in destination loc

# Construct API clients.
bq_client = bigquery.Client(project=PROJECT_ID)
transfer_client = googleapiclient.discovery.build('storagetransfer', 'v1')


def extract_tables():
    # Extract all tables in a dataset to a Cloud Storage bucket.
    print('Extracting {}:{} to bucket {}'.format(
        PROJECT_ID, FROM_DATASET, FROM_BUCKET))

    tables = list(bq_client.list_tables(bq_client.dataset(FROM_DATASET)))
    extract_jobs = []
    for table in tables:
        job_config = bigquery.ExtractJobConfig()
        job_config.destination_format = bigquery.DestinationFormat.AVRO
        extract_job = bq_client.extract_table(
            table.reference,
            ['gs://{}/{}.avro'.format(FROM_BUCKET, table.table_id)],
            location=FROM_LOCATION,  # Available in 0.32.0 library.
            job_config=job_config)  # Starts the extract job.
        extract_jobs.append(extract_job)

    for job in extract_jobs:
        job.result()

    return tables


def transfer_buckets():
    # Transfer files from one region to another using storage transfer service.
    print('Transferring bucket {} to {}'.format(FROM_BUCKET, TO_BUCKET))
    now = datetime.datetime.now(pytz.utc)
    transfer_job = {
        'description': '{}-{}-{}_once'.format(
            PROJECT_ID, FROM_BUCKET, TO_BUCKET),
        'status': 'ENABLED',
        'projectId': PROJECT_ID,
        'transferSpec': {
            'transferOptions': {
                'overwriteObjectsAlreadyExistingInSink': True,
            },
            'gcsDataSource': {
                'bucketName': FROM_BUCKET,
            },
            'gcsDataSink': {
                'bucketName': TO_BUCKET,
            },
        },
        # Set start and end date to today (UTC) without a time part to start
        # the job immediately.
        'schedule': {
            'scheduleStartDate': {
                'year': now.year,
                'month': now.month,
                'day': now.day,
            },
            'scheduleEndDate': {
                'year': now.year,
                'month': now.month,
                'day': now.day,
            },
        },
    }
    transfer_job = transfer_client.transferJobs().create(
        body=transfer_job).execute()
    print('Returned transferJob: {}'.format(
        json.dumps(transfer_job, indent=4)))

    # Find the operation created for the job.
    job_filter = {
        'project_id': PROJECT_ID,
        'job_names': [transfer_job['name']],
    }

    # Wait until the operation has started.
    response = {}
    while ('operations' not in response) or (not response['operations']):
        time.sleep(1)
        response = transfer_client.transferOperations().list(
            name='transferOperations', filter=json.dumps(job_filter)).execute()

    operation = response['operations'][0]
    print('Returned transferOperation: {}'.format(
        json.dumps(operation, indent=4)))

    # Wait for the transfer to complete.
    print('Waiting ', end='')
    while operation['metadata']['status'] == 'IN_PROGRESS':
        print('.', end='')
        sys.stdout.flush()
        time.sleep(5)
        operation = transfer_client.transferOperations().get(
            name=operation['name']).execute()
    print()

    print('Finished transferOperation: {}'.format(
        json.dumps(operation, indent=4)))


def load_tables(tables):
    # Load all tables into the new dataset.
    print('Loading tables from bucket {} to {}:{}'.format(
        TO_BUCKET, PROJECT_ID, TO_DATASET))

    load_jobs = []
    for table in tables:
        dest_table = bq_client.dataset(TO_DATASET).table(table.table_id)
        job_config = bigquery.LoadJobConfig()
        job_config.source_format = bigquery.SourceFormat.AVRO
        load_job = bq_client.load_table_from_uri(
            ['gs://{}/{}.avro'.format(TO_BUCKET, table.table_id)],
            dest_table,
            location=TO_LOCATION,  # Available in 0.32.0 library.
            job_config=job_config)  # Starts the load job.
        load_jobs.append(load_job)

    for job in load_jobs:
        job.result()


# Actually run the script.
tables = extract_tables()
transfer_buckets()
load_tables(tables)

The preceding sample uses the google-cloud-bigquery library for the BigQuery API and google-api-python-client for the Storage Data Transfer API.

Note that this sample does not account for partitioned tables.
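If you do need to handle partitioned tables, one possible extension (a rough sketch that is not part of the original sample; it reuses the constants and client from the script above) is to extract and load each partition individually via a partition decorator (table$YYYYMMDD), so that rows end up in the matching partition on the destination side:

def copy_partition(table_id, partition_id):
    # Hypothetical helper: round-trip a single partition of an
    # ingestion-time partitioned table by using a partition decorator
    # (table$YYYYMMDD) on both the source and the destination table IDs.
    decorated = '{}${}'.format(table_id, partition_id)
    avro_name = '{}_{}.avro'.format(table_id, partition_id)

    extract_config = bigquery.ExtractJobConfig()
    extract_config.destination_format = bigquery.DestinationFormat.AVRO
    bq_client.extract_table(
        bq_client.dataset(FROM_DATASET).table(decorated),
        ['gs://{}/{}'.format(FROM_BUCKET, avro_name)],
        location=FROM_LOCATION,
        job_config=extract_config).result()

    # ... run the same bucket-to-bucket transfer as in transfer_buckets() ...

    load_config = bigquery.LoadJobConfig()
    load_config.source_format = bigquery.SourceFormat.AVRO
    bq_client.load_table_from_uri(
        ['gs://{}/{}'.format(TO_BUCKET, avro_name)],
        bq_client.dataset(TO_DATASET).table(decorated),
        location=TO_LOCATION,
        job_config=load_config).result()

You would still need a way to enumerate the partition IDs (for example from the table's partition metadata), and column-partitioned tables could instead be handled by setting time_partitioning on the load job config.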


JJ *_*wax 0

Either way, you have data in the US that you need in the EU, so I think you have two options:

  1. You can keep paying many smaller fees to move your reduced datasets from the US to the EU, as you do today.

  2. You can pay the one-off cost of transferring the original public BQ dataset from the US to your own dataset in the EU. From then on, all of the queries you run stay in the same region, and you no longer have any transatlantic transfers.

It really comes down to how many queries you plan to run. If it isn't many, then the way you do things today seems to be the most efficient. If it's a lot, then moving the data once (and paying the up-front cost) may well turn out cheaper.
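As a very rough way to compare the two options (a sketch only -- every number below is a placeholder, and you'd plug in your own data sizes and the current US-to-EU network egress price from the GCP pricing page):

# Back-of-the-envelope break-even: one-off full copy vs. repeated smaller transfers.
# All numbers are placeholders -- replace them with your own figures.
egress_price_per_gb = 0.12    # placeholder; check the current GCP pricing page
full_dataset_gb = 500.0       # size of the public dataset you'd copy once
per_query_result_gb = 2.0     # size of the reduced result you download today
queries_per_month = 50

one_off_cost = full_dataset_gb * egress_price_per_gb
monthly_cost_today = per_query_result_gb * queries_per_month * egress_price_per_gb

print('One-off copy to the EU:  ${:.2f}'.format(one_off_cost))
print('Current workflow:        ${:.2f}/month'.format(monthly_cost_today))
print('Break-even after roughly {:.1f} months'.format(one_off_cost / monthly_cost_today))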

Maybe Google has some magic to make this better, but as far as I can tell, you have a lot of data on one side of the Atlantic that you need on the other side, and moving it across that wire costs money.