I have the following job in AWS Glue, which basically reads data from a table and extracts it as a CSV file in S3. However, I want to run a query on this table (a SELECT, SUM, and GROUP BY) and get that output into the CSV. How can I do this in AWS Glue? I'm new to Spark, so please help.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database="db1", table_name="dbo1_expdb_dbo_stg_plan", transformation_ctx="datasource0")
applymapping1 = ApplyMapping.apply(frame=datasource0, mappings=[("plan_code", "int", "plan_code", "int"), ("plan_id", "int", "plan_id", "int")], transformation_ctx="applymapping1")
datasink2 = glueContext.write_dynamic_frame.from_options(frame=applymapping1, connection_type="s3", connection_options={"path": "s3://bucket"}, format="csv", transformation_ctx="datasink2")
job.commit()
GlueContext's `create_dynamic_frame.from_catalog` function creates a DynamicFrame, not a DataFrame, and DynamicFrames do not support running SQL queries.
To run a SQL query, you first need to convert the DynamicFrame to a DataFrame, register it as a temporary view in Spark, and then run your SQL query against that view.
Sample code:
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glueContext = GlueContext(SparkContext.getOrCreate())
spark_session = glueContext.spark_session

DyF = glueContext.create_dynamic_frame.from_catalog(database="{{database}}", table_name="{{table_name}}")

# Convert the DynamicFrame to a DataFrame and register it as a temp view
# (createOrReplaceTempView replaces the deprecated registerTempTable)
df = DyF.toDF()
df.createOrReplaceTempView('{{name}}')

# Run your SQL (SELECT / SUM / GROUP BY) against the temp view
df = spark_session.sql('{{your select query with the temp view name you registered above}}')
df.write.format('{{orc/parquet/csv/...}}').partitionBy("{{columns}}").save('path to s3 location')