Tags: amazon-redshift, apache-spark, pyspark, aws-glue
While running a Glue job, after performing the necessary transformations, I write the resulting Spark df to a Redshift table like this:
dynamic_df = DynamicFrame.fromDF(df, glue_context, "dynamic_df")
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dynamic_df, catalog_connection=args['catalog_connection'],
    connection_options={"dbtable": args['dbschema'] + "." + args['dbtable'], "database": args['database']},
    transformation_ctx="write_my_df")
But I get this exception:
19/08/23 14:29:31 ERROR __main__: Traceback (most recent call last):
File "/mnt/yarn/usercache/root/appcache/application_1572375324962_0001/container_1572375324962_0001_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/mnt/yarn/usercache/root/appcache/application_1572375324962_0001/container_1572375324962_0001_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o190.pyWriteDynamicFrame.
: java.lang.IllegalArgumentException: Unrecognized scheme null; expected s3, s3n, or s3a
What exactly am I doing wrong, and how can I fix it?
I was missing the redshift_tmp_dir parameter of the from_jdbc_conf function, as described in the documentation. Glue loads data into Redshift by staging it in S3 and issuing a COPY, so without a temporary S3 path the scheme is null, which is exactly what the exception complains about.
So the call now is:
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dynamic_df, catalog_connection=args['catalog_connection'],
    connection_options={"dbtable": args['dbschema'] + "." + args['dbtable'], "database": args['database']},
    redshift_tmp_dir="s3://my_bucket/my/location/", transformation_ctx="write_my_df")
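For context, here is a minimal, self-contained sketch of how the corrected write can sit inside a full Glue job script. The job-argument names, the sample DataFrame, and the S3 bucket/prefix are placeholders for illustration, not values from the original post:

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.utils import getResolvedOptions

# Resolve the job arguments referenced above (argument names are illustrative).
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'catalog_connection', 'database', 'dbschema', 'dbtable'])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Stand-in for the job's real transformations that produce `df`.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Wrap the DataFrame so Glue's writers can consume it.
dynamic_df = DynamicFrame.fromDF(df, glue_context, "dynamic_df")

# Glue loads Redshift by staging data in S3 and issuing a COPY,
# so redshift_tmp_dir must point at a writable S3 location.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dynamic_df,
    catalog_connection=args['catalog_connection'],
    connection_options={"dbtable": args['dbschema'] + "." + args['dbtable'],
                        "database": args['database']},
    redshift_tmp_dir="s3://my_bucket/my/location/",  # placeholder bucket/prefix
    transformation_ctx="write_my_df")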