Ros*_*s J 4 amazon-web-services amazon-redshift apache-spark pyspark aws-glue
We are currently having trouble with an AWS Glue job that reads a collection from S3 and writes it to AWS Redshift, where some of the columns contain null values.
The job should be fairly trivial, and most of the code was auto-generated by the Glue interface, but because the columns are declared not-null in Redshift while they are sometimes null in our dataset, we cannot get the job to complete.
A condensed version of the code is shown below; it is Python running in a PySpark environment.
from awsglue.dynamicframe import DynamicFrame
from awsglue.transforms import ApplyMapping, DropNullFields

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "db_1", table_name = "table_1", transformation_ctx = "datasource0")
resolvedDDF = datasource0.resolveChoice(specs = [
('price_current','cast:double'),
('price_discount','cast:double'),
])
applymapping = ApplyMapping.apply(frame = resolvedDDF, mappings = [
("id", "string", "id", "string"),
("status", "string", "status", "string"),
("price_current", "double", "price_current", "double"),
("price_discount", "double", "price_discount", "double"),
("created_at", "string", "created_at", "string"),
("updated_at", "string", "updated_at", "string"),
], transformation_ctx = "applymapping")
droppedDF = applymapping.toDF().dropna(subset=('created_at', 'price_current'))
newDynamicDF = DynamicFrame.fromDF(droppedDF, glueContext, "newframe")
dropnullfields = DropNullFields.apply(frame = newDynamicDF, transformation_ctx = "dropnullfields")
datasink = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields, catalog_connection = "RedshiftDataStaging", connection_options = {"dbtable": "dbtable_1", "database": "database_1"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink")
We have a not-null constraint in Redshift on the table's price_current and created_at columns, and because of some early bugs in our system, a few records reached the S3 bucket without the required data. We just want to drop those rows, since they make up only a tiny fraction of the overall data to be processed.
Despite the dropna call, we still get the following error from Redshift.
Error (code 1213) while loading data into Redshift: "Missing data for not-null field"
Table name: "PUBLIC".table_1
Column name: created_at
Column type: timestampt(0)
Raw field value: @NULL@
If you don't want to drop those rows, you can fill in default values instead:
df = dropnullfields.toDF()
df = df.na.fill({'price_current': 0.0, 'created_at': ' '})
# fromDF takes the DataFrame, the GlueContext, and a name for the new frame
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")
datasink = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dyf, catalog_connection = "RedshiftDataStaging", connection_options = {"dbtable": "dbtable_1", "database": "database_1"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink")
If you do want to drop those rows, use the following instead of df.na.fill:
df = df.na.drop(subset=["price_current", "created_at"])
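The fill-versus-drop choice above can be sketched row by row in plain Python. This is only a hypothetical illustration of the semantics the two Spark calls apply (the rows, column names, and defaults here are made up, not Glue API code):

```python
# Hypothetical rows as dicts; None stands in for SQL NULL.
rows = [
    {"id": "a", "price_current": 9.99, "created_at": "2019-01-01"},
    {"id": "b", "price_current": None, "created_at": "2019-01-02"},
    {"id": "c", "price_current": 4.50, "created_at": None},
]

required = ["price_current", "created_at"]

# df.na.drop(subset=required): keep only rows where every required field is set.
dropped = [r for r in rows if all(r[c] is not None for c in required)]

# df.na.fill({...}): replace missing values with defaults instead of dropping rows.
defaults = {"price_current": 0.0, "created_at": " "}
filled = [{k: (defaults.get(k) if v is None else v) for k, v in r.items()}
          for r in rows]
```

Either way, no `@NULL@` values remain in the constrained columns by the time the frame is written to Redshift.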