Goal: for a DataFrame with the schema
id:string
Cold:string
Medium:string
Hot:string
IsNull:string
annual_sales_c:string
average_check_c:string
credit_rating_c:string
cuisine_c:string
dayparts_c:string
location_name_c:string
market_category_c:string
market_segment_list_c:string
menu_items_c:string
msa_name_c:string
name:string
number_of_employees_c:string
number_of_rooms_c:string
Months In Role:integer
Tenured Status:string
IsCustomer:integer
units_c:string
years_in_business_c:string
medium_interactions_c:string
hot_interactions_c:string
cold_interactions_c:string
is_null_interactions_c:string
I want to add a new column that is a JSON string of all the columns' keys and values. I used the approach from the post PySpark - Convert to JSON row by row and the related question. My code:
from pyspark.sql import functions as func

df = df.withColumn("JSON", func.to_json(func.struct([df[x] for x in df.columns])))
I have one issue:
Issue: when a column is null for a given row (and my data has many nulls...), the JSON string does not include that key. That is, if only 9 of the 27 columns have values, the JSON string has only 9 keys... What I want is to keep all keys, and for null values simply pass an empty string "".
Any tips?
You should be able to modify the answer from the linked question using pyspark.sql.functions.when.
Consider the following example DataFrame:
data = [
    ('one', 1, 10),
    (None, 2, 20),
    ('three', None, 30),
    (None, None, 40)
]
sdf = spark.createDataFrame(data, ["A", "B", "C"])
sdf.printSchema()
#root
# |-- A: string (nullable = true)
# |-- B: long (nullable = true)
# |-- C: long (nullable = true)
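For reference, applying to_json directly with no null handling reproduces the behavior described in the question, since to_json drops null fields by default. A quick check on the sdf defined above:

from pyspark.sql.functions import col, struct, to_json

# Null fields are silently omitted from the generated JSON string
sdf.withColumn("JSON", to_json(struct([col(x) for x in sdf.columns]))).show(truncate=False)
#+-----+----+---+------------------------+
#|A    |B   |C  |JSON                    |
#+-----+----+---+------------------------+
#|one  |1   |10 |{"A":"one","B":1,"C":10}|
#|null |2   |20 |{"B":2,"C":20}          |
#|three|null|30 |{"A":"three","C":30}    |
#|null |null|40 |{"C":40}                |
#+-----+----+---+------------------------+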
Use when to implement the if-then-else logic: if the column is not null, keep it; otherwise return an empty string. (Mixing each column's original type with a string literal makes Spark promote the whole expression to string, which is why the numeric values appear quoted in the JSON output below.)
from pyspark.sql.functions import col, to_json, struct, when, lit

sdf = sdf.withColumn(
    "JSON",
    to_json(
        struct(
            [
                when(
                    col(x).isNotNull(),
                    col(x)
                ).otherwise(lit("")).alias(x)
                for x in sdf.columns
            ]
        )
    )
)
sdf.show(truncate=False)
#+-----+----+---+-----------------------------+
#|A |B |C |JSON |
#+-----+----+---+-----------------------------+
#|one |1 |10 |{"A":"one","B":"1","C":"10"} |
#|null |2 |20 |{"A":"","B":"2","C":"20"} |
#|three|null|30 |{"A":"three","B":"","C":"30"}|
#|null |null|40 |{"A":"","B":"","C":"40"} |
#+-----+----+---+-----------------------------+
Another option is to use pyspark.sql.functions.coalesce instead of when:
from pyspark.sql.functions import coalesce

sdf.withColumn(
    "JSON",
    to_json(
        struct(
            [coalesce(col(x), lit("")).alias(x) for x in sdf.columns]
        )
    )
).show(truncate=False)
## Same as above
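A third route, if actual JSON nulls are acceptable instead of empty strings: on Spark 3.0+, to_json accepts JSON data source options, and setting ignoreNullFields to false keeps every key and writes null for missing values. A sketch, assuming Spark 3.0 or later:

from pyspark.sql.functions import col, struct, to_json

# Keeps all keys; null columns appear as JSON null, e.g. {"A":null,"B":2,"C":20}
# (the ignoreNullFields option requires Spark 3.0+)
sdf.withColumn(
    "JSON",
    to_json(
        struct([col(x) for x in sdf.columns]),
        {"ignoreNullFields": "false"}
    )
).show(truncate=False)

Unlike the two approaches above, this preserves each value's original type and does not replace nulls with "".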