PySpark: "explode" a dictionary in a column

Jac*_*son 5 explode apache-spark pyspark

I have a column "true_recoms" in a Spark dataframe:

-RECORD 17----------------------------------------------------------------- 
item        | 20380109                                                                                                                                                                  
true_recoms | {"5556867":1,"5801144":5,"7397596":21}          

I need to "explode" this column to get something like this:

item        | 20380109                                                                                                                                                                  
recom_item  | 5556867
recom_cnt   | 1
..............
item        | 20380109                                                                                                                                                                  
recom_item  | 5801144
recom_cnt   | 5
..............
item        | 20380109                                                                                                                                                                  
recom_item  | 7397596
recom_cnt   | 21

I tried using from_json, but it doesn't work:

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

schema_json = StructType(fields=[
    StructField("item", StringType()),
    StructField("recoms", StringType())
])
df.select(col("true_recoms"), from_json(col("true_recoms"), schema_json)).show(5)

+--------+--------------------+------+
|    item|         true_recoms|true_r|
+--------+--------------------+------+
|31746548|{"32731749":3,"31...|   [,]|
|17359322|{"17359392":1,"17...|   [,]|
|31480894|{"31480598":1,"31...|   [,]|
| 7265665|{"7265891":1,"503...|   [,]|
|31350949|{"32218698":1,"31...|   [,]|
+--------+--------------------+------+
only showing top 5 rows

hi-*_*zir 5

The schema definition is incorrect. You declare a struct with two string fields:

  • item
  • recoms

neither of which is present in the document.

Unfortunately, from_json can only return a struct or an array of structs, so redefining the schema as

MapType(StringType(), LongType())

is not an option.

Personally, I would use a udf:

from pyspark.sql.functions import udf, explode
import json

@udf("map<string, bigint>")
def parse(s):
    try:
        return json.loads(s)
    except json.JSONDecodeError:
        pass  # malformed JSON -> null
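The body of the udf is plain Python, so you can sanity-check the parsing logic outside Spark entirely. A minimal standalone check (no Spark session needed; the function mirrors the udf above, returning None instead of a Spark null on failure):

```python
import json

def parse(s):
    """Mirror of the udf body: parse a JSON string, None on failure."""
    try:
        return json.loads(s)
    except json.JSONDecodeError:
        return None  # Spark would surface this as null

print(parse('{"5556867":1,"5801144":5,"7397596":21}'))
# {'5556867': 1, '5801144': 5, '7397596': 21}
print(parse('not json'))
# None
```

The try/except matters: map-typed udfs are applied row by row, so a single malformed string yields a null for that row instead of failing the whole job.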

which can be applied like this:

df = spark.createDataFrame(
    [(31746548, """{"5556867":1,"5801144":5,"7397596":21}""")],
    ("item", "true_recoms")
)

df.select("item",  explode(parse("true_recoms")).alias("recom_item", "recom_cnt")).show()
# +--------+----------+---------+
# |    item|recom_item|recom_cnt|
# +--------+----------+---------+
# |31746548|   5801144|        5|
# |31746548|   7397596|       21|
# |31746548|   5556867|        1|
# +--------+----------+---------+
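explode on a map column emits one output row per key-value entry (default column names key and value, here aliased to recom_item and recom_cnt). The same expansion can be sketched in plain Python on the sample row above, which also shows why the row order in the output is not guaranteed to match the literal order of the JSON:

```python
# sample row from the answer above
item = 31746548
recoms = {"5556867": 1, "5801144": 5, "7397596": 21}

# one output row per map entry, mirroring explode's (key, value) pairs
rows = [(item, recom_item, recom_cnt) for recom_item, recom_cnt in recoms.items()]
for row in rows:
    print(row)
```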