PySpark: Handling NULLs in joins

orN*_*aka 9 hadoop dataframe pyspark

I am trying to join two dataframes in PySpark. My problem is that I want my inner join to match rows even when the join columns contain NULLs. I can see that Scala has the <=> operator for this, but <=> does not work in PySpark.

from pyspark.sql import Row

userLeft = sc.parallelize([
Row(id=u'1', 
    first_name=u'Steve', 
    last_name=u'Kent', 
    email=u's.kent@email.com'),
Row(id=u'2', 
    first_name=u'Margaret', 
    last_name=u'Peace', 
    email=u'marge.peace@email.com'),
Row(id=u'3', 
    first_name=None, 
    last_name=u'hh', 
    email=u'marge.hh@email.com')]).toDF()

userRight = sc.parallelize([
Row(id=u'2', 
    first_name=u'Margaret', 
    last_name=u'Peace', 
    email=u'marge.peace@email.com'),
Row(id=u'3', 
    first_name=None, 
    last_name=u'hh', 
    email=u'marge.hh@email.com')]).toDF()

Current working version:

userLeft.join(userRight, (userLeft.last_name==userRight.last_name) & (userLeft.first_name==userRight.first_name)).show()

Current result:

    +--------------------+----------+---+---------+--------------------+----------+---+---------+
    |               email|first_name| id|last_name|               email|first_name| id|last_name|
    +--------------------+----------+---+---------+--------------------+----------+---+---------+
    |marge.peace@email...|  Margaret|  2|    Peace|marge.peace@email...|  Margaret|  2|    Peace|
    +--------------------+----------+---+---------+--------------------+----------+---+---------+

Expected result:

    +--------------------+----------+---+---------+--------------------+----------+---+---------+
    |               email|first_name| id|last_name|               email|first_name| id|last_name|
    +--------------------+----------+---+---------+--------------------+----------+---+---------+
    |  marge.hh@email.com|      null|  3|       hh|  marge.hh@email.com|      null|  3|       hh|
    |marge.peace@email...|  Margaret|  2|    Peace|marge.peace@email...|  Margaret|  2|    Peace|
    +--------------------+----------+---+---------+--------------------+----------+---+---------+

Mar*_*ado 10

For PySpark < 2.3.0, you can still build the <=> operator with an expression column, like this:

import pyspark.sql.functions as F

# expr() lets you use the SQL null-safe equality operator <=> directly
df1.alias("df1").join(df2.alias("df2"), on=F.expr('df1.column <=> df2.column'))

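Applied to the dataframes from the question, that would look like the following sketch (the aliases l and r and the combined condition are just one way to write it):

import pyspark.sql.functions as F

# Null-safe comparison on both join columns via the SQL <=> operator
cond = F.expr('l.first_name <=> r.first_name AND l.last_name <=> r.last_name')
userLeft.alias("l").join(userRight.alias("r"), on=cond).show()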

For PySpark >= 2.3.0, you can use Column.eqNullSafe or IS NOT DISTINCT FROM, as in this answer.
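For example, on the question's dataframes (a minimal sketch, assuming Spark >= 2.3.0):

# eqNullSafe treats two NULLs as equal, so the id=3 row matches as well
userLeft.join(
    userRight,
    userLeft.last_name.eqNullSafe(userRight.last_name)
    & userLeft.first_name.eqNullSafe(userRight.first_name)
).show()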


MaF*_*aFF 7

Use another value instead of null:

# Replace nulls with a placeholder value before joining
userLeft = userLeft.na.fill("unknown")
userRight = userRight.na.fill("unknown")

userLeft.join(userRight, ["last_name", "first_name"]).show()

    +---------+----------+--------------------+---+--------------------+---+
    |last_name|first_name|               email| id|               email| id|
    +---------+----------+--------------------+---+--------------------+---+
    |    Peace|  Margaret|marge.peace@email...|  2|marge.peace@email...|  2|
    |       hh|   unknown|  marge.hh@email.com|  3|  marge.hh@email.com|  3|
    +---------+----------+--------------------+---+--------------------+---+
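Note that na.fill with a string value only replaces nulls in string columns, and the placeholder must be a value that cannot occur in the real data, otherwise unrelated rows would join. The null-safe operators above avoid that risk.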