Including null values in an Apache Spark join

Pow*_*ers 38 sql scala join apache-spark apache-spark-sql

I would like to include null values in an Apache Spark join. Spark doesn't include rows with null values in joins by default.

Here is the default Spark behavior (this assumes `import spark.implicits._` is in scope for `toDF`):

val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")

val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")

val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))

Here is the output of `joinedDf.show()`:

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+

Here is the output I would like:

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+

use*_*411 55

Spark provides a special NULL-safe equality operator:

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+

Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product ([SPARK-11111](https://issues.apache.org/jira/browse/SPARK-11111) - fast null-safe joins).
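If you are not sure which plan your Spark version produces, you can check the physical plan directly (a quick sanity check, not from the original answer):

// On 1.5 and earlier the null-safe join appears as a CartesianProduct;
// on 1.6+ it compiles to a regular (e.g. sort-merge) join
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()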

In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:

numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])

numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+

and %<=>% in SparkR:

numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))

head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc

With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:

SELECT * FROM numbers JOIN letters 
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
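To run this against the DataFrames from the question, register them as temporary views first (a minimal sketch, assuming a SparkSession named `spark`):

// Make the DataFrames visible to the SQL parser under the names used above
numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

spark.sql(
  """SELECT * FROM numbers JOIN letters
     ON numbers.numbers IS NOT DISTINCT FROM letters.numbers""").show()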

This can be used with the DataFrame API as well:

numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")

  • Thanks. [Here is another great answer](http://stackoverflow.com/questions/31240148/spark-specify-multiple-column-conditions-for-dataframe-join) that uses the `<=>` operator. If you are doing a multi-column join, you can chain the conditions with the `&&` operator, as sketched below. (3 upvotes)
  • Is there a way to use eqNullSafe if I am passing a list of columns to the `on` argument of `join`? (3 upvotes)
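A multi-column null-safe join along the lines of that first comment might look like this (a sketch; `df1`, `df2` and the key columns `k1`, `k2` are made up for illustration):

// One null-safe comparison per join column, chained with &&
df1.join(df2, (df1("k1") <=> df2("k1")) && (df1("k2") <=> df2("k2")))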

小智 8

// Rename the key columns so they can be disambiguated in the join condition
val numbers2 = numbersDf.withColumnRenamed("numbers", "num1")
val letters2 = lettersDf.withColumnRenamed("numbers", "num2")
// Null-safe condition: plain equality OR both sides null
val joinedDf = numbers2.join(letters2,
  $"num1" === $"num2" || ($"num1".isNull && $"num2".isNull), "outer")
// Rename the key column back to its original name
joinedDf.select("num1", "letters").withColumnRenamed("num1", "numbers").show()
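Note that the same result can be written more compactly with the null-safe `<=>` operator from the accepted answer, which accepts a join type as well (a sketch, not part of the original answer):

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"), "outer")
  .drop(lettersDf("numbers"))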


tim*_*ang 7

Based on KL's idea, you can use foldLeft to generate the join column expression:

import org.apache.spark.sql.{Column, DataFrame}

def nullSafeJoin(rightDF: DataFrame, columns: Seq[String], joinType: String)(leftDF: DataFrame): DataFrame = {
  // Start with a null-safe comparison on the first column, then AND in
  // one null-safe comparison per remaining column
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  val fullExpr = columns.tail.foldLeft(colExpr) {
    (expr, p) => expr && (leftDF(p) <=> rightDF(p))
  }

  leftDF.join(rightDF, fullExpr, joinType)
}

Then, you can call this function like this:

aDF.transform(nullSafeJoin(bDF, columns, joinType))
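For example, with the DataFrames from the question (a sketch; the duplicate numbers column coming from the right side still has to be dropped):

numbersDf
  .transform(nullSafeJoin(lettersDf, Seq("numbers"), "inner"))
  .drop(lettersDf("numbers"))
  .show()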