Creating a DataFrame with null values for some columns

Avi*_*jit 12 scala apache-spark spark-dataframe apache-spark-dataset

I am trying to create a DataFrame from an RDD.

First, I create an RDD using the code below:

val account = sc.parallelize(Seq(
                                 (1, null, 2, "F"),
                                 (2, 2, 4, "F"),
                                 (3, 3, 6, "N"),
                                 (4, null, 8, "F")))

It works fine:

account: org.apache.spark.rdd.RDD[(Int, Any, Int, String)] = ParallelCollectionRDD[0] at parallelize at <console>:27

But when I try to create a DataFrame from this RDD using the code below,

account.toDF("ACCT_ID", "M_CD", "C_CD", "IND")

I get the following error:

java.lang.UnsupportedOperationException: Schema for type Any is not supported

I found that I only get the error when I put null values in the Seq.

Is there a way to add null values?

Mar*_*ace 17

An alternative that avoids the RDD:

import spark.implicits._

val df = spark.createDataFrame(Seq(
  (1, None,    2, "F"),
  (2, Some(2), 4, "F"),
  (3, Some(3), 6, "N"),
  (4, None,    8, "F")
)).toDF("ACCT_ID", "M_CD", "C_CD", "IND")

df.show
+-------+----+----+---+
|ACCT_ID|M_CD|C_CD|IND|
+-------+----+----+---+
|      1|null|   2|  F|
|      2|   2|   4|  F|
|      3|   3|   6|  N|
|      4|null|   8|  F|
+-------+----+----+---+

df.printSchema
root
 |-- ACCT_ID: integer (nullable = false)
 |-- M_CD: integer (nullable = true)
 |-- C_CD: integer (nullable = false)
 |-- IND: string (nullable = true)
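This works because Spark's encoders treat `Option[Int]` as a nullable integer column: `None` becomes `null`, `Some(x)` becomes the value. A minimal plain-Scala sketch of that mapping (no Spark required; the real encoder machinery is more involved):

```scala
// Mimic how an Option[Int] column such as M_CD becomes nullable values.
// Plain-Scala illustration only; Spark's encoders do this internally.
val mcd: Seq[Option[Int]] = Seq(None, Some(2), Some(3), None)

// None -> null reference, Some(x) -> boxed value, matching the M_CD column above.
val asNullable: Seq[Integer] = mcd.map {
  case Some(x) => Integer.valueOf(x)
  case None    => null
}

println(asNullable) // List(null, 2, 3, null)
```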

  • This approach is better than the accepted answer. (3 upvotes)

Zyo*_*oma 13

The problem is that Any is too general a type and Spark simply does not know how to serialize it. In your case you should explicitly provide a specific type, Integer. Since null cannot be assigned to primitive types in Scala, you can use java.lang.Integer. Try this:

val account = sc.parallelize(Seq(
                                 (1, null.asInstanceOf[Integer], 2, "F"),
                                 (2, new Integer(2), 4, "F"),
                                 (3, new Integer(3), 6, "N"),
                                 (4, null.asInstanceOf[Integer], 8, "F")))

Here is the output:

account: org.apache.spark.rdd.RDD[(Int, Integer, Int, String)] = ParallelCollectionRDD[0] at parallelize at <console>:24

And the corresponding DataFrame:

scala> val df = account.toDF("ACCT_ID", "M_CD", "C_CD", "IND")

df: org.apache.spark.sql.DataFrame = [ACCT_ID: int, M_CD: int ... 2 more fields]

scala> df.show
+-------+----+----+---+
|ACCT_ID|M_CD|C_CD|IND|
+-------+----+----+---+
|      1|null|   2|  F|
|      2|   2|   4|  F|
|      3|   3|   6|  N|
|      4|null|   8|  F|
+-------+----+----+---+
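The reason the cast helps: Scala's `Int` is a value type and can never hold `null`, while `java.lang.Integer` is a reference type that can. A small plain-Scala sketch of the distinction (note that `new Integer(...)` is deprecated in recent JDKs; `Integer.valueOf` is the usual replacement):

```scala
// A Scala Int cannot be null, but a java.lang.Integer reference can.
// The asInstanceOf cast merely gives the null a concrete reference type.
val nullInt: java.lang.Integer = null.asInstanceOf[java.lang.Integer]

// Integer.valueOf is preferred over the deprecated new Integer(2).
val boxed: java.lang.Integer = Integer.valueOf(2)

println(nullInt == null) // true
println(boxed.intValue)  // 2
```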

You can also consider a cleaner way to declare the null integer value, for example:

object Constants {
  val NullInteger: java.lang.Integer = null
}
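For illustration, a plain-Scala sketch of how such a constant keeps the row data readable (the rows are the sample data from the question; the resulting Seq can be passed to `sc.parallelize` as before):

```scala
// Central place for the typed null, so rows need no per-element casts.
object Constants {
  val NullInteger: java.lang.Integer = null
}

// The tuple type is inferred as (Int, Integer, Int, String).
val rows = Seq(
  (1, Constants.NullInteger, 2, "F"),
  (2, Integer.valueOf(2),    4, "F"),
  (3, Integer.valueOf(3),    6, "N"),
  (4, Constants.NullInteger, 8, "F")
)

println(rows.count(_._2 == null)) // 2
```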