Get a list of Spark DataFrame columns

RaA*_*aAm 11 scala apache-spark apache-spark-sql spark-dataframe

How can I get all the column names of a Spark DataFrame into a Seq variable?

Input data and schema:

val dataset1 = Seq(("66", "a", "4"), ("67", "a", "0"), ("70", "b", "4"), ("71", "d", "4")).toDF("KEY1", "KEY2", "ID")

dataset1.printSchema()
root
|-- KEY1: string (nullable = true)
|-- KEY2: string (nullable = true)
|-- ID: string (nullable = true)

I need to store all the column names in a variable using Scala. I tried the following, but it does not give what I want:

val selectColumns = dataset1.schema.fields.toSeq

selectColumns: Seq[org.apache.spark.sql.types.StructField] = WrappedArray(StructField(KEY1,StringType,true),StructField(KEY2,StringType,true),StructField(ID,StringType,true))

Expected output:

val selectColumns = Seq(
  col("KEY1"),
  col("KEY2"),
  col("ID")
)

selectColumns: Seq[org.apache.spark.sql.Column] = List(KEY1, KEY2, ID)

Yar*_*ron 15

You can use the following:

val selectColumns = dataset1.columns.toSeq
scala> val dataset1 = Seq(("66", "a", "4"), ("67", "a", "0"), ("70", "b", "4"), ("71", "d", "4")).toDF("KEY1", "KEY2", "ID")
dataset1: org.apache.spark.sql.DataFrame = [KEY1: string, KEY2: string ... 1 more field]

scala> val selectColumns = dataset1.columns.toSeq
selectColumns: Seq[String] = WrappedArray(KEY1, KEY2, ID)


RaA*_*aAm 7

val selectColumns = dataset1.columns.toList.map(col(_))
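The `Seq[org.apache.spark.sql.Column]` built this way can be passed straight to `select` with the varargs ascription `: _*` (a sketch assuming the `dataset1` defined in the question and an active `SparkSession`):

```scala
import org.apache.spark.sql.functions.col

// Convert the column names into Column objects
val selectColumns = dataset1.columns.toList.map(col(_))

// A Seq[Column] expands into select's varargs with ": _*"
val projected = dataset1.select(selectColumns: _*)
```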


uh_*_*boi 5

I use the `columns` property like this:

val cols = dataset1.columns.toSeq

Then, if you later want to select all the columns in their original order, you can use:

val orderedDF = dataset1.select(cols.head, cols.tail: _*)
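The `head`/`tail` split is needed because this overload of `select` takes `(col: String, cols: String*)`; converting the names to `Column` objects first lets you pass the whole sequence at once (a sketch, assuming the same `cols` variable and an active `SparkSession`):

```scala
import org.apache.spark.sql.functions.col

// select(cols: Column*) accepts a full varargs list of Columns,
// so no head/tail split is needed once names become Columns.
val orderedDF = dataset1.select(cols.map(col(_)): _*)
```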