How to split each row into multiple rows in a Spark DataFrame using Scala

Sag*_*gar 2 scala dataframe

I have a DataFrame with data like the following:

Key  Today  MTD  QTD  HTD  YTD 
K1   10     20   10   20   50
K2   20     30   20   10   60

I'm looking for output like:

Key  PRD     Amt
K1   Today   10
K1   MTD     20
K1   QTD     10
K1   HTD     20
K1   YTD     50

I tried using pivot, but it does something different. I'm not sure whether flatMap or map can be used here. Please advise.

Sar*_*ngh 5

// assumes spark-shell, where `spark` and `sc` are predefined
import org.apache.spark.sql._
import spark.implicits._

val list = List(("K1", 10, 20, 10, 20,50), ("K2", 20, 30, 20, 10, 60))
val yourDF = sc.parallelize(list).toDF("Key", "Today", "MTD", "QTD", "HTD", "YTD")

// yourDF.show()
// +---+-----+---+---+---+---+
// |Key|Today|MTD|QTD|HTD|YTD|
// +---+-----+---+---+---+---+
// | K1|   10| 20| 10| 20| 50|
// | K2|   20| 30| 20| 10| 60|
// +---+-----+---+---+---+---+

val newDataFrame = yourDF
  .rdd
  .flatMap(row => {
    // read each column by position and emit five (Key, PRD, Amt) tuples per input row
    val key = row.getString(0)
    val todayAmt = row.getInt(1)
    val mtdAmt = row.getInt(2)
    val qtdAmt = row.getInt(3)
    val htdAmt = row.getInt(4)
    val ytdAmt = row.getInt(5)

    List(
      (key, "today", todayAmt),
      (key, "MTD", mtdAmt),
      (key, "QTD", qtdAmt),
      (key, "HTD", htdAmt),
      (key, "YTD", ytdAmt)
    )
  })
  .toDF("Key", "PRD", "Amt")

// newDataFrame.show()
// +---+-----+---+
// |Key|  PRD|Amt|
// +---+-----+---+
// | K1|today| 10|
// | K1|  MTD| 20|
// | K1|  QTD| 10|
// | K1|  HTD| 20|
// | K1|  YTD| 50|
// | K2|today| 20|
// | K2|  MTD| 30|
// | K2|  QTD| 20|
// | K2|  HTD| 10|
// | K2|  YTD| 60|
// +---+-----+---+
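As an alternative sketch that stays in the DataFrame API (no round-trip through the RDD), Spark SQL's built-in `stack` function can unpivot the columns directly. The `SparkSession` setup below is an assumption for a standalone program; in spark-shell you would use the predefined `spark` instead:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("unpivot-sketch")
  .getOrCreate()
import spark.implicits._

val df = Seq(("K1", 10, 20, 10, 20, 50), ("K2", 20, 30, 20, 10, 60))
  .toDF("Key", "Today", "MTD", "QTD", "HTD", "YTD")

// stack(n, label1, value1, label2, value2, ...) turns each row into n rows
val unpivoted = df.selectExpr(
  "Key",
  "stack(5, 'Today', Today, 'MTD', MTD, 'QTD', QTD, 'HTD', HTD, 'YTD', YTD) as (PRD, Amt)"
)
unpivoted.show()
```

One advantage of this form is that the Catalyst optimizer keeps working with typed columns, whereas the `rdd.flatMap` version drops to positional `getString`/`getInt` access, which breaks silently if the column order changes.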