Joining a stream with a static dataset is a great feature of Structured Streaming. However, the static dataset is refreshed from its source on every batch. Since these sources are not always that dynamic, caching the static dataset for a specified period of time (or number of batches) would improve performance: after that period/number of batches, the dataset is reloaded from the source; otherwise it is served from the cache.

In Spark Streaming I managed this with a cached dataset that I unpersisted after a specified number of batch runs (a minimal sketch of that approach follows), but for some reason it no longer works with Structured Streaming.
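For context, here is roughly what that Spark Streaming (DStream) approach looked like; this is a minimal sketch with hypothetical names (staticPath, stream, refreshEveryNBatches), and the actual join/processing logic is elided:

// Hypothetical sketch of the DStream-era refresh described above
var staticDf = spark.read.parquet(staticPath)
staticDf.persist()
var batchCount = 0L

stream.foreachRDD { rdd =>
  batchCount += 1
  if (batchCount % refreshEveryNBatches == 0) {
    // Drop the cached copy and reload the static dataset from its source
    staticDf.unpersist()
    staticDf = spark.read.parquet(staticPath)
    staticDf.persist()
  }
  // ... join rdd with staticDf and process the result ...
}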
Any suggestions on how to achieve this with Structured Streaming?
I developed a solution for another question, Stream-Static Join: How to refresh (unpersist/persist) static Dataframe periodically, which might also help to solve your problem:

You can do this by making use of the streaming scheduling capabilities that Structured Streaming provides.

You can trigger the refreshing (unpersist -> load -> persist) of a static Dataframe by creating an artificial "Rate" stream that refreshes the static dataset periodically. The idea is to:

- Load the static Dataframe initially and keep it as a var
- Define a method that refreshes the static Dataframe
- Use a Rate stream that gets triggered at the required interval (e.g. 1 hour)
- Read the actual streaming data and perform a join operation with the static Dataframe
- Within that Rate stream, have a foreachBatch sink that calls the refresher method

The following code runs fine with Spark 3.0.1, Scala 2.12.10 and Delta 0.7.0:
// 0. Imports needed by the snippets below (assumes an existing SparkSession named `spark`)
import java.util.Calendar
import org.apache.spark.sql.{Dataset, SaveMode}
import org.apache.spark.sql.streaming.Trigger

// 1. Load the static Dataframe initially and keep it as a `var`
var staticDf = spark.read.format("delta").load(deltaPath)
staticDf.persist()
// 2. Define a method that refreshes the static Dataframe
def foreachBatchMethod[T](batchDf: Dataset[T], batchId: Long) = {
staticDf.unpersist()
staticDf = spark.read.format("delta").load(deltaPath)
staticDf.persist()
println(s"${Calendar.getInstance().getTime}: Refreshing static Dataframe from DeltaLake")
}
// 3. Use a "Rate" Stream that gets triggered at the required interval (e.g. 1 hour)
val staticRefreshStream = spark.readStream
.format("rate")
.option("rowsPerSecond", 1)
.option("numPartitions", 1)
.load()
.selectExpr("CAST(value as LONG) as trigger")
.as[Long]
// 4. Read actual streaming data and perform join operation with static Dataframe
// As an example I used Kafka as a streaming source
val streamingDf = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:9092")
.option("subscribe", "test")
.option("startingOffsets", "earliest")
.option("failOnDataLoss", "false")
.load()
.selectExpr("CAST(value AS STRING) as id", "offset as streamingField")
val joinDf = streamingDf.join(staticDf, "id")
val query = joinDf.writeStream
.format("console")
.option("truncate", false)
.option("checkpointLocation", "/path/to/sparkCheckpoint")
.start()
// 5. Within that Rate Stream have a `foreachBatch` sink that calls refresher method
staticRefreshStream.writeStream
.outputMode("append")
.foreachBatch(foreachBatchMethod[Long] _)
.queryName("RefreshStream")
.trigger(Trigger.ProcessingTime("5 seconds"))
.start()
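When running this as a standalone application rather than interactively in spark-shell, you would typically also block the main thread so that both streaming queries keep running, e.g.:

// Block until one of the running streaming queries terminates
spark.streams.awaitAnyTermination()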
For a complete example, the Delta table can be created as follows:
val deltaPath = "file:///tmp/delta/table"
import spark.implicits._
val df = Seq(
(1L, "static1"),
(2L, "static2")
).toDF("id", "deltaField")
df.write
.mode(SaveMode.Overwrite)
.format("delta")
.save(deltaPath)
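To verify that the refresh actually happens, you can overwrite the Delta table while both streams are running; after the next trigger of the refresh stream, the console output of the join should reflect the new values. A small test sketch, reusing deltaPath from above:

// Overwrite the static data while the streams are running
Seq(
  (1L, "static1_updated"),
  (2L, "static2_updated")
).toDF("id", "deltaField")
  .write
  .mode(SaveMode.Overwrite)
  .format("delta")
  .save(deltaPath)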