Post by Mar*_*rco

How to carry a data stream across multiple batch intervals in Spark Streaming

I'm using Apache Spark Streaming 1.6.1 to write a Java application that joins two Key/Value data streams and writes the output to HDFS. The two streams contain K/V strings and are periodically ingested into Spark from HDFS using textFileStream().

The two streams are not synchronized, meaning that a key present in stream1 at time t0 may only appear in stream2 at time t1, and vice versa. My goal is therefore to join the two streams and compute the "leftover" keys, which should be taken into account by the join operation in the next batch interval.

To make this clearer, consider the following algorithm:

variables:
stream1 = <String, String> input stream at time t1
stream2 = <String, String> input stream at time t1
left_keys_s1 = <String, String> records of stream1 that didn't appear in the join at time t0
left_keys_s2 = <String, String> records of stream2 that didn't appear in the join at time t0

operations at time t1:
out_stream = (stream1 + left_keys_s1) join (stream2 + left_keys_s2)
write out_stream to HDFS
left_keys_s1 = left_keys_s1 + records of …
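The carry-over logic above can be sketched outside of Spark. This is a minimal, hedged illustration using plain Java `HashMap`s to stand in for the per-batch RDDs; the class and method names are hypothetical, and a real Spark Streaming solution would express the same idea with `JavaPairDStream.join` plus stateful operations such as `updateStateByKey`:

```java
import java.util.HashMap;
import java.util.Map;

public class LeftoverJoin {
    // Records seen in one stream but not yet matched in the other
    // (left_keys_s1 / left_keys_s2 in the pseudocode above).
    static Map<String, String> leftKeysS1 = new HashMap<>();
    static Map<String, String> leftKeysS2 = new HashMap<>();

    // Joins the current batches together with any leftovers; records
    // that still don't match are carried over to the next interval.
    static Map<String, String[]> joinBatch(Map<String, String> stream1,
                                           Map<String, String> stream2) {
        Map<String, String> s1 = new HashMap<>(leftKeysS1);
        s1.putAll(stream1);
        Map<String, String> s2 = new HashMap<>(leftKeysS2);
        s2.putAll(stream2);

        Map<String, String[]> out = new HashMap<>();
        for (Map.Entry<String, String> e : s1.entrySet()) {
            if (s2.containsKey(e.getKey())) {
                out.put(e.getKey(),
                        new String[]{e.getValue(), s2.get(e.getKey())});
            }
        }
        // Unmatched records wait for the next batch interval.
        leftKeysS1.clear();
        leftKeysS2.clear();
        for (Map.Entry<String, String> e : s1.entrySet())
            if (!out.containsKey(e.getKey())) leftKeysS1.put(e.getKey(), e.getValue());
        for (Map.Entry<String, String> e : s2.entrySet())
            if (!out.containsKey(e.getKey())) leftKeysS2.put(e.getKey(), e.getValue());
        return out;
    }

    public static void main(String[] args) {
        // t0: "a" appears only in stream1, "b" only in stream2.
        Map<String, String> b1 = new HashMap<>(); b1.put("a", "1");
        Map<String, String> b2 = new HashMap<>(); b2.put("b", "2");
        System.out.println(joinBatch(b1, b2).size()); // prints 0: nothing joins yet

        // t1: the counterparts arrive, so the t0 leftovers now join.
        Map<String, String> c1 = new HashMap<>(); c1.put("b", "3");
        Map<String, String> c2 = new HashMap<>(); c2.put("a", "4");
        System.out.println(joinBatch(c1, c2).size()); // prints 2
    }
}
```

Note that with a real DStream this state would have to live in Spark (e.g. via `updateStateByKey` or checkpointed state), not in driver-side maps, since each batch is a new set of RDDs.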

apache-spark spark-streaming dstream

5 votes · 1 answer · 1159 views
