Spark checkpoint error when joining a static dataset with a DStream

Raj*_*mar 6 java apache-spark hadoop2 spark-streaming

I am trying to write a Spark Streaming application in Java. My Spark application reads a continuous feed from a Hadoop directory using textFileStream() with a 1-minute batch interval. I need to perform a Spark aggregation (group by) operation on the incoming DStream. After the aggregation, I join the aggregated DStream<Key, Value1> with an RDD<Key, Value2>, where the RDD<Key, Value2> is created by reading a static dataset (a text file) from a Hadoop directory.

The problem occurs when checkpointing is enabled. With an empty checkpoint directory, the application runs fine. After it has processed 2-3 batches, I shut it down with ctrl+c and start it again. On the second run, it immediately throws a Spark exception referencing "SPARK-5063":

Exception in thread "main" org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063

Here is the relevant block of the Spark application code:

private void compute(JavaSparkContext sc, JavaStreamingContext ssc) {

   JavaRDD<String> distFile = sc.textFile(MasterFile);      
   JavaDStream<String> file = ssc.textFileStream(inputDir);             

   // Read Master file
   JavaRDD<MasterParseLog> masterLogLines = distFile.flatMap(EXTRACT_MASTER_LOGLINES);
   final JavaPairRDD<String, String> masterRDD = masterLogLines.mapToPair(MASTER_KEY_VALUE_MAPPER);

   // Continuous Streaming file
   JavaDStream<ParseLog> logLines = file.flatMap(EXTRACT_CKT_LOGLINES);

   // Calculate the sum of the required field and generate the group-sum stream
   JavaPairDStream<String, Summary> sumRDD = logLines.mapToPair(CKT_GRP_MAPPER);
   JavaPairDStream<String, Summary> grpSumRDD = sumRDD.reduceByKey(CKT_GRP_SUM);

   // GROUP BY operation
   JavaPairDStream<String, Summary> grpAvgRDD = grpSumRDD.mapToPair(CKT_GRP_AVG);

   // Join the master RDD with the DStream
   // This is the block causing the error (without it the code works fine)
   JavaPairDStream<String, Tuple2<String, Summary>> joinedStream = grpAvgRDD.transformToPair(

       new Function2<JavaPairRDD<String, Summary>, Time, JavaPairRDD<String, Tuple2<String, Summary>>>() {

           private static final long serialVersionUID = 1L;

           public JavaPairRDD<String, Tuple2<String, Summary>> call(
               JavaPairRDD<String, Summary> rdd, Time v2) throws Exception {
               // masterRDD is captured from the driver at graph-build time
               return masterRDD.join(rdd);
           }
       }
   );
   joinedStream.print(10);
}

public static void main(String[] args) {

   JavaStreamingContextFactory contextFactory = new JavaStreamingContextFactory() {
        public JavaStreamingContext create() {

           // Create the context with a 60 second batch size
           SparkConf sparkConf = new SparkConf();
           final JavaSparkContext sc = new JavaSparkContext(sparkConf);
           JavaStreamingContext ssc1 = new JavaStreamingContext(sc, Durations.seconds(duration));               

           app.compute(sc, ssc1);

           ssc1.checkpoint(checkPointDir);                       
           return ssc1;
        }
   };

   JavaStreamingContext ssc = JavaStreamingContext.getOrCreate(checkPointDir, contextFactory);

   // start the streaming server
   ssc.start();
   logger.info("Streaming server started...");

   // wait for the computations to finish
   ssc.awaitTermination();
   logger.info("Streaming server stopped...");
}

I know the code block that joins the static dataset with the DStream is causing the error, but that block is taken from the Spark Streaming page on the Apache Spark website (sub-heading "stream-dataset join" under "Join Operations"). Please help me get this working, even if a different approach is needed; I have to have checkpointing enabled in my streaming application.
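For reference, the stream-dataset join pattern in the programming guide looks roughly like the following. This is only a paraphrased sketch of the documented example, not my application code; logStream is a placeholder input stream and dataset stands for a static JavaPairRDD<String, String> built on the driver, the same way masterRDD is built above:

   JavaPairDStream<String, String> windowedStream = logStream.window(Durations.seconds(20));

   JavaPairDStream<String, Tuple2<String, String>> joined = windowedStream.transformToPair(
       new Function<JavaPairRDD<String, String>, JavaPairRDD<String, Tuple2<String, String>>>() {
           public JavaPairRDD<String, Tuple2<String, String>> call(JavaPairRDD<String, String> rdd) {
               // The closure captures 'dataset', an RDD created on the driver,
               // exactly the way my transform closure captures masterRDD
               return rdd.join(dataset);
           }
       }
   );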

Environment details:

  • CentOS 6.5: 2-node cluster
  • Java: 1.8
  • Spark: 1.4.1
  • Hadoop: 2.7.1
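
If the correct fix is to stop capturing masterRDD from the driver, would something along these lines work? This is only a rough, untested sketch: MasterHolder is a hypothetical static nested helper class that lazily rebuilds and caches the master pair RDD from the batch RDD's own SparkContext, so that no driver-side RDD reference gets serialized into the checkpoint:

   static class MasterHolder {
       private static JavaPairRDD<String, String> master;

       static synchronized JavaPairRDD<String, String> get(JavaSparkContext sc, String masterFile) {
           if (master == null) {
               // Rebuild the master RDD on first use after every (re)start
               master = sc.textFile(masterFile)
                          .flatMap(EXTRACT_MASTER_LOGLINES)
                          .mapToPair(MASTER_KEY_VALUE_MAPPER)
                          .cache();
           }
           return master;
       }
   }

   JavaPairDStream<String, Tuple2<String, Summary>> joinedStream = grpAvgRDD.transformToPair(
       new Function2<JavaPairRDD<String, Summary>, Time, JavaPairRDD<String, Tuple2<String, Summary>>>() {
           public JavaPairRDD<String, Tuple2<String, Summary>> call(
               JavaPairRDD<String, Summary> rdd, Time v2) throws Exception {
               // Take the SparkContext from the incoming batch RDD instead of
               // closing over anything that was created on the driver
               JavaSparkContext sc = JavaSparkContext.fromSparkContext(rdd.context());
               return MasterHolder.get(sc, MasterFile).join(rdd);
           }
       }
   );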