How to run concurrent jobs (actions) in Apache Spark using a single Spark context

Spo*_*rty 19 java concurrency apache-spark


The Apache Spark documentation says: "within each Spark application, multiple 'jobs' (Spark actions) may be running concurrently if they were submitted by different threads." Can someone explain how to achieve this concurrency for the following sample code?

    SparkConf conf = new SparkConf().setAppName("Simple_App");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<String> file1 = sc.textFile("/path/to/test_doc1");
    JavaRDD<String> file2 = sc.textFile("/path/to/test_doc2");

    // Each count() is a blocking action, so these two jobs run one after the other
    System.out.println(file1.count());
    System.out.println(file2.count());

The two jobs are independent and must run concurrently.
Thanks.

G Q*_*ana 22

Try something like this:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    // Two local cores so both jobs can actually run at the same time
    final JavaSparkContext sc = new JavaSparkContext("local[2]", "Simple_App");
    ExecutorService executorService = Executors.newFixedThreadPool(2);

    // Submit job 1 from its own thread
    Future<Long> future1 = executorService.submit(new Callable<Long>() {
        @Override
        public Long call() throws Exception {
            JavaRDD<String> file1 = sc.textFile("/path/to/test_doc1");
            return file1.count();
        }
    });
    // Submit job 2 from another thread
    Future<Long> future2 = executorService.submit(new Callable<Long>() {
        @Override
        public Long call() throws Exception {
            JavaRDD<String> file2 = sc.textFile("/path/to/test_doc2");
            return file2.count();
        }
    });
    // Block until each job finishes
    System.out.println("File1:" + future1.get());
    System.out.println("File2:" + future2.get());

    executorService.shutdown();
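As a side note, Spark also ships asynchronous variants of some actions, which avoid managing a thread pool by hand. A minimal sketch assuming Spark 1.2+, where `countAsync()` on `JavaRDD` returns a `JavaFutureAction<Long>` (a `java.util.concurrent.Future`):

    import org.apache.spark.api.java.JavaFutureAction;
    import org.apache.spark.api.java.JavaSparkContext;

    final JavaSparkContext sc = new JavaSparkContext("local[2]", "Simple_App");

    // countAsync() submits each job and returns immediately with a future
    JavaFutureAction<Long> count1 = sc.textFile("/path/to/test_doc1").countAsync();
    JavaFutureAction<Long> count2 = sc.textFile("/path/to/test_doc2").countAsync();

    // Both jobs are now queued with the scheduler; block for the results
    System.out.println("File1:" + count1.get());
    System.out.println("File2:" + count2.get());

    sc.close();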

  • Can't we simply use the `spark.streaming.concurrentJobs` conf to set the concurrency level? (3 upvotes)
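Note that `spark.streaming.concurrentJobs` applies to Spark Streaming batches, not to a plain batch application like the one above. For jobs submitted from multiple threads on one `SparkContext`, scheduling is FIFO by default; if the jobs should share cluster resources more evenly, the fair scheduler can be enabled. A minimal sketch:

    SparkConf conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("Simple_App")
        // Let concurrently submitted jobs share executors round-robin
        // instead of the default FIFO queueing
        .set("spark.scheduler.mode", "FAIR");
    JavaSparkContext sc = new JavaSparkContext(conf);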