
Reading multiple files from S3 in parallel (Spark, Java)

I've seen some discussion of this problem but couldn't work out the right solution: I want to load a few hundred files from S3 into an RDD. This is how I'm doing it now:

ObjectListing objectListing = s3.listObjects(new ListObjectsRequest().
                withBucketName(...).
                withPrefix(...));
List<String> keys = new LinkedList<>();
objectListing.getObjectSummaries().forEach(summary -> keys.add(summary.getKey())); // repeat while objectListing.isTruncated()

JavaRDD<String> events = sc.parallelize(keys).flatMap(new ReadFromS3Function(clusterProps));

The ReadFromS3Function does the actual reading using an AmazonS3 client:

    public Iterator<String> call(String s) throws Exception {
        AmazonS3 s3Client = getAmazonS3Client(properties);
        S3Object object = s3Client.getObject(new GetObjectRequest(...));
        InputStream is = object.getObjectContent();
        List<String> lines = new LinkedList<>();
        String str;
        try {
            if (is != null) {
                BufferedReader reader = new BufferedReader(new InputStreamReader(is));
                while ((str = reader.readLine()) != null) {
                    lines.add(str);
                }
            } …
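One thing worth noting about the snippet above: the stream from `getObjectContent()` is never closed, which leaks connections from the S3 client's HTTP pool. A minimal, standard-library-only sketch of the line-reading portion using try-with-resources (the class name `S3LineReader` and the helper `readAllLines` are illustrative, not part of the original code; in `ReadFromS3Function` the stream would come from `object.getObjectContent()`):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class S3LineReader {
    // Reads every line from the stream, then closes it even if reading fails.
    public static List<String> readAllLines(InputStream is) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(is, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the S3 object's content stream.
        InputStream demo = new ByteArrayInputStream(
                "event1\nevent2\nevent3".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAllLines(demo)); // [event1, event2, event3]
    }
}
```

Returning `lines.iterator()` from such a helper keeps `call` small, and try-with-resources guarantees the connection goes back to the pool regardless of exceptions.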

java amazon-s3 apache-spark

10
Votes
1
Answer
10K
Views

Tag statistics

amazon-s3 ×1

apache-spark ×1

java ×1