
Connecting to S3 data from PySpark

I am trying to read a JSON file from Amazon S3, create a Spark context, and use it to process the data.

Spark is running inside a Docker container, so getting the file onto a path inside the container is a pain. That's why I pushed it to S3 instead.

The code below should explain the rest.

from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("first")
sc = SparkContext(conf=conf)

config_dict = {"fs.s3n.awsAccessKeyId":"**",
               "fs.s3n.awsSecretAccessKey":"**"}

bucket = "nonamecpp"
prefix = "dataset.json"
filename = "s3n://{}/{}".format(bucket, prefix)
rdd = sc.hadoopFile(filename,
                    'org.apache.hadoop.mapred.TextInputFormat',
                    'org.apache.hadoop.io.Text',
                    'org.apache.hadoop.io.LongWritable',
                    conf=config_dict)

I'm getting the following error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-2-b94543fb0e8e> in <module>()
      9                     'org.apache.hadoop.io.Text',
     10                     'org.apache.hadoop.io.LongWritable',
---> 11                     conf=config_dict)
     12 

/usr/local/spark/python/pyspark/context.pyc in hadoopFile(self, path, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
    558         jrdd = self._jvm.PythonRDD.hadoopFile(self._jsc, path, inputFormatClass, keyClass,
    559                                               valueClass, keyConverter, valueConverter,
--> 560                                               jconf, …
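For reference, here is a minimal sketch of the more common way to supply S3 credentials: setting them on the context's Hadoop configuration rather than passing them through hadoopFile's conf= argument. This assumes the matching S3 connector jars are available inside the container; the bucket and key are the ones from the question, and the "**" placeholders stand in for real credentials.

from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("first")
sc = SparkContext(conf=conf)

# Attach the S3 credentials to the Hadoop configuration used by this context
# (placeholder values; real keys would go here).
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3n.awsAccessKeyId", "**")
hadoop_conf.set("fs.s3n.awsSecretAccessKey", "**")

# textFile wraps TextInputFormat and returns an RDD of lines,
# so the key/value classes do not have to be spelled out by hand.
rdd = sc.textFile("s3n://nonamecpp/dataset.json")
print(rdd.take(1))

On a newer Spark/Hadoop stack the s3a:// scheme with fs.s3a.access.key / fs.s3a.secret.key (and the hadoop-aws jar) would be used instead of s3n.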

python hadoop amazon-s3 apache-spark pyspark

10 votes · 1 answer · 20k views
