I want to read Parquet data stored on S3 from PySpark.
I downloaded Spark from here:
http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
and naively installed it into Python:
cd python
python setup.py install
This seems to work fine: I can import pyspark, create a SparkContext, and so on. However, when I read some publicly accessible Parquet data:
import pyspark
sc = pyspark.SparkContext('local[4]')
sql = pyspark.SQLContext(sc)
df = sql.read.parquet('s3://bucket-name/mydata.parquet')
I get the following exception:
Py4JJavaError: An error occurred while calling o55.parquet.
: java.io.IOException: No FileSystem for scheme: s3
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:441)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
This error pops up in Google searches, but none of the solutions offered so far have helped.
I am on Linux (Ubuntu 16.04) on a personal machine, with no extra software installed (everything is reasonably stock).
I downgraded to http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.4.tgz so that AWS support is included by default.
Unfortunately, my AWS credentials are now not being picked up. I have tried several things:
Including them as SparkConf parameters:
conf = (pyspark.SparkConf()
        .set('fs.s3.awsAccessKeyId', '...')
        .set('fs.s3.awsSecretAccessKey', '...'))
sc = pyspark.SparkContext('local[4]', conf=conf)
Unfortunately, in all cases I get a traceback like the one below:
IllegalArgumentException: 'AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).'
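One reading of that message is that the fs.s3 properties have to land on the underlying Hadoop configuration, rather than being passed as bare SparkConf keys. A minimal sketch of that approach, assuming the Hadoop-2.4 build's s3 filesystem reads them from there (the credential values are placeholders):

import pyspark

sc = pyspark.SparkContext('local[4]')

# Set the properties named in the exception on the Hadoop configuration,
# reached through the (private) Java SparkContext handle.
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set('fs.s3.awsAccessKeyId', '<your-access-key-id>')          # placeholder
hadoop_conf.set('fs.s3.awsSecretAccessKey', '<your-secret-access-key>')  # placeholder

sql = pyspark.SQLContext(sc)
df = sql.read.parquet('s3://bucket-name/mydata.parquet')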
With the Hadoop-2.4 build of the pre-built Spark 2.x binaries (which I believe ships with s3 functionality), you can programmatically configure Spark to pull s3 data as follows:
import pyspark
conf = pyspark.SparkConf()
sc = pyspark.SparkContext('local[4]', conf=conf)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "")
sql = pyspark.SQLContext(sc)
df = sql.read.parquet('s3n://bucket-name/mydata.parquet')
The key thing to note is the s3n prefix, both in the bucket URI and in the configuration property names.
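As a side note, the same credentials can be supplied without reaching into the private _jsc handle: Spark copies any configuration key prefixed with spark.hadoop. into the Hadoop configuration at startup, which is also why the bare fs.s3.awsAccessKeyId keys in the question were ignored. A minimal sketch under that assumption (credential values are placeholders):

import pyspark

# Keys prefixed with "spark.hadoop." are forwarded to the Hadoop
# configuration when the SparkContext starts.
conf = (pyspark.SparkConf()
        .set('spark.hadoop.fs.s3n.awsAccessKeyId', '<your-access-key-id>')
        .set('spark.hadoop.fs.s3n.awsSecretAccessKey', '<your-secret-access-key>'))

sc = pyspark.SparkContext('local[4]', conf=conf)
sql = pyspark.SQLContext(sc)
df = sql.read.parquet('s3n://bucket-name/mydata.parquet')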