
com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found for a PySpark script on AWS EMR

I'm trying to create an EMR cluster with the AWS CLI to run a Python script (using PySpark), as follows:

aws emr create-cluster --name "emr cluster for pyspark (test)" \
  --applications Name=Spark Name=Hadoop \
  --release-label emr-5.25.0 \
  --use-default-roles \
  --ec2-attributes KeyName=my-key \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.xlarge \
    InstanceGroupType=CORE,InstanceCount=2,InstanceType=m4.xlarge \
  --bootstrap-actions Path="s3://mybucket/my_bootstrap.sh" \
  --steps Type=CUSTOM_JAR,Name="Spark Count group by QRACE",ActionOnFailure=CONTINUE,Jar=s3://us-east-2.elasticmapreduce/libs/script-runner/script-runner.jar,Args=["s3://mybucket/my_step.py","s3://mybucket/my_input.txt","s3://mybucket/output"] \
  --log-uri "s3://mybucket/logs"
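
In essence, my_step.py reads the input from S3, aggregates, and writes the result back; the o32.csv in the traceback below comes from a CSV call like the ones in this simplified sketch (column and option names are illustrative, not the exact script):

import sys
from pyspark.sql import SparkSession

# Simplified sketch of my_step.py -- the real script differs in detail.
# Reading or writing an s3:// path is what triggers the lookup of
# com.amazon.ws.emr.hadoop.fs.EmrFileSystem in the error below.
spark = SparkSession.builder.appName("Spark Count group by QRACE").getOrCreate()
input_path, output_path = sys.argv[1], sys.argv[2]

df = spark.read.csv(input_path, header=True)        # e.g. s3://mybucket/my_input.txt
df.groupBy("QRACE").count().write.csv(output_path)  # e.g. s3://mybucket/output

spark.stop()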

The bootstrap script sets up Python 3.7, installs PySpark (2.4.3), and installs Java 8 (sketched after the error below). However, my script fails with the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o32.csv.
: java.lang.RuntimeException: 
java.lang.ClassNotFoundException: Class com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found
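
For reference, the bootstrap script does roughly the following (a sketch based on the description above; the exact package names and commands are assumptions, not the real script):

#!/bin/bash
# Rough sketch of my_bootstrap.sh -- package names and commands here
# are assumptions based on the description above.
set -e
sudo yum install -y python37 java-1.8.0-openjdk   # Python 3.7 and Java 8
sudo python3 -m pip install pyspark==2.4.3        # PySpark 2.4.3 from pip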

I tried adding a --configurations argument with the following JSON file to the create-cluster command (it didn't help):

[
  {
    "Classification":"spark-defaults",
    "Properties":{
      "spark.executor.extraClassPath":"/etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*",
      "spark.driver.extraClassPath":"/etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*" …
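
For completeness, this is how I pass the JSON to create-cluster (the AWS CLI reads a local file when the value is prefixed with file://; the filename here is just illustrative):

aws emr create-cluster ... --configurations file://./spark-classpath.json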

amazon-ec2 amazon-web-services amazon-emr pyspark
