None.org.apache.spark.api.java.JavaSparkContext error when installing Spark on Windows 10

laz*_*rea 2 apache-spark pyspark

Recently I have been struggling, so far without success, to get Spark running on my Windows 10 machine. I just want to try Spark out and be able to follow tutorials, so at the moment I have no cluster to connect to. To install Spark, I followed the steps below, based on this tutorial:

  • I installed the Java JDK and placed it in C:\jdk. Inside that folder are the bin, conf, include, jmods, legal, and lib folders.
  • I installed the Java Runtime Environment and placed it in C:\jre. It contains the bin, legal, and lib folders.
  • I downloaded this folder and put winutils.exe into C:\winutils\bin.
  • I created a HADOOP_HOME user environment variable and set it to C:\winutils.
  • I opened an Anaconda Prompt and installed PySpark into my base environment with conda install pyspark.
  • Once the installation succeeded, I opened a new prompt and typed pyspark to verify the installation. This should bring up the Spark welcome screen. Instead, I received the following long error message:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/12/05 12:22:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/12/05 12:22:47 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext should be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:238)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
py4j.ClientServerConnection.run(ClientServerConnection.java:106)
java.base/java.lang.Thread.run(Thread.java:833)
C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\shell.py:42: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\shell.py", line 38, in <module>
    spark = SparkSession._create_shell_session()  # type: ignore
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\sql\session.py", line 553, in _create_shell_session
    return SparkSession.builder.getOrCreate()
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\sql\session.py", line 228, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 392, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 146, in __init__
    self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 209, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 329, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "C:\Users\lazarea\Anaconda3\lib\site-packages\py4j\java_gateway.py", line 1573, in __call__
    return_value = get_return_value(
  File "C:\Users\lazarea\Anaconda3\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.storage.StorageUtils$
        at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:110)
        at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:348)
        at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:287)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:336)
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:191)
        at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:460)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
        at java.base/java.lang.Thread.run(Thread.java:833)

I looked for similar questions on Stack Overflow and came across this one, which has a similar error message. However, the solution provided there, setting the SPARK_LOCAL_IP user environment variable to localhost, did not solve the problem; typing pyspark in the Anaconda Prompt still yields the same error message.
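
For completeness, the same setting can also be applied per session from Python before the JVM starts. This minimal sketch is presumably equivalent to the environment variable, and so presumably just as ineffective here:

import os

# SPARK_LOCAL_IP must be in the environment before the Spark JVM is
# launched, i.e. before the first SparkSession/SparkContext is created.
os.environ["SPARK_LOCAL_IP"] = "localhost"

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('sampleApp').getOrCreate()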

Note #1, possibly relevant: typing pyspark on the command line produces no output at all. Instead, Windows opens the Microsoft Store by default.
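
The Microsoft Store redirect usually means a name is resolving to one of Windows' app-execution-alias stubs rather than to a real executable. A hedged diagnostic sketch, run from a working Python, to see what each name resolves to on PATH:

import shutil

# Print which executable, if any, each name resolves to on PATH; a stub
# under ...\WindowsApps would explain the Microsoft Store redirect.
for name in ("python", "pyspark", "spark-shell"):
    print(name, "->", shutil.which(name))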

Note #2: I tried coding directly in Python to see whether that would give more hints. I ran the following snippet:

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('sampleApp').getOrCreate()

It returned an error message similar to the one above, plus some additional, possibly useful information:

An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$
(in unnamed module @0x776b83cc) cannot access class sun.nio.ch.DirectBuffer
(in module java.base) because module java.base does not export sun.nio.ch
to unnamed module @0x776b83cc
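
This message narrows things down: sun.nio.ch is a JDK-internal package that Java 16 and later no longer expose to code on the classpath, and Spark 3.2.0's StorageUtils touches it directly. As a sketch of a possible workaround (unverified on my side, and assuming this is the only internal package involved), the standard JAVA_TOOL_OPTIONS hook can pass --add-exports to the JVM that py4j launches:

import os

# JAVA_TOOL_OPTIONS is read by every JVM at startup, including the one
# py4j launches for Spark, so the flag reaches the driver JVM.
# Assumption: sun.nio.ch is the only internal package Spark 3.2.0 needs.
os.environ["JAVA_TOOL_OPTIONS"] = "--add-exports=java.base/sun.nio.ch=ALL-UNNAMED"

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('sampleApp').getOrCreate()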

Note #3: opening a command line and typing spark-shell outputs the following error:

java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$ (in unnamed module @0x3c947bc5) cannot access class sun.nio.ch.DirectBuffer (in module java.base) because module java.base does not export sun.nio.ch to unnamed module @0x3c947bc5
  at org.apache.spark.storage.StorageUtils$.<init>(StorageUtils.scala:213)
  at org.apache.spark.storage.StorageUtils$.<clinit>(StorageUtils.scala)
  at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:110)
  at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:348)
  at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:287)
  at org.apache.spark.SparkEnv$.create(SparkEnv.scala:336)
  at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:191)
  at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:460)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
  at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
  at scala.Option.getOrElse(Option.scala:189)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
  ... 55 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^

Please help me get Spark up and running successfully, because I cannot figure out what I might still be missing at this point.

laz*_*rea 6

In the end I got it working, so let me share what I learned for future reference, in case anyone else gets stuck installing Apache Spark later. There are three key aspects to installing Apache Spark on a Windows 10 machine.

  1. Make sure you have Java 8 installed! Many of us fall into the trap of downloading Java 17, the current default, which Apache Spark does not support. There is the option of choosing between Java 8 and Java 11, but based on the discussion in this thread I concluded that for my quick POC examples Java 11 was not worth the hassle, so I went with the Java 8 JDK and JRE, both easily downloadable from the Oracle website. Note that the higher the version you pick, the more secure it is, so for anything more serious I would probably go with Java 11.

  2. Move the freshly installed Java folders to the C drive: create C:\jdk for the Java 8 JDK and C:\jre for the Java 8 JRE. With both sitting in the root of the C drive, no JAVA_HOME environment variable is needed.

  3. Use an older version of Spark! It turns out that the latest stable release currently offered on the Apache Spark website (3.2.0, released in October 2021) has repeatedly been reported to throw this and similar errors when initializing the Spark context. So I rolled back to a previous release: I downloaded Apache Spark 3.0.3, released in June 2021, and pointed the SPARK_HOME environment variable at the newly extracted folder: C:\Spark\spark-3.0.3-bin-hadoop2.7 (a quick sanity check of all three steps follows this list).
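
Put together, the three ingredients can be sanity-checked from Python. A sketch assuming the folder layout above; adjust the paths if yours differ:

import os
import subprocess

# The two environment variables the rest of the toolchain relies on.
print("SPARK_HOME  =", os.environ.get("SPARK_HOME"))   # C:\Spark\spark-3.0.3-bin-hadoop2.7
print("HADOOP_HOME =", os.environ.get("HADOOP_HOME"))  # C:\winutils

# The JVM Spark will pick up; 'java version "1.8.0_xxx"' confirms Java 8.
subprocess.run([r"C:\jdk\bin\java", "-version"])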

After all these modifications I closed every command-line window, opened a new one, and ran spark-shell, and at last I got the much-coveted Spark welcome screen:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.0.3
      /_/

Using Scala version 2.12.10 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_301)
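
With the same environment, the PySpark snippet from Note #2 should now come up as well; a minimal re-check (expected version inferred from the download above):

from pyspark.sql import SparkSession

# The snippet that previously failed in Note #2; it should now build a
# session against the downgraded stack.
spark = SparkSession.builder.appName('sampleApp').getOrCreate()
print(spark.version)  # expected: 3.0.3
spark.stop()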