I'm trying to get started with Spark. I have the Hadoop (3.3.1) and Spark (3.2.2) libraries, and I've set SPARK_HOME, PATH, HADOOP_HOME, and LD_LIBRARY_PATH to their respective paths. I'm also running JDK 17 (echo and -version work fine in the terminal).
However, I still get the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/10/25 17:17:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.lang.IllegalAccessError: class org.apache.spark.storage.StorageUtils$ (in unnamed module @0x1f508f09) cannot access class sun.nio.ch.DirectBuffer (in module java.base) because module java.base does not export sun.nio.ch to unnamed …
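The IllegalAccessError comes from the JDK 9+ module system: java.base does not export the internal sun.nio.ch package to unnamed modules, and Spark 3.2 officially targets Java 8/11 (Java 17 support only arrived in Spark 3.3). A sketch of two common workarounds, assuming a Linux shell; the JDK 11 path shown is only an example:

```shell
# Option 1: point JAVA_HOME at a JDK version Spark 3.2 supports
# (the path below is an example; use your actual JDK 11 install location).
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

# Option 2: stay on JDK 17 and open the internal package to Spark.
# SPARK_SUBMIT_OPTS is picked up by spark-shell / spark-submit; repeat the
# flag for other sun.* packages if further IllegalAccessErrors appear.
export SPARK_SUBMIT_OPTS="--add-exports java.base/sun.nio.ch=ALL-UNNAMED"
spark-shell
```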
class projectThreeQ2 {
    public static void main(String[] args) {
        // Q2: Write a for statement to compute the sum 1 + 2^2 + 3^2 + 4^2 + 5^2 + ... + n^2.
        int n = 7;
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum = sum + (int) Math.pow(n, 2);
        }
        System.out.println(sum);
    }
}
The problem is the for loop that sums n^2.
So in my case the terms should be 1, 4, 9, 16, 25, 36, which equals 91. However, when I run my code, I get 343. Why?
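The loop body squares the fixed bound n on every iteration instead of the loop variable i, so with n = 7 it adds 49 seven times, giving 343. A minimal corrected sketch, assuming the sum should run over i = 1..6 to match the expected terms above (the class name SumOfSquares is just an example):

```java
class SumOfSquares {
    public static void main(String[] args) {
        int n = 7;
        int sum = 0;
        // Square the loop variable i, not the fixed bound n.
        for (int i = 1; i < n; i++) {
            sum += i * i; // integer multiply; no need for Math.pow here
        }
        System.out.println(sum); // prints 91 for n = 7
    }
}
```

Using `i * i` also avoids the double-to-int cast that `Math.pow` would require, since `Math.pow` returns a `double`.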