Post by qui*_*hts

How do I compare multiple rows?

I want to compare col2 across consecutive rows i and i-1 (ordered by col1).

If item_i in row i differs from item_[i-1] in row i-1, I want to increment the count for item_[i-1] by 1.

+--------------+
| col1 col2    |
+--------------+
| row_1 item_1 |
| row_2 item_1 |
| row_3 item_2 |
| row_4 item_1 |
| row_5 item_2 |
| row_6 item_1 |
+--------------+

In the example above, scanning down two rows at a time, we see that row_2 and row_3 differ, so we add one to the count for item_1. Next, we see that row_3 differs from row_4, so we add one to the count for item_2. This continues until we reach the end:

+-------------+
|  col2  col3 |
+-------------+
|  item_1  2  |
|  item_2  2  |
+-------------+
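A minimal sketch of one way to do this with the DataFrame API, assuming Spark 2.x or later: order the rows by col1, pull the next row's col2 with the lead() window function, and count the rows whose item differs from the item that follows it. Note that Window.orderBy without a partition moves all rows to a single partition, which is fine for small sample data but won't scale.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, count, lead}

object ConsecutiveRowCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("consecutive-row-counts").master("local[*]").getOrCreate()
    import spark.implicits._

    // The sample data from the question: col1 orders the rows, col2 holds the items.
    val df = Seq(
      ("row_1", "item_1"), ("row_2", "item_1"), ("row_3", "item_2"),
      ("row_4", "item_1"), ("row_5", "item_2"), ("row_6", "item_1")
    ).toDF("col1", "col2")

    // For each row, fetch the following row's col2 (ordered by col1).
    val w = Window.orderBy("col1")
    val withNext = df.withColumn("next_col2", lead(col("col2"), 1).over(w))

    // Keep only rows whose item differs from the next row's item,
    // then count those transitions per item.
    val result = withNext
      .where(col("next_col2").isNotNull && col("next_col2") =!= col("col2"))
      .groupBy("col2")
      .agg(count("*").as("col3"))

    result.orderBy("col2").show()
    // item_1 -> 2, item_2 -> 2, matching the expected output above

    spark.stop()
  }
}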

scala apache-spark spark-streaming apache-spark-sql

5 votes
1 answer
2975 views

How can I automatically start an Apache Spark cluster using Supervisord?

Starting an Apache Spark cluster is usually done through the spark-submit shell scripts provided with the code base. The problem, however, is that every time the cluster goes down and comes back up, those shell scripts have to be run again to start the Spark cluster.

Supervisord is great at managing processes and seemed like an ideal candidate for starting the Spark processes automatically after a reboot.
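For context, a supervisord program section wrapping such a command would look roughly like the sketch below. The section names, log paths, and the autostart/autorestart settings are illustrative assumptions, not taken from the question; "-cp ..." stands for the full classpath shown in the commands that follow.

[program:spark-master]
; autostart/autorestart and the log path below are assumptions for illustration
command=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -cp ... org.apache.spark.deploy.master.Master --ip master.mydomain.com --port 7077 --webui-port 18080
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/spark/master.log

[program:spark-worker]
command=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -cp ... org.apache.spark.deploy.worker.Worker spark://master.mydomain.com:7077
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/spark/worker.log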

However, after starting the master process with

command=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -cp :/path/spark-1.3.0-bin-cdh4/sbin/../conf:/path/spark-1.3.0-bin-cdh4/lib/spark-assembly-1.3.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-api-jdo-3.2.6.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-core-3.2.10.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-rdbms-3.2.9.jar:etc/hadoop/conf -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.master.Master --ip master.mydomain.com --port 7077 --webui-port 18080

and the worker process with

command=/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -cp :/path/spark-1.3.0-bin-cdh4/sbin/../conf:/path/spark-1.3.0-bin-cdh4/lib/spark-assembly-1.3.0-hadoop2.0.0-mr1-cdh4.2.0.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-api-jdo-3.2.6.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-core-3.2.10.jar:/path/spark-1.3.0-bin-cdh4/lib/datanucleus-rdbms-3.2.9.jar:etc/hadoop/conf -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.worker.Worker spark://master.mydomain.com:7077

and then submitting my Spark application, I end up with the following errors:

15/06/05 17:16:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/05 17:16:32 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
15/06/05 17:16:32 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 1
15/06/05 17:16:32 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 2
15/06/05 17:16:32 …

supervisord apache-spark

2 votes
1 answer
992 views