I want to change the Typesafe configuration of a Spark job between dev/prod environments. As I see it, the simplest way to achieve this is to pass -Dconfig.resource=ENVNAME to the job; the Typesafe Config library will then do the work for me.
Is there a way to pass that option directly to the job? Or is there perhaps a better way to change the job configuration at runtime?
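For concreteness, a sketch of what passing that system property through spark-submit might look like (the jar name and main class are placeholders, and the dev config file is assumed to be packaged on the application classpath):

# Hypothetical job; dev.conf is assumed to be bundled in the application jar.
spark-submit \
  --class com.example.MyJob \
  --driver-java-options "-Dconfig.resource=dev.conf" \
  --conf "spark.executor.extraJavaOptions=-Dconfig.resource=dev.conf" \
  my-job.jar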
Edit:
--conf "spark.executor.extraJavaOptions=-Dconfig.resource=dev"向spark-submit命令添加选项时没有任何反应.Error: Unrecognized option '-Dconfig.resource=dev'.当我传递-Dconfig.resource=dev给spark-submit命令时,我得到了.在Spark中使用Scala时,每当我使用结果转储结果时saveAsTextFile,它似乎将输出分成多个部分.我只是将一个参数(路径)传递给它.
val year = sc.textFile("apat63_99.txt").map(_.split(",")(1)).flatMap(_.split(",")).map((_,1)).reduceByKey((_+_)).map(_.swap)
year.saveAsTextFile("year")
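A side note on the multiple part files: saveAsTextFile writes one file per partition of the RDD, so a hedged way to get a single output file (only sensible when the result is small enough for one partition) is to coalesce first:

// Collapse to a single partition before writing, so the output directory
// contains one part file. "year_single" is just an illustrative path.
year.coalesce(1).saveAsTextFile("year_single")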
I'm using spark 1.4.0-rc2 so that I can use Python 3 with Spark. If I add export PYSPARK_PYTHON=python3 to my .bashrc file, I can run Spark interactively with Python 3. However, if I want to run a standalone program in local mode, I get an error:
Exception: Python in worker has different version 3.4 than that in driver 2.7, PySpark cannot run with different minor versions
How can I specify the Python version for the driver? Setting export PYSPARK_DRIVER_PYTHON=python3 didn't work.
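For reference, a commonly suggested setup (hedged, since how these variables are picked up differs between Spark versions and between spark-env.sh and the calling shell) is to export both interpreter variables before submitting the program:

# Point both the driver and the workers at the same interpreter.
# my_script.py stands in for the standalone program.
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=python3
spark-submit my_script.py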
I have a DataFrame with a column of type String, and I want to change the column type to Double in PySpark.
Here is how I did it:
toDoublefunc = UserDefinedFunction(lambda x: x,DoubleType())
changedTypedf = joindf.withColumn("label",toDoublefunc(joindf['show']))
I just want to know whether this is the right way to do it, because while running it through Logistic Regression I'm getting some errors, so I wonder whether this is the cause of the problem.
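For comparison, a sketch that uses the built-in cast instead of a UDF (assuming joindf and its "show" column as above); cast performs the string-to-double conversion in the JVM rather than routing every value through Python:

from pyspark.sql.types import DoubleType

# Cast the string column directly; no Python UDF involved.
changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))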
I'm running a Spark job with speculation enabled. I have around 500 tasks and around 500 gzipped files of about 1 GB each. In every job, one or two tasks keep failing with the error below and are then rerun dozens of times (which prevents the job from completing).
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
Any idea what the problem means and how to overcome it? (A hedged mitigation sketch follows the full stack trace below.)
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:384)
at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:381)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:380)
at org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:176)
at org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
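This exception generally means the executor that produced the map output is no longer around (often because it was killed under memory pressure), so hedged mitigations that are commonly tried include turning speculation off while debugging and giving executors more memory headroom:

# Illustrative values only; my-job.jar is a placeholder, and the YARN
# overhead property only applies when running on YARN.
spark-submit \
  --conf spark.speculation=false \
  --conf spark.executor.memory=6g \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  my-job.jar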
I want to convert a string column of a DataFrame to a list. What I can find in the Dataframe API is RDD, so I tried converting it back to an RDD first and then applying the toArray function to the RDD. In this case, the length and using it in SQL work just fine. However, the result I get from the RDD has square brackets around every element, like [A00001]. I was wondering whether there is a proper way to convert a column to a list, or a way to remove the square brackets.
Any suggestions would be appreciated. Thanks!
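A hedged sketch of extracting the raw values from the Row objects (which is where the surrounding brackets like [A00001] come from), assuming the DataFrame is called df and the column of interest is named "id" and holds strings:

// df.rdd is an RDD[Row]; pull the field out of each Row before collecting,
// so the result is a plain Array[String] with no Row brackets around values.
val ids: Array[String] = df.select("id").rdd.map(_.getString(0)).collect()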
I'm trying to start Spark 1.6.0 (spark-1.6.0-bin-hadoop2.4) on Mac OS Yosemite 10.10.5 with
"./bin/spark-shell".
It fails with the error below. I also tried installing different versions of Spark, but they all give the same error. This is the second time I've run Spark; my previous run worked fine.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.0
/_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have …

I'm trying to use off-heap storage on spark 1.4.0 and tachyon 0.6.4 to persist my RDD, like this:
val a = sqlContext.parquetFile("a1.parquet")
a.persist(org.apache.spark.storage.StorageLevel.OFF_HEAP)
a.count()
After that I get the following exception.
Any ideas?
15/06/16 10:14:53 INFO : Tachyon client (version 0.6.4) is trying to connect master @ localhost/127.0.0.1:19998
15/06/16 10:14:53 INFO : User registered at the master localhost/127.0.0.1:19998 got UserId 3
15/06/16 10:14:53 INFO TachyonBlockManager: Created tachyon directory at /tmp_spark_tachyon/spark-6b2512ab-7bb8-47ca-b6e2-8023d3d7f7dc/driver/spark-tachyon-20150616101453-ded3
15/06/16 10:14:53 INFO BlockManagerInfo: Added rdd_10_3 on ExternalBlockStore on localhost:33548 (size: 0.0 B)
15/06/16 10:14:53 INFO BlockManagerInfo: Added rdd_10_1 on ExternalBlockStore on localhost:33548 (size: 0.0 B)
15/06/16 10:14:53 ERROR TransportRequestHandler: …

I'm new to apache spark, and apparently I installed apache-spark with homebrew on my macbook:
Last login: Fri Jan 8 12:52:04 on console
user@MacBook-Pro-de-User-2:~$ pyspark
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/08 14:46:44 INFO SparkContext: Running Spark version 1.5.1
16/01/08 14:46:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/08 14:46:47 INFO SecurityManager: Changing view acls to: user
16/01/08 14:46:47 INFO …

I'm new to Apache Spark, and I've just learned that Spark supports three types of clusters: Standalone, Apache Mesos, and Hadoop YARN.
Since I'm new to Spark, I think I should try Standalone first. But I'd like to know which one is recommended. Say I need to build a large cluster (hundreds of instances) in the future; which cluster type should I go for?