I have a Spark application that runs fine in local mode, but I run into problems when submitting it to a Spark cluster.
The error message is as follows:
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, cluster-node-02): java.lang.ExceptionInInitializerError
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
at GroupEvolutionES$.<init>(GroupEvolutionES.scala:37)
at GroupEvolutionES$.<clinit>(GroupEvolutionES.scala)
... 14 more
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 0.0 (TID 5, cluster-node-02): java.lang.NoClassDefFoundError: Could …
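Judging from the "Caused by" section, the SparkContext is built in the object's static initializer (GroupEvolutionES$.<clinit>), which executors also trigger when they load the class, and no master URL is configured outside local mode. A commonly suggested remedy, sketched here in PySpark for illustration only (the asker's code is Scala): construct the context solely in the driver's entry point, leave out setMaster(), and pass --master to spark-submit.

```python
# Hedged sketch of the usual pattern, not the asker's code (which is Scala).
from pyspark import SparkConf, SparkContext

def main():
    # No setMaster() here: spark-submit supplies --master for the cluster.
    conf = SparkConf().setAppName("GroupEvolutionES")
    sc = SparkContext(conf=conf)
    # ... job logic; only the driver touches sc ...
    sc.stop()

if __name__ == "__main__":
    main()
```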
I'm using Scrapy to crawl some websites. How can I get the number of requests currently in the queue?

I looked through the Scrapy source code and found that scrapy.core.scheduler.Scheduler probably holds my answer. See: https://github.com/scrapy/scrapy/blob/0.24/scrapy/core/scheduler.py
Two questions:

What do self.dqs and self.mqs mean in the Scheduler class?
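For reference, in the 0.24 code linked above, self.dqs is the disk-backed request queue (created only when JOBDIR is set) and self.mqs is the in-memory queue; the Scheduler's __len__ returns their combined size. A hedged sketch of reading that count from inside a spider, relying on internal attributes (engine.slot.scheduler) that are not a public API; the spider name and URL are illustrative:

```python
import scrapy

class QueueSizeSpider(scrapy.Spider):
    # Hypothetical example spider, not the asker's project.
    name = "queue_size_demo"
    start_urls = ["http://example.com"]

    def parse(self, response):
        # Internal attribute chain (present in Scrapy 0.24, but not a public API).
        scheduler = self.crawler.engine.slot.scheduler
        # Scheduler.__len__ sums the disk queue (dqs, if JOBDIR is set) and the memory queue (mqs).
        self.log("pending requests in queue: %d" % len(scheduler))
```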
I would like to add an oversampling step, such as SMOTE oversampling, to a scikit-learn Pipeline. But transformers only support the fit and transform methods and provide no way to increase the number of samples and targets.

One possible workaround is to split the pipeline into two separate pipelines joined by the SMOTE resampling step.
Is there a better solution?
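One commonly suggested alternative, sketched here under the assumption that the imbalanced-learn package is acceptable: its imblearn.pipeline.Pipeline accepts samplers such as SMOTE alongside ordinary transformers, and applies the resampling only during fit, not at predict time.

```python
from imblearn.pipeline import Pipeline          # accepts samplers, unlike sklearn's Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# SMOTE runs only while fitting; predicting on new data skips the sampler.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=42)),
    ("clf", LogisticRegression()),
])
# Usage (X_train, y_train, X_test are placeholders):
# pipe.fit(X_train, y_train)
# predictions = pipe.predict(X_test)
```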
I'm using Keras with TensorFlow as the backend and ran into an incompatibility error:
model = Sequential()
model.add(LSTM(64, input_dim = 1))
model.add(Dropout(0.2))
model.add(LSTM(16))
The following error is shown:
Traceback (most recent call last):
File "train_lstm_model.py", line 36, in <module>
model.add(LSTM(16))
File "/home/***/anaconda2/lib/python2.7/site-packages/keras/models.py", line 332, in add
output_tensor = layer(self.outputs[0])
File "/home/***/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 529, in __call__
self.assert_input_compatibility(x)
File "/home/***/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 469, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2
How can I fix this?

Keras version: 1.2.2, TensorFlow version: 0.12
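The second LSTM complains because the first one, by default, returns only its last output (a 2-D tensor). A minimal sketch of the usual fix for stacked recurrent layers, keeping the asker's layer sizes: pass return_sequences=True to the first LSTM so it emits the full 3-D sequence.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dropout

model = Sequential()
# return_sequences=True keeps the output 3-D: (batch, timesteps, 64)
model.add(LSTM(64, input_dim=1, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(16))  # now receives the ndim=3 input it expects
```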