Right now I have to use df.count > 0 to check whether the DataFrame is empty, but that is inefficient. Is there a better way to do this?
Thanks.
PS: I want to check whether it is empty so that I only save the DataFrame if it is not empty.
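A minimal sketch of a cheaper check, assuming the DataFrame is called df (the output path below is purely illustrative; Spark 2.4+ also offers df.isEmpty):

// Look at a single row instead of scanning everything with count.
if (df.head(1).nonEmpty) {          // or: !df.rdd.isEmpty()
  df.write.parquet("/tmp/output")   // hypothetical save; replace with your actual sink
}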
I've noticed some strange behaviour in my Scala compiler: it occasionally throws an OutOfMemoryError when compiling a class. Here is the error message:
[info] Compiling 1 Scala source to /Users/gruetter/Workspaces/scala/helloscala/target/scala-2.9.0/test-classes...
java.lang.OutOfMemoryError: PermGen space
Error during sbt execution: java.lang.OutOfMemoryError: PermGen space
It only happens once in a while, and the error is usually not thrown on the subsequent compilation run. I am using Scala 2.9.0 and compiling via SBT.
Does anyone know what might be causing this error? Thanks in advance for your insights.
I built Spark 1.4 from the GitHub development master, and the build went fine. But when I run bin/pyspark I get the Python 2.7.9 version. How can I change this?
I want to create a DataFrame with a specified schema in Scala. I have tried using a JSON read (I mean reading an empty file), but I don't think that is the best practice.
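A sketch of building the empty DataFrame directly, with no file involved, assuming a Spark 1.x SQLContext named sqlContext and a SparkContext named sc (the column names are made up; in Spark 2.x, spark.createDataFrame accepts the same arguments):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical schema with two columns.
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true)
))

// An empty RDD[Row] plus the schema yields an empty, typed DataFrame.
val emptyDF = sqlContext.createDataFrame(sc.emptyRDD[Row], schema)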
I have a Spark application that runs without problems in local mode, but has some problems when submitted to a Spark cluster.
The error messages are as follows:
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, cluster-node-02): java.lang.ExceptionInInitializerError
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at GroupEvolutionES$$anonfun$6.apply(GroupEvolutionES.scala:579)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1595)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
at GroupEvolutionES$.<init>(GroupEvolutionES.scala:37)
at GroupEvolutionES$.<clinit>(GroupEvolutionES.scala)
... 14 more
16/06/24 15:42:06 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 0.0 (TID 5, cluster-node-02): java.lang.NoClassDefFoundError: Could …
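Judging from the "Caused by" section, the SparkContext is built in the object's initializer (GroupEvolutionES$.<clinit>), so when an executor loads the class for the stage-0 closures it tries to construct a second context, this time without any master URL. A hedged sketch of the usual structure, with the job logic elided:

import org.apache.spark.{SparkConf, SparkContext}

object GroupEvolutionES {
  def main(args: Array[String]): Unit = {
    // Create the context inside main(), not as a field of the object,
    // so executors never run this code when they merely load the class.
    // Leave the master unset here and pass it at submit time, e.g.
    //   spark-submit --master <cluster-master-url> --class GroupEvolutionES app.jar
    val conf = new SparkConf().setAppName("GroupEvolutionES")
    val sc = new SparkContext(conf)

    // ... job logic ...

    sc.stop()
  }
}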
I can't get a simple Spark job working in the Scala IDE (a Maven Spark project) on Windows 7. The Spark core dependency has been added:
val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
val sc = new SparkContext(conf)
val logData = sc.textFile("File.txt")
logData.count()
Error:
16/02/26 18:29:33 INFO SparkContext: Created broadcast 0 from textFile at FrameDemo.scala:13
16/02/26 18:29:34 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at …
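The exception comes from Hadoop's Windows shell layer: it looks for winutils.exe under %HADOOP_HOME%\bin, which is unset here (hence "null\bin\winutils.exe"). A hedged sketch of the usual workaround, assuming winutils.exe has been downloaded into C:\hadoop\bin (the path is illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Tell Hadoop where bin\winutils.exe lives before the SparkContext starts.
System.setProperty("hadoop.home.dir", "C:\\hadoop")   // hypothetical location

val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
val sc = new SparkContext(conf)
val logData = sc.textFile("File.txt")
println(logData.count())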
I have read through the documentation for HashPartitioner. Unfortunately, little is explained beyond the API calls. I assume that HashPartitioner partitions the distributed set based on the hash of the keys. For example, if my data is like

(1,1), (1,2), (1,3), (2,1), (2,2), (2,3)
So the partitioner would put these into different partitions, with the same key falling into the same partition. But I don't understand the significance of the constructor argument:
new HashPartitioner(numPartitions) // What does numPartitions do?
For the dataset above, how would the results differ if I did:
new HashPartitioner(1)
new HashPartitioner(2)
new HashPartitioner(10)
So how does HashPartitioner actually work?
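A short sketch of what the argument changes, assuming an existing SparkContext sc: numPartitions is just the number of buckets, and a key is sent to bucket key.hashCode modulo numPartitions (made non-negative), so equal keys always land together while the count only controls how the keys spread out and how many partitions stay empty:

import org.apache.spark.HashPartitioner

val data = sc.parallelize(Seq((1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)))

// The partition for a key is roughly key.hashCode % numPartitions (non-negative).
val byOne = data.partitionBy(new HashPartitioner(1))   // single partition: everything together
val byTwo = data.partitionBy(new HashPartitioner(2))   // key 1 -> partition 1, key 2 -> partition 0
val byTen = data.partitionBy(new HashPartitioner(10))  // keys 1 and 2 use partitions 1 and 2; the other 8 stay empty

// Inspect how many records each partition received:
byTwo.glom().map(_.length).collect()   // e.g. Array(3, 3)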
I'm trying to filter a PySpark DataFrame that has None as a row value:
df.select('dt_mvmt').distinct().collect()
[Row(dt_mvmt=u'2016-03-27'),
Row(dt_mvmt=u'2016-03-28'),
Row(dt_mvmt=u'2016-03-29'),
Row(dt_mvmt=None),
Row(dt_mvmt=u'2016-03-30'),
Row(dt_mvmt=u'2016-03-31')]
I can filter correctly with a string value:
df[df.dt_mvmt == '2016-03-31']
# some results here
But this fails:
df[df.dt_mvmt == None].count()
0
df[df.dt_mvmt != None].count()
0
Run Code Online (Sandbox Code Playgroud)
But there are definitely values in each category. What's going on?
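The == / != comparisons against None become SQL comparisons with NULL, and under SQL's three-valued logic those evaluate to NULL rather than True or False, so both filters drop every row. The usual fix is the null-aware column predicates; sketched in the Scala API for illustration (PySpark columns expose the same isNull() / isNotNull() methods):

import org.apache.spark.sql.functions.col

// Null-aware predicates instead of comparing against null directly.
val missingDt = df.filter(col("dt_mvmt").isNull)     // rows where dt_mvmt is NULL
val presentDt = df.filter(col("dt_mvmt").isNotNull)  // rows that actually have a date
presentDt.count()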
I am working with a DataFrame that has two columns, mvv and count:
+---+-----+
|mvv|count|
+---+-----+
| 1 | 5 |
| 2 | 9 |
| 3 | 3 |
| 4 | 1 |
Run Code Online (Sandbox Code Playgroud)
I would like to obtain two lists containing the mvv values and the count values. Something like:
mvv = [1,2,3,4]
count = [5,9,3,1]
So I tried the following code: the first line should return a Python list of Rows, and I wanted to look at the first value:
mvv_list = mvv_count_df.select('mvv').collect()
firstvalue = mvv_list[0].getInt(0)
But I get an error message on the second line:
AttributeError: getInt
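getInt is a method of the Scala/Java Row, not of PySpark's Row, which is why the attribute lookup fails; in Python you would read row.mvv or row['mvv'] from each collected Row instead. For comparison, a sketch of the same extraction in the Scala API (assuming the columns are integers; use getLong if count is a bigint):

// Collect each column and unwrap the Rows into plain Scala lists.
val mvvList   = mvv_count_df.select("mvv").collect().map(_.getInt(0)).toList    // List(1, 2, 3, 4)
val countList = mvv_count_df.select("count").collect().map(_.getInt(0)).toList  // List(5, 9, 3, 1)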