plt*_*rdy 8 apache-spark apache-spark-sql pyspark pyspark-sql
I am running Spark 1.6 on 3 VMs (i.e. 1x master; 2x slaves), all with 4 cores and 16 GB of RAM.
I can see the workers registered on the spark-master web UI.
I want to retrieve data from my Vertica database in order to process it. Since I did not manage to run complex queries, I tried dummy queries to understand what was going on, which I assumed would be an easy task.
My code is:
df = sqlContext.read.format('jdbc').options(url='xxxx', dbtable='xxx', user='xxxx', password='xxxx').load()
four = df.take(4)
The output is (note: I replaced the slave VM's IP:port with @IPSLAVE):
16/03/08 13:50:41 INFO SparkContext: Starting job: take at <stdin>:1
16/03/08 13:50:41 INFO DAGScheduler: Got job 0 (take at <stdin>:1) with 1 output partitions
16/03/08 13:50:41 INFO DAGScheduler: Final stage: ResultStage 0 (take at <stdin>:1)
16/03/08 13:50:41 INFO DAGScheduler: Parents of final stage: List()
16/03/08 13:50:41 INFO DAGScheduler: Missing parents: List()
16/03/08 13:50:41 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at take at <stdin>:1), which has no missing parents
16/03/08 13:50:41 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 5.4 KB, free 5.4 KB)
16/03/08 13:50:41 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.6 KB, free 7.9 KB)
16/03/08 13:50:41 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on @IPSLAVE (size: 2.6 KB, free: 511.5 MB)
16/03/08 13:50:41 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/03/08 13:50:41 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at take at <stdin>:1)
16/03/08 13:50:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/03/08 13:50:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, @IPSLAVE, partition 0,PROCESS_LOCAL, 1922 bytes)
16/03/08 13:50:41 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on @IPSLAVE (size: 2.6 KB, free: 511.5 MB)
16/03/08 15:02:20 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4299240 ms on @IPSLAVE (1/1)
16/03/08 15:02:20 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/03/08 15:02:20 INFO DAGScheduler: ResultStage 0 (take at <stdin>:1) finished in 4299.248 s
16/03/08 15:02:20 INFO DAGScheduler: Job 0 finished: take at <stdin>:1, took 4299.460581 s
As you can see, it takes a very long time. My table is actually quite big (it stores around 220 million rows of 11 fields each), but such a query would execute instantly with "normal" SQL (e.g. via pyodbc).
I guess I am misunderstanding/missing something about Spark. Would you have any ideas or suggestions to make it work better?
zer*_*323 11
While Spark supports limited predicate pushdown over JDBC, all other operations, such as limit, group, and aggregations, are performed internally. Unfortunately, this means that take(4) fetches the data first and then applies the limit. In other words, your database will execute (assuming no projections or filters) something equivalent to:
SELECT * FROM table
and the rest will be handled by Spark. There are some optimizations involved (in particular, Spark evaluates partitions iteratively to obtain the number of records requested by LIMIT), but it is still quite an inefficient process compared to database-side optimizations.
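One way to see where the limit ends up is to inspect the physical plan with explain(). A minimal sketch, reusing the placeholder connection options from the question (the exact plan text depends on the Spark version):

# Minimal sketch: the JDBC options are placeholders, as in the question.
df = (sqlContext.read.format('jdbc')
      .options(url='xxxx', dbtable='xxx', user='xxxx', password='xxxx')
      .load())

# The printed plan shows a Spark-side Limit on top of a full JDBC scan,
# i.e. the LIMIT is not part of the query sent to the database.
df.limit(4).explain()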
If you want to push the limit down to the database, you have to do it statically, by using a subquery as the dbtable parameter:
(sqlContext.read.format('jdbc')
.options(url='xxxx', dbtable='(SELECT * FROM xxx LIMIT 4) tmp', ....))
sqlContext.read.format("jdbc").options(Map(
"url" -> "xxxx",
"dbtable" -> "(SELECT * FROM xxx LIMIT 4) tmp",
))
Please note that the alias in the subquery is mandatory.
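Putting it together in PySpark, a minimal end-to-end sketch (URL, table name, and credentials are placeholders):

# Minimal sketch: URL, table name, and credentials are placeholders.
df = (sqlContext.read.format('jdbc')
      .options(url='xxxx',
               dbtable='(SELECT * FROM xxx LIMIT 4) tmp',  # the alias "tmp" is required
               user='xxxx',
               password='xxxx')
      .load())

# Only the 4 rows selected by the database-side LIMIT are transferred.
four = df.take(4)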
Note:
This behavior may be improved in the future, once the Data Source API v2 is ready.