I am trying to export data from a Spark dataframe to a .csv file:
df.coalesce(1)\
.write\
.format("com.databricks.spark.csv")\
.option("header", "true")\
.save(output_path)
It creates a file named "part-r-00001-512872f2-9b51-46c5-b0ee-31d626063571.csv".
I would like the file to be named "part-r-00000.csv" or "part-00000.csv" instead.
Since the file is being created on AWS S3, I am limited in how I can use os.system commands.
How can I set the file name while keeping the header in the file?
Thanks!
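Because `df.coalesce(1).write...save(output_path)` produces a directory containing a single `part-*` file, one common workaround is to rename that part file after the write finishes. Below is a minimal sketch for a local filesystem; the `rename_part_file` helper is hypothetical, and on S3 the same idea would use boto3's `copy_object` + `delete_object` (S3 has no in-place rename) rather than `os.rename`:

```python
import glob
import os
import tempfile

def rename_part_file(output_dir, target_name):
    """Find the single part-* file Spark wrote into output_dir and rename it.

    Assumes coalesce(1) was used, so exactly one part file exists.
    The header row travels with the file, so it is preserved.
    """
    part_files = glob.glob(os.path.join(output_dir, "part-*"))
    if len(part_files) != 1:
        raise ValueError("expected exactly one part file, got %d" % len(part_files))
    target = os.path.join(output_dir, target_name)
    os.rename(part_files[0], target)
    return target

# Simulate the directory Spark's writer would produce.
out_dir = tempfile.mkdtemp()
with open(os.path.join(out_dir, "part-r-00001-example.csv"), "w") as f:
    f.write("col1,col2\n1,2\n")

renamed = rename_part_file(out_dir, "part-00000.csv")
print(os.path.basename(renamed))  # part-00000.csv
```

On S3 the equivalent step would copy the object to the desired key and delete the original, since object keys cannot be renamed in place.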
import pygeohash as pgh
pgh.encode(45,55)
'tpzpgxczbzur'
The step above works fine. Below, I am trying to create a dataframe:
from pyspark.sql import Row

l = [(45,25),(75,22),(85,20),(89,26)]
rdd = sc.parallelize(l)
geoCords = rdd.map(lambda x: Row(lat=x[0], long=int(x[1])))
geoCordsSchema = sqlContext.createDataFrame(geoCords)
geoCordsSchema.show()
+---+----+
|lat|long|
+---+----+
| 45| 25|
| 75| 22|
| 85| 20|
| 89| 26|
+---+----+
This successfully creates a Spark dataframe. Now I am encoding with Pygeohash, and it throws the following error:
pgh.encode(geoCordsSchema.lat, geoCordsSchema.long, precision = 7)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/pygeohash/geohash.py", line 96, in encode
if longitude > mid:
File "/usr/local/spark/python/pyspark/sql/column.py", line 427, in __nonzero__
raise ValueError("Cannot convert column into bool: please use …