Tags: apache-spark, parquet, apache-spark-sql
When I save a Spark DataFrame as a Parquet file and then read it back, the rows of the resulting DataFrame come out in a different order than the original, as the session below shows. Is this a "feature" of DataFrames or of Parquet files? What is the best way to save a DataFrame so that row order is preserved?
>>> import numpy as np
>>> import pandas as pd
>>> pdf = pd.DataFrame(np.random.random((10,2)))
>>> pdf
0 1
0 0.191519 0.622109
1 0.437728 0.785359
2 0.779976 0.272593
3 0.276464 0.801872
4 0.958139 0.875933
5 0.357817 0.500995
6 0.683463 0.712702
7 0.370251 0.561196
8 0.503083 0.013768
9 0.772827 0.882641
>>> df = sqlContext.createDataFrame(pdf)
>>> df.show()
+-------------------+--------------------+
| 0| 1|
+-------------------+--------------------+
| 0.1915194503788923| 0.6221087710398319|
| 0.4377277390071145| 0.7853585837137692|
| 0.7799758081188035| 0.2725926052826416|
| 0.2764642551430967| 0.8018721775350193|
| 0.9581393536837052| 0.8759326347420947|
|0.35781726995786667| 0.5009951255234587|
| 0.6834629351721363| 0.7127020269829002|
|0.37025075479039493| 0.5611961860656249|
| 0.5030831653078097|0.013768449590682241|
| 0.772826621612374| 0.8826411906361166|
+-------------------+--------------------+
>>> df.write.parquet('test.parquet')
>>> df2 = sqlContext.read.parquet('test.parquet')
>>> df2.show()
+-------------------+--------------------+
| 0| 1|
+-------------------+--------------------+
| 0.6834629351721363| 0.7127020269829002|
|0.37025075479039493| 0.5611961860656249|
| 0.5030831653078097|0.013768449590682241|
| 0.772826621612374| 0.8826411906361166|
| 0.7799758081188035| 0.2725926052826416|
| 0.2764642551430967| 0.8018721775350193|
| 0.1915194503788923| 0.6221087710398319|
| 0.4377277390071145| 0.7853585837137692|
| 0.9581393536837052| 0.8759326347420947|
|0.35781726995786667| 0.5009951255234587|
+-------------------+--------------------+
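Parquet output is split across partition files that Spark may read back in any order, so row order is not guaranteed across a write/read round trip. A common workaround (a sketch of one option, not the only one) is to attach an explicit index column before writing — e.g. with `pyspark.sql.functions.monotonically_increasing_id()` in Spark — and sort on it after reading. The idea in miniature, with plain Python lists standing in for partition files:

```python
import random

rows = [0.19, 0.44, 0.78, 0.28, 0.96]  # original row order

# Attach an explicit index to each row before "writing"
indexed = list(enumerate(rows))

# Reading the partition files back may return rows in any order
random.shuffle(indexed)

# Sorting on the saved index column restores the original order
restored = [value for _, value in sorted(indexed)]
assert restored == rows
```

In Spark itself this corresponds to `df.withColumn("idx", monotonically_increasing_id())` before `write.parquet(...)`, followed by `orderBy("idx")` after `read.parquet(...)`; note that the sort adds a shuffle, so only pay that cost when order actually matters.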