I use Spark to perform transformations on data that is then loaded into Redshift. Redshift does not support NaN values, so I need to replace every occurrence of NaN with NULL.
I tried something like this:
some_table = sql('SELECT * FROM some_table')
some_table = some_table.na.fill(None)
But I got the following error:
ValueError: value should be a float, int, long, string, bool or dict
So it seems that na.fill() doesn't support None. I specifically need to replace with NULL, not with some other value such as 0.
# Sample DataFrame with a NaN in column "b" and a null in column "a"
df = spark.createDataFrame([(1, float('nan')), (None, 1.0)], ("a", "b"))
df.show()
+----+---+
| a| b|
+----+---+
| 1|NaN|
|null|1.0|
+----+---+
# Replace every NaN with None, which Spark stores as NULL
df = df.replace(float('nan'), None)
df.show()
+----+----+
| a| b|
+----+----+
| 1|null|
|null| 1.0|
+----+----+
You can use the .replace function to change the NaN values to null in a single line of code.
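If only some columns can contain NaN, the same call can be limited with the optional subset argument of DataFrame.replace. A minimal sketch, assuming column "b" is the only one affected:
# Only column "b" is scanned for NaN; the other columns are left untouched
df = df.replace(float('nan'), None, subset=['b'])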
I finally found the answer after Googling around a bit.
df = spark.createDataFrame([(1, float('nan')), (None, 1.0)], ("a", "b"))
df.show()
+----+---+
| a| b|
+----+---+
| 1|NaN|
|null|1.0|
+----+---+
import pyspark.sql.functions as F
# For every column, emit NULL where the value is NaN, otherwise keep the value
columns = df.columns
for column in columns:
    df = df.withColumn(column, F.when(F.isnan(F.col(column)), None).otherwise(F.col(column)))
sqlContext.registerDataFrameAsTable(df, "df2")
sql('select * from df2').show()
+----+----+
| a| b|
+----+----+
| 1|null|
|null| 1.0|
+----+----+
It doesn't use na.fill(), but it accomplished the same result, so I'm happy.
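The column-by-column loop above can also be expressed as a single select, building the same when(isnan(...), NULL) expression for every column in one projection instead of calling withColumn repeatedly. A sketch of that variant (df_clean is just an illustrative name, and it assumes every column is numeric, since isnan is only defined for floating-point values):
import pyspark.sql.functions as F
# Apply the NaN-to-NULL rewrite to every column in a single projection
df_clean = df.select([
    F.when(F.isnan(F.col(c)), None).otherwise(F.col(c)).alias(c)
    for c in df.columns
])
df_clean.show()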