Eda*_*ame 1 scala dataframe apache-spark
I am trying to save my DataFrame to S3 like this:
myDF.write.format("com.databricks.spark.csv").options(codec="org.apache.hadoop.io.compress.GzipCodec").save("s3n://myPath/myData.csv")
Then I got this error:
<console>:132: error: overloaded method value options with alternatives:
(options: java.util.Map[String,String])org.apache.spark.sql.DataFrameWriter <and>
(options: scala.collection.Map[String,String])org.apache.spark.sql.DataFrameWriter
cannot be applied to (codec: String)
Does anyone know what I'm missing? Thanks!
小智 5
Scala is not Python. It has no ** keyword-argument unpacking, so `options(codec = ...)` won't compile. You have to pass a Map:
myDF.write.format("com.databricks.spark.csv")
.options(Map("codec" -> "org.apache.hadoop.io.compress.GzipCodec"))
.save("s3n://myPath/myData.csv")