Tags: scala, resultset, apache-spark, apache-spark-sql
I am querying a MySQL table:
import java.sql.{Connection, DriverManager}

val url = "jdbc:mysql://XXX-XX-XXX-XX-XX.compute-1.amazonaws.com:3306/pg_partner"
val driver = "com.mysql.jdbc.Driver"
val username = "XXX"
val password = "XXX"
Class.forName(driver) // register the JDBC driver
val connection: Connection = DriverManager.getConnection(url, username, password)
val statement = connection.createStatement()
val patnerName = statement.executeQuery("SELECT id, name FROM partner")
I do get the result in patnerName, but I need to convert it to a DataFrame.
I can print the data with the following code:
while (patnerName.next) {
  val id = patnerName.getString("id")
  val name = patnerName.getString("name")
  println("id = %s, name = %s".format(id, name))
}
Now how do I convert patnerName into a DataFrame?
You have to do this in a few steps. Note that a JDBC ResultSet is forward-only, so if you have already iterated over it (as in your while loop) you need to re-run the query before converting it. First, define the columns and the schema:
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val columns = Seq("id", "name")
val schema = StructType(List(
  StructField("id", StringType, nullable = true),
  StructField("name", StringType, nullable = true)
))
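As a side note, if you are on Spark 2.3 or later, the same schema can also be built from a DDL string instead of spelling out each `StructField`; a sketch:

```scala
import org.apache.spark.sql.types.StructType

// Equivalent schema built from a DDL string (Spark 2.3+)
val schema = StructType.fromDDL("id STRING, name STRING")
```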
import java.sql.ResultSet
import org.apache.spark.sql.Row

// Map the current ResultSet row to a Spark Row, one value per column
def parseResultSet(rs: ResultSet): Row = {
  val resultSetRecord = columns.map(c => rs.getString(c))
  Row(resultSetRecord: _*)
}
def resultSetToIter(rs: ResultSet)(f: ResultSet => Row): Iterator[Row] =
  new Iterator[Row] {
    def hasNext: Boolean = rs.next()
    def next(): Row = f(rs)
  }
import org.apache.spark.sql.{DataFrame, SparkSession}

def parallelizeResultSet(rs: ResultSet, spark: SparkSession): DataFrame = {
  val rdd = spark.sparkContext.parallelize(resultSetToIter(rs)(parseResultSet).toSeq)
  spark.createDataFrame(rdd, schema) // use the schema you defined in step 1
}
val df: DataFrame = parallelizeResultSet(patnerName, spark)
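Keep in mind that this approach materializes the whole ResultSet on the driver (via `toSeq`) before parallelizing it. If all you need is the table as a DataFrame, Spark's built-in JDBC data source avoids the manual conversion entirely and can read in parallel; a sketch reusing the connection values from the question:

```scala
// Let Spark read the table directly over JDBC (no manual ResultSet handling)
val df = spark.read
  .format("jdbc")
  .option("url", url)
  .option("driver", driver)
  .option("dbtable", "partner")
  .option("user", username)
  .option("password", password)
  .load()
  .select("id", "name")
```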
Viewed: 6758 times