I am querying information_schema.columns for a table in a PostgreSQL database. Given a table name, the result set should list every column's name, data type, and whether it is nullable, excluding the primary key ('id'). This is the query used:
SELECT column_name, is_nullable, data_type FROM information_schema.columns
WHERE lower(table_name) = lower('TABLE1') AND column_name != 'id'
ORDER BY ordinal_position;
I have a String array for each of these result columns, and I tried to use the ResultSet method getArray(String columnLabel) to avoid looping over the results. I want to store the returned array in a String array, but I get a type mismatch error:
Type mismatch: cannot convert from Array to String[]
Is there a way to convert or cast the SQL Array object to a String[]?
Relevant code:
String[] columnName, type, nullable;
//Get Field Names, Type, & Nullability
String query = "SELECT column_name, is_nullable, data_type FROM information_schema.columns "
+ "WHERE lower(table_name) = lower('"+tableName+"') AND column_name != 'id' "
+ "ORDER BY ordinal_position";
try {
    ResultSet rs = Query.executeQueryWithRS(c, query);
    columnName = rs.getArray("column_name"); // Type mismatch: cannot convert from Array to String[]
    type = rs.getArray("data_type"); …
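For reference: ResultSet.getArray reads a SQL ARRAY value (such as a PostgreSQL text[] column) out of the current row; it does not gather one column across all rows, so it cannot replace the loop here. When a column really does hold a SQL array, java.sql.Array#getArray() returns an Object that can be cast to String[]. For this query, though, the values still have to be collected row by row. A minimal sketch of that loop, written in Scala to match the Spark code later in this post (the open Connection c, the table name, and the method name columnMeta are assumptions):

import java.sql.Connection
import scala.collection.mutable.ArrayBuffer

// Collect the column metadata into plain String arrays by iterating the ResultSet.
def columnMeta(c: Connection, tableName: String): (Array[String], Array[String], Array[String]) = {
  val ps = c.prepareStatement(
    "SELECT column_name, is_nullable, data_type FROM information_schema.columns " +
      "WHERE lower(table_name) = lower(?) AND column_name != 'id' " +
      "ORDER BY ordinal_position")
  ps.setString(1, tableName) // binding a parameter also avoids SQL injection
  val rs = ps.executeQuery()
  val names     = ArrayBuffer.empty[String]
  val nullables = ArrayBuffer.empty[String]
  val types     = ArrayBuffer.empty[String]
  while (rs.next()) {
    names     += rs.getString("column_name")
    nullables += rs.getString("is_nullable")
    types     += rs.getString("data_type")
  }
  rs.close(); ps.close()
  (names.toArray, nullables.toArray, types.toArray) // Array[String] is String[] on the JVM
}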
I have some Spark Scala code that runs without problems in spark-shell. The core of the problem is in these few lines, where I want to add a row to a DataFrame:
import org.apache.spark.sql.SparkSession

object SparkPipeline {
  def main(args: Array[String]) {
    val spark = (SparkSession
      .builder()
      .appName("SparkPipeline")
      .getOrCreate()
    )
    val df = (spark
      .read
      .format("com.databricks.spark.avro")
      .load(DATA_PATH)
    )
    case class DataRow(field1: String, field2: String)
    val row_df = Seq(DataRow("FOO", "BAR")).toDF() // THIS FAILS
    val df_augmented = df.union(row_df)
    //
    // Additional code here
    //
  }
}
However, when I package it into a jar with sbt, the build fails with the following error:
value toDF is not a member of Seq[DataRow]
I tried to follow this question and do:
val spark = (SparkSession
  .builder()
  .appName("TrainSimpleRF")
  .getOrCreate()
)
val sc = spark.sparkContext
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import …
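For what it's worth, this error has a well-known cause that the sqlContext workaround does not address by itself: toDF comes from the implicits attached to the SparkSession (spark-shell imports them automatically, which is why the code works there), and the case class must be declared at top level, outside the method that calls toDF, so that Spark can derive an encoder for it. A sketch of the restructured program under those assumptions:

import org.apache.spark.sql.SparkSession

// Declare the case class at top level: an encoder cannot be derived
// for a class that is local to a method.
case class DataRow(field1: String, field2: String)

object SparkPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SparkPipeline").getOrCreate()

    import spark.implicits._ // brings toDF into scope; spark-shell does this for you

    val row_df = Seq(DataRow("FOO", "BAR")).toDF()
    row_df.show()
  }
}

Note also that df.union(row_df) requires the two DataFrames to have compatible schemas, since union matches columns by position, not by name.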