Tags: java, apache-spark, apache-spark-ml, apache-spark-mllib
I have a DataFrame with a column named `a.b`. When I specify `a.b` as the input column name for a StringIndexer, I get an AnalysisException with the message "cannot resolve 'a.b' given input columns a.b". I'm using Spark 1.6.0.
I'm aware that older versions of Spark had problems with dots in column names, but in more recent versions backticks can be used in the Spark shell and in SQL queries. For instance, that is how another question was resolved: How to escape column names with hyphen in Spark SQL. Some of these issues were reported as SPARK-6898, "Special chars in column names is broken", but that was resolved in 1.4.0.
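To illustrate, here is a minimal sketch of the kind of backtick escaping that does work for ordinary DataFrame operations (Scala, assuming a Spark 1.6 shell where `sc` is already defined; the sample data is made up):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc) // in spark-shell a sqlContext already exists
import sqlContext.implicits._

// A single-column DataFrame whose column name contains a dot
val df = Seq(Tuple1("foo"), Tuple1("bar")).toDF("a.b")

// Backticks let the plain DataFrame API resolve the dotted name
df.select("`a.b`").show()
df.withColumn("a_b", df.col("`a.b`")).show()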
Here's a minimal example and stack trace:
import java.util.Collections;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.feature.StringIndexer;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SparkMLDotColumn {
    public static void main(String[] args) {
        // Get the contexts
        SparkConf conf = new SparkConf()
            .setMaster("local[*]")
            .setAppName("test")
            .set("spark.ui.enabled", "false"); // http://permalink.gmane.org/gmane.comp.lang.scala.spark.user/21385
        JavaSparkContext sparkContext = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sparkContext);

        // Create a schema with a single string column named "a.b"
        StructType schema = new StructType(new StructField[] {
            DataTypes.createStructField("a.b", DataTypes.StringType, false)
        });

        // Create an empty RDD and DataFrame
        JavaRDD<Row> rdd = sparkContext.parallelize(Collections.emptyList());
        DataFrame df = sqlContext.createDataFrame(rdd, schema);

        // Indexing the dotted column name fails with an AnalysisException
        StringIndexer indexer = new StringIndexer()
            .setInputCol("a.b")
            .setOutputCol("a.b_index");
        df = indexer.fit(df).transform(df);
    }
}
Now, it's worth trying the same kind of example with a backticked column name, because we get some strange results. Here's an example with the same schema, but this time there is data in the frame. Before attempting any indexing, we copy the column named a.b to a column named a_b. That requires the use of backticks, and it works without a problem. Then we try to index the a_b column, which also works without a problem. But something very strange happens when we try to index the a.b column using backticks: we get no error, but we also get no result:
// Imports are the same as in the previous example, plus java.util.Arrays,
// java.util.List, and org.apache.spark.sql.RowFactory.
public class SparkMLDotColumn {
    public static void main(String[] args) {
        // Get the contexts
        SparkConf conf = new SparkConf()
            .setMaster("local[*]")
            .setAppName("test")
            .set("spark.ui.enabled", "false");
        JavaSparkContext sparkContext = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sparkContext);

        // Create a schema with a single string column named "a.b"
        StructType schema = new StructType(new StructField[] {
            DataTypes.createStructField("a.b", DataTypes.StringType, false)
        });

        // Create an RDD with some data and build the DataFrame
        List<Row> rows = Arrays.asList(RowFactory.create("foo"), RowFactory.create("bar"));
        JavaRDD<Row> rdd = sparkContext.parallelize(rows);
        DataFrame df = sqlContext.createDataFrame(rdd, schema);

        // Copy the "a.b" column to "a_b"; the backticks work fine here
        df = df.withColumn("a_b", df.col("`a.b`"));

        // Indexing the dot-free copy works without a problem
        StringIndexer indexer0 = new StringIndexer();
        indexer0.setInputCol("a_b");
        indexer0.setOutputCol("a_bIndex");
        df = indexer0.fit(df).transform(df);

        // Indexing "a.b" via backticks produces no error, but also no new column
        StringIndexer indexer1 = new StringIndexer();
        indexer1.setInputCol("`a.b`");
        indexer1.setOutputCol("abIndex");
        df = indexer1.fit(df).transform(df);

        df.show();
    }
}
+---+---+--------+
|a.b|a_b|a_bIndex| // where's the abIndex column?
+---+---+--------+
|foo|foo| 0.0|
|bar|bar| 1.0|
+---+---+--------+
This is the stack trace from the first example, where indexing `a.b` fails:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'a.b' given input columns a.b;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:265)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:305)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:265)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:305)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:316)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:107)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:117)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:121)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:121)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:125)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:125)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:57)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:105)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$withPlan(DataFrame.scala:2165)
at org.apache.spark.sql.DataFrame.select(DataFrame.scala:751)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:84)
at SparkMLDotColumn.main(SparkMLDotColumn.java:38)
I experienced the same issue on Spark 2.1. I ended up creating a function that "validifies" (TM) all the column names by replacing all dots. Scala implementation:
import scala.collection.mutable.ArrayBuffer
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

def validifyColumnnames[T](df: Dataset[T], spark: SparkSession): DataFrame = {
  // Build the cleaned-up names by stripping every dot from the old names
  val newColumnNames = ArrayBuffer[String]()
  for (oldCol <- df.columns) {
    newColumnNames += oldCol.replaceAll("\\.", "") // append
  }
  val newColumnNamesB = spark.sparkContext.broadcast(newColumnNames.toArray)
  // Rename all columns at once
  df.toDF(newColumnNamesB.value: _*)
}
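A hypothetical usage sketch (the SparkSession variable `spark`, the sample data, and the column names are made up for illustration): rename the columns first, then run the StringIndexer on the cleaned name:

import org.apache.spark.ml.feature.StringIndexer

// Assumes a SparkSession named `spark` (e.g. the one provided by spark-shell)
val df = spark.createDataFrame(Seq(Tuple1("foo"), Tuple1("bar"))).toDF("a.b")

// "a.b" becomes "ab", so StringIndexer can resolve it without backticks
val cleaned = validifyColumnnames(df, spark)

val indexer = new StringIndexer()
  .setInputCol("ab")
  .setOutputCol("abIndex")
indexer.fit(cleaned).transform(cleaned).show()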
Sorry, this probably isn't the answer you hoped for, but it was too long for a comment.