Why aren't Scala Symbols accepted as column references?

ari*_*ero 7 scala apache-spark-sql

Trying the Spark SQL examples, they seem to work well until an expression is required:

scala> val teenagers = people.where('age >= 10).where('age <= 19).select('name)
<console>:23: error: value >= is not a member of Symbol
       val teenagers = people.where('age >= 10).where('age <= 19).select('name)

scala> val teenagers = people.select('name)
<console>:23: error: type mismatch;
 found   : Symbol
 required: org.apache.spark.sql.catalyst.expressions.Expression
       val teenagers = people.select('name)

It seems I need an import that isn't documented.

If I bulk-import everything

import org.apache.spark.sql.catalyst.analysis._
import org.apache.spark.sql.catalyst.dsl._
import org.apache.spark.sql.catalyst.errors._
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules._ 
import org.apache.spark.sql.catalyst.types._
import org.apache.spark.sql.catalyst.util._
import org.apache.spark.sql.execution
import org.apache.spark.sql.hive._

Edit: ...and

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

it works.

ser*_*jja 4

You are missing an implicit conversion.
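To see why `import sqlContext._` fixes the error, here is a minimal, self-contained sketch (all names hypothetical, standing in for Catalyst's `Expression` types) of the kind of implicit conversion that import brings into scope: it wraps a `Symbol` in a class that defines comparison operators, so `'age >= 10` compiles.

```scala
object SymbolDsl {
  // Hypothetical expression node, standing in for Catalyst's Expression.
  case class Expr(name: String, op: String, value: Any)

  // The implicit conversion: without it, Symbol has no >= or <= method,
  // which is exactly the "value >= is not a member of Symbol" error.
  implicit class SymbolOps(s: Symbol) {
    def >=(v: Int): Expr = Expr(s.name, ">=", v)
    def <=(v: Int): Expr = Expr(s.name, "<=", v)
  }
}

import SymbolDsl._
val filter = 'age >= 10  // compiles once SymbolOps is in scope
```

The real conversion lives on `SQLContext`, which is why it must be imported from an instance (`import sqlContext._`) rather than from a package.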

val sqlContext: org.apache.spark.sql.SQLContext = ???
import sqlContext._

However, this changed in recent (and supported) versions of Spark.
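For reference, a sketch of the equivalent query in the modern DataFrame API (Spark 1.3+), where columns are referenced with `$"..."` or `col(...)` and the implicits come from a `SparkSession` instance; the input path is just a placeholder:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder()
  .appName("example")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._  // enables the $"colName" syntax

// hypothetical input; any DataFrame with name/age columns works
val people = spark.read.json("people.json")

val teenagers = people
  .where($"age" >= 10 && $"age" <= 19)  // or: col("age") >= 10
  .select("name")
```

With `import spark.implicits._` in scope, the Symbol form `'age` still works as a column reference in Scala 2, but `$"age"` and `col("age")` are the documented styles.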