Related troubleshooting solutions (0)

How do I enable Cartesian joins in Spark 2.0?

I need to cross join two DataFrames in Spark 2.0, and I am running into the following error:

User class threw exception:

org.apache.spark.sql.AnalysisException: Cartesian joins could be prohibitively expensive and are disabled by default. To explicitly enable them, please set spark.sql.crossJoin.enabled = true; 

Please help me figure out where to set this configuration; I am coding in Eclipse.
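
The error message itself names the key: spark.sql.crossJoin.enabled. In Spark 2.0 it can be set when the SparkSession is built, which works the same whether the job is launched from Eclipse or via spark-submit. A minimal sketch, assuming a local master and a placeholder app name:

import org.apache.spark.sql.SparkSession

// Enable Cartesian products before any cross join runs.
// "local[*]" and the app name are assumptions for a run from Eclipse.
val spark = SparkSession.builder()
  .appName("cross-join-example")
  .master("local[*]")
  .config("spark.sql.crossJoin.enabled", "true")
  .getOrCreate()

// With the flag set, an unconditioned join (a Cartesian product)
// no longer raises the AnalysisException.
val left  = spark.range(3).toDF("l")
val right = spark.range(2).toDF("r")
left.join(right).show()  // 3 x 2 = 6 rows

On an already-running session, spark.conf.set("spark.sql.crossJoin.enabled", "true") achieves the same thing, since this is a runtime SQL configuration.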

apache-spark apache-spark-sql spark-dataframe

Score: 1 · Answers: 1 · Views: 9694

"无法找到存储在数据集中的类型的编码器"甚至spark.implicits._是导入的?

Error:(39, 12) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
    dbo.map((r) => ods.map((s) => {

Error:(39, 12) not enough arguments for method map: (implicit evidence$6: org.apache.spark.sql.Encoder[org.apache.spark.sql.Dataset[Int]])org.apache.spark.sql.Dataset[org.apache.spark.sql.Dataset[Int]].
Unspecified value parameter evidence$6.
    dbo.map((r) => ods.map((s) => {
object Main extends App {
  ....

  def compare(sqlContext: org.apache.spark.sql.SQLContext,
              dbo: Dataset[Cols], ods: Dataset[Cols]) = {
    import sqlContext.implicits._ …
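
Both errors have the same cause: the inner ods.map runs inside the outer dbo.map, so the outer lambda returns a Dataset, and the compiler must find an Encoder[Dataset[...]], which Spark does not provide because a Dataset cannot be stored inside another Dataset. A common workaround is to pair the rows with joinWith, which only needs encoders for the two case classes. A minimal sketch, assuming Cols is the question's case class and using a hypothetical id field as the join key:

import org.apache.spark.sql.{Dataset, SparkSession}

case class Cols(id: Int, name: String)  // hypothetical fields for illustration

def compare(spark: SparkSession,
            dbo: Dataset[Cols], ods: Dataset[Cols]): Dataset[(Cols, Cols)] = {
  import spark.implicits._
  // joinWith yields Dataset[(Cols, Cols)]; tuples of case classes
  // are covered by the implicit product encoders, so no encoder error.
  dbo.joinWith(ods, dbo("id") === ods("id"))
}

For an all-pairs comparison the condition can be replaced with lit(true) from org.apache.spark.sql.functions, which again requires spark.sql.crossJoin.enabled, as in the previous question.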

scala apache-spark

Score: 1 · Answers: 1 · Views: 1703