I need to cross join two DataFrames in Spark 2.0, but I am getting the following error:
用户类抛出异常:
org.apache.spark.sql.AnalysisException: Cartesian joins could be prohibitively expensive and are disabled by default. To explicitly enable them, please set spark.sql.crossJoin.enabled = true;
Where should I set this configuration? I am coding in Eclipse.
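The flag can be set when the SparkSession is created (or on an existing session via `spark.conf.set`). A minimal sketch, assuming you build the session yourself inside your Eclipse project; the app name and master are placeholders for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Enable Cartesian products when building the session.
val spark = SparkSession.builder()
  .appName("CrossJoinExample")                    // hypothetical app name
  .master("local[*]")                             // run locally, e.g. from Eclipse
  .config("spark.sql.crossJoin.enabled", "true")  // lifts the AnalysisException
  .getOrCreate()

// With the flag set, a join without a condition yields the Cartesian product.
// It can also be toggled on a live session:
spark.conf.set("spark.sql.crossJoin.enabled", "true")
```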
Error:(39, 12) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
dbo.map((r) => ods.map((s) => {
Error:(39, 12) not enough arguments for method map: (implicit evidence$6: org.apache.spark.sql.Encoder[org.apache.spark.sql.Dataset[Int]])org.apache.spark.sql.Dataset[org.apache.spark.sql.Dataset[Int]].
Unspecified value parameter evidence$6.
dbo.map((r) => ods.map((s) => {
object Main extends App {
  ....
  def compare(sqlContext: org.apache.spark.sql.SQLContext,
              dbo: Dataset[Cols], ods: Dataset[Cols]) = {
    import sqlContext.implicits._ …
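The compile errors arise because `dbo.map((r) => ods.map(...))` tries to build a `Dataset[Dataset[...]]`, and Spark has no `Encoder` for a `Dataset` nested inside another one. Pairing every row of `dbo` with every row of `ods` is instead expressed as a single flat cross join. A minimal sketch, using a hypothetical two-field `Cols` case class in place of the real definition:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.functions.lit

// Hypothetical stand-in for the real Cols case class.
case class Cols(id: Int, name: String)

def compare(spark: SparkSession, dbo: Dataset[Cols], ods: Dataset[Cols]) = {
  import spark.implicits._
  // A typed cross join: every (dbo row, ods row) pair in one Dataset[(Cols, Cols)].
  // On Spark 2.0 this requires spark.sql.crossJoin.enabled=true;
  // Spark 2.1+ also offers dbo.crossJoin(ods) for the untyped DataFrame case.
  dbo.joinWith(ods, lit(true))
}
```

`joinWith` keeps both sides as typed objects, so each pair can then be compared field by field with an ordinary `map` over the tuples.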