ser*_*eda 14 scala apache-spark
I have defined two tables like this:
val tableName = "table1"
val tableName2 = "table2"
val format = new SimpleDateFormat("yyyy-MM-dd")
val data = List(
  List("mike", 26, true),
  List("susan", 26, false),
  List("john", 33, true)
)
val data2 = List(
  List("mike", "grade1", 45, "baseball", new java.sql.Date(format.parse("1957-12-10").getTime)),
  List("john", "grade2", 33, "soccer", new java.sql.Date(format.parse("1978-06-07").getTime)),
  List("john", "grade2", 32, "golf", new java.sql.Date(format.parse("1978-06-07").getTime)),
  List("mike", "grade2", 26, "basketball", new java.sql.Date(format.parse("1978-06-07").getTime)),
  List("lena", "grade2", 23, "baseball", new java.sql.Date(format.parse("1978-06-07").getTime))
)
val rdd = sparkContext.parallelize(data).map(Row.fromSeq(_))
val rdd2 = sparkContext.parallelize(data2).map(Row.fromSeq(_))
val schema = StructType(Array(
  StructField("name", StringType, true),
  StructField("age", IntegerType, true),
  StructField("isBoy", BooleanType, false)
))
val schema2 = StructType(Array(
  StructField("name", StringType, true),
  StructField("grade", StringType, true),
  StructField("howold", IntegerType, true),
  StructField("hobby", StringType, true),
  StructField("birthday", DateType, false)
))
val df = sqlContext.createDataFrame(rdd, schema)
val df2 = sqlContext.createDataFrame(rdd2, schema2)
df.createOrReplaceTempView(tableName)
df2.createOrReplaceTempView(tableName2)
I am trying to build a query that returns the rows of table1 that have no matching row in table2. I tried to do it with this query:
Select * from table1 LEFT JOIN table2 ON table1.name = table2.name AND table1.age = table2.howold AND table2.name IS NULL AND table2.howold IS NULL
But it just gives me all the rows of table1:
List({"name":"john","age":33,"isBoy":true}, {"name":"susan","age":26,"isBoy":false}, {"name":"mike","age":26,"isBoy":true})
How can I do this type of join efficiently in Spark?
I'm looking for an SQL query because I need to be able to specify which columns to compare between the two tables, not just compare row by row as in the other recommended questions (e.g. using subtract, except, etc.).
Tza*_*har 28
您可以使用"左反"连接类型 - 使用DataFrame API或SQL(DataFrame API支持SQL支持的所有内容,包括您需要的任何连接条件):
DataFrame API:
df.as("table1").join(
df2.as("table2"),
$"table1.name" === $"table2.name" && $"table1.age" === $"table2.howold",
"leftanti"
)
SQL:
sqlContext.sql(
"""SELECT table1.* FROM table1
| LEFT ANTI JOIN table2
| ON table1.name = table2.name AND table1.age = table2.howold
""".stripMargin)
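With the sample data from the question, only the "susan" row has no (name, age/howold) match in table2 - mike (26) and john (33) both match - so either form should return a single row. A rough sketch of the expected result (exact show() formatting may differ):
// the same anti join as above, just displayed
df.as("table1").join(
  df2.as("table2"),
  $"table1.name" === $"table2.name" && $"table1.age" === $"table2.howold",
  "leftanti"
).show()
// +-----+---+-----+
// | name|age|isBoy|
// +-----+---+-----+
// |susan| 26|false|
// +-----+---+-----+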
Note: it's also worth mentioning that there is a shorter, more concise way to create the sample data without specifying the schema separately, using tuples and the implicit toDF method, and then "fixing" the automatically inferred schema where needed:
import spark.implicits._

val df = List(
  ("mike", 26, true),
  ("susan", 26, false),
  ("john", 33, true)
).toDF("name", "age", "isBoy")

val df2 = List(
  ("mike", "grade1", 45, "baseball", new java.sql.Date(format.parse("1957-12-10").getTime)),
  ("john", "grade2", 33, "soccer", new java.sql.Date(format.parse("1978-06-07").getTime)),
  ("john", "grade2", 32, "golf", new java.sql.Date(format.parse("1978-06-07").getTime)),
  ("mike", "grade2", 26, "basketball", new java.sql.Date(format.parse("1978-06-07").getTime)),
  ("lena", "grade2", 23, "baseball", new java.sql.Date(format.parse("1978-06-07").getTime))
).toDF("name", "grade", "howold", "hobby", "birthday")
  .withColumn("birthday", $"birthday".cast(DateType))
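One difference to be aware of (an observation, not part of the original answer): schemas inferred via toDF set nullability automatically - primitive columns such as Int and Boolean come out non-nullable, reference types come out nullable, and java.sql.Date is already inferred as DateType. A quick way to check what was inferred:
// inspect the auto-inferred schema (output is what Spark typically prints here)
df.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: integer (nullable = false)
//  |-- isBoy: boolean (nullable = false)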
You can do this with the built-in except function.
(I would have used the code you provided, but you didn't include the imports, so I can't just copy/paste it :( )
val a = sc.parallelize(Seq((1,"a",123),(2,"b",456))).toDF("col1","col2","col3")
val b = sc.parallelize(Seq((4,"a",432),(2,"t",431),(2,"b",456))).toDF("col1","col2","col3")
scala> a.show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 1| a| 123|
| 2| b| 456|
+----+----+----+
scala> b.show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 4| a| 432|
| 2| t| 431|
| 2| b| 456|
+----+----+----+
scala> a.except(b).show()
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 1| a| 123|
+----+----+----+
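Note that except compares entire rows of identically-shaped DataFrames. If, as in the question, you only want to compare specific columns (name and age vs. howold), one option is to project those columns first and then join back to recover the rest - a sketch against the question's df/df2 (the missing variable name is just illustrative):
// (name, age) pairs from table1 with no (name, howold) counterpart in table2
val missing = df.select("name", "age")
  .except(df2.selectExpr("name", "howold as age"))
// join back on the key columns to recover the remaining table1 columns
val result = df.join(missing, Seq("name", "age"))
result.show()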