How can I provide more column conditions when joining two DataFrames? For example, I want to run the following:
val Lead_all = Leads.join(Utm_Master,
Leaddetails.columns("LeadSource","Utm_Source","Utm_Medium","Utm_Campaign") ==
Utm_Master.columns("LeadSource","Utm_Source","Utm_Medium","Utm_Campaign"),
"left")
I want to join only when these columns match. But the syntax above is invalid, because cols only takes a single string. So how do I get what I want?
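In Spark the usual fix is one equality per column, combined with && (or a sequence of column names when both sides use the same names). The shape of that combined condition, reducing a list of per-column equalities into one predicate, can be sketched in plain Python. The toy rows and the lead_id/utm_id fields below are hypothetical, and the hand-rolled loop is a stand-in for Spark's join, not Spark itself:

```python
from functools import reduce

# Toy rows standing in for Leaddetails and Utm_Master (hypothetical data).
leads = [
    {"LeadSource": "web",  "Utm_Source": "g", "Utm_Medium": "cpc",   "Utm_Campaign": "c1", "lead_id": 1},
    {"LeadSource": "mail", "Utm_Source": "n", "Utm_Medium": "email", "Utm_Campaign": "c2", "lead_id": 2},
]
utm = [
    {"LeadSource": "web", "Utm_Source": "g", "Utm_Medium": "cpc", "Utm_Campaign": "c1", "utm_id": 10},
]

keys = ["LeadSource", "Utm_Source", "Utm_Medium", "Utm_Campaign"]

def key_match(left, right, cols):
    # One equality per column, AND-ed together -- the same shape as
    # chaining === conditions with && in a Spark join expression.
    return reduce(lambda acc, c: acc and left[c] == right[c], cols, True)

# Left join: keep every lead, attach matching utm rows where all keys agree.
joined = []
for l in leads:
    matches = [r for r in utm if key_match(l, r, keys)]
    if matches:
        joined.extend({**l, **r} for r in matches)
    else:
        joined.append({**l, "utm_id": None})
```

In Spark itself the equivalent condition would be written by chaining column equalities, e.g. Leads("LeadSource") === Utm_Master("LeadSource") && Leads("Utm_Source") === Utm_Master("Utm_Source") && ..., passed as the join expression with "left".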
I am confused about the difference when we use

df.filter(col("c1") === null) and df.filter(col("c1").isNull)

on the same DataFrame. I get a count with === null, but zero counts with isNull. Please help me understand the difference. Thanks.
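The difference comes from SQL's three-valued logic: comparing anything with null (including null with null) yields null, not true, and filter keeps only rows where the predicate is literally true, while IS NULL is a real boolean predicate. Those semantics can be sketched in plain Python; the helper names here are mine, not Spark's:

```python
def sql_eq(a, b):
    # SQL equality: if either side is NULL the result is unknown (None),
    # never True -- even for NULL = NULL.
    if a is None or b is None:
        return None
    return a == b

def is_null(a):
    # IS NULL is a genuine predicate: it always returns a plain boolean.
    return a is None

c1 = ["x", None, "y", None]

# A filter keeps a row only when the predicate is literally True.
eq_null_rows = [v for v in c1 if sql_eq(v, None) is True]  # nothing survives
is_null_rows = [v for v in c1 if is_null(v)]               # the two nulls
```

This is why one of the two filters returns zero rows: `=== null` can never evaluate to true for any row, whereas `.isNull` selects exactly the null rows.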
I am trying to join 2 DataFrames in pyspark. My problem is that I want my "inner join" to succeed regardless of nulls. I can see that in Scala I have <=> as an alternative; however, <=> does not work in pyspark.
userLeft = sc.parallelize([
    Row(id=u'1',
        first_name=u'Steve',
        last_name=u'Kent',
        email=u's.kent@email.com'),
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace@email.com'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh@email.com')]).toDF()

userRight = sc.parallelize([
    Row(id=u'2',
        first_name=u'Margaret',
        last_name=u'Peace',
        email=u'marge.peace@email.com'),
    Row(id=u'3',
        first_name=None,
        last_name=u'hh',
        email=u'marge.hh@email.com')]).toDF()
Current working version:
userLeft.join(userRight, (userLeft.last_name==userRight.last_name) & (userLeft.first_name==userRight.first_name)).show()
Current result:
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| email|first_name| id|last_name| email|first_name| id|last_name|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
|marge.peace@email...| Margaret| 2| Peace|marge.peace@email...| Margaret| 2| Peace|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
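Since Spark 2.3, pyspark exposes the null-safe comparison as Column.eqNullSafe, the same operation as Scala's <=>, e.g. userLeft.first_name.eqNullSafe(userRight.first_name). Its semantics, where null compared with null is true, can be sketched in plain Python; the data mirrors the question, but the join loop below is a hand-rolled stand-in, not Spark:

```python
def eq_null_safe(a, b):
    # Null-safe equality (<=> / eqNullSafe): two nulls compare equal,
    # a null and a non-null compare unequal.
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

left = [
    {"id": "1", "first_name": "Steve",    "last_name": "Kent"},
    {"id": "2", "first_name": "Margaret", "last_name": "Peace"},
    {"id": "3", "first_name": None,       "last_name": "hh"},
]
right = [
    {"id": "2", "first_name": "Margaret", "last_name": "Peace"},
    {"id": "3", "first_name": None,       "last_name": "hh"},
]

# Inner join on last_name and first_name using null-safe equality:
# the id=3 pair now matches even though first_name is null on both sides.
joined = [
    (l["id"], r["id"])
    for l in left
    for r in right
    if eq_null_safe(l["last_name"], r["last_name"])
    and eq_null_safe(l["first_name"], r["first_name"])
]
```

With ordinary == in place of eq_null_safe, only the id=2 pair would survive, which is exactly the "current result" above.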
Expected result:
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| email|first_name| id|last_name| email|first_name| id|last_name|
+--------------------+----------+---+---------+--------------------+----------+---+---------+
| marge.hh@email.com| null| 3| hh| marge.hh@email.com| null| 3| hh|
|marge.peace@email...| Margaret| 2| …

I have two DataFrames with null values that I am trying to join using PySpark 2.3.0:
dfA:
# +----+----+
# |col1|col2|
# +----+----+
# | a|null|
# | b| 0|
# | c| 0|
# +----+----+
dfB:
# +----+----+----+
# |col1|col2|col3|
# +----+----+----+
# | a|null| x|
# | b| 0| x|
# +----+----+----+
The DataFrames can be created with the following script:
dfA = spark.createDataFrame(
    [
        ('a', None),
        ('b', '0'),
        ('c', '0')
    ],
    ('col1', 'col2')
)

dfB = spark.createDataFrame(
    [
        ('a', None, 'x'),
        ('b', '0', 'x')
    ],
    ('col1', 'col2', 'col3')
)
The join call:
dfA.join(dfB, dfB.columns[:2], how='left').orderBy('col1').show()
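Passing the bare column names (dfB.columns[:2]) makes Spark compare them with ordinary equality, and under SQL semantics null = null is not true, so the ('a', null) rows do not pair up in the join above. Spark's Column.eqNullSafe treats two nulls as equal. The difference can be sketched in plain Python, with a hand-rolled pairing standing in for Spark's join:

```python
dfA = [("a", None), ("b", "0"), ("c", "0")]
dfB = [("a", None, "x"), ("b", "0", "x")]

def plain_eq(a, b):
    # SQL equality: any comparison involving NULL is unknown, never a match.
    return a is not None and b is not None and a == b

def null_safe_eq(a, b):
    # eqNullSafe / <=> semantics: two NULLs are equal. Plain Python's ==
    # already behaves this way for None, so this is just a named wrapper.
    return a == b

matches_plain = [
    (ra, rb) for ra in dfA for rb in dfB
    if plain_eq(ra[0], rb[0]) and plain_eq(ra[1], rb[1])
]
matches_safe = [
    (ra, rb) for ra in dfA for rb in dfB
    if null_safe_eq(ra[0], rb[0]) and null_safe_eq(ra[1], rb[1])
]
```

In pyspark itself the usual fix is to build the condition per column with eqNullSafe, along the lines of reduce(lambda x, y: x & y, [dfA[c].eqNullSafe(dfB[c]) for c in dfB.columns[:2]]) — a sketch only; after the join you would still need to deal with the duplicated column names.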
Result:
# +----+----+----+
# |col1|col2|col3|
# …

How can I compute a join of two DataFrames using multiple columns as the key? For example, DF1 and DF2 are two DataFrames.
This is how we compute the join:
JoinDF = DF1.join(DF2, DF1("column1") === DF2("column11") && DF1("column2") === DF2("column22"), "outer")
But my question is how to access multiple columns if they are stored in arrays like the following:
DF1KeyArray=Array{column1,column2}
DF2KeyArray=Array{column11,column22}
It is then not possible to compute the join this way:
JoinDF = DF1.join(DF2, DF1(DF1KeyArray)=== DF2(DF2KeyArray), "outer")
In this case the error is:
<console>:128: error: type mismatch;
found : Array[String]
required: String
Is there a way to access multiple columns, stored as keys in an array, to compute the join?
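The usual pattern is to zip the two key arrays, turn each pair into one equality, and reduce the resulting list into a single condition with &&. That zip-and-reduce shape can be sketched in plain Python; the toy rows and the v/w fields are hypothetical, and the predicate functions stand in for Spark Column expressions:

```python
from functools import reduce

DF1KeyArray = ["column1", "column2"]
DF2KeyArray = ["column11", "column22"]

df1 = [{"column1": 1, "column2": "a", "v": 100},
       {"column1": 2, "column2": "b", "v": 200}]
df2 = [{"column11": 1, "column22": "a", "w": "x"},
       {"column11": 3, "column22": "c", "w": "y"}]

def join_condition(l, r):
    # Zip the key arrays into pairs, make one equality per pair,
    # then AND them all together -- the same shape as
    #   keys.map { case (k1, k2) => DF1(k1) === DF2(k2) }.reduce(_ && _)
    pairs = zip(DF1KeyArray, DF2KeyArray)
    return reduce(lambda acc, kk: acc and l[kk[0]] == r[kk[1]], pairs, True)

inner = [{**l, **r} for l in df1 for r in df2 if join_condition(l, r)]
```

In Scala itself the analogous condition would look something like DF1KeyArray.zip(DF2KeyArray).map { case (a, b) => DF1(a) === DF2(b) }.reduce(_ && _), which you then pass to DF1.join(DF2, cond, "outer") in place of the Array arguments that caused the type mismatch.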