I want to append a new column to the DataFrame "df" from the function get_distance:
def get_distance(x, y):
    dfDistPerc = hiveContext.sql("select column3 as column3, \
                                  from tab \
                                  where column1 = '" + x + "' \
                                  and column2 = " + y + " \
                                  limit 1")
    result = dfDistPerc.select("column3").take(1)
    return result

df = df.withColumn(
    "distance",
    lit(get_distance(df["column1"], df["column2"]))
)
However, I get this:
TypeError: 'Column' object is not callable
I think this happens because x and y are Column objects and I need to convert them to String to use them in the query. Am I right? If so, how do I do that?
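For context, df["column1"] and df["column2"] are indeed Column objects, and a plain Python function cannot build a per-row SQL string out of them. One possible workaround (a rough sketch of my own, not code from the original post; the table and column names come from the question, the join itself is an assumption) is to load tab as a DataFrame and join it to df on the two key columns:

# Load the lookup table as a DataFrame instead of issuing a query per row
dist = hiveContext.table("tab").select("column1", "column2", "column3")

# Join on the key columns; column3 from tab becomes the new "distance" column
df = (df.join(dist, on=["column1", "column2"], how="left")
        .withColumnRenamed("column3", "distance"))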
Consider the following DataFrame:
#+------+---+
#|letter|rpt|
#+------+---+
#| X| 3|
#| Y| 1|
#| Z| 2|
#+------+---+
It can be created with the following code:
df = spark.createDataFrame([("X", 3),("Y", 1),("Z", 2)], ["letter", "rpt"])
Suppose I wanted to repeat each row the number of times specified in the column rpt, just like in this question.
One way would be to replicate my solution to that question with the following pyspark-sql query:
query = """
SELECT *
FROM
(SELECT DISTINCT *,
posexplode(split(repeat(",", rpt), ",")) AS (index, col)
FROM df) AS a
WHERE index > 0
"""
query = query.replace("\n", " ") # replace newlines with spaces, avoid EOF error
spark.sql(query).drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#| X| 3| 1|
#| …
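As an aside, roughly the same result can be obtained with the DataFrame API instead of a SQL string (a sketch of my own, not part of the answer above); F.expr is used for repeat because the Python repeat helper only accepts a literal count, not a column, in older Spark versions:

from pyspark.sql import functions as F

# repeat ',' rpt times -> split yields rpt+1 empty strings -> posexplode numbers them 0..rpt
df2 = (df.select("letter", "rpt",
                 F.posexplode(F.split(F.expr("repeat(',', rpt)"), ","))
                  .alias("index", "col"))
         .where("index > 0")   # drop the 0th copy so each row appears rpt times
         .drop("col")
         .sort("letter", "index"))
df2.show()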