python apache-spark-sql pyspark
I'm trying to compute the element-wise product of two ArrayType columns in a PySpark DataFrame. I tried the approach below, but I can't seem to get the correct result...
from pyspark.sql import functions as F
data.withColumn("array_product", F.expr("transform(CASUAL_TOPS_SIMILARITY_SCORE, (x, PER_UNA_SIMILARITY_SCORE) -> x * PER_UNA_SIMILARITY_SCORE)"))
Does anyone have a tip on how to get the correct result here? I've attached a test row for the DataFrame below... I need to multiply the column CASUAL_TOPS_SIMILARITY_SCORE element-wise with PER_UNA_SIMILARITY_SCORE.
import json
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local").appName("test").getOrCreate()
js = '{"PER_UNA_SIMILARITY_SCORE":{"category_list":[0.9736891648,0.9242207186,0.9717901106,0.9763716155,0.9440944231,0.9708032326,0.9599383329,0.9705343027,0.804267581,0.9597317177,0.9316773281,0.8076725314,0.9555369889,0.9753550725,0.9811865431,1.0,0.8231541809,0.9738989392,0.9780283991,0.9644088011,0.9798529418,0.9347357116,0.9727502648,0.9778486916,0.8621780792,0.9735844196,0.9582644436,0.9579092722,0.8890027888,0.9394986243,0.9563411605,0.9811867597,0.9738380108,0.9577698381,0.7912932623,0.9778158279]},"CASUAL_TOPS_SIMILARITY_SCORE":{"category_list":[0.7924168764,0.7511316884,0.7925161719,0.8007234107,0.7953468064,0.7882556409,0.7778519374,0.7881058994,1.0,0.7785517364,0.7733458123,0.7426205538,0.7905195275,0.7925983778,0.7983386701,0.804267581,0.6749185095,0.7924821952,0.8016348085,0.7895650508,0.7985721918,0.772656847,0.7897495222,0.7948759958,0.6996340275,0.8024327668,0.7784598142,0.7942396044,0.7159431296,0.7850145414,0.7768001023,0.7983372946,0.7971616495,0.7927845035,0.6462844274,0.799555357]}}'
a_json = json.loads(js)
data = spark.createDataFrame(pd.DataFrame.from_dict(a_json))
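As a quick sanity check, Spark should infer both columns as array&lt;double&gt; from the Python lists in the pandas frame (expected schema shown as comments):

# Expected: one row, two array<double> columns
data.printSchema()
# root
#  |-- PER_UNA_SIMILARITY_SCORE: array (nullable = true)
#  |    |-- element: double (containsNull = true)
#  |-- CASUAL_TOPS_SIMILARITY_SCORE: array (nullable = true)
#  |    |-- element: double (containsNull = true)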
The simplest (though not necessarily the most efficient) approach is a UDF. (Your transform attempt doesn't work because transform iterates over a single array: with a two-argument lambda, the second argument is the element's index, so PER_UNA_SIMILARITY_SCORE inside the lambda shadows the column and takes the values 0, 1, 2, ....)
from pyspark.sql import functions as F, types as T

@F.udf(T.ArrayType(T.FloatType()))
def product(A, B):
    # Element-wise multiply; note the declared FloatType element type
    return [x * y for x, y in zip(A, B)]

data.withColumn(
    "array_product",
    product(F.col("CASUAL_TOPS_SIMILARITY_SCORE"), F.col("PER_UNA_SIMILARITY_SCORE")),
).show()
+----------------------------+------------------------+--------------------+
|CASUAL_TOPS_SIMILARITY_SCORE|PER_UNA_SIMILARITY_SCORE| array_product|
+----------------------------+------------------------+--------------------+
| [0.7924168764, 0....| [0.9736891648, 0....|[0.7715677, 0.694...|
+----------------------------+------------------------+--------------------+
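Note the declared element type: T.FloatType() rounds each product to 32-bit precision. If you'd rather keep doubles, a minimal variant (a sketch; product_double is just an illustrative name) declares DoubleType instead:

@F.udf(T.ArrayType(T.DoubleType()))
def product_double(A, B):
    # Same element-wise multiply, but DoubleType keeps full precision
    return [x * y for x, y in zip(A, B)]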
Edit: as of Spark 2.4, you can also use SQL built-in higher-order functions:
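For example, with zip_with (a sketch, assuming Spark 2.4+; note the built-in keeps the products as doubles):

data.withColumn(
    "array_product",
    F.expr("zip_with(CASUAL_TOPS_SIMILARITY_SCORE, PER_UNA_SIMILARITY_SCORE, (x, y) -> x * y)"),
).show()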
Note: the two approaches return slightly different results, floating-point-wise: the UDF declares ArrayType(FloatType()) and so rounds each product to a 32-bit float, while the SQL built-in keeps the full double precision.