Merging multiple columns into one column in a PySpark dataframe using Python

Shu*_*wal 11 python dataframe pyspark

Using PySpark in Python, I need to merge multiple columns of a dataframe into a single column that holds the values as a list (or tuple).

Input dataframe:

+-------+-------+-------+-------+-------+
| name  |mark1  |mark2  |mark3  | Grade |
+-------+-------+-------+-------+-------+
| Jim   | 20    | 30    | 40    |  "C"  |
+-------+-------+-------+-------+-------+
| Bill  | 30    | 35    | 45    |  "A"  |
+-------+-------+-------+-------+-------+
| Kim   | 25    | 36    | 42    |  "B"  |
+-------+-------+-------+-------+-------+

The output dataframe should be:

+-------+-----------------+
| name  |marks            |
+-------+-----------------+
| Jim   | [20,30,40,"C"]  |
+-------+-----------------+
| Bill  | [30,35,45,"A"]  |
+-------+-----------------+
| Kim   | [25,36,42,"B"]  |
+-------+-----------------+
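For reference, the input dataframe above can be recreated with a short sketch like this (an active SparkSession named spark is assumed; input is the variable name the first answer uses):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample rows copied from the tables above
input = spark.createDataFrame(
    [("Jim", 20, 30, 40, "C"),
     ("Bill", 30, 35, 45, "A"),
     ("Kim", 25, 36, 42, "B")],
    ["name", "mark1", "mark2", "mark3", "Grade"])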

Mic*_*nko 13

Columns can be merged with Spark's array function:

import pyspark.sql.functions as f

columns = [f.col("mark1"), ...] 

output = input.withColumn("marks", f.array(columns)).select("name", "marks")

You may need to cast the entries to a common type for the merge to succeed.
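A complete sketch of that idea, casting every entry to string so that array() receives a single element type (Spark arrays require all elements to share one type; the column names are taken from the question):

import pyspark.sql.functions as f

# Cast each column to string so that array() gets one common element type;
# without the cast, mixing integer and string columns fails.
columns = [f.col(c).cast("string") for c in ["mark1", "mark2", "mark3", "Grade"]]

output = input.withColumn("marks", f.array(columns)).select("name", "marks")
output.show(truncate=False)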


fjc*_*cf1 3

Take a look at this documentation: https://spark.apache.org/docs/2.1.0/ml-features.html#vectorassembler

from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(
    inputCols=["mark1", "mark2", "mark3"],
    outputCol="marks")

output = assembler.transform(dataset)
output.select("name", "marks").show(truncate=False)

  • I also have string columns that need to be merged. For string columns it raises an error saying StringType is not supported: `File "tester.py", line 34, in <module> output = assembler.transform(mydata_df) File "/usr/local/Cellar/apache-spark/2.1.0/libexec/python/pyspark/ml/base.py", line 105, in transform return self._transform(dataset) ... File ".../apache-spark/2.1.0/libexec/python/pyspark/sql/utils.py", line 79, in deco raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace) pyspark.sql.utils.IllegalArgumentException: u'Data type StringType is not supported.'` (2 upvotes)
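As the traceback says, VectorAssembler only accepts numeric, boolean, and vector input columns, so it cannot assemble the Grade column directly. A minimal sketch of one common workaround, encoding the string column numerically with StringIndexer before assembling (the GradeIndex column name is made up for this example, and dataset is the dataframe from the answer above):

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler

# Encode the string column "Grade" as a numeric index, then assemble
# all four columns into a single vector column.
indexer = StringIndexer(inputCol="Grade", outputCol="GradeIndex")
assembler = VectorAssembler(
    inputCols=["mark1", "mark2", "mark3", "GradeIndex"],
    outputCol="marks")

pipeline = Pipeline(stages=[indexer, assembler])
output = pipeline.fit(dataset).transform(dataset)
output.select("name", "marks").show(truncate=False)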