Convert a string into separate lines, and then into a PySpark DataFrame

Abd*_*eeb -2 apache-spark pyspark

I have a string in which each line is separated by \n: the column names, then the first row, then the second row. I have tried several approaches but could not find a proper way to do this.

For example:

"Name,ID,Number\n abc,1,123 \n xyz,2,456"

I want to convert it into a PySpark DataFrame like this:

Name     ID   Number
abc      1      123
xyz      2      456


Man*_*ish 5

You can try this:

from pyspark.sql.functions import *
from pyspark.sql.types import *
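# assumes an active SparkSession is available as `spark`
# (e.g. in the pyspark shell or a notebook environment)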

data = spark.sql("""select 'Name,ID,Number\n abc,1,123 \n xyz,2,456' as col1""")

data.show(20,False)
# +-------------------------------------+
# |col1                                 |
# +-------------------------------------+
# |Name,ID,Number
#  abc,1,123 
#  xyz,2,456|
# +-------------------------------------+
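# data is a single-column DataFrame whose only value is the whole raw string (col1)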
data.createOrReplaceTempView("data")
data = spark.sql("""
select posexplode(split(col1,'\n'))
from data
""")
data.show(20,False)
# +---+--------------+
# |pos|col           |
# +---+--------------+
# |0  |Name,ID,Number|
# |1  | abc,1,123    |
# |2  | xyz,2,456    |
# +---+--------------+
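# split(col1, '\n') breaks the string into lines, and posexplode returns
# one row per line together with its position (pos)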

columnList = data.select('col').first()[0].split(",")
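# columnList is now ['Name', 'ID', 'Number'], i.e. the header row used as output column names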
data.createOrReplaceTempView("data")

query = ""
for i,e in enumerate(columnList):
  query += "trim(split(col , ',')[{1}]) as {0}".format(e,i) if i == 0 else ",trim(split(col , ',')[{1}]) as {0}".format(e,i)
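# query now holds the SELECT list built from the header row:
# trim(split(col , ',')[0]) as Name,trim(split(col , ',')[1]) as ID,trim(split(col , ',')[2]) as Number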

finalData = spark.sql("""
SELECT {0}
FROM data
where pos > 0
""".format(query))
finalData.show()

# +----+---+------+
# |Name| ID|Number|
# +----+---+------+
# | abc|  1|   123|
# | xyz|  2|   456|
# +----+---+------+
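If the raw text is already available as a plain Python string on the driver, a shorter alternative is to split it in Python and call spark.createDataFrame directly. This is a minimal sketch under that assumption; the variable name raw_string is made up, and all columns come out as strings:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# hypothetical variable holding the raw text from the question
raw_string = "Name,ID,Number\n abc,1,123 \n xyz,2,456"

# first line is the header, the rest are data rows; trim stray spaces around fields
lines = [line.strip() for line in raw_string.split("\n")]
header = lines[0].split(",")
rows = [[field.strip() for field in line.split(",")] for line in lines[1:]]

df = spark.createDataFrame(rows, schema=header)
df.show()
# +----+---+------+
# |Name| ID|Number|
# +----+---+------+
# | abc|  1|   123|
# | xyz|  2|   456|
# +----+---+------+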