Splitting the contents of a String column in a PySpark DataFrame

Har*_*pta 1 apache-spark pyspark spark-dataframe apache-spark-mllib

I have a PySpark DataFrame with a column that contains strings, and I want to split that column into separate words.

Code:

>>> sentenceData = sqlContext.read.load('file://sample1.csv', format='com.databricks.spark.csv', header='true', inferSchema='true')
>>> sentenceData.show(truncate=False)
+---+---------------------------+
|key|desc                       |
+---+---------------------------+
|1  |Virat is good batsman      |
|2  |sachin was good            |
|3  |but modi sucks big big time|
|4  |I love the formulas        |
+---+---------------------------+


Expected Output
---------------

>>> sentenceData.show(truncate=False)
+---+-------------------------------------+
|key|desc                                 |
+---+-------------------------------------+
|1  |[Virat,is,good,batsman]              |
|2  |[sachin,was,good]                    |
|3  |....                                 |
|4  |...                                  |
+---+-------------------------------------+

How can I achieve this?

小智 11

Use the split function:

from pyspark.sql.functions import split

# split the "desc" string column on one or more whitespace characters
df = df.withColumn("desc", split("desc", r"\s+"))
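Applied to the sentenceData DataFrame from the question, this turns desc into an array<string> column. A minimal sketch of the end-to-end usage (the exact output formatting is approximate):

>>> from pyspark.sql.functions import split
>>> sentenceData = sentenceData.withColumn("desc", split("desc", r"\s+"))
>>> sentenceData.show(2, truncate=False)
+---+--------------------------+
|key|desc                      |
+---+--------------------------+
|1  |[Virat, is, good, batsman]|
|2  |[sachin, was, good]       |
+---+--------------------------+
only showing top 2 rows

Since the question is also tagged apache-spark-mllib: if this is preprocessing for an ML pipeline, pyspark.ml.feature.Tokenizer (or RegexTokenizer) produces the same kind of word array in a separate output column, with the caveat that Tokenizer also lowercases the text. A sketch, run against the original (string-typed) desc column:

>>> from pyspark.ml.feature import Tokenizer
>>> tokenizer = Tokenizer(inputCol="desc", outputCol="words")
>>> wordsData = tokenizer.transform(sentenceData)  # desc must still be a string column here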