I have another question related to the split function. I am new to Spark/Scala.
Below is a sample dataframe -
+-------------------+---------+
| VALUES|Delimiter|
+-------------------+---------+
| 50000.0#0#0#| #|
| 0@1000.0@| @|
| 1$| $|
|1000.00^Test_string| ^|
+-------------------+---------+
I want the output to be -
+-------------------+---------+----------------------+
|VALUES |Delimiter|split_values |
+-------------------+---------+----------------------+
|50000.0#0#0# |# |[50000.0, 0, 0, ] |
|0@1000.0@ |@ |[0, 1000.0, ] |
|1$ |$ |[1, ] |
|1000.00^Test_string|^ |[1000.00, Test_string]|
+-------------------+---------+----------------------+
I tried splitting it manually -
dept.select(split(col("VALUES"), "#|@|\\$|\\^")).show()
The output is -
+-----------------------+
|split(VALUES,#|@|\$|\^)|
+-----------------------+
| [50000.0, 0, 0, ]|
| [0, 1000.0, ]|
| [1, ]|
| [1000.00, Test_st...|
+-----------------------+
But I want to pick up the delimiter automatically for a large dataset.
You need to use expr with split() to make the split dynamic:
from pyspark.sql import functions as F
df = spark.createDataFrame([("50000.0#0#0#", "#"), ("0@1000.0@", "@")], ["VALUES", "Delimiter"])
# expr() evaluates a SQL expression, so the second argument of split()
# can reference the Delimiter column instead of a literal pattern.
df = df.withColumn("split", F.expr("split(VALUES, Delimiter)"))
df.show()
+------------+---------+-----------------+
| VALUES|Delimiter| split|
+------------+---------+-----------------+
|50000.0#0#0#| #|[50000.0, 0, 0, ]|
| 0@1000.0@| @| [0, 1000.0, ]|
+------------+---------+-----------------+
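If you are on the Scala API, the same dynamic split can be written with expr; a minimal sketch, assuming dept is the dataframe from the question:
import org.apache.spark.sql.functions.expr
// expr() lets the split pattern come from the Delimiter column
// rather than a hard-coded literal.
val result = dept.withColumn("split_values", expr("split(VALUES, Delimiter)"))
result.show(false)
One caveat: split() treats its second argument as a Java regular expression, so delimiters that are regex metacharacters (the $ and ^ rows in the sample data) will not split as intended. One way around this, sketched here and untested, is to wrap the delimiter in the regex literal quotes \Q...\E:
val quoted = dept.withColumn("split_values",
  expr("""split(VALUES, concat('\\Q', Delimiter, '\\E'))"""))
quoted.show(false)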