Spark - Remove special characters from rows of a DataFrame with different column types

Alg*_*g_D 1 regex scala dataframe apache-spark rdd

Suppose I have a DataFrame with many columns, some of type string, some of type int, and some of type map.

For example, the field/column types are: stringType | intType | mapType<string,int> | ...

|------------------|-------|---------------------------------------------------------|---
| myString1        | myInt1| myMap1                                                  |...
|------------------|-------|---------------------------------------------------------|---
|"this_is_#string" |  123  | {"str11_in#map":1, "str21_in#map":2, "str31_in#map":31} |...
|"this_is_#string" |  456  | {"str12_in#map":1, "str22_in#map":2, "str32_in#map":32} |...
|"this_is_#string" |  789  | {"str13_in#map":1, "str23_in#map":2, "str33_in#map":33} |...
|------------------|-------|---------------------------------------------------------|---

I want to remove certain characters, such as '_' and '#', from all columns of String and Map type, so the resulting DataFrame/RDD would be:

|---------------|-------|---------------------------------------------------|---
| myString1     | myInt1| myMap1                                            |...
|---------------|-------|---------------------------------------------------|---
|"thisisstring" |  123  | {"str11inmap":1, "str21inmap":2, "str31inmap":31} |...
|"thisisstring" |  456  | {"str12inmap":1, "str22inmap":2, "str32inmap":32} |...
|"thisisstring" |  789  | {"str13inmap":1, "str23inmap":2, "str33inmap":33} |...
|---------------|-------|---------------------------------------------------|---

I am not sure whether it is better to convert the DataFrame to an RDD and work with that, or to do the work directly on the DataFrame.

Also, I am not sure how best to handle the regex across the different column types (I am using Scala). I would like to apply this to all columns of those two types (string and map), ideally without listing the column names explicitly, as in:

def cleanRows(mytabledata: DataFrame): RDD[String] = {

  // this handles one specific string-type column (myString1)
  val oneColumn_clean = mytabledata.withColumn("myString1", regexp_replace(col("myString1"), "[_#]", ""))

  ...
  // return type can be RDD or DataFrame...
}

Is there a simple way to do this? Thanks!

Psi*_*dom 7

One option is to define two udfs, one to handle string-type columns and one to handle map-type columns:

import org.apache.spark.sql.functions.udf

// toDF assumes spark.implicits._ is in scope (it is by default in spark-shell)
val df = Seq(("this_is#string", 3, Map("str1_in#map" -> 3))).toDF("myString", "myInt", "myMap")
df.show
+--------------+-----+--------------------+
|      myString|myInt|               myMap|
+--------------+-----+--------------------+
|this_is#string|    3|Map(str1_in#map -...|
+--------------+-----+--------------------+

1) A udf to handle string-type columns:

def remove_string: String => String = _.replaceAll("[_#]", "")
def remove_string_udf = udf(remove_string)

2) A udf to handle map-type columns:

def remove_map: Map[String, Int] => Map[String, Int] = _.map{ case (k, v) => k.replaceAll("[_#]", "") -> v }
def remove_map_udf = udf(remove_map)
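Both udfs rely on the same `[_#]` character class; in plain Scala (no Spark required) the two transformations behave as follows:

```scala
// Remove '_' and '#' from a string: the [_#] character class matches either one.
val cleanedString = "this_is#string".replaceAll("[_#]", "")
// cleanedString == "thisisstring"

// Remove '_' and '#' from every key of a map, keeping the values unchanged.
val cleanedMap = Map("str1_in#map" -> 3).map { case (k, v) => k.replaceAll("[_#]", "") -> v }
// cleanedMap == Map("str1inmap" -> 3)
```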

3) Apply the udfs to the corresponding columns to clean them up:

df.withColumn("myString", remove_string_udf($"myString")).
   withColumn("myMap", remove_map_udf($"myMap")).show

+------------+-----+-------------------+
|    myString|myInt|              myMap|
+------------+-----+-------------------+
|thisisstring|    3|Map(str1inmap -> 3)|
+------------+-----+-------------------+
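To avoid hard-coding column names, as the question asks, one possible extension (a sketch, not part of the original answer) is to fold over the schema and apply the matching udf to every String or Map[String,Int] column; `cleanAll` is a hypothetical helper that reuses the `remove_string_udf` and `remove_map_udf` defined above:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{IntegerType, MapType, StringType}

// Walk the schema once and rewrite each column whose type matches,
// leaving all other columns (e.g. myInt) untouched.
def cleanAll(df: DataFrame): DataFrame =
  df.schema.fields.foldLeft(df) { (acc, field) =>
    field.dataType match {
      case StringType =>
        acc.withColumn(field.name, remove_string_udf(acc(field.name)))
      case MapType(StringType, IntegerType, _) =>
        acc.withColumn(field.name, remove_map_udf(acc(field.name)))
      case _ => acc
    }
  }
```

Because the fold inspects `df.schema`, the same call works on any DataFrame regardless of how many string or map columns it has.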