Given a DataFrame, is it possible in PySpark to filter out some keys from a map column `collection` (MapType(StringType, StringType, True)) while keeping the schema intact?
root
|-- id: string (nullable = true)
|-- collection: map (nullable = true)
| |-- key: string
| |-- value: string
Yes, it is possible. Create a UDF responsible for filtering keys out of the map, then apply it with a withColumn transformation to replace the collection field with the filtered result.
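To make the examples below reproducible, a DataFrame matching the schema from the question can be built like this (a minimal sketch; the sample keys k1..k3 and values are assumed for illustration):

from pyspark.sql import SparkSession
from pyspark.sql.types import MapType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# Schema matching the one shown in the question
schema = StructType([
    StructField("id", StringType(), True),
    StructField("collection", MapType(StringType(), StringType(), True), True),
])

# One sample row with a map of three keys
df = spark.createDataFrame(
    [("1", {"k1": "v1", "k2": "v2", "k3": "v3"})],
    schema,
)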
An example implementation in Scala:
import org.apache.spark.sql.functions.{lit, udf}

// Start from implementing a method in Scala responsible for filtering keys from a Map
def filterKeys(collection: Map[String, String], keys: Seq[String]): Map[String, String] =
  collection.filter { case (k, _) => !keys.contains(k) }

// Create a Spark UDF based on the above function
// (array columns are passed to Scala UDFs as Seq, so Seq[String] is used here)
val filterKeysUdf = udf((collection: Map[String, String], keys: Seq[String]) => filterKeys(collection, keys))

// Use the above UDF to drop the keys from the collection column
val newDf = df.withColumn("collection", filterKeysUdf(df("collection"), lit(Array("k1"))))
And the equivalent implementation in Python:
from pyspark.sql.functions import array, lit, udf
from pyspark.sql.types import MapType, StringType

# Start from implementing a method in Python responsible for filtering keys from a dict
def filterKeys(collection, keys):
    return {k: collection[k] for k in collection if k not in keys}

# Create a Spark UDF based on the above function, declaring the return type
# so the map schema is preserved
filterKeysUdf = udf(filterKeys, MapType(StringType(), StringType()))

# Create an array literal based on a Python list
keywords_lit = array(*[lit(k) for k in ["k1", "k2"]])

# Use the above UDF to drop the keys from the collection column
newDf = df.withColumn("collection", filterKeysUdf(df.collection, keywords_lit))
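With the sample DataFrame built earlier, the result can be inspected to confirm that the keys are gone while the schema is unchanged:

newDf.printSchema()          # collection is still map<string,string>
newDf.show(truncate=False)   # k1 and k2 no longer appear in the map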
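As a side note, on recent Spark versions a UDF is not strictly necessary: the built-in higher-order function map_filter (available in the Python API since Spark 3.1) can drop keys directly and avoids UDF serialization overhead. A minimal sketch, assuming Spark >= 3.1:

from pyspark.sql.functions import map_filter

# Keep only the entries whose key is not in the blocklist
newDf = df.withColumn(
    "collection",
    map_filter("collection", lambda k, v: ~k.isin("k1", "k2")),
)

Since map_filter preserves the map's key and value types, this also keeps the schema intact.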