How can I use a broadcast collection inside a Spark SQL 1.6.1 UDF? The UDF should be called from the main SQL statement, like this:
sqlContext.sql("""Select col1,col2,udf_1(key) as value_from_udf FROM table_a""")
udf_1() should look the key up in a small broadcast collection and return the value to the main SQL.
Here is a minimal reproducible PySpark example of performing a lookup with a broadcast variable, using a lambda function as the UDF inside the SQL statement.
# Create dummy data and register as table
df = sc.parallelize([
    (1, "a"),
    (2, "b"),
    (3, "c")]).toDF(["num", "let"])
df.registerTempTable('table')
# Create broadcast variable from local dictionary
myDict = {1: "y", 2: "x", 3: "z"}
broadcastVar = sc.broadcast(myDict)
# Alternatively, if your dict is a key-value rdd,
# you can do sc.broadcast(rddDict.collectAsMap())
# Create lookup function and apply it
sqlContext.registerFunction("lookup", lambda x: broadcastVar.value.get(x))
sqlContext.sql('select num, let, lookup(num) as test from table').show()
+---+---+----+
|num|let|test|
+---+---+----+
| 1| a| y|
| 2| b| x|
| 3| c| z|
+---+---+----+
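Stripped of Spark, the per-row logic of the registered UDF is just a dict lookup: each executor holds a read-only copy of the broadcast dictionary, and the lambda calls .get on it (returning NULL/None on a miss). A plain-Python sketch of that behavior, using the same dummy data as above:

```python
# Stand-in for the broadcast dict; in Spark this would be broadcastVar.value
myDict = {1: "y", 2: "x", 3: "z"}

def lookup(x):
    # Mirrors the registered UDF: lambda x: broadcastVar.value.get(x)
    # .get returns None for missing keys, which Spark renders as NULL
    return myDict.get(x)

# Same rows as the dummy DataFrame
rows = [(1, "a"), (2, "b"), (3, "c")]
result = [(num, let, lookup(num)) for num, let in rows]
print(result)  # [(1, 'a', 'y'), (2, 'b', 'x'), (3, 'c', 'z')]
```

Because .get (rather than indexing with []) is used, keys absent from the broadcast dictionary produce NULL in the result column instead of raising a KeyError on the executors.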