If your use case is that you are transforming standalone arrays (rather than arrays stored as a column of a table), then a combination of explode, lower, and collect_list should do the trick. For example (please forgive the horrible execution time, I'm running on an underpowered VM):
hive> SELECT collect_list(lower(val))
> FROM (SELECT explode(array('AN', 'EXAMPLE', 'ARRAY')) AS val) t;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 4 seconds 10 msec
Ended Job = job_1422453239049_0017
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.01 sec HDFS Read: 283 HDFS Write: 17 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 10 msec
OK
["an","example","array"]
Time taken: 33.05 seconds, Fetched: 1 row(s)
(Note: replace array('AN', 'EXAMPLE', 'ARRAY') in the query above with whatever expression you are using to generate your array.)
If instead your use case is that your arrays are stored in a column of a Hive table and you need to apply a lowercase transformation to them, then as far as I know you have two main options:
Approach #1: Use a combination of explode and LATERAL VIEW to break the arrays apart into separate rows, use lower to transform the individual elements, and then use collect_list to glue them back together. A simple example with some silly made-up data:
hive> DESCRIBE foo;
OK
id int
data array<string>
Time taken: 0.774 seconds, Fetched: 2 row(s)
hive> SELECT * FROM foo;
OK
1001 ["ONE","TWO","THREE"]
1002 ["FOUR","FIVE","SIX","SEVEN"]
Time taken: 0.434 seconds, Fetched: 2 row(s)
hive> SELECT
> id, collect_list(lower(exploded))
> FROM
> foo LATERAL VIEW explode(data) exploded_table AS exploded
> GROUP BY id;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 3 seconds 310 msec
Ended Job = job_1422453239049_0014
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 3.31 sec HDFS Read: 358 HDFS Write: 44 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 310 msec
OK
1001 ["one","two","three"]
1002 ["four","five","six","seven"]
Time taken: 34.268 seconds, Fetched: 2 row(s)
Approach #2: Write a simple UDF to apply the transformation. Something like:
package my.package_name;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class LowerArray extends UDF {
  public List<Text> evaluate(List<Text> input) {
    List<Text> output = new ArrayList<Text>();
    for (Text element : input) {
      output.add(new Text(element.toString().toLowerCase()));
    }
    return output;
  }
}
Then invoke the UDF directly on the data:
hive> ADD JAR my_jar.jar;
Added my_jar.jar to class path
Added resource: my_jar.jar
hive> CREATE TEMPORARY FUNCTION lower_array AS 'my.package_name.LowerArray';
OK
Time taken: 2.803 seconds
hive> SELECT id, lower_array(data) FROM foo;
...
... Lots of MapReduce spam
...
MapReduce Total cumulative CPU time: 2 seconds 760 msec
Ended Job = job_1422453239049_0015
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 2.76 sec HDFS Read: 358 HDFS Write: 44 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 760 msec
OK
1001 ["one","two","three"]
1002 ["four","five","six","seven"]
Time taken: 27.243 seconds, Fetched: 2 row(s)
There are some trade-offs between the two approaches. #2 will probably be more efficient at runtime than #1, since the GROUP BY clause in #1 forces a reduce stage while the UDF approach does not. On the other hand, #1 does everything in HiveQL and is easier to generalize (you can swap lower out for other kinds of string transformations in the query if you need to). With the UDF approach of #2, you would likely have to write a new UDF for each different kind of transformation you want to apply.
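To see why #2 tends to require a new UDF per transformation, here is a plain-Java sketch of the per-element loop from LowerArray, with the Hive/Hadoop wiring stripped out and the transformation passed in as a function. The class and method names are illustrative only, not part of any Hive API; an actual generic Hive UDF would need the GenericUDF machinery instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class TransformArray {
    // Same loop shape as LowerArray.evaluate, but with the per-element
    // transformation supplied by the caller instead of hard-coded.
    static List<String> transform(List<String> input, Function<String, String> f) {
        List<String> output = new ArrayList<>();
        for (String element : input) {
            output.add(f.apply(element));
        }
        return output;
    }

    public static void main(String[] args) {
        List<String> data = List.of("ONE", "TWO", "THREE");
        System.out.println(transform(data, String::toLowerCase)); // [one, two, three]
        System.out.println(transform(data, String::trim));        // other transforms drop in the same way
    }
}
```

In plain Java the transformation is just a parameter; the Hive UDF interface gives you no such hook, which is why each new string transformation in approach #2 means another evaluate method compiled into another UDF class.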