Group by a generated column

dai*_*isy 3 sql hadoop hive

I'm trying to group data by minute, so I tried this query:

SELECT FROM_UNIXTIME(
     unix_timestamp (time, 'yyyy-mm-dd hh:mm:ss'), 'yyyy-mm-dd hh:mm') as ts,
     count (*) as cnt 
     from toucher group by ts limit 10;

Then Hive told me there is no such column:

FAILED: SemanticException [Error 10004]: Line 1:134 Invalid table alias or column reference 'ts': (possible column names are: time, ip, username, code)

Doesn't Hive support this?

And*_*lev 6

Hive does not let you reference a SELECT alias in GROUP BY (the alias is not yet defined when grouping happens), so repeat the expression:

SELECT FROM_UNIXTIME(unix_timestamp(time, 'yyyy-MM-dd HH:mm:ss'), 'yyyy-MM-dd HH:mm') as ts,
     count(*) as cnt 
from toucher 
group by FROM_UNIXTIME(unix_timestamp(time, 'yyyy-MM-dd HH:mm:ss'), 'yyyy-MM-dd HH:mm') limit 10;

Note that Hive uses Java's SimpleDateFormat patterns, where MM is month, HH is the 24-hour clock, and mm is minutes; the original pattern 'yyyy-mm-dd hh:mm:ss' would misparse the timestamp.

Or better, compute the alias in a subquery and group on it in the outer query:

select t.ts, count(*) as cnt
from (
    SELECT FROM_UNIXTIME(unix_timestamp(time, 'yyyy-MM-dd HH:mm:ss'), 'yyyy-MM-dd HH:mm') as ts
    from toucher
) t
group by t.ts limit 10;
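The transformation the query performs, truncating each timestamp to the minute and counting rows per minute, can be sketched in Python. The sample rows below are hypothetical, not from the original table:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample values for the "time" column (assumed data).
rows = [
    "2013-05-01 12:00:01",
    "2013-05-01 12:00:45",
    "2013-05-01 12:01:10",
]

def to_minute(ts):
    # Mirrors FROM_UNIXTIME(unix_timestamp(time, 'yyyy-MM-dd HH:mm:ss'),
    # 'yyyy-MM-dd HH:mm'): parse the full timestamp, keep only the minute.
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%Y-%m-%d %H:%M")

# Equivalent of GROUP BY ts with count(*).
cnt = Counter(to_minute(r) for r in rows)
print(dict(cnt))  # {'2013-05-01 12:00': 2, '2013-05-01 12:01': 1}
```

The subquery form in the answer does exactly this: the inner SELECT produces the truncated key, and the outer GROUP BY aggregates over it.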