Cannot select the top 10 records in Spark SQL

Tej*_*iya 3 sql apache-spark-sql

Hi, I am new to Spark SQL. I have a DataFrame like this:

 +------+----------+-------+-----+------+----+
 |tag id| timestamp|listner|orgid|org2id|RSSI|
 +------+----------+-------+-----+------+----+
 |     4|1496745912|    362|    4|     3|0.60|
 |     4|1496745924|   1901|    4|     3|0.60|
 |     4|1496746030|   1901|    4|     3|0.60|
 |     4|1496746110|    718|    4|     3|0.30|
 |     2|1496746128|    718|    4|     3|0.60|
 |     2|1496746188|   1901|    4|     3|0.10|
 +------+----------+-------+-----+------+----+
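For context, the queries below assume this DataFrame is registered as a temp table named avg_table. A minimal sketch of that setup (the case class and variable names are only illustrative; the columns and the table name match the question), runnable in a Spark 1.x spark-shell where `sc` and `sqlContext` already exist:

  // Illustrative setup only: column names follow the table above,
  // writing "tag id" as tagid to match the SQL used later.
  case class Reading(tagid: Int, timestamp: Long, listner: Int,
                     orgid: Int, org2id: Int, rssi: Double)

  import sqlContext.implicits._

  val df = Seq(
    Reading(4, 1496745912L,  362, 4, 3, 0.60),
    Reading(4, 1496745924L, 1901, 4, 3, 0.60),
    Reading(2, 1496746188L, 1901, 4, 3, 0.10)
  ).toDF()

  df.registerTempTable("avg_table")   // makes it queryable via sqlContext.sql(...)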

I want to select the top 10 timestamp values for each listner in Spark SQL.

I tried the following queries; the first throws an error, and the second only limits the overall result to 10 records:

  val avg = sqlContext.sql("select top 10 * from avg_table") // throws error.

  val avg = sqlContext.sql("select rssi,timestamp,tagid from avg_table order by desc limit 10")  // it prints only 10 records.

For each listner I need to get its top 10 timestamp values. Any help would be appreciated.

Gor*_*off 8

Doesn't this work?

select rssi, timestamp, tagid
from avg_table
order by timestamp desc
limit 10;
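From Scala this can be issued the same way as in the question (the variable name is arbitrary):

  val top10 = sqlContext.sql(
    "select rssi, timestamp, tagid from avg_table order by timestamp desc limit 10")
  top10.show()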

Edit:

Oh, I see. You want row_number():

select rssi, timestamp, tagid
from (select a.*,
             row_number() over (partition by listner order by timestamp desc) as seqnum
      from avg_table
     ) a
where seqnum <= 10
order by a.timestamp desc;
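If you would rather stay in the DataFrame API, the same top-10-per-listner logic can be written with a window function. This is a minimal sketch, assuming Spark 1.6+ (where functions.row_number is available) and the df from the setup sketch above; note that on Spark 1.x window functions may require a HiveContext:

  import org.apache.spark.sql.expressions.Window
  import org.apache.spark.sql.functions.row_number

  // Number rows within each listner partition, newest timestamp first,
  // then keep only the first 10 rows of every partition.
  // The $-syntax needs sqlContext.implicits._ (imported in the setup sketch above).
  val w = Window.partitionBy("listner").orderBy($"timestamp".desc)

  val top10PerListner = df
    .withColumn("seqnum", row_number().over(w))
    .where($"seqnum" <= 10)
    .drop("seqnum")

  top10PerListner.show()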