reduceByKeyAndWindow in Spark 2 DataFrames

Mar*_*iak 5 java scala apache-spark spark-dataframe apache-spark-2.0

In Spark 1.6, with a StreamingContext, I could use the reduceByKeyAndWindow function:

        // DStream API: for every ID, keep the event with the highest
        // quality seen within each sliding window
        events
            .mapToPair(x -> new Tuple2<String, MyPojo>(x.getID(), x))
            .reduceByKeyAndWindow((a, b) ->
                     a.getQuality() > b.getQuality() ? a : b
                , Durations.seconds(properties.getWindowLenght()),
                  Durations.seconds(properties.getSlidingWindow()))
            .map(y -> y._2);
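For context, the snippet above presumably runs against a JavaStreamingContext along these lines (a sketch only; the original post does not show how events is created, and the queue source here is just a stand-in):

        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("reduce-by-key-and-window");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));
        Queue<JavaRDD<MyPojo>> queue = new LinkedList<>();   // stand-in source
        JavaDStream<MyPojo> events = ssc.queueStream(queue); // e.g. Kafka in practice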

Now I am trying to reproduce this logic with Spark 2.0.2 and DataFrames. I was able to reproduce the missing reduceByKey function, but without the window:

        // Typed Dataset API: the same "best event per ID" reduction, but with
        // no window. The casts disambiguate Spark's Java/Scala lambda overloads.
        events
            .groupByKey((MapFunction<MyPojo, String>) x -> x.getID(), Encoders.STRING())
            .reduceGroups((ReduceFunction<MyPojo>) (a, b) -> a.getQuality() >= b.getQuality() ? a : b)
            .map((MapFunction<Tuple2<String, MyPojo>, MyPojo>) x -> x._2, Encoders.bean(MyPojo.class));
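For reference, MyPojo is presumably a Serializable Java bean along these lines; the real class has about 15 fields, but these are the accessors the snippets rely on:

        public class MyPojo implements java.io.Serializable {
            private String id;
            private double quality;
            private java.sql.Timestamp timeStamp;

            public String getID() { return id; }
            public void setID(String id) { this.id = id; }
            public double getQuality() { return quality; }
            public void setQuality(double quality) { this.quality = quality; }
            public java.sql.Timestamp getTimeStamp() { return timeStamp; }
            public void setTimeStamp(java.sql.Timestamp timeStamp) { this.timeStamp = timeStamp; }
        }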

I was able to implement the window with groupBy:

        // Untyped API: windowed aggregation, then a join back onto the
        // original stream to recover the non-aggregated columns
        events
            .groupBy(functions.window(col("timeStamp"), "10 minutes", "5 minutes"), col("id"))
            .max("quality")
            .join(events, "id");

When I use groupBy I only have two of my 15 columns, so I tried to join the original stream back in to recover them, but then I get the exception: join between two streaming DataFrames/Datasets is not supported;

Is there a way for me to reproduce the reduceByKeyAndWindow logic in Spark 2?
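One possible direction (a sketch, not from the original post): because max over a struct compares its fields left to right, putting quality first makes the aggregation return the whole row with the highest quality per window and id, so there is no second streaming Dataset to join. This assumes Spark 2.1+ for withWatermark, and the columns after quality stand in for the remaining fields:

        import static org.apache.spark.sql.functions.*;

        Dataset<Row> best = events
            .withWatermark("timeStamp", "10 minutes")
            .groupBy(window(col("timeStamp"), "10 minutes", "5 minutes"), col("id"))
            // "argmax" trick: the struct is ordered by quality first, so max
            // keeps the entire best row, not just the best quality value
            .agg(max(struct(col("quality"), col("timeStamp") /* , remaining columns */)).as("best"))
            .select("id", "best.*");

In append output mode, the result for a window would be emitted once the watermark passes the end of that window.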