Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;\nJoin Inner

Eri*_*let 6 python apache-spark spark-structured-streaming

I want to join two streams, but I get the following error and I don't know how to fix it:

Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;\nJoin Inner

# Imports used throughout the snippets below
from pyspark.sql.functions import col, expr, max, unix_timestamp
from pyspark.sql.types import TimestampType

df_stream = spark.readStream \
    .schema(schema_clicks) \
    .option("ignoreChanges", True) \
    .option("header", True) \
    .format("csv") \
    .load("s3://mybucket/*.csv")
display(df_stream.select("SendID", "EventType", "EventDate"))

[screenshot of the displayed SendID / EventType / EventDate columns]

I want to join df1 with df2:

df1 = df_stream \
              .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
              .select(col("SendID"), col("timestamp"), col("EventType")) \
              .withColumnRenamed("SendID", "SendID_update") \
              .withColumnRenamed("timestamp", "timestamp_update") \
              .withWatermark("timestamp_update", "1 minutes")

df2 = df_stream \
              .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
              .withWatermark("timestamp", "1 minutes") \
              .groupBy(col("SendID")) \
              .agg(max(col('timestamp')).alias("timestamp")) \
              .orderBy('timestamp', ascending=False)

join = df2.alias("A").join(df1.alias("B"),  expr(
      "A.SendID = B.SendID_update" +
        " AND " +
        "B.timestamp_update >= A.timestamp " +
        " AND " +
        "B.timestamp_update <= A.timestamp + interval 1 hour"))

Finally, when I write the result in append mode:

join \
.writeStream \
.outputMode("Append") \
.option("checkpointLocation", "s3://checkpointjoin_delta")  \
.format("delta")  \
.table("test_join")

I get the error quoted above. The full traceback:

AnalysisException                          Traceback (most recent call last)
<command> in <module>()
----> 1 join.writeStream.outputMode("Append").option("checkpointLocation", "s3://checkpointjoin_delta").format("delta").table("test_join")

/databricks/spark/python/pyspark/sql/streaming.py in table(self, tableName)
   1137         """
   1138         if isinstance(tableName, basestring):
-> 1139             return self._sq(self._jwrite.table(tableName))
   1140         else:
   1141             raise TypeError("tableName can be only a single string")

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258
   1259         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                 e.java_exception.getStackTrace()))

Eri*_*let 13

The problem was the .groupBy: the watermarked timestamp column has to be added to the grouping key. For example:

# Grouping on the watermarked "timestamp" column as well lets append mode
# finalize and emit each group once the watermark moves past it.
df2 = df_stream \
              .withColumn('timestamp', unix_timestamp(col('EventDate'), "MM/dd/yyyy hh:mm:ss aa").cast(TimestampType())) \
              .withWatermark("timestamp", "1 minutes") \
              .groupBy(col("SendID"), "timestamp") \
              .agg(max(col('timestamp')).alias("timestamp")) \
              .orderBy('timestamp', ascending=False)
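
For completeness, a minimal sketch (not verified end to end) of how the corrected df2 plugs back into the join and the append-mode write from the question, reusing the same join condition, checkpoint path, and table name:

# Rebuild the join against the corrected df2; the condition is unchanged.
join = df2.alias("A").join(df1.alias("B"), expr(
    "A.SendID = B.SendID_update"
    " AND B.timestamp_update >= A.timestamp"
    " AND B.timestamp_update <= A.timestamp + interval 1 hour"))

# With watermarks on both inputs and the event-time column in the grouping
# key, append mode can emit finalized rows to the Delta table.
join \
    .writeStream \
    .outputMode("append") \
    .option("checkpointLocation", "s3://checkpointjoin_delta") \
    .format("delta") \
    .table("test_join")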