My scenario
Problem:
inData = spark.readStream.format("eventhubs").load()
udfData = inData.select(from_json(myudf("column"), schema).alias("result")).select("result.*")
filter1 = udfData.filter("column == 'filter1'")
filter2 = udfData.filter("column == 'filter2'")
# write filter1 to two different sinks
filter1.writeStream.format("delta").start(table1)
filter1.writeStream.format("eventhubs").start()
# write filter2 to two different sinks
filter2.writeStream.format("delta").start(table2)
filter2.writeStream.format("eventhubs").start()
I am trying to drop the Delta Lake table that was created with writeStream. My attempt to drop the table fails:
# table created as
df.writeStream.outputMode("append").format("delta").start("/mnt/mytable")
# attempt to drop the table (fails)
spark.sql("drop table '/mnt/mytable'")
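One likely reason the DROP TABLE attempt fails: a table created with `.start("/mnt/mytable")` is path-based (unmanaged) and typically has no metastore entry, so there is nothing for `DROP TABLE` to drop, and a single-quoted path is not a valid table identifier anyway. Removing such a table generally means stopping the streaming query and then deleting the table directory itself. Below is a minimal local sketch of that idea using the standard library; the path and the `_delta_log` folder are stand-ins for the real mounted Delta directory, not the actual environment:

```python
import os
import shutil
import tempfile

# Stand-in for the path-based Delta table directory (e.g. /mnt/mytable).
# A real Delta table keeps its transaction log under _delta_log.
table_path = os.path.join(tempfile.mkdtemp(), "mytable")
os.makedirs(os.path.join(table_path, "_delta_log"))

def drop_path_based_table(path):
    """'Drop' a path-based table by deleting its directory outright.
    In a real job, stop the writeStream query first so nothing is
    still appending to the directory while it is being removed."""
    shutil.rmtree(path)

drop_path_based_table(table_path)
print(os.path.exists(table_path))  # → False: the table directory is gone
```

On Databricks the same deletion would typically be done with `dbutils.fs.rm(path, True)` against the mount point rather than `shutil`.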