Databricks 6.1: "There is no database named global_temp" error when initializing the metastore connection


When the Hive metastore connection is initialized (the first time a DataFrame is saved as a table) on a 6.1 cluster (includes Apache Spark 2.4.4, Scala 2.11) on Azure, I can see the health check for the global_temp database failing with this error:

20/02/18 12:11:17 INFO HiveUtils: Initializing HiveMetastoreConnection version 0.13.0 using file:
...
20/02/18 12:11:21 INFO HiveMetaStore: 0: get_database: global_temp
20/02/18 12:11:21 INFO audit: ugi=root  ip=unknown-ip-addr  cmd=get_database: global_temp   
20/02/18 12:11:21 ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named global_temp)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:487)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:498)
...
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:430)
...
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)
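For context, nothing exotic is needed to trigger this: it happens on the first write to a managed table on a fresh cluster. A minimal sketch (the DataFrame contents and table name are illustrative; spark is the session Databricks provides in a notebook):

df = spark.createDataFrame([(1, "a")], ["id", "value"])
# The first saveAsTable call initializes the Hive metastore connection,
# which runs the failing "get_database: global_temp" health check.
df.write.mode("overwrite").saveAsTable("example_table")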

This does not cause the Python script to fail, but it pollutes the logs.

Shouldn't the global_temp database be created automatically? Can the check be turned off, or the error suppressed?
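If suppression is the only option, I assume something like the following would work (an untested sketch, assuming the log4j backend that Spark 2.4 ships with, reached through the py4j gateway; the logger name is taken from the stack trace above):

log4j = spark._jvm.org.apache.log4j
# Drop everything below FATAL from the metastore retry handler,
# the class that logs the NoSuchObjectException shown above.
log4j.LogManager.getLogger(
    "org.apache.hadoop.hive.metastore.RetryingHMSHandler"
).setLevel(log4j.Level.FATAL)

That would hide the noisy stack trace without changing the global log level, but it feels like a workaround rather than a fix.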