I am trying to figure out why I get the following error when using the Databricks Jobs API.
{ "error_code": "INVALID_PARAMETER_VALUE", "message": "集群验证错误:缺少必填字段:settings.cluster_spec.new_cluster.size" }
What I did (the request body I am sending):
{
  "new_cluster": {
    "spark_version": "7.5.x-scala2.12",
    "spark_conf": {
      "spark.master": "local[*]",
      "spark.databricks.cluster.profile": "singleNode"
    },
    "azure_attributes": {
      "availability": "ON_DEMAND_AZURE",
      "first_on_demand": 1,
      "spot_bid_max_price": -1
    },
    "node_type_id": "Standard_DS3_v2",
    "driver_node_type_id": "Standard_DS3_v2",
    "custom_tags": {
      "ResourceClass": "SingleNode"
    },
    "enable_elastic_disk": true
  },
  "libraries": [
    {
      "pypi": {
        "package": "koalas==1.5.0"
      }
    }
  ],
  "notebook_task": {
    "notebook_path": "/pathtoNotebook/TheNotebook",
    "base_parameters": {
      "param1": "test"
    }
  },
  "email_notifications": {},
  "name": " jobName",
  "max_concurrent_runs": 1
}
The API documentation is no help (I can't find anything about settings.cluster_spec.new_cluster.size). The JSON was copied from the UI, so I assume it should be correct.
Thanks for your help.
Answer (from 小智, 7 votes):
Source: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/clusters#--create
To create a single node cluster, include the spark_conf and custom_tags entries shown in the example, and set num_workers to 0.
{
  "cluster_name": "single-node-cluster",
  "spark_version": "7.6.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 0,
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "local[*]"
  },
  "custom_tags": {
    "ResourceClass": "SingleNode"
  }
}
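The same idea should carry over to the Jobs API request in the question: the missing "size" is supplied by declaring how many workers the cluster has. Below is a minimal sketch (not a verified, complete payload) of the question's new_cluster block with "num_workers": 0 added for a single-node cluster; the other fields from the original request (azure_attributes, libraries, notebook_task, etc.) are assumed to stay unchanged and are omitted here for brevity.

{
  "new_cluster": {
    "spark_version": "7.5.x-scala2.12",
    "num_workers": 0,
    "spark_conf": {
      "spark.master": "local[*]",
      "spark.databricks.cluster.profile": "singleNode"
    },
    "custom_tags": {
      "ResourceClass": "SingleNode"
    },
    "node_type_id": "Standard_DS3_v2"
  }
}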