Sør*_*sen · 3 · Tags: grafana, kubernetes, prometheus, azure-aks
Using Helm, I installed Prometheus on my Kubernetes cluster with the community chart kube-prometheus-stack, and I get some nice dashboards in the bundled Grafana instance. I now want the Vertical Pod Autoscaler's recommender to use Prometheus as a data source for historical metrics, as described here. This means I had to change the Prometheus scrape settings for cAdvisor, and this answer pointed me in the right direction: after making the change, I can now see the correct job label on cAdvisor's metrics.
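For reference, the recommender is then supposed to be started with Prometheus as its history provider, roughly like this (a sketch based on the VPA FAQ linked above; the Prometheus URL is a placeholder and depends on the installation):

# Excerpt of the vpa-recommender Deployment (flag names per the VPA FAQ;
# the Prometheus address below is a placeholder for your own service):
spec:
  containers:
  - name: recommender
    args:
    - --storage=prometheus
    - --prometheus-address=http://prometheus-operated.monitoring.svc.cluster.local:9090
    - --prometheus-cadvisor-job-name=kubernetes-cadvisor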
Unfortunately, some of the charts in the Grafana dashboards are now broken. They no longer pick up the CPU metrics; instead, the CPU-related charts only show "No data".
So I assume I have to adjust the charts to pick up the metrics correctly again, but I don't see any obvious place in Grafana to do that?
Not sure whether it is relevant to the problem, but I am running my Kubernetes cluster on Azure Kubernetes Service (AKS).
Here is the complete values.yaml I passed to the Helm chart when installing Prometheus:
kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
kubeEtcd:
  enabled: false
kubeProxy:
  enabled: false
kubelet:
  serviceMonitor:
    # Disables the normal cAdvisor scraping, as we add it with the job name "kubernetes-cadvisor" under additionalScrapeConfigs
    # The reason for doing this is to enable the VPA to use the metrics for the recommender
    # https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender
    cAdvisor: false
prometheus:
  prometheusSpec:
    retention: 15d
    storageSpec:
      volumeClaimTemplate:
        spec:
          # the azurefile storage class is created automatically on AKS
          storageClassName: azurefile
          accessModes: ["ReadWriteMany"]
          resources:
            requests:
              storage: 50Gi
    additionalScrapeConfigs:
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      metrics_path: /metrics/cadvisor
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
Kubernetes version: 1.21.2
kube-prometheus-stack version: 18.1.1
Helm version: version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
小智 · 7
Unfortunately, I don't have access to Azure AKS, so I reproduced the issue on my GKE cluster. Below I'll provide some explanations that may help you resolve your problem.
First, you can try executing the node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate rule to see whether it returns any results:
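For example, a quick check against the Prometheus HTTP API (a sketch; it assumes you have port-forwarded your Prometheus service to localhost:9090, and jq is only used to count the returned series):

# Forward the Prometheus API to localhost first, e.g.:
#   kubectl port-forward svc/<your-prometheus-service> 9090
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate' \
  | jq '.data.result | length'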
If it doesn't return any records, read the following paragraphs.
Rather than creating a completely new scrape configuration for cAdvisor, I suggest using the one generated by default with kubelet.serviceMonitor.cAdvisor: true, but with a few modifications, such as changing the label to job=kubernetes-cadvisor.
In my example, the 'kubernetes-cadvisor' scrape configuration looks like this:
NOTE: I added this configuration under additionalScrapeConfigs in my values.yaml (the rest of the values.yaml may look similar to yours).
- job_name: 'kubernetes-cadvisor'
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics/cadvisor
  scheme: https
  authorization:
    type: Bearer
    credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  follow_redirects: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: kubelet
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_k8s_app]
    separator: ;
    regex: kubelet
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: https-metrics
    action: replace
  - source_labels: [__metrics_path__]
    separator: ;
    regex: (.*)
    target_label: metrics_path
    replacement: $1
    action: replace
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    modulus: 1
    target_label: __tmp_hash
    replacement: $1
    action: hashmod
  - source_labels: [__tmp_hash]
    separator: ;
    regex: "0"
    replacement: $1
    action: keep
  kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    namespaces:
      names:
      - kube-system
By default, the Prometheus rules that fetch data from cAdvisor use job="kubelet" in their PromQL expressions:
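For comparison, the corresponding default k8s.rules recording rule generated by the chart uses the same expression as shown below, only with the default job label:

sum by (cluster, namespace, pod, container) (
  irate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
  1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
)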
Since we changed job=kubelet to job=kubernetes-cadvisor, we also need to modify this label in the Prometheus rules:
NOTE: we only need to modify the rules that contain metrics_path="/metrics/cadvisor" (these are the rules that retrieve data from cAdvisor).
$ kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml
...
  - name: k8s.rules
    rules:
    - expr: |-
        sum by (cluster, namespace, pod, container) (
          irate(container_cpu_usage_seconds_total{job="kubernetes-cadvisor", metrics_path="/metrics/cadvisor", image!=""}[5m])
        ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
          1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
        )
      record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
...
here we have a few more rules to modify...
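To locate every rule that needs this change, you can, for example, grep the generated PrometheusRule objects (a sketch; the rule name above comes from my release and will differ in yours):

# Find rule expressions that scrape cAdvisor but still use the old job label:
kubectl get prometheusrules -A -o yaml | grep 'metrics_path="/metrics/cadvisor"' | grep 'job="kubelet"'

# Then update each affected rule, e.g.:
kubectl edit prometheusrules prom-1-kube-prometheus-sta-k8s.rules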
After modifying the Prometheus rules, wait a while and check whether everything works as expected. We can try executing node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate again, just like at the beginning.
Additionally, let's check Grafana to make sure it has started displaying our dashboards correctly again.
