Prometheus node exporter on Kubernetes

Asked by roy*_*roy · Tags: kubernetes, prometheus, amazon-eks, prometheus-node-exporter

I have deployed Prometheus on a Kubernetes cluster (EKS). I am able to scrape prometheus and traefik successfully with the following:

scrape_configs:
  # A scrape configuration containing exactly one endpoint to scrape:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus.kube-monitoring.svc.cluster.local:9090']

  - job_name: 'traefik'
    static_configs:
      - targets: ['traefik.kube-system.svc.cluster.local:8080']

However, the node exporter deployed as a DaemonSet with the following definition does not expose node metrics:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      name: node-exporter
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.18.1
        args:
        - "--path.procfs=/host/proc"
        - "--path.sysfs=/host/sys"
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
        resources:
          requests:
            memory: 30Mi
            cpu: 100m
          limits:
            memory: 50Mi
            cpu: 200m
        volumeMounts:
        - name: proc
          readOnly: true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
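
Before wiring up the scrape job, it is worth confirming that the DaemonSet placed a pod on every node and that each pod reports its node's IP (a consequence of hostNetwork: true). A quick check, assuming the manifest above was applied unchanged:

kubectl -n kube-monitoring get daemonset node-exporter
kubectl -n kube-monitoring get pods -l app=node-exporter -o wide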

together with the following scrape_configs in Prometheus:

scrape_configs:
  - job_name: 'kubernetes-nodes'
    scheme: http
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.kube-monitoring.svc.cluster.local:9100
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics 
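
For reference, the pattern commonly used for a hostNetwork node exporter avoids the API server proxy altogether: the relabeling above points __address__ at a service name that does not serve the proxy API, and /api/v1/nodes/<node>/proxy/metrics returns kubelet metrics rather than node exporter metrics. Since the DaemonSet binds hostPort 9100 on every node, Prometheus can scrape the discovered node addresses directly. A minimal sketch (the job name is arbitrary; Prometheus must be able to reach the nodes on port 9100, which on EKS may require a security group rule):

scrape_configs:
  - job_name: 'node-exporter'
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      # node discovery yields the kubelet address <InternalIP>:10250;
      # rewrite the port to the node exporter's hostPort
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__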

I also tried curl http://localhost:9100/metrics from one of the containers, but got curl: (7) Failed to connect to localhost port 9100: Connection refused.
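
This is expected: with hostNetwork: true the exporter listens in the node's network namespace, so localhost:9100 resolves to it only on the node itself or inside a pod that also uses hostNetwork. From any other pod, curl the node's internal IP instead; a quick check (the IP below is a placeholder, taken from the first command's output):

kubectl get nodes -o wide                      # note a node's INTERNAL-IP
curl http://<node-internal-ip>:9100/metrics    # run from inside the cluster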

What configuration am I missing here?

Following the suggestion to install Prometheus via helm, I installed it on a test cluster and compared my original configuration with the helm-installed Prometheus.

The following pods are running:

NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0   2/2     Running   0          4m33s
prometheus-grafana-66c7bcbf4b-mh42x                      2/2     Running   0          4m38s
prometheus-kube-state-metrics-7fbb4697c-kcskq            1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-6bf9f                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-gbrzr                1/1     Running   0          4m38s
prometheus-prometheus-node-exporter-j6l9h                1/1     Running   0          4m38s
prometheus-prometheus-oper-operator-648f9ddc47-rxszj     1/1     Running   0          4m38s
prometheus-prometheus-prometheus-oper-prometheus-0       3/3     Running   0          4m23s

In the pod prometheus-prometheus-prometheus-oper-prometheus-0 I did not find any node exporter configuration in /etc/prometheus/prometheus.yml.
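
That is expected for an operator-managed Prometheus: the operator generates the scrape configuration from ServiceMonitor objects and mounts it from a Secret, so node exporter jobs never appear in a hand-edited prometheus.yml. A way to inspect the generated configuration (the secret name follows the prometheus-<Prometheus-CR-name> convention and matches the pod list above; the namespace is a placeholder, and on older operator versions the key may be prometheus.yaml without compression):

kubectl -n <release-namespace> get secret prometheus-prometheus-prometheus-oper-prometheus \
  -o jsonpath='{.data.prometheus\.yaml\.gz}' | base64 -d | gunzip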

Answered by ale*_*r96:

How did you deploy Prometheus? Whenever I use the helm chart (https://github.com/helm/charts/tree/master/stable/prometheus), the node exporter is deployed as part of it. Perhaps that would be the simpler solution.
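
For example, with the Helm 2 syntax current at the time (release name and namespace are examples), the chart brings node-exporter along as a subchart:

helm repo update
helm install stable/prometheus --name prometheus --namespace kube-monitoring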

  • My advice: use it. Doing by hand everything that helm does for you is no fun; trust me, you don't want to maintain multiple standard application deployments yourself. (4 upvotes)