Prometheus Exporter - Direct Instrumentation vs. Custom Collector

Luc*_*asi · 2 · go, prometheus

I am currently writing a Prometheus exporter for a telemetry network application.

I have read the Writing Exporters documentation here, and while I understand the use case for implementing a custom collector to avoid race conditions, I am not sure whether my use case fits direct instrumentation.

Basically, the network metrics are streamed over gRPC by the network devices, so my exporter simply receives them rather than having to scrape them.

I used direct instrumentation with the following code:

  • I declare my metrics with the promauto package to keep the code compact:
package metrics

import (
    "github.com/lucabrasi83/prom-high-obs/proto/telemetry"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    cpu5Sec = promauto.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
            Help: "The IOSd daemon CPU busy percentage over the last 5 seconds",
        },
        []string{"node"},
    )
)
  • Below is how I simply set the metric value from the message decoded from the gRPC protocol buffer:
cpu5Sec.WithLabelValues(msg.GetNodeIdStr()).Set(float64(val))
  • Finally, here is my main loop, which essentially handles the telemetry gRPC stream for the metrics I am interested in:
for {
    req, err := stream.Recv()
    if err == io.EOF {
        return nil
    }
    if err != nil {
        logging.PeppaMonLog(
            "error",
            fmt.Sprintf("Error while reading client %v stream: %v", clientIPSocket, err))

        return err
    }

    data := req.GetData()

    msg := &telemetry.Telemetry{}

    err = proto.Unmarshal(data, msg)
    if err != nil {
        log.Fatalln(err)
    }

    if !logFlag {
        logging.PeppaMonLog(
            "info",
            fmt.Sprintf(
                "Telemetry Subscription Request Received - Client %v - Node %v - YANG Model Path %v",
                clientIPSocket, msg.GetNodeIdStr(), msg.GetEncodingPath(),
            ),
        )
    }
    logFlag = true

    // Flag to determine whether the telemetry device streams an accepted YANG node path
    yangPathSupported := false

    for _, m := range metrics.CiscoMetricRegistrar {
        if msg.EncodingPath == m.EncodingPath {
            yangPathSupported = true
            go m.RecordMetricFunc(msg)
        }
    }
}
  • For each metric I am interested in, I register it with a record-metric function (m.RecordMetricFunc) that takes the protocol buffer message as an argument, as shown below.
package metrics

import "github.com/lucabrasi83/prom-high-obs/proto/telemetry"

var CiscoMetricRegistrar []CiscoTelemetryMetric

type CiscoTelemetryMetric struct {
    EncodingPath     string
    RecordMetricFunc func(msg *telemetry.Telemetry)
}

  • Then I do the actual registration in an init function:


func init() {
    CiscoMetricRegistrar = append(CiscoMetricRegistrar, CiscoTelemetryMetric{
        EncodingPath:     CpuYANGEncodingPath,
        RecordMetricFunc: ParsePBMsgCpuBusyPercent,
    })
}
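The registration-and-dispatch pattern above can be sketched end to end with the standard library only. This is a minimal, self-contained sketch: Telemetry is a simplified stand-in for the generated protobuf type, and the encoding-path constant is a placeholder value, not the real YANG path.

```go
package main

import "fmt"

// Telemetry is a simplified stand-in for the generated protobuf message type.
type Telemetry struct {
	EncodingPath string
	NodeIdStr    string
}

// CiscoTelemetryMetric mirrors the registrar entry from the question.
type CiscoTelemetryMetric struct {
	EncodingPath     string
	RecordMetricFunc func(msg *Telemetry)
}

var CiscoMetricRegistrar []CiscoTelemetryMetric

// Placeholder path; the real exporter would use the device's YANG path.
const CpuYANGEncodingPath = "example/yang/path/cpu"

// recorded captures which nodes were recorded, standing in for gauge updates.
var recorded []string

func init() {
	// Registration happens at package init time, as in the question.
	CiscoMetricRegistrar = append(CiscoMetricRegistrar, CiscoTelemetryMetric{
		EncodingPath: CpuYANGEncodingPath,
		RecordMetricFunc: func(msg *Telemetry) {
			recorded = append(recorded, msg.NodeIdStr)
		},
	})
}

// dispatch walks the registrar exactly like the inner loop in the question.
func dispatch(msg *Telemetry) bool {
	supported := false
	for _, m := range CiscoMetricRegistrar {
		if msg.EncodingPath == m.EncodingPath {
			supported = true
			m.RecordMetricFunc(msg) // the question runs this in a goroutine
		}
	}
	return supported
}

func main() {
	ok := dispatch(&Telemetry{EncodingPath: CpuYANGEncodingPath, NodeIdStr: "router-1"})
	fmt.Println(ok, recorded) // true [router-1]
}
```

In the real exporter, dispatch corresponds to the loop over metrics.CiscoMetricRegistrar, and RecordMetricFunc would set a Prometheus gauge instead of appending to a slice.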

I am using Grafana as a frontend and, so far, have not seen any particular discrepancy when correlating the metrics exposed by Prometheus with the metrics checked directly on the devices.

So I would like to understand whether this follows Prometheus best practices, or whether I should still go down the custom-collector route.

Thanks in advance.

Pet*_*ter · 5

You are not following best practice, because you are using global metrics, which the article you linked to warns against. With your current implementation, your dashboard will show some arbitrary and constant value for the CPU metric forever after a device disconnects (or, more precisely, until your exporter is restarted).

Instead, the RPC method should maintain a set of local metrics and remove them once the method returns. That way, a device's metrics disappear from the scrape output when the device disconnects.

Here is one way to do this. It uses a map holding the currently active metrics. Each map element is the set of metrics for one particular stream (which, as I understand it, corresponds to one device). Once the stream ends, its entry is removed.

package main

import (
    "sync"

    "github.com/prometheus/client_golang/prometheus"
)

// Exporter is a prometheus.Collector implementation.
type Exporter struct {
    // We need some way to map gRPC streams to their metrics. Using the stream
    // itself as a map key is simple enough, but anything works as long as we
    // can remove metrics once the stream ends.
    sync.Mutex
    Metrics map[StreamServer]*DeviceMetrics
}

type DeviceMetrics struct {
    sync.Mutex

    CPU prometheus.Metric
}

// Globally defined descriptions are fine.
var cpu5SecDesc = prometheus.NewDesc(
    "cisco_iosxe_iosd_cpu_busy_5_sec_percentage",
    "The IOSd daemon CPU busy percentage over the last 5 seconds",
    []string{"node"},
    nil, // constant labels
)

// Collect implements prometheus.Collector.
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
    // Copy current metrics so we don't lock for very long if ch's consumer is
    // slow.
    var metrics []prometheus.Metric

    e.Lock()
    for _, deviceMetrics := range e.Metrics {
        deviceMetrics.Lock()
        metrics = append(metrics,
            deviceMetrics.CPU,
        )
        deviceMetrics.Unlock()
    }
    e.Unlock()

    for _, m := range metrics {
        if m != nil {
            ch <- m
        }
    }
}

// Describe implements prometheus.Collector.
func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {
    ch <- cpu5SecDesc
}

// Service is the gRPC service implementation.
type Service struct {
    exp *Exporter
}

func (s *Service) RPCMethod(stream StreamServer) (*Response, error) {
    deviceMetrics := new(DeviceMetrics)

    s.exp.Lock()
    s.exp.Metrics[stream] = deviceMetrics
    s.exp.Unlock()

    defer func() {
        // Stop emitting metrics for this stream.
        s.exp.Lock()
        delete(s.exp.Metrics, stream)
        s.exp.Unlock()
    }()

    for {
        req, err := stream.Recv()
        // TODO: handle error

        var msg *Telemetry = parseRequest(req) // Your existing code that unmarshals the nested message.

        var (
            metricField *prometheus.Metric
            metric      prometheus.Metric
        )

        switch msg.GetEncodingPath() {
        case CpuYANGEncodingPath:
            metricField = &deviceMetrics.CPU
            metric = prometheus.MustNewConstMetric(
                cpu5SecDesc,
                prometheus.GaugeValue,
                ParsePBMsgCpuBusyPercent(msg), // func(*Telemetry) float64
                msg.GetNodeIdStr(), // value for the "node" label
            )
        default:
            continue
        }

        deviceMetrics.Lock()
        *metricField = metric
        deviceMetrics.Unlock()
    }

    return &Response{}, nil
}
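The per-stream lifecycle described above (create the metrics when the RPC starts, delete them when it returns) can be sketched with the standard library alone. In this sketch DeviceMetrics is reduced to a plain float64 per stream, and names like handleStream are hypothetical simplifications of the RPC method:

```go
package main

import (
	"fmt"
	"sync"
)

// Exporter keeps one metric value per active stream; a stand-in for the
// prometheus.Collector in the answer, using plain floats so the sketch
// runs with the standard library only.
type Exporter struct {
	sync.Mutex
	Metrics map[string]float64 // keyed by stream/device ID
}

// handleStream mimics the RPC method: update metrics while the stream is
// live, and delete them when the stream ends.
func (e *Exporter) handleStream(streamID string, samples []float64) {
	defer func() {
		e.Lock()
		delete(e.Metrics, streamID) // metrics vanish once the device disconnects
		e.Unlock()
	}()
	for _, v := range samples {
		e.Lock()
		e.Metrics[streamID] = v
		e.Unlock()
	}
}

// collect copies the current values under the lock, like Collect in the answer.
func (e *Exporter) collect() map[string]float64 {
	out := map[string]float64{}
	e.Lock()
	for k, v := range e.Metrics {
		out[k] = v
	}
	e.Unlock()
	return out
}

func main() {
	e := &Exporter{Metrics: map[string]float64{}}
	e.handleStream("router-1", []float64{12.5, 14.0})
	fmt.Println(len(e.collect())) // 0: the stream ended, so its metric was removed
}
```

Because the deferred delete runs when the stream ends, a scrape arriving after the device disconnects simply no longer sees that device's series, which is exactly the behavior the global-gauge approach lacks.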