I'm using this Prometheus chart. The documentation says that
for Prometheus to scrape the pods, you have to add annotations to the pods like this:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
So I created my service like this:
apiVersion: v1
kind: Service
metadata:
  name: nodejs-client-service
  labels:
    app: nodejs-client-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "5000"
spec:
  type: LoadBalancer
  selector:
    app: nodejs-client-app
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 5000
But my service does not show up in the Prometheus targets. What am I missing?
I ran into the same issue with the stable/prometheus-operator chart. I tried adding the annotations above to both the pods and the service, but neither worked.
The solution for me was to add a ServiceMonitor object. Once it was in place, Prometheus discovered my service dynamically.
This command solved the problem:
kubectl apply -f service-monitor.yml
# service-monitor.yml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    release: prom
  name: eztype
  namespace: default
spec:
  endpoints:
  - path: /actuator/prometheus
    port: management
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app.kubernetes.io/name: eztype
Here, my pods and service are labeled with the name eztype and expose metrics on port 8282 under the path given above. For completeness, here is the relevant part of my service definition:
# service definition (partial)
spec:
  clusterIP: 10.128.156.246
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: management
    port: 8282
    protocol: TCP
    targetPort: 8282
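The piece that ties the two objects together is the service's labels, which have to match the ServiceMonitor's selector.matchLabels. A minimal sketch of what the service metadata looks like in this setup (the exact names are assumptions based on the selector above):

# service definition (partial) -- metadata side, assumed to carry the label
# that the ServiceMonitor selector matches (app.kubernetes.io/name: eztype)
metadata:
  name: eztype
  namespace: default
  labels:
    app.kubernetes.io/name: eztype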
It is worth noting that ServiceMonitor objects are used by the Prometheus chart itself:
$ kubectl get servicemonitors -n monitor
NAME AGE
prom-prometheus-operator-alertmanager 14d
prom-prometheus-operator-apiserver 14d
prom-prometheus-operator-coredns 14d
prom-prometheus-operator-grafana 14d
prom-prometheus-operator-kube-controller-manager 14d
prom-prometheus-operator-kube-etcd 14d
prom-prometheus-operator-kube-proxy 14d
prom-prometheus-operator-kube-scheduler 14d
prom-prometheus-operator-kube-state-metrics 14d
prom-prometheus-operator-kubelet 14d
prom-prometheus-operator-node-exporter 14d
prom-prometheus-operator-operator 14d
prom-prometheus-operator-prometheus 14d
You have to add the annotations to the pods, not to the service.
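For a Deployment, that means putting the annotations on the pod template's metadata rather than on the Service. A minimal sketch, assuming the app from the question and its port 5000 (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-client-app
spec:
  selector:
    matchLabels:
      app: nodejs-client-app
  template:
    metadata:
      labels:
        app: nodejs-client-app
      annotations:                       # annotations go on the pod template, not the Service
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "5000"
    spec:
      containers:
      - name: nodejs-client-app
        image: nodejs-client-app:latest  # assumed image name
        ports:
        - containerPort: 5000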
To use pod annotations, you need to add the "kubernetes-pods" job to the Prometheus scrape config: https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml#L257
Example:
- job_name: 'kubernetes-pods'
  honor_labels: true
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow]
    action: drop
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    regex: (https?)
    target_label: __scheme__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: (.+?)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
    replacement: __param_$1
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  - source_labels: [__meta_kubernetes_pod_phase]
    regex: Pending|Succeeded|Failed|Completed
    action: drop
I couldn't find a proper answer to why the prometheus.io/scrape annotation doesn't work in Kubernetes, so I decided to dig into the original Prometheus Helm chart. For anyone facing the same issue, here is an explanation.
Understanding Prometheus configuration in Kubernetes
First of all, note that Prometheus can be configured in Kubernetes in several ways. One common approach uses a custom resource definition (CRD) called ServiceMonitor. In this setup, the Prometheus Operator continuously watches the resources specified by ServiceMonitor objects.
• serviceMonitorSelector Parameter: This parameter in the Prometheus Operator configuration selects which ServiceMonitor resources to consider.
serviceMonitorSelector:
  matchLabels:
    team: frontend
• No Default Annotations: By default, services or pods aren’t monitored based on annotations alone. You need to:
• Create a ServiceMonitor: Define a ServiceMonitor resource that matches your services or pods.
• Set Appropriate Labels: Ensure your services or pods have labels that match the matchLabels in your ServiceMonitor.
However, the default kube-prometheus-stack Helm chart does not create ServiceMonitors for your deployments out of the box.
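One quick way to see which labels the operator expects on ServiceMonitor objects is to inspect the Prometheus resource the chart deployed. This is a sketch; the namespace ("monitoring") and release name ("prom") are assumptions and depend on how you installed the chart:

# Print the selector the operator uses to pick up ServiceMonitors
kubectl get prometheus -n monitoring \
  -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
# Typical output for a release named "prom":
# {"matchLabels":{"release":"prom"}}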
Where the prometheus.io/scrape annotation comes from
This raises the question:
Where does the prometheus.io/scrape annotation come from, and how can I use it?
The answer lies in the original Prometheus Helm chart, which you can find here. Unlike kube-prometheus-stack, this Helm chart does not rely on the Prometheus CRDs. Instead, it:
• Deploys Prometheus Directly: Runs Prometheus in a pod with manual configurations.
• Uses kubernetes_sd_configs: Specifies how Prometheus should discover services.
kubernetes_sd_configs:
- role: endpoints
This tells Prometheus to use the endpoints role, which lets it select scrape targets based on annotations such as prometheus.io/scrape.
• Relabeling Configurations: Includes additional settings to manipulate labels and target metadata.
How to fix the problem
If you are using kube-prometheus-stack, you have two main options:
1. Set Up a ServiceMonitor
• Create a ServiceMonitor Resource: Define it to match your services or pods.
• Adjust serviceMonitorSelector: Ensure the Prometheus Operator picks up your ServiceMonitor.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: web
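Keep in mind that by default the operator in kube-prometheus-stack only picks up ServiceMonitors that carry the Helm release label. If you would rather have it select every ServiceMonitor regardless of labels, a commonly used override in the chart's values looks like this (a sketch; adjust to your values file):

prometheus:
  prometheusSpec:
    # Do not require the release label; pick up all ServiceMonitors
    serviceMonitorSelectorNilUsesHelmValues: false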
2. Modify Prometheus Configuration
• Include endpoints Role: Adjust your Prometheus config to use the endpoints role like in the original Helm chart.
• Leverage Annotations: This allows you to use annotations like prometheus.io/scrape without needing ServiceMonitor.
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
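Either of these values snippets can be applied by passing the file to Helm. A sketch, assuming the release is named prom, the chart comes from the prometheus-community repository, and it runs in the monitoring namespace:

helm upgrade --install prom prometheus-community/kube-prometheus-stack \
  -n monitoring -f values.yaml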
Summary
If the prometheus.io/scrape annotation is not working with kube-prometheus-stack:
• Use a ServiceMonitor: It’s the preferred method when using the Prometheus Operator.
• Copy Configuration from Original Helm Chart: Adjust your Prometheus configuration to manually include endpoint discovery based on annotations.
By following these steps, you should be able to get Prometheus to scrape your services or pods as expected.