How to configure Kubernetes HPA using Istio's Prometheus?


We have an Istio cluster and we are trying to configure horizontal pod autoscaling for Kubernetes. We want to use the request count as a custom metric for the HPA. How can we use Istio's Prometheus for this purpose?

kubernetes prometheus istio
1 Answer

4 votes

This problem turned out to be much more complicated than I expected, but in the end I found the answer.

  1. First, you need to configure your application to provide custom metrics. This is done on the application development side. Here is an example of how to do it with the Go language: Watching Metrics With Prometheus (a minimal Go instrumentation sketch follows after this list).
  2. Second, you need to define a Deployment for your application (or a Pod, or whatever you want) and deploy it to Kubernetes, for example (see also the annotation snippet after this list):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
        - ./podinfo
        - -port=9898
        - -logtostderr=true
        - -v=2
        volumeMounts:
        - name: metadata
          mountPath: /etc/podinfod/metadata
          readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
      - name: metadata
        downwardAPI:
          items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
  - port: 9898
    targetPort: 9898
    nodePort: 31198
    protocol: TCP
  selector:
    app: podinfo
```

Note the annotation prometheus.io/scrape: 'true'. It is required to tell Prometheus to read metrics from the resource. Also note that there are two more annotations with default values; if you change them in your application, you need to add them with the correct values:

- prometheus.io/path: if the metrics path is not /metrics, define it with this annotation.
- prometheus.io/port: scrape the pod on the indicated port instead of the pod's declared ports (defaults to a port-free target if none are declared).
  3. Next, the Prometheus instance in Istio uses its own configuration, modified for Istio's purposes, and by default it skips custom metrics from Pods. Therefore, you need to modify it a little. In my case, I took the configuration for Pod metrics from this example and modified Istio's Prometheus configuration for Pods only:

```sh
kubectl edit configmap -n istio-system prometheus
```

I changed the order of the labels according to the example mentioned above:

```yaml
# pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'
  # if you want to use metrics on jobs, set the below field to
  # true to prevent Prometheus from setting the `job` label
  # automatically.
  honor_labels: false
  kubernetes_sd_configs:
  - role: pod
  # skip verification so you can do HTTPS to pods
  tls_config:
    insecure_skip_verify: true
  # make sure your labels are in order
  relabel_configs:
  # these labels tell Prometheus to automatically attach source
  # pod and namespace information to each collected sample, so
  # that they'll be exposed in the custom metrics API automatically.
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # these labels tell Prometheus to look for
  # prometheus.io/{scrape,path,port} annotations to configure
  # how to scrape
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
```

After that, the custom metrics appeared in Prometheus (a quick verification query is sketched after this list). But be careful when changing the Prometheus configuration, because some metrics required by Istio may disappear; check everything carefully.
  4. Now it is time to install the Prometheus custom metric adapter (a single-pod query example appears after this list).

- Download this repository.
- Change the address of the Prometheus server in the file <repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml, for example: - --prometheus-url=http://prometheus.istio-system:9090/
- Run kubectl apply -f <repository-directory>/deploy/manifests. After a while, custom.metrics.k8s.io/v1beta1 should appear in the output of kubectl api-versions.

Then check the output of the custom metrics API with the commands:

```sh
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
```

The output of the last one should look like the following example:

```json
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "901m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}
```

If it does, you can move on to the next step. If it does not, check which APIs are available for Pods in CustomMetrics:

```sh
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/"
```

and which are available for http_requests:

```sh
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http"
```

The MetricNames are generated from the metrics that Prometheus collects from the Pods; if they are empty, you need to look in that direction.
  5. The last step is to configure the HPA and test it. So in my case, I created an HPA for the podinfo application defined earlier:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10
```

and used a simple Go application to generate load:

```sh
# install hey
go get -u github.com/rakyll/hey

# do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz
```

After a while, I saw the scaling change by using the commands kubectl describe hpa and kubectl get hpa (see also the scaling-math note after this list).
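For step 1, here is a minimal instrumentation sketch in Go, assuming the standard prometheus/client_golang library; the port mirrors the podinfo example above, while the counter name and handler are illustrative (counters named *_total are the usual source for rate-style custom metrics such as http_requests):

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// httpRequests counts handled requests; the custom metrics adapter
// can derive a per-second rate metric from a counter like this.
var httpRequests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "http_requests_total",
	Help: "Total number of HTTP requests handled.",
})

func main() {
	prometheus.MustRegister(httpRequests)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		httpRequests.Inc() // count every incoming request
		w.Write([]byte("ok"))
	})

	// Expose the /metrics endpoint that the prometheus.io/scrape
	// annotation tells Prometheus to collect from.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9898", nil)
}
```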
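For step 2, if your application deviates from the defaults, the two optional annotations described above would look like this (the path and port values here are illustrative, not taken from the podinfo example):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/custom-metrics'  # only if the metrics path is not /metrics
    prometheus.io/port: '8080'             # only if it differs from the declared container port
```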
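For step 3, one way to confirm that the edited configuration actually scrapes your pod is to query Prometheus directly. This sketch assumes the Prometheus service in istio-system is named prometheus (consistent with the adapter URL in step 4) and that your application exports a counter named http_requests_total:

```sh
# Forward Istio's Prometheus to localhost
kubectl -n istio-system port-forward svc/prometheus 9090:9090 &

# Query the raw counter; an empty result means the scrape config
# (or the pod annotations) still needs attention
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=http_requests_total{namespace="default"}' | jq .
```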
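For step 4, you can also query a single pod instead of the wildcard; the pod name below is taken from the sample output above. Note that the custom metrics API reports quantities in Kubernetes milli-units, so a value of 901m means roughly 0.9 requests per second:

```sh
# Query the metric for one pod (name from the sample output above).
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/podinfo-6b86c8ccc9-kv5g9/http_requests" | jq .
```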
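For step 5, as a sanity check on what targetAverageValue: 10 means, here is the standard HPA scaling formula (general Kubernetes behavior, not specific to this setup) plus a way to watch the result:

```sh
# desiredReplicas = ceil(currentReplicas * currentAverageValue / targetAverageValue)
# e.g. 2 pods each reporting ~25 req/s against a 10 req/s target:
#   ceil(2 * 25 / 10) = 5 replicas

# Watch the replica count change while hey is generating load:
kubectl get hpa podinfo --watch
```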

I used the instructions on creating custom metrics from the article Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus.

