Node-exporter pods stuck in Pending on EKS when installed via Helm


To rule things out, I decided to deploy a plain Prometheus node-exporter via helm install exporter stable/prometheus, but I cannot get the pods to start. I have looked everywhere and I am not sure where else to look. I am able to install many other applications on this cluster; only this one fails. I have attached some troubleshooting output below. I suspect it may be related to tolerations, but I am still digging into it.

The EKS cluster runs on three t2.large instances; each node supports up to 35 pods, and I am running 43 pods in total. Any other troubleshooting ideas would be much appreciated.

Pod status

✗ kubectl get pods
NAME                                                              READY   STATUS             RESTARTS   AGE
exporter-prometheus-node-exporter-bcwc4                           0/1     Pending            0          15m
exporter-prometheus-node-exporter-kr7z7                           0/1     Pending            0          15m
exporter-prometheus-node-exporter-lw87g                           0/1     Pending            0          15m

Describe pod output

Name:           exporter-prometheus-node-exporter-bcwc4
Namespace:      monitoring
Priority:       0
Node:           <none>
Labels:         app=prometheus
                chart=prometheus-11.1.2
                component=node-exporter
                controller-revision-hash=668b4894bb
                heritage=Helm
                pod-template-generation=1
                release=exporter
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/exporter-prometheus-node-exporter
Containers:
  prometheus-node-exporter:
    Image:      prom/node-exporter:v0.18.1
    Port:       9100/TCP
    Host Port:  9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
    Environment:  <none>
    Mounts:
      /host/proc from proc (ro)
      /host/sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from exporter-prometheus-node-exporter-token-rl4fm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  exporter-prometheus-node-exporter-token-rl4fm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  exporter-prometheus-node-exporter-token-rl4fm
    Optional:    false
QoS Class:       BestEffort

Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  2s (x24 over 29m)  default-scheduler  0/3 nodes are available: 2 node(s) didn't match node selector, 3 node(s) didn't have free ports for the requested pod ports.
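As a quick check (a sketch; it assumes kubectl is already pointed at the affected EKS cluster), the following lists every pod in the cluster that declares a hostPort of 9100, which is what would conflict with this DaemonSet:

```shell
# List namespace, pod name, and declared hostPorts for all pods,
# then filter for 9100 (the port the node-exporter pods request).
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' \
  | grep -w 9100
```

Any pod other than the Pending exporter pods appearing here is already holding port 9100 on its node.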

DaemonSet config

apiVersion: extensions/v1beta1                                                                                                                      
kind: DaemonSet
metadata:
  creationTimestamp: "2020-05-12T06:15:30Z"
  generation: 1
  labels:
    app: prometheus
    chart: prometheus-11.1.2
    component: node-exporter
    heritage: Helm
    release: exporter
  name: exporter-prometheus-node-exporter
  namespace: monitoring
  resourceVersion: "8131959"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring/daemonsets/exporter-prometheus-node-exporter
  uid: 5ede0739-cd05-4e3b-ace1-87fafb33314a
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
      release: exporter
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        chart: prometheus-11.1.2
        component: node-exporter
        heritage: Helm
        release: exporter
    spec:
      containers:
      - args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        image: prom/node-exporter:v0.18.1
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /host/proc
          name: proc
          readOnly: true
        - mountPath: /host/sys
          name: sys
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      hostPID: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: exporter-prometheus-node-exporter
      serviceAccountName: exporter-prometheus-node-exporter
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /sys
          type: ""
        name: sys
  templateGeneration: 1
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberMisscheduled: 0
  numberReady: 0
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
Tags: kubernetes, kubernetes-helm, amazon-eks

1 Answer

3 node(s) didn't have free ports for the requested pod ports.

From the error message, the assigned host port is already in use on every node. Because the DaemonSet defines hostPort: 9100, each pod must bind port 9100 on its node, and each <hostIP, hostPort, protocol> combination must be unique; this limits the number of places a pod can be scheduled. Reference: https://kubernetes.io/docs/concepts/configuration/overview/#services
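If something else legitimately owns port 9100 on the nodes (for example a node-exporter from an earlier release), one option is to move this exporter to a different host port via a Helm values override. This is a sketch; the key names below assume the stable/prometheus chart's nodeExporter block and should be verified against your chart version:

```yaml
# values.yaml override (assumes the stable/prometheus chart layout --
# check the chart's default values for your version before applying).
nodeExporter:
  service:
    hostPort: 9101     # move off 9100 if another workload already binds it
    servicePort: 9101
```

Applied with something like helm upgrade exporter stable/prometheus -f values.yaml; alternatively, delete whatever is already bound to 9100 and the Pending pods will schedule on their own.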
