Kubernetes nodeAffinity and podAntiAffinity not placing pods as intended


I am experimenting with a 2-node cluster for mongodb (to be scaled up once it is stable), running on EKS. The two nodes run in two different AWS availability zones. The descriptor is as follows:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
  labels:
    name: mongo-repl
spec:
  serviceName: mongodb-service
  replicas: 2
  selector:
    matchLabels:
      app: mongod
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        app: mongod
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 15
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - ap-south-1a
                - ap-south-1b
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongod
              - key: role
                operator: In
                values:
                - mongo
              - key: environment
                operator: In
                values:
                - test
            topologyKey: kubernetes.io/hostname
      containers:
        .....

The goal here is to avoid scheduling a pod onto a node that is already running a pod with the labels app=mongod, role=mongo, environment=test.

When I deploy the spec, only one of the mongo pods gets created, on one node.
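For reference, per-pod placement (and the missing node assignment for the pending replica) can be confirmed with kubectl's wide output, e.g.:

kubectl get pods -o wide    # the NODE column shows which node each replica landed on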

ubuntu@ip-192-170-0-18:~$ kubectl describe statefulset mongod
Name:               mongod
Namespace:          default
CreationTimestamp:  Sun, 16 Feb 2020 16:44:16 +0000
Selector:           app=mongod,environment=test,role=mongo
Labels:             name=mongo-repl
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    OnDelete
Pods Status:        1 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mongod
           environment=test
           role=mongo
  Containers:

kubectl describe pod mongod-1

Node:           <none>
Labels:         app=mongod
                controller-revision-hash=mongod-66f7c87bbb
                environment=test
                role=mongo
                statefulset.kubernetes.io/pod-name=mongod-1
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
....
....
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  42s (x14 over 20m)  default-scheduler  0/2 nodes are available: 1 Insufficient pods, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.


I cannot figure out what conflict exists in the affinity specs. I would really appreciate some insight here!
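A hedged side note on the first part of that message: "1 Insufficient pods" is not an affinity problem at all; it means that node has exhausted its allocatable pod count, which on EKS is derived from the instance type's ENI/IP capacity. Assuming standard kubectl, capacity and current usage can be checked with:

# Allocatable pod capacity per node (EKS ties this to the instance type's ENI/IP limits)
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

# Pods currently scheduled on a given node
kubectl describe node ip-192-170-0-8.ap-south-1.compute.internal | grep -A 12 'Non-terminated Pods'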


Edit, Feb 21: added details about the new error below

As per the suggestions, I have now scaled up the worker nodes and started receiving a clearer error message --

Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  51s (x554 over 13h)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 1 node(s) had volume node affinity conflict.

So the main issue now (after scaling up the worker nodes) is --

1 node(s) had volume node affinity conflict
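A conflict like this can be pinned down by comparing the zone each PV is bound to against the zone label on each node, for example:

# Zone label on every node
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone

# Node affinity recorded on each PV (i.e. the zone its EBS volume lives in)
kubectl describe pv db-volume-0 db-volume-1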

Posting my entire set of config artifacts again below:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
  labels:
    name: mongo-repl
spec:
  serviceName: mongodb-service
  replicas: 2
  selector:
    matchLabels:
      app: mongod
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        app: mongod
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 15
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - ap-south-1a
                - ap-south-1b
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mongod
              - key: role
                operator: In
                values:
                - mongo
              - key: environment
                operator: In
                values:
                - test
            topologyKey: kubernetes.io/hostname
      containers:
      - name: mongod-container
        .......
      volumes:
      - name: mongo-vol
        persistentVolumeClaim:
          claimName: mongo-pvc

PVC --

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: gp2-multi-az
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

PV --

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: db-volume-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-multi-az
  awsElasticBlockStore:
    volumeID: vol-06f12b1d6c5c93903
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
        #- key: topology.kubernetes.io/zone
          operator: In
          values:
          - ap-south-1a

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: db-volume-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-multi-az
  awsElasticBlockStore:
    volumeID: vol-090ab264d4747f131
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
        #- key: topology.kubernetes.io/zone
          operator: In
          values:
          - ap-south-1b

Storage Class --

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-multi-az
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: gp2
  fsType: ext4
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - ap-south-1a
    - ap-south-1b

I do not want to opt for dynamic PVC provisioning.
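One sketch of fully static binding, in case it helps: a claim can be pinned to a specific pre-created PV via spec.volumeName (the claim name mongo-pvc-0 below is just illustrative, not from the original manifests):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc-0            # illustrative name
spec:
  storageClassName: gp2-multi-az
  volumeName: db-volume-0      # bind this claim explicitly to one pre-created PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi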

Adding the outputs below, as suggested by @rabello --

kubectl get pods --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
mongod-0   1/1     Running   0          14h   app=mongod,controller-revision-hash=mongod-5b4699fd85,environment=test,role=mongo,statefulset.kubernetes.io/pod-name=mongod-0
mongod-1   0/1     Pending   0          14h   app=mongod,controller-revision-hash=mongod-5b4699fd85,environment=test,role=mongo,statefulset.kubernetes.io/pod-name=mongod-1

kubectl get nodes --show-labels
NAME                                           STATUS   ROLES    AGE   VERSION              LABELS
ip-192-170-0-8.ap-south-1.compute.internal     Ready    <none>   14h   v1.14.7-eks-1861c5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.small,beta.kubernetes.io/os=linux,eks.amazonaws.com/nodegroup-image=ami-07fd6cdebfd02ef6e,eks.amazonaws.com/nodegroup=trl_compact_prod_db_node_group,failure-domain.beta.kubernetes.io/region=ap-south-1,failure-domain.beta.kubernetes.io/zone=ap-south-1a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-192-170-0-8.ap-south-1.compute.internal,kubernetes.io/os=linux
ip-192-170-80-14.ap-south-1.compute.internal   Ready    <none>   14h   v1.14.7-eks-1861c5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.small,beta.kubernetes.io/os=linux,eks.amazonaws.com/nodegroup-image=ami-07fd6cdebfd02ef6e,eks.amazonaws.com/nodegroup=trl_compact_prod_db_node_group,failure-domain.beta.kubernetes.io/region=ap-south-1,failure-domain.beta.kubernetes.io/zone=ap-south-1b,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-192-170-80-14.ap-south-1.compute.internal,kubernetes.io/os=linux
1 Answer

EBS volumes are zonal. They can only be accessed by pods running in the same availability zone as the volume. Your StatefulSet allows pods to be scheduled across two zones (ap-south-1a and ap-south-1b), so given your other constraints the scheduler may try to place a pod on a node in a different availability zone than its volume. I would try restricting your StatefulSet to a single availability zone, or using an operator to install Mongo.
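A minimal sketch of the single-zone option (untested here; the zone value simply mirrors the db-volume-0 PV above) would narrow the pod template's nodeAffinity to one zone so pods always land where their volumes live:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - ap-south-1a       # one zone only, matching the volume's node affinity

The trade-off is the loss of zone-level redundancy, which is why the operator route is usually preferred for multi-AZ Mongo replica sets.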
