Kubernetes: waiting for first consumer to be created before binding


I have been trying to run Kafka/ZooKeeper on Kubernetes. Using a Helm chart I can install ZooKeeper on the cluster, but the ZK pods are stuck in Pending. When I describe one of the pods, the reason given for the scheduling failure is "didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate." When I describe the PVC, however, I get "waiting for first consumer to be created before binding". I tried re-provisioning the whole cluster, but the result is the same. I was using https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/ as a guide.

Can someone point me in the right direction here?

kubectl get pods -n zoo-keeper

kubectl get pods -n zoo-keeper
NAME                         READY   STATUS    RESTARTS   AGE
zoo-keeper-zk-0              0/1     Pending   0          20m
zoo-keeper-zk-1              0/1     Pending   0          20m
zoo-keeper-zk-2              0/1     Pending   0          20m

kubectl get sc

kubectl get sc
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   25m

kubectl describe sc

kubectl describe  sc
Name:            local-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}

Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
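
For reference, the StorageClass manifest behind the output above can be reconstructed from the last-applied-configuration annotation (reclaimPolicy is not set in it, so describe shows the Delete default):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer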

kubectl describe pod foob-zookeeper-0 -n zoo-keeper

ubuntu@kmaster:~$ kubectl describe pod foob-zookeeper-0 -n zoo-keeper
Name:               foob-zookeeper-0
Namespace:          zoo-keeper
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=foob-zookeeper
                    app.kubernetes.io/instance=data-coord
                    app.kubernetes.io/managed-by=Tiller
                    app.kubernetes.io/name=foob-zookeeper
                    app.kubernetes.io/version=foob-zookeeper-9.1.0-15
                    controller-revision-hash=foob-zookeeper-5321f8ff5
                    release=data-coord
                    statefulset.kubernetes.io/pod-name=foob-zookeeper-0
Annotations:        foobar.com/product-name: zoo-keeper ZK
                    foobar.com/product-revision: ABC
Status:             Pending
IP:
Controlled By:      StatefulSet/foob-zookeeper
Containers:
  foob-zookeeper:
    Image:       repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
    Ports:       2181/TCP, 2888/TCP, 3888/TCP, 10007/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Liveness:   exec [zkOk.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  tcp-socket :2181 delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZK_REPLICAS:           3
      ZK_HEAP_SIZE:          1G
      ZK_TICK_TIME:          2000
      ZK_INIT_LIMIT:         10
      ZK_SYNC_LIMIT:         5
      ZK_MAX_CLIENT_CNXNS:   60
      ZK_SNAP_RETAIN_COUNT:  3
      ZK_PURGE_INTERVAL:     1
      ZK_LOG_LEVEL:          INFO
      ZK_CLIENT_PORT:        2181
      ZK_SERVER_PORT:        2888
      ZK_ELECTION_PORT:      3888
      JMXPORT:               10007
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nfcfx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-foob-zookeeper-0
    ReadOnly:   false
  default-token-nfcfx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nfcfx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  69s (x4 over 3m50s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.

kubectl get pv

ubuntu@kmaster:~$ kubectl get  pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv   50Gi       RWO            Retain           Available           local-storage            10m
ubuntu@kmaster:~$

kubectl get pvc local-claim

ubuntu@kmaster:~$ kubectl get  pvc local-claim
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
local-claim   Pending                                      local-storage   8m9s
ubuntu@kmaster:~$

kubectl describe pvc local-claim

ubuntu@kmaster:~$ kubectl describe pvc local-claim
Name:          local-claim
Namespace:     default
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                    From                         Message
  ----       ------                ----                   ----                         -------
  Normal     WaitForFirstConsumer  2m3s (x26 over 7m51s)  persistentvolume-controller  waiting for first consumer to be created before binding
Mounted By:  <none>

My PV file:

cat create-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/kafka-mount
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster
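
For completeness, the hostname under nodeAffinity has to match the node's kubernetes.io/hostname label exactly; it can be checked with:

kubectl get node kmaster --show-labels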

cat pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 50Gi
1 Answer

It looks like you created your PV on the master node. By default, the master node is marked unschedulable for ordinary pods by a so-called taint. To be able to run a workload on the master node, you have two options:

1) Add a toleration to the workload to allow it to run on the master node:

tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master

You can even specify that the workload must run only on the master node:

nodeSelector:
  node-role.kubernetes.io/master: ""
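
As a minimal sketch of where these fields go, assuming a plain StatefulSet rather than your Helm chart (with a chart, the equivalent settings are normally exposed through its values and vary per chart), both snippets belong under the pod template's spec:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foob-zookeeper
spec:
  serviceName: foob-zookeeper
  replicas: 3
  selector:
    matchLabels:
      app: foob-zookeeper
  template:
    metadata:
      labels:
        app: foob-zookeeper
    spec:
      # Toleration lets the pod be scheduled despite the master taint
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      # Optional: pin the pod to the master node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: zookeeper
        image: repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
        ports:
        - containerPort: 2181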

2) You can remove the taint from the master node, so that any pod can run on it. You should be aware that this is dangerous because it can make your cluster very unstable.

kubectl taint nodes --all node-role.kubernetes.io/master-
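
To confirm the taint is actually gone (a verification step, not part of the original answer), check the node description:

kubectl describe node kmaster | grep -i taints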

Read more about taints and tolerations here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
