Istio BookInfo sample pods not starting on Minishift 3.11.0 - Init:CrashLoopBackOff - message: 'containers with incomplete status: [istio-init]'


I have a fresh minishift (v1.32.0+009893b) installation running on macOS Mojave.

  1. I started minishift with 4 CPUs and 8 GB of RAM: minishift start --cpus 4 --memory 8GB
  2. I prepared the OpenShift (minishift) environment as described here (the SCC grants it involves are sketched after this list): https://istio.io/docs/setup/kubernetes/prepare/platform-setup/openshift/
  3. I installed Istio following the documentation, without any errors: https://istio.io/docs/setup/kubernetes/install/kubernetes/
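
For reference, a minimal sketch of what step 2 involves; the exact commands come from the linked platform-setup page and may differ for your Istio version, and the istio-test grants are an assumption based on that guide's note about application namespaces:

# Let the Istio control-plane service accounts run with any UID (from the platform-setup guide)
oc adm policy add-scc-to-group anyuid system:serviceaccounts -n istio-system

# Application namespaces that get sidecars injected need elevated privileges as well
# (istio-test is the namespace used later in this question)
oc adm policy add-scc-to-user privileged -z default -n istio-test
oc adm policy add-scc-to-user anyuid -z default -n istio-test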

Pods in the istio-system namespace:

$> kubectl get pod -n istio-system

grafana-7b46bf6b7c-27pn8                  1/1     Running     1          26m
istio-citadel-5878d994cc-5tsx2            1/1     Running     1          26m
istio-cleanup-secrets-1.1.1-vwzq5         0/1     Completed   0          26m
istio-egressgateway-976f94bd-pst7g        1/1     Running     1          26m
istio-galley-7855cc97dc-s7wvt             1/1     Running     0          1m
istio-grafana-post-install-1.1.1-nvdvl    0/1     Completed   0          26m
istio-ingressgateway-794cfcf8bc-zkfnc     1/1     Running     1          26m
istio-pilot-746995884c-6l8jm              2/2     Running     2          26m
istio-policy-74c95b5657-g2cvq             2/2     Running     10         26m
istio-security-post-install-1.1.1-f4524   0/1     Completed   0          26m
istio-sidecar-injector-59fc9d6f7d-z48rc   1/1     Running     1          26m
istio-telemetry-6c5d7b55bf-cmnvp          2/2     Running     10         26m
istio-tracing-75dd89b8b4-pp9c5            1/1     Running     2          26m
kiali-5d68f4c676-5lsj9                    1/1     Running     1          26m
prometheus-89bc5668c-rbrd7                1/1     Running     1          26m
  4. I deployed the BookInfo sample into my istio-test namespace with istioctl kube-inject -f bookinfo.yaml | kubectl -n istio-test apply -f - (sketched below), but the pods do not start.
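
A minimal sketch of that deployment step, with an automatic-injection variant for comparison; it assumes the sidecar injector webhook from the default install and bookinfo.yaml from the Istio samples directory:

# Manual sidecar injection, as used above
istioctl kube-inject -f bookinfo.yaml | kubectl -n istio-test apply -f -

# Equivalent with automatic injection: label the namespace, then apply the plain manifest
kubectl label namespace istio-test istio-injection=enabled
kubectl -n istio-test apply -f bookinfo.yaml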

Output of the oc commands:

$> oc get svc
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       172.30.204.102   <none>        9080/TCP   21m
productpage   172.30.72.33     <none>        9080/TCP   21m
ratings       172.30.10.155    <none>        9080/TCP   21m
reviews       172.30.169.6     <none>        9080/TCP   21m

$> kubectl get pods
NAME                              READY   STATUS                  RESTARTS   AGE
details-v1-5c879644c7-vtb6g       0/2     Init:CrashLoopBackOff   12         21m
productpage-v1-59dff9bdf9-l2r2d   0/2     Init:CrashLoopBackOff   12         21m
ratings-v1-89485cb9c-vk58r        0/2     Init:CrashLoopBackOff   12         21m
reviews-v1-5db4f45f5d-ddqrm       0/2     Init:CrashLoopBackOff   12         21m
reviews-v2-575959b5b7-8gppt       0/2     Init:CrashLoopBackOff   12         21m
reviews-v3-79b65d46b4-zs865       0/2     Init:CrashLoopBackOff   12         21m

For some reason, the init container (istio-init) is crashing:

oc describe pod details-v1-5c879644c7-vtb6g

Name:       details-v1-5c879644c7-vtb6g
Namespace:  istio-test
Node:       localhost/192.168.64.13
Start Time: Sat, 30 Mar 2019 14:38:49 +0100
Labels:     app=details
        pod-template-hash=1743520073
        version=v1
Annotations:    openshift.io/scc=privileged
        sidecar.istio.io/status={"version":"b83fa303cbac0223b03f9fc5fbded767303ad2f7992390bfda6b9be66d960332","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:     Pending
IP:     172.17.0.24
Controlled By:  ReplicaSet/details-v1-5c879644c7
Init Containers:
  istio-init:
    Container ID:   docker://0d8b62ad72727f39d8a4c9278592c505ccbcd52ed8038c606b6256056a3a8d12
    Image:      docker.io/istio/proxy_init:1.1.1
    Image ID:       docker-pullable://docker.io/istio/proxy_init@sha256:5008218de88915f0b45930d69c5cdd7cd4ec94244e9ff3cfe3cec2eba6d99440
    Port:       <none>
    Args:
      -p
      15001
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      9080
      -d
      15020
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 30 Mar 2019 14:58:18 +0100
      Finished:     Sat, 30 Mar 2019 14:58:19 +0100
    Ready:      False
    Restart Count:  12
    Limits:
      cpu:  100m
      memory:   50Mi
    Requests:
      cpu:      10m
      memory:       10Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro)
Containers:
  details:
    Container ID:   
    Image:      istio/examples-bookinfo-details-v1:1.10.1
    Image ID:       
    Port:       9080/TCP
    State:      Waiting
      Reason:       PodInitializing
    Ready:      False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro)
  istio-proxy:
    Container ID:   
    Image:      docker.io/istio/proxyv2:1.1.1
    Image ID:       
    Port:       15090/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      details.$(POD_NAMESPACE)
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15010
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --concurrency
      2
      --controlPlaneAuthPolicy
      NONE
      --statusPort
      15020
      --applicationPorts
      9080
    State:      Waiting
      Reason:       PodInitializing
    Ready:      False
    Restart Count:  0
    Limits:
      cpu:  2
      memory:   128Mi
    Requests:
      cpu:  10m
      memory:   40Mi
    Readiness:  http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      POD_NAME:             details-v1-5c879644c7-vtb6g (v1:metadata.name)
      POD_NAMESPACE:            istio-test (v1:metadata.namespace)
      INSTANCE_IP:           (v1:status.podIP)
      ISTIO_META_POD_NAME:      details-v1-5c879644c7-vtb6g (v1:metadata.name)
      ISTIO_META_CONFIG_NAMESPACE:  istio-test (v1:metadata.namespace)
      ISTIO_META_INTERCEPTION_MODE: REDIRECT
      ISTIO_METAJSON_LABELS:        {"app":"details","version":"v1"}

    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro)
Conditions:
  Type          Status
  Initialized       False 
  Ready         False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  istio-envoy:
    Type:   EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium: Memory
  istio-certs:
    Type:   Secret (a volume populated by a Secret)
    SecretName: istio.default
    Optional:   true
  default-token-58j6f:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-58j6f
    Optional:   false
QoS Class:  Burstable
Node-Selectors: <none>
Tolerations:    node.kubernetes.io/memory-pressure:NoSchedule
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  23m       23m     1   default-scheduler                   Normal      Scheduled   Successfully assigned istio-test/details-v1-5c879644c7-vtb6g to localhost
  23m       23m     1   kubelet, localhost  spec.initContainers{istio-init} Normal      Pulling     pulling image "docker.io/istio/proxy_init:1.1.1"
  22m       22m     1   kubelet, localhost  spec.initContainers{istio-init} Normal      Pulled      Successfully pulled image "docker.io/istio/proxy_init:1.1.1"
  22m       21m     5   kubelet, localhost  spec.initContainers{istio-init} Normal      Created     Created container
  22m       21m     5   kubelet, localhost  spec.initContainers{istio-init} Normal      Started     Started container
  22m       21m     4   kubelet, localhost  spec.initContainers{istio-init} Normal      Pulled      Container image "docker.io/istio/proxy_init:1.1.1" already present on machine
  22m       17m     24  kubelet, localhost  spec.initContainers{istio-init} Warning     BackOff     Back-off restarting failed container
  9m        9m      1   kubelet, localhost                  Normal      SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  9m        8m      4   kubelet, localhost  spec.initContainers{istio-init} Normal      Pulled      Container image "docker.io/istio/proxy_init:1.1.1" already present on machine
  9m        8m      4   kubelet, localhost  spec.initContainers{istio-init} Normal      Created     Created container
  9m        8m      4   kubelet, localhost  spec.initContainers{istio-init} Normal      Started     Started container
  9m        3m      31  kubelet, localhost  spec.initContainers{istio-init} Warning     BackOff     Back-off restarting failed container

I can't see anything that gives a hint, apart from the exit code 1 and this status condition:

status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2019-03-30T13:38:50Z'
      message: 'containers with incomplete status: [istio-init]'
      reason: ContainersNotInitialized
      status: 'False'
      type: Initialized
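
To pull out just the init container's last state without the full describe output, a jsonpath query along these lines can be used (pod name taken from above):

kubectl -n istio-test get pod details-v1-5c879644c7-vtb6g \
    -o jsonpath='{.status.initContainerStatuses[0].lastState.terminated}'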

Update:

These are the logs from the istio-init init container:

kubectl -n istio-test logs -f details-v1-5c879644c7-m9k6q istio-init
Environment:
------------
ENVOY_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_MARK=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
Variables:
----------
PROXY_PORT=15001
INBOUND_CAPTURE_PORT=15001
PROXY_UID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=9080
INBOUND_PORTS_EXCLUDE=15020
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
KUBEVIRT_INTERFACES=
ENABLE_INBOUND_IPV6=
# Generated by iptables-save v1.6.0 on Sat Mar 30 22:21:52 2019
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:ISTIO_REDIRECT - [0:0]
COMMIT
# Completed on Sat Mar 30 22:21:52 2019
# Generated by iptables-save v1.6.0 on Sat Mar 30 22:21:52 2019
*filter
:INPUT ACCEPT [3:180]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3:120]
COMMIT
# Completed on Sat Mar 30 22:21:52 2019
+ iptables -t nat -N ISTIO_REDIRECT
+ iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001
iptables: No chain/target/match by that name.
+ dump
+ iptables-save
+ ip6tables-save
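
The "iptables: No chain/target/match by that name." line is the actual failure: the ISTIO_REDIRECT rule cannot be installed, which usually means istio-init is not running with enough privileges (NET_ADMIN / privileged) or the minishift VM is missing the required iptables NAT support. A quick sketch for checking what the pod actually got, using the pod name from the describe output above:

# Which SCC admitted the pod (shows up as an annotation; 'privileged' in the output above)
oc -n istio-test get pod details-v1-5c879644c7-vtb6g -o yaml | grep 'openshift.io/scc'

# Which securityContext the injector gave istio-init
kubectl -n istio-test get pod details-v1-5c879644c7-vtb6g \
    -o jsonpath='{.spec.initContainers[0].securityContext}'
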
Tags: kubernetes openshift istio minishift okd
1 Answer

I solved the problem by adding privileged: true to the securityContext of the istio-init container in the injected configuration:

  name: istio-init
  resources:
    limits:
      cpu: 100m
      memory: 50Mi
    requests:
      cpu: 10m
      memory: 10Mi
  securityContext:
    capabilities:
      add:
        - NET_ADMIN
    privileged: true
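
As a follow-up note: this edit only takes effect if the pod's service account is allowed to use an SCC that permits privileged containers (the describe output above shows openshift.io/scc=privileged, so that was already the case here). If it is not in your environment, the grant would look roughly like this, assuming the default service account in istio-test:

# Allow pods running as the default service account in istio-test to use the privileged SCC
oc adm policy add-scc-to-user privileged -z default -n istio-test

# Re-create the BookInfo pods so the new security context is applied
kubectl -n istio-test delete pods --all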