Basic networking via Kubernetes Services is not working in Minikube


I am running a cluster with:

  • 3 services deployed, one each for MongoDB, Postgres, and a REST server
  • The Mongo and Postgres services are ClusterIP, while the REST server uses NodePort
  • When I kubectl exec into the pods themselves, I can reach Mongo / Postgres using the docker network IP addresses
  • When I try to use the Kubernetes service IP address (as given by the ClusterIP on Minikube), I cannot get through

Here are some example commands that show the problem.

Shell in:

HOST$ kubectl exec -it my-system-mongo-54b8c75798-lptzq /bin/bash

Once inside, I connect to mongo using the docker network IP:

MONGO-POD# mongo mongodb://172.17.0.6
Welcome to the MongoDB shell.
> exit
bye

Now I try the Kubernetes service IP (DNS is working fine, since the name resolves to 10.96.154.36 as shown below):

MONGO-POD# mongo mongodb://my-system-mongo
MongoDB shell version v3.6.3
connecting to: mongodb://my-system-mongo
2020-01-03T02:39:55.883+0000 W NETWORK  [thread1] Failed to connect to 10.96.154.36:27017 after 5000ms milliseconds, giving up.
2020-01-03T02:39:55.903+0000 E QUERY    [thread1] Error: couldn't connect to server my-system-mongo:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

Ping does not work either:

MONGO-POD# ping my-system-mongo
PING my-system-mongo.default.svc.cluster.local (10.96.154.36) 56(84) bytes of data.
--- my-system-mongo.default.svc.cluster.local ping statistics ---
112 packets transmitted, 0 received, 100% packet loss, time 125365ms

My setup is Minikube 1.6.2 running Kubernetes 1.17, with Helm 3.0.2. Here is my complete (Helm-generated) dry-run YAML:
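(For reference, this output comes from a Helm dry run. The command below is a reconstruction rather than a copy of my shell history, and the chart path is assumed to be the local chart directory:)

HOST$ helm install ./mysystem --generate-name --dry-run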

NAME: mysystem-1578018793
LAST DEPLOYED: Thu Jan  2 18:33:13 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
HOOKS:
---
# Source: mysystem/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-system-test-connection"
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args:  ['my-system:']
  restartPolicy: Never
MANIFEST:
---
# Source: mysystem/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-system-configmap
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  _lots_of_key_value_pairs: here-I-shortened-it
---
# Source: mysystem/templates/my-system-mongo-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
  - port: 27107
    targetPort: 27017
    protocol: TCP
    name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
---
# Source: mysystem/templates/my-system-pg-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: postgres
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
---
# Source: mysystem/templates/my-system-restsrv-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: NodePort
  ports:
  #- port: 8009
  #  targetPort: 8009
  #  protocol: TCP
  #  name: jpda
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
---
# Source: mysystem/templates/my-system-mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: my-system-mongo-pod
        securityContext:
            {}
        image: private.hub.net/my-system-mongo:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: "mongo"
          containerPort: 27017
          protocol: TCP
        resources:
            {}
---
# Source: mysystem/templates/my-system-pg-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: mysystem
        securityContext:
            {}
        image: private.hub.net/my-system-pg:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: postgres
          containerPort: 5432
          protocol: TCP
        resources:
            {}
---
# Source: mysystem/templates/my-system-restsrv-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: mysystem
        securityContext:
            {}
        image: private.hub.net/my-system-restsrv:latest
        imagePullPolicy: Always
        envFrom:
          - configMapRef:
              name: my-system-configmap
        ports:
        - name: rest-server
          containerPort: 8080
          protocol: TCP
        #- name: "jpda"
        #  containerPort: 8009
        #  protocol: TCP
        resources:
            {}

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mysystem,app.kubernetes.io/instance=mysystem-1578018793" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

My best theory (arrived at partly after working through this) is that kube-proxy is not working correctly in Minikube, but I am not sure how to fix that. When I shell into minikube and grep journalctl for proxy:

# grep proxy journal.log
Jan 03 02:16:02 minikube sudo[2780]:   docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05808666 -0800 /var/lib/minikube/certs/proxy-client.crt
Jan 03 02:16:02 minikube sudo[2784]:   docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05908666 -0800 /var/lib/minikube/certs/proxy-client.key
Jan 03 02:16:15 minikube kubelet[2821]: E0103 02:16:15.423027    2821 reflector.go:156] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503466    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-n78g9" (UniqueName: "kubernetes.io/secret/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy-token-n78g9") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503965    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-xtables-lock") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.530948    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-lib-modules") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.538938    2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/50fbf70b-724a-4b76-af7f-5f4b91735c84/volumes/kubernetes.io~secret/kube-proxy-token-n78g9.
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670527    2821 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670670    2821 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\" (\"50fbf70b-724a-4b76-af7f-5f4b91735c84\")" failed. No retries permitted until 2020-01-03 02:16:17.170632812 +0000 UTC m=+13.192986021 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\") pod \"kube-proxy-pbs6s\" (UID: \"50fbf70b-724a-4b76-af7f-5f4b91735c84\") : failed to sync configmap cache: timed out waiting for the condition"

Although this does show some problems, I am not sure how to act on them or how to correct them.
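Checks along these lines should at least show whether kube-proxy itself is running and which configuration it picked up (the pod name is taken from the journal output above and may differ after a restart):

HOST$ kubectl -n kube-system get pods -l k8s-app=kube-proxy
HOST$ kubectl -n kube-system logs kube-proxy-pbs6s
HOST$ kubectl -n kube-system get configmap kube-proxy -o yaml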

UPDATE:

I found this while digging through the journal:

# grep conntrack journal.log
Jan 03 02:16:04 minikube kubelet[2821]: W0103 02:16:04.286682    2821 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.

I will look into conntrack next, although the minikube VM has neither yum nor apt!

docker kubernetes kubernetes-helm kubectl minikube
1 Answer

There is a typo in your mongodb service definition.

 - port: 27107
   targetPort: 27017

Change the port to 27017.
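With the typo in place, kube-proxy exposes the ClusterIP on 27107, while the mongo shell defaults to 27017 when given mongodb://my-system-mongo, so nothing answers on the port it actually tries. (The failed ping proves little either way; a ClusterIP is a virtual address and normally does not answer ICMP.) A corrected snippet, plus a quick way to see the mismatch, as a sketch:

  ports:
  - port: 27017        # service port clients connect to (was 27107)
    targetPort: 27017  # containerPort in the mongo deployment
    protocol: TCP
    name: mongo

HOST$ kubectl get svc my-system-mongo
(the PORT(S) column should read 27017/TCP rather than 27107/TCP)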
