helm init error: error installing: deployments.extensions is forbidden when run inside GitLab Runner

Problem description (votes: 1, answers: 1)

I have GitLab (11.8.1, self-hosted) connected to a self-hosted K8s cluster (1.13.4). There are three projects in GitLab, named shipment, authentication_service, and shipment_mobile_service.

All projects are added with the same K8s configuration except for the project namespace.

When installing Helm Tiller and GitLab Runner from the GitLab UI, the first project succeeds.

For the second and third projects, only Helm Tiller installs successfully; the GitLab Runner installation fails with the following log in the runner-install pod:

 Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
+ sleep 1s
+ echo 'Retrying (30)...'
+ helm repo add runner https://charts.gitlab.io
Retrying (30)...
"runner" has been added to your repositories
+ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "runner" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
+ helm upgrade runner runner/gitlab-runner --install --reset-values --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem --version 0.2.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/runner/config/values.yaml
Error: UPGRADE FAILED: remote error: tls: bad certificate 

I don't use the K8s cluster integration in the first project's gitlab-ci; it is only set up for the second and third projects. The strange thing is that with the same helm-data (only the name differs), the second project deploys successfully but the third does not.

Because only one GitLab runner is available (the one from the first project), I assigned it to the second and third projects.

I use this gitlab-ci.yml for both projects; only the release name in the helm upgrade command differs.

stages:
  - test
  - build
  - deploy

variables:
  CONTAINER_IMAGE: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:${CI_PIPELINE_ID}
  CONTAINER_IMAGE_LATEST: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:latest
  CI_REGISTRY: dockerhub.linhnh.vn
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375 # required when using dind

# the test and build stages use docker:dind and succeed

deploy_beta:
  stage: deploy
  image: alpine/helm
  script:
    - echo "Deploy test start ..."
    - helm init --upgrade
    - helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
    - echo "Deploy test completed!"
  environment:
    name: staging
  tags: ["kubernetes_beta"]
  only:
  - master
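
For context, the helm upgrade step points at a local chart in ./helm-data. The real chart is not reproduced in this post; purely as a hypothetical sketch, a chart driven by --set image.tag would be laid out roughly like this:

 # Hypothetical layout only; the actual ./helm-data contents are not shown here.
 helm-data/
   Chart.yaml           # chart name and version
   values.yaml          # defaults, including image.repository and image.tag
   templates/
     deployment.yaml    # uses {{ .Values.image.repository }}:{{ .Values.image.tag }}
     service.yaml       # the NodePort service exposing the app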

The actual helm-data chart is very simple, so I won't paste it here. This is the log from the second project's successful deployment:

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image linkyard/docker-helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Running on runner-xrmajzy2-project-15-concurrent-0x2bms via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning into '/root/authentication_service'...
Cloning repository...
Checking out 5068bf1f as master...
Skipping Git submodules setup
$ echo "Deploy start ...."
Deploy start ....
$ helm init --upgrade --dry-run --debug
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...
$ helm upgrade --install --force authentication-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
WARNING: Namespace "gitlab-managed-apps" doesn't match with previous. Release will be deployed to default
Release "authentication-service" has been upgraded. Happy Helming!
LAST DEPLOYED: Tue Mar 26 05:27:51 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                    READY  UP-TO-DATE  AVAILABLE  AGE
authentication-service  1/1    1           1          17d

==> v1/Pod(related)
NAME                                    READY  STATUS       RESTARTS  AGE
authentication-service-966c997c4-mglrb  0/1    Pending      0         0s
authentication-service-966c997c4-wzrkj  1/1    Terminating  0         49m

==> v1/Service
NAME                    TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
authentication-service  NodePort  10.108.64.133  <none>       80:31340/TCP  17d


NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services authentication-service)
  echo http://$NODE_IP:$NODE_PORT
$ echo "Deploy completed"
Deploy completed
Job succeeded

The third project fails:

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image alpine/helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Running on runner-xrmajzy2-project-18-concurrent-0bv4bx via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning repository...
Cloning into '/canhnv5/shipmentmobile'...
Checking out 278cbd3d as master...
Skipping Git submodules setup
$ echo "Deploy test start ..."
Deploy test start ...
$ helm init --upgrade
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Error: error installing: deployments.extensions is forbidden: User "system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account" cannot create resource "deployments" in API group "extensions" in the namespace "kube-system"
ERROR: Job failed: command terminated with exit code 1

I can see that both use the same runner XrmajZY2 that I installed from the first project, and the same K8s namespace gitlab-managed-apps.
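
One way to check what a job's service account is actually allowed to do (a debugging sketch only, assuming a kubectl binary is available in the job image, which alpine/helm does not ship by default):

 # Ask the API server whether this identity may create Tiller's Deployment
 # in kube-system, which is exactly what `helm init --upgrade` attempts.
 kubectl auth can-i create deployments.extensions --namespace kube-system
 # prints "yes" or "no"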

I think they run in privileged mode, but I don't understand why the second project gets the right permissions while the third does not. Should I create the user system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account and bind it to cluster-admin?
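
For reference, binding that exact service account to cluster-admin would be a ClusterRoleBinding roughly like the one below (a sketch only, with the names taken from the error message above; the accepted answer instead goes through a dedicated gitlab-admin account):

 # Sketch: grants cluster-admin to the service account named in the error.
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: shipment-mobile-service-admin   # hypothetical binding name
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: shipment-mobile-service-service-account
   namespace: shipment-mobile-service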

Thanks to @cookiedough's instructions, this is what I did:

  • Forked canhv5/shipment-mobile-service into my root account as root/shipment-mobile-service.
  • Deleted the (empty) gitlab-managed-apps namespace and ran kubectl delete -f gitlab-admin-service-account.yaml.
  • Applied that file again, then retrieved the token as in @cookiedough's guide below.
  • Went back to root/shipment-mobile-service in GitLab, removed the previous cluster, and added it again with the new token. Installed Helm Tiller and then GitLab Runner from the GitLab UI.
  • Re-ran the job and the magic happened. I am still not sure, though, why canhv5/shipment-mobile-service kept hitting the same error.
kubernetes gitlab-ci-runner kubernetes-helm
1 Answer (2 votes)

Before you do the following, delete the gitlab-managed-apps namespace:

kubectl delete namespace gitlab-managed-apps

As the GitLab tutorial explains, you need to create a serviceaccount and a clusterrolebinding for GitLab, and you need the secret that is created as a result in order to connect your project to the cluster.

Create a file called gitlab-admin-service-account.yaml with these contents:

 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: gitlab-admin
   namespace: kube-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: gitlab-admin
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: gitlab-admin
   namespace: kube-system

Apply the service account and cluster role binding to your cluster:

kubectl apply -f gitlab-admin-service-account.yaml

Output:

 serviceaccount "gitlab-admin" created
 clusterrolebinding "gitlab-admin" created

Retrieve the token for the gitlab-admin service account:

 kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')

Copy the <authentication_token> value from the output:

Name:         gitlab-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=gitlab-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>

Follow the rest of this tutorial to connect your cluster to the project; otherwise you will have to piece the same things together yourself, which is far more painful!
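
When adding the cluster in the GitLab UI you will also be asked for the API URL and the CA certificate. A quick sketch of how to pull both out (assuming the same gitlab-admin secret as above):

 # API URL: the address printed for the Kubernetes master
 kubectl cluster-info | grep 'Kubernetes master'

 # CA certificate, base64-decoded from the gitlab-admin token secret
 kubectl -n kube-system get secret \
   $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') \
   -o jsonpath="{['data']['ca\.crt']}" | base64 --decode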
