How to fix the recurring errors when trying to deploy a MicroK8s cluster with Ansible


I am trying to create a MicroK8s cluster on GCP VMs using Ansible. I want a three-node cluster: one master node and two worker nodes.

Here is the playbook I am using:

---
- name: Ansible playbook to create Microk8s Cluster
  hosts: kubenodes
  become: true

  tasks:

  - name: Update apt packages
    apt:
      upgrade: yes
      update_cache: yes

  - name: Install snap package installer
    apt:
      name: snapd
      state: present

  - name: Install Microk8s
    snap:
      name: microk8s
      classic: true

  - name: Add current user to microk8s group
    shell:
      cmd: "sudo usermod -a -G microk8s {{ ansible_user }}"

- hosts: "master"
  become: true
  vars:
    workers_count: "{{ groups.workers | length }}"
  tasks:   

  - name: Create join node command
    shell: /snap/bin/microk8s add-node
    register: join_token
    ignore_errors: true
    loop: "{{ range(1, num_iterations|int + 1 ) | list }}"
    vars:
      num_iterations: "{{ workers_count }}"

  - set_fact:
      microk8s_join_list: "{{ microk8s_join_list | default([]) + ['/snap/bin/' + item.stdout_lines[4]] }}"
    loop: "{{ join_token.results }}"

  - name: Add Cluster dashboard, Ingress and Cert Manager addon
    shell: /snap/bin/microk8s enable community ingress dashboard cert-manager 

  - name: Add Argocd addon
    shell: /snap/bin/microk8s enable argocd
        
  - name: Store dashboard token
    shell: /snap/bin/microk8s kubectl create token default
    register: dashboard_token
    ignore_errors: true

  - name: Save Kubernetes dashboard token
    set_fact:
      dashboard_token: "{{ dashboard_token.stdout }}"
  
  - name: Run command on worker nodes
    shell: "{{ item }}"
    delegate_to: "{{ groups.workers[ansible_loop.index0] }}"
    loop: "{{ microk8s_join_list }}"
    loop_control:
      extended: true

Here is my inventory:

[master]
35.224.xx.xx

[workers]
107.178.xx.xx
35.225.xx.xx

[kubenodes:children]
workers
master

Every task runs fine except the last one,

Run command on worker nodes

Here is the error I am currently getting:

failed: [35.224.xx.xx -> 107.178.xx.xx] (item=/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker) => {"ansible_loop": {"allitems": ["/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker"], "first": true, "index": 1, "index0": 0, "last": false, "length": 2, "nextitem": "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker", "revindex": 2, "revindex0": 1}, "ansible_loop_var": "item", "item": "/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname inventory_hostname: Name does not resolve", "unreachable": true}

failed: [35.224.xx.xx -> 35.225.xx.xx] (item=/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker) => {"ansible_loop": {"allitems": ["/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker"], "first": false, "index": 2, "index0": 1, "last": true, "length": 2, "previtem": "/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "revindex": 1, "revindex0": 0}, "ansible_loop_var": "item", "item": "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker", "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname inventory_hostname: Name does not resolve", "unreachable": true}

fatal: [35.224.xx.xx -> {{ groups.workers[ansible_loop.index0] }}]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop": {"allitems": ["/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker"], "first": true, "index": 1, "index0": 0, "last": false, "length": 2, "nextitem": "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker", "revindex": 2, "revindex0": 1}, "ansible_loop_var": "item", "item": "/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname inventory_hostname: Name does not resolve", "unreachable": true}, {"ansible_loop": {"allitems": ["/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker"], "first": false, "index": 2, "index0": 1, "last": true, "length": 2, "previtem": "/snap/bin/microk8s join 10.128.0.26:25000/62733ae8aca2493665dc2fb3c8a7ec2a/04e060b6d262 --worker", "revindex": 1, "revindex0": 0}, "ansible_loop_var": "item", "item": "/snap/bin/microk8s join 10.128.0.26:25000/4dbfa58f2a996760ab51e41369d718ef/04e060b6d262 --worker", "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname inventory_hostname: Name does not resolve", "unreachable": true}]}

Note that the playbook works when modified to target only a single worker node; with a multi-node cluster I get the errors above.

I tried creating a dummy node to hold the join commands, but I couldn't figure it out. I also tried using

debug

to print

groups.workers[ansible_loop.index0]

to check whether I was getting the worker IPs, and it did print them without any errors, but that only worked within the master's tasks.

kubernetes ansible microk8s
1 Answer

I believe

delegate_to

applies to a single host only (and is not re-evaluated on each loop iteration), so the error you are seeing with

groups.workers[ansible_loop.index0]

makes sense:

  - name: Run command on worker nodes
    shell: "{{ item }}"
    delegate_to: "{{ groups.workers[ansible_loop.index0] }}"
    loop: "{{ microk8s_join_list }}"
    loop_control:
      extended: true

Essentially, when you have all of the workers,

groups.workers[ansible_loop.index0]

translates to something like

107.178.xx.xx 35.225.xx.xx

which is not a resolvable hostname. With a single worker it works, because it resolves to something like

107.178.xx.xx

and that can be resolved.
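One way around this, assuming `delegate_to` is templated with the plain loop variable `item` (rather than the extended `ansible_loop` variables), is to zip the worker hosts together with the join commands so that each loop item carries its own target host. A sketch, not a tested fix:

```yaml
# Pair each worker host with its join command; item.0 is the
# worker's inventory host, item.1 is the join command for it.
- name: Run join command on the matching worker node
  shell: "{{ item.1 }}"
  delegate_to: "{{ item.0 }}"
  loop: "{{ groups.workers | zip(microk8s_join_list) | list }}"
```

This replaces the indexing via `ansible_loop.index0` entirely, so nothing in `delegate_to` depends on the extended loop variables.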

✌️
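Another option that avoids delegation entirely is a third play targeting the workers, where each worker looks up its own join command from the facts set on the master. Again a hedged sketch, assuming the master's facts are reachable through `hostvars`:

```yaml
# Run the join on the workers themselves instead of delegating
# from the master; each worker picks the command at its own index.
- hosts: workers
  become: true
  tasks:
    - name: Join this worker to the cluster
      shell: "{{ hostvars[groups.master[0]].microk8s_join_list[groups.workers.index(inventory_hostname)] }}"
```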
