Invalid configuration: no configuration has been provided, try setting the KUBERNETES_MASTER environment variable

Problem description (votes: 0, answers: 0)


Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Error: Get "http://localhost/api/v1/namespaces/devcluster-ns": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

Error: Get "http://localhost/api/v1/namespaces/devcluster-ns/secrets/devcluster-storage-secret": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

I provisioned an Azure AKS cluster using Terraform. After the initial provisioning, I made some code changes. Other changes apply without issues, but when I increase or decrease the node count in `default_node_pool`, I run into the errors above.

Specifically, the kubernetes provider problem seems to occur when changing `os_disk_size_gb` or `node_count` in the `default_node_pool`.

resource "azurerm_kubernetes_cluster" "aks" {
  name                                = "${var.cluster_name}-aks"
  location                            = var.location
  resource_group_name                 = data.azurerm_resource_group.aks-rg.name 
  node_resource_group                 = "${var.rgname}-aksnode"  
  dns_prefix                          = "${var.cluster_name}-aks"
  kubernetes_version                  = var.aks_version
  private_cluster_enabled             = var.private_cluster_enabled
  private_cluster_public_fqdn_enabled = var.private_cluster_public_fqdn_enabled
  private_dns_zone_id                 = var.private_dns_zone_id

  default_node_pool {
    name                         = "syspool01"
    vm_size                      = var.agents_size
    os_disk_size_gb              = var.os_disk_size_gb
    node_count                   = var.agents_count
    vnet_subnet_id               = data.azurerm_subnet.subnet.id
    zones                        = [1, 2, 3]
    kubelet_disk_type            = "OS"
    os_sku                       = "Ubuntu"
    os_disk_type                 = "Managed"
    ultra_ssd_enabled            = false
    max_pods                     = var.max_pods
    only_critical_addons_enabled = true
    # enable_auto_scaling = true
    # max_count           = var.max_count
    # min_count           = var.min_count
  }
}
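For reference, this is roughly what the pool would look like with the commented-out autoscaling settings enabled (a sketch only, using the same `var.min_count`/`var.max_count` variables; with autoscaling on, AKS manages the node count between the bounds and `node_count` only seeds the initial size):

```hcl
  # Sketch: default_node_pool with the cluster autoscaler enabled.
  # node_count is managed by AKS once enable_auto_scaling is true.
  default_node_pool {
    name                = "syspool01"
    vm_size             = var.agents_size
    vnet_subnet_id      = data.azurerm_subnet.subnet.id
    enable_auto_scaling = true
    min_count           = var.min_count
    max_count           = var.max_count
  }
```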

I specified the kubernetes and helm providers in provider.tf as shown below, and the first deployment worked without problems:

data "azurerm_kubernetes_cluster" "credentials" {
  name                = azurerm_kubernetes_cluster.aks.name
  resource_group_name = data.azurerm_resource_group.aks-rg.name
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
    # username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
    # password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
  }
}
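One commonly suggested variant (a sketch, not verified against this exact setup) is to configure the provider from the `azurerm_kubernetes_cluster` resource's own outputs instead of going through a separate data source, so there is no extra lookup that can return stale or empty credentials while the cluster is being modified:

```hcl
# Sketch: read credentials directly from the AKS resource rather than
# a data source; the data "azurerm_kubernetes_cluster" block is not needed.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}
```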

terraform refresh, terraform plan, and terraform apply should all run normally.

Another thing I found: if I add `config_path = "~/.kube/config"` to the provider block in provider.tf, it seems to work fine. But what I want is for it to work without specifying `~/.kube/config`:

provider "kubernetes" {
  config_path            = "~/.kube/config"
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
}

When I run terraform plan, terraform refresh, or terraform apply, shouldn't the `data.azurerm_kubernetes_cluster.credentials` data source be read? Why can't I see that happening?

Tags: azure, kubernetes, terraform, localhost, kubernetes-helm