Setting resource quotas on Kubernetes objects


I am exploring resource quotas in Kubernetes. My problem statement is this: someone accidentally writes a very large memory limit value, say 10Gi, which ends up triggering unwanted autoscaling.

I want to put a cap on these values. I have been reading about limit ranges (https://kubernetes.io/docs/concepts/policy/limit-range/) and resource quotas per PriorityClass (https://kubernetes.io/docs/concepts/policy/resource-quotas/). I want to restrict the memory and CPU limit/request values of Pods/containers. What is the best practice or recommendation for this kind of use case?
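
For reference, something like the following LimitRange is what I have in mind to cap per-container limits (an untested sketch; the name, namespace, and values are placeholders):

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-cpu-caps         # hypothetical name
  namespace: my-namespace    # hypothetical namespace
spec:
  limits:
    - type: Container
      max:                   # containers asking for limits above these are rejected at admission
        cpu: "2"
        memory: 2Gi
      default:               # limit applied when a container does not set one
        cpu: 500m
        memory: 512Mi
      defaultRequest:        # request applied when a container does not set one
        cpu: 250m
        memory: 256Mi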

kubernetes amazon-eks kubernetes-pod kubernetes-namespace
2 Answers
1 vote

If you use Terraform and EKS Blueprints, you can define a quota per team as described here:

  # EKS Application Teams

  application_teams = {
    # First Team
    team-blue = {
      "labels" = {
        "appName"     = "example",
        "projectName" = "example",
        "environment" = "example",
        "domain"      = "example",
        "uuid"        = "example",
      }
      "quota" = {
        "requests.cpu"    = "1000m",
        "requests.memory" = "4Gi",
        "limits.cpu"      = "2000m",
        "limits.memory"   = "8Gi",
        "pods"            = "10",
        "secrets"         = "10",
        "services"        = "10"
      }
      manifests_dir = "./manifests"
      # Below are examples of IAM users and roles
      users = [
        "arn:aws:iam::123456789012:user/blue-team-user",
        "arn:aws:iam::123456789012:role/blue-team-sso-iam-role"
      ]
    }

    # Second Team
    team-red = {
      "labels" = {
        "appName"     = "example2",
        "projectName" = "example2",
      }
      "quota" = {
        "requests.cpu"    = "2000m",
        "requests.memory" = "8Gi",
        "limits.cpu"      = "4000m",
        "limits.memory"   = "16Gi",
        "pods"            = "20",
        "secrets"         = "20",
        "services"        = "20"
      }
      manifests_dir = "./manifests2"
      users = [

        "arn:aws:iam::123456789012:role/other-sso-iam-role"
      ]
    }
  }
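
For each team, the blueprint turns the quota map into a ResourceQuota in that team's namespace. As far as I understand, for team-blue the resulting object is roughly equivalent to the following (a sketch; the object name and namespace are assumptions about what the module creates):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quotas               # assumed name; the module may use a different one
  namespace: team-blue
spec:
  hard:
    requests.cpu: 1000m
    requests.memory: 4Gi
    limits.cpu: 2000m
    limits.memory: 8Gi
    pods: "10"
    secrets: "10"
    services: "10"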

In my case, I created per-namespace quotas for each cluster in vars.yaml and added them with a for expression:

main.tf

locals {
  app_namespaces         = var.app_namespaces
}

...

application_teams = {
 for name, values in local.app_namespaces : name => {
   quota = values.quota
  }
} 

values.yaml

app_namespaces:
  backend:
    roles:
      - Backend-Engineers
    quota:
      requests.cpu: 1000m
      requests.memory: 4Gi
      limits.cpu: 2000m
      limits.memory: 8Gi
      pods: 10
      secrets: 10
      services: 10

0 votes

To control the amount of memory a namespace can use, you should use a ResourceQuota; you can read this article about resource quotas. It introduces the sxquotas open-source project, which helps you create and adjust resource quotas based on current consumption. The tool lets you do the following:

echo "---create a initial resource quota"
sxquotas create myquota stack-minimal
echo "---Adjust and add the double of capacity"
sxquotas adjust myquota 200%

In this example, if your application currently consumes 1Gi of requests.memory, the adjusted resource quota will allow requests.memory to grow to at most 2Gi. The same applies to every resource defined in the myquota ResourceQuota.
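
To illustrate what that means in Kubernetes terms, a quota doubled from an observed 1Gi of requests.memory would end up looking roughly like this (an illustrative sketch, not the literal output of sxquotas):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
spec:
  hard:
    requests.memory: 2Gi     # 200% of the observed 1Gi consumption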
