InvalidParameterException: Addon version specified is not supported
I have been trying to deploy a self-managed node EKS cluster for a while now, without success. The error I am currently stuck on concerns the EKS add-ons:

Error: error creating EKS Add-On (DevOpsLabs2b-dev-test--eks:kube-proxy): InvalidParameterException: Addon version specified is not supported, AddonName: "kube-proxy", ClusterName: "DevOpsLabs2b-dev-test--eks", Message_: "Addon version specified is not supported" }
with module.eks-ssp-kubernetes-addons.module.aws_kube_proxy[0].aws_eks_addon.kube_proxy
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/aws-kube-proxy/main.tf line 19, in resource "aws_eks_addon" "kube_proxy":

The same error repeats for coredns, but ebs_csi_driver throws:

Error: unexpected EKS Add-On (DevOpsLabs2b-dev-test--eks:aws-ebs-csi-driver) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s) [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration

My main.tf looks like this:
terraform {
  backend "remote" {}
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
My eks.tf looks like this:
module "eks-ssp" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  # EKS CLUSTER
  tenant            = "DevOpsLabs2b"
  environment       = "dev-test"
  zone              = ""
  terraform_version = "Terraform v1.1.4"

  # EKS Cluster VPC and Subnet mandatory config
  vpc_id             = "xxx"
  private_subnet_ids = ["xxx", "xxx", "xxx", "xxx"]

  # EKS CONTROL PLANE VARIABLES
  create_eks         = true
  kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
  self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2b"
      subnet_ids             = ["xxx", "xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true # Enable only for public subnets
      pre_userdata           = <<-EOT
        yum install -y amazon-ssm-agent
        systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent
      EOT
      disk_size     = 10
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 0
      capacity_type = "" # Optional. Use this only for SPOT capacity as capacity_type = "spot"
      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }
      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }
      create_worker_security_group = false # Creates a dedicated sec group for this Node Group
    },
  }
}
module "eks-ssp-kubernetes-addons" {
  source         = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
  eks_cluster_id = module.eks-ssp.eks_cluster_id

  # EKS Addons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}
What exactly am I missing?

K8s is hard to get right sometimes. The examples on GitHub target version 1.21 [1]. So if you leave only this:
enable_amazon_eks_vpc_cni            = true
enable_amazon_eks_coredns            = true
enable_amazon_eks_kube_proxy         = true
enable_amazon_eks_aws_ebs_csi_driver = true

# K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server               = true
enable_cluster_autoscaler           = true
enable_aws_for_fluentbit            = true
enable_argocd                       = true
enable_ingress_nginx                = true
then by default the images downloaded will be the ones for K8s version 1.21, as can be seen in [2]. If you really need to use K8s version 1.19, you will have to find the corresponding Helm charts for that version. Here is an example of how to configure the required images [3]:
amazon_eks_coredns_config = {
  addon_name               = "coredns"
  addon_version            = "v1.8.4-eksbuild.1"
  service_account          = "coredns"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
However, the CoreDNS version here (addon_version = v1.8.4-eksbuild.1) works with K8s 1.21. To check which version you need for 1.19, go here [4]. TL;DR: the CoreDNS version you need to specify is 1.8.0. So for the add-on to work with 1.19, for CoreDNS (and the other add-ons whose versions are image-based) you need a block like this:
enable_amazon_eks_coredns = true
# followed by
amazon_eks_coredns_config = {
  addon_name               = "coredns"
  addon_version            = "v1.8.0-eksbuild.1"
  service_account          = "coredns"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
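The same pattern should apply to kube-proxy, whose image version must also match the cluster version. Here is a hedged sketch, assuming the module exposes an `amazon_eks_kube_proxy_config` variable analogous to the CoreDNS one and that `v1.19.6-eksbuild.2` is the 1.19-compatible build; verify both the variable name against the module's inputs and the version against the AWS add-on version tables [5]:

```hcl
enable_amazon_eks_kube_proxy = true
# Assumed variable name and version -- check the module's inputs and the
# AWS EKS documentation for the kube-proxy build that matches K8s 1.19.
amazon_eks_kube_proxy_config = {
  addon_name               = "kube-proxy"
  addon_version            = "v1.19.6-eksbuild.2"
  service_account          = "kube-proxy"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}
```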
For the other EKS add-ons, you can find more information here [5]. If you click the links in the Name column, they take you directly to the AWS EKS documentation, which lists the add-on image versions supported by the EKS versions AWS currently supports (1.17 - 1.21).

Last but not least, a friendly piece of advice: never configure the AWS provider by hard-coding the access key and secret access key in the provider block. Use named profiles [6], or just the default one. Instead of the block you currently have:
provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}
switch to:
provider "aws" {
  region  = "yourdefaultregion"
  profile = "yourprofilename"
}
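If you don't already have a named profile, one can be defined in the standard AWS shared credentials and config files (the profile name and `xxx` values below are placeholders); running `aws configure --profile yourprofilename` creates these entries interactively:

```ini
# ~/.aws/credentials
[yourprofilename]
aws_access_key_id     = xxx
aws_secret_access_key = xxx

# ~/.aws/config
[profile yourprofilename]
region = xxx
```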
[4] https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
[6] https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html