Deploying AWS Load Balancer Controller on EKS with Terraform
I am trying to deploy the aws-load-balancer-controller on Kubernetes. I have the following Terraform code:
resource "kubernetes_deployment" "ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
app.kubernetes.io/managed-by = "terraform"
}
}
spec {
replicas = 1
selector {
match_labels = {
app.kubernetes.io/name = "alb-ingress-controller"
}
}
strategy {
type = "Recreate"
}
template {
metadata {
labels = {
app.kubernetes.io/name = "alb-ingress-controller"
app.kubernetes.io/version = "v2.2.3"
}
}
spec {
dns_policy = "ClusterFirst"
restart_policy = "Always"
service_account_name = kubernetes_service_account.ingress.metadata[0].name
termination_grace_period_seconds = 60
container {
name = "alb-ingress-controller"
image = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
image_pull_policy = "Always"
args = [
"--ingress-class=alb",
"--cluster-name=${local.k8s[var.env].esk_cluster_name}",
"--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
"--aws-region=${local.k8s[var.env].region}"
]
volume_mount {
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
name = kubernetes_service_account.ingress.default_secret_name
read_only = true
}
}
volume {
name = kubernetes_service_account.ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.ingress.default_secret_name
}
}
}
}
}
depends_on = [kubernetes_cluster_role_binding.ingress]
}
resource "kubernetes_ingress" "app" {
metadata {
name = "owncloud-lb"
namespace = "fargate-node"
annotations = {
"kubernetes.io/ingress.class" = "alb"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
labels = {
"app" = "owncloud"
}
}
spec {
backend {
service_name = "owncloud-service"
service_port = 80
}
rule {
http {
path {
path = "/"
backend {
service_name = "owncloud-service"
service_port = 80
}
}
}
}
}
depends_on = [kubernetes_service.app]
}
This works as required with version 1.9. Once I upgrade to version 2.2.3, the pods fail to update and the pod reports the following error:

{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}

I have read the documentation for the upgrade and amended the IAM policy, but it also mentions:

updating the TargetGroupBinding CRDs

and I am not sure how to do that with Terraform. If I try the deployment on a fresh cluster (i.e. not an upgrade from 1.9), I get the same error.
With your Terraform code you apply the Deployment and Ingress resources, but you also have to add the CustomResourceDefinitions for the TargetGroupBinding custom resource.
This is described in the Load Balancer Controller installation documentation under "Add controller to cluster", which gives examples for both Helm and Kubernetes YAML.
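For reference, a minimal sketch of driving the Helm route from Terraform could look like the following. It assumes the hashicorp/helm provider is already configured against your cluster and reuses the service account and locals from your code; depending on the chart version, the TargetGroupBinding CRDs are either bundled with the chart or need to be applied first as described in the installation docs.

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  # Cluster name, reusing the local already defined in your configuration.
  set {
    name  = "clusterName"
    value = local.k8s[var.env].esk_cluster_name
  }

  # Reuse the existing IRSA-enabled service account instead of letting the
  # chart create a new one.
  set {
    name  = "serviceAccount.create"
    value = "false"
  }

  set {
    name  = "serviceAccount.name"
    value = kubernetes_service_account.ingress.metadata[0].name
  }
}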
Terraform has beta support for applying CRDs, including an example of deploying a CustomResourceDefinition.
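As a sketch of that approach (assuming a provider version that exposes the kubernetes_manifest resource), the CRD could be applied from Terraform roughly like this. The local path below is hypothetical and assumes you have downloaded the TargetGroupBinding CRD YAML from the aws-load-balancer-controller repository and saved it as a single-document file:

resource "kubernetes_manifest" "target_group_binding_crd" {
  # Hypothetical local copy of the TargetGroupBinding CRD, downloaded from the
  # aws-load-balancer-controller repository and saved as one YAML document.
  manifest = yamldecode(file("${path.module}/crds/targetgroupbindings.yaml"))
}

The controller Deployment can then list this resource in its depends_on (next to kubernetes_cluster_role_binding.ingress) so the CRD exists before the controller starts looking for it.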