Terraform Plan deleting AKS Node Pool

If we increase the number of worker nodes in the node pool, the Terraform plan always forces recreation of the AKS cluster.

I created an AKS cluster with 1 worker node through Terraform; everything went fine and the cluster came up and is running.

After that, I tried to add one more worker node to my AKS cluster, and the Terraform plan shows: 2 to add, 0 to change, 2 to destroy.

I am not sure how we can increase the worker nodes in the AKS node pool without it destroying the existing node pool.

  default_node_pool {
    name                  = var.nodepool_name
    vm_size               = var.instance_type
    orchestrator_version  = data.azurerm_kubernetes_service_versions.current.latest_version
    availability_zones    = var.zones
    enable_auto_scaling   = var.node_autoscalling
    node_count            = var.instance_count
    enable_node_public_ip = var.publicip
    vnet_subnet_id        = data.azurerm_subnet.subnet.id
    node_labels = {
      "node_pool_type"         = var.tags[0].node_pool_type
      "environment"            = var.tags[0].environment
      "nodepool_os"            = var.tags[0].nodepool_os
      "application"            = var.tags[0].application
      "manged_by"              = var.tags[0].manged_by
    }
}
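
For context, the node count of this pool comes from var.instance_count. The sketch below shows an assumed declaration for that variable (the type and default are illustrative, not taken from the original code); changing it from 1 to 2 is what produces the plan shown under Error below.

variable "instance_count" {
  description = "Number of worker nodes in the default node pool"
  type        = number
  default     = 1   # changing this to 2 triggers the destroy-and-recreate plan below
}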

Error

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # azurerm_kubernetes_cluster.aks_cluster must be replaced
-/+ resource "azurerm_kubernetes_cluster" "aks_cluster" {

Thanks, Satyan

I tested the same in my environment by creating a cluster with a node count of 2 and then changing it to 3 with the configuration shown below:

If you are using http_proxy_config, any change to that block forces a replacement by default, which is why the whole cluster is being replaced with the new configuration.

So, as a solution, you can add a lifecycle block to your code so that Terraform ignores changes to http_proxy_config instead of replacing the cluster:

  lifecycle {
    ignore_changes = [http_proxy_config]
  }

The complete code would be:

resource "azurerm_kubernetes_cluster" "aks_cluster" {
  name                       = "${var.global-prefix}-${var.cluster-id}-${var.envid}-azwe-aks-01"
  location                   = data.azurerm_resource_group.example.location
  resource_group_name        = data.azurerm_resource_group.example.name
  dns_prefix                 = "${var.global-prefix}-${var.cluster-id}-${var.envid}-azwe-aks-01"
  kubernetes_version         = var.cluster-version
  private_cluster_enabled    = var.private_cluster



  default_node_pool {
    name                  = var.nodepool_name
    vm_size               = var.instance_type
    orchestrator_version  = data.azurerm_kubernetes_service_versions.current.latest_version
    availability_zones    = var.zones
    enable_auto_scaling   = var.node_autoscalling
    node_count            = var.instance_count
    enable_node_public_ip = var.publicip
    vnet_subnet_id        = azurerm_subnet.example.id  
  }


# RBAC and Azure AD Integration Block
  role_based_access_control {
    enabled = true
  }

  http_proxy_config {
    http_proxy       = "http://xxxx"
    https_proxy      = "http://xxxx"
    no_proxy         = ["localhost","xxx","xxxx"]
  }
# Identity (System Assigned or Service Principal)
  identity {
    type = "SystemAssigned"
  }

# Add On Profiles
  addon_profile {
    azure_policy { enabled = true }
  }
# Network Profile
  network_profile {
    network_plugin = "azure"
    network_policy = "calico"
  }
  lifecycle {
    ignore_changes = [http_proxy_config]
  }
}
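
With the lifecycle block in place, scaling the pool should show up as an in-place update rather than a replacement. A minimal sketch of the only change needed, assuming the node count is supplied through a tfvars file (the file name and value are illustrative):

# terraform.tfvars (illustrative)
instance_count = 3   # was 2; only node_count changes, so the cluster is not replaced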