Azure AKS cluster - additional disk storage for each agent

Is it possible to provision an AKS cluster through Terraform where each agent VM has additional disk storage attached?

Right now PV/PVC backed by the SMB protocol is a joke, so my plan is to use rook or glusterfs instead. However, I would like to provision the cluster through Terraform in such a way that each of my nodes also carries an appropriate amount of storage, rather than creating separate dedicated nodes just to handle it.

Regards.

I'm afraid you cannot achieve this in AKS through Terraform. AKS is a managed service, so many low-level customizations are simply not exposed to you.

For your requirement, I suggest using aks-engine, where you manage the cluster yourself, including the master nodes. You can set the diskSizesGB property in agentPoolProfiles. Here is its description:

Describes an array of up to 4 attached disk sizes. Valid disk size values are between 1 and 1024.

More details are in the clusterdefinitions documentation, which also includes an example for diskSizesGB.
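
For reference, a minimal sketch of what the agent pool section of an aks-engine API model could look like; the pool name, count and VM size below are placeholders, only diskSizesGB is the property discussed above:

{
  "apiVersion": "vlabs",
  "properties": {
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "diskSizesGB": [128, 128]
      }
    ]
  }
}

Per the description quoted above, each value in diskSizesGB becomes one attached data disk on every agent in the pool, with at most 4 entries allowed.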

Since running aks-engine just to get some extra storage felt like a waste of resources to me, I eventually found a way to add an extra disk to every AKS node purely through Terraform:

resource "azurerm_subnet" "subnet" {
  name                = "aks-subnet-${var.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  # Uncomment this line if you use terraform < 0.12
  # network_security_group_id = "${azurerm_network_security_group.sg.id}"
  address_prefix            = "10.1.0.0/24"
  virtual_network_name      = "${azurerm_virtual_network.network.name}"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  ...
  agent_pool_profile {
    ...
    count           = "${var.kubernetes_agent_count}"
    vnet_subnet_id  = "${azurerm_subnet.subnet.id}"
  }
}

# Find each agent VM by name: the subnet's ip_configurations expose the agents' NIC
# IDs, and stripping "nic-" from the NIC name segment of those IDs yields the VM names.
data "azurerm_virtual_machine" "aks-node" {
  # One data source instance per node created for the AKS cluster.
  count = "${var.kubernetes_agent_count}"
  name  = distinct([for x in azurerm_subnet.subnet.ip_configurations : replace(element(split("/", x), 8), "/nic-/", "")])[count.index]

  resource_group_name = azurerm_kubernetes_cluster.cluster.node_resource_group
  depends_on = [
    azurerm_kubernetes_cluster.cluster
  ]
}

# Create one managed disk per AKS agent node:
resource "azurerm_managed_disk" "aks-extra-disk" {
  count                = "${var.kubernetes_agent_count}"
  name                 = "${azurerm_kubernetes_cluster.cluster.name}-disk-${count.index}"
  location             = "${azurerm_kubernetes_cluster.cluster.location}"
  resource_group_name  = "${azurerm_kubernetes_cluster.cluster.resource_group_name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 10
}

# Attach one of the disks to each agent node:
resource "azurerm_virtual_machine_data_disk_attachment" "aks-disk-attachment" {
  count              = "${var.kubernetes_agent_count}"
  managed_disk_id    = "${azurerm_managed_disk.aks-extra-disk[count.index].id}"
  virtual_machine_id = "${data.azurerm_virtual_machine.aks-node[count.index].id}"
  lun                = "10"
  caching            = "ReadWrite"
}
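
The snippets above assume two input variables; a minimal, hypothetical definition for them could look like this:

# Hypothetical variables assumed by the snippets above.
variable "name" {
  description = "Suffix appended to the subnet name"
  default     = "demo"
}

variable "kubernetes_agent_count" {
  description = "Number of AKS agent nodes (and therefore extra disks)"
  default     = 3
}

After terraform apply, every agent VM ends up with one extra 10 GB managed disk attached at LUN 10, which rook or glusterfs can then use for storage without dedicating separate nodes to it.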