Terraform - How to avoid destroy and create with a single state file
I have Terraform code that creates a Stream Analytics job, along with an input and an output for that job.
Below is my Terraform code:
provider "azurerm" {
version = "=1.44"
}
resource "azurerm_stream_analytics_job" "test_saj" {
name = "test-stj"
resource_group_name = "myrgname"
location = "Southeast Asia"
compatibility_level = "1.1"
data_locale = "en-US"
events_late_arrival_max_delay_in_seconds = 60
events_out_of_order_max_delay_in_seconds = 50
events_out_of_order_policy = "Adjust"
output_error_policy = "Drop"
streaming_units = 3
tags = {
environment = "test"
}
transformation_query = var.query
}
resource "azurerm_stream_analytics_output_blob" "mpl_saj_op_jk_blob" {
name = var.saj_jk_blob_output_name
stream_analytics_job_name = "test-stj"
resource_group_name = "myrgname"
storage_account_name = "mystaname"
storage_account_key = "mystakey"
storage_container_name = "testupload"
path_pattern = myfolder/{day}"
date_format = "yyyy-MM-dd"
time_format = "HH"
depends_on = [azurerm_stream_analytics_job.test_saj]
serialization {
type = "Json"
encoding = "UTF8"
format = "LineSeparated"
}
}
resource "azurerm_stream_analytics_stream_input_eventhub" "mpl_saj_ip_eh" {
name = var.saj_joker_event_hub_name
stream_analytics_job_name = "test-stj"
resource_group_name = "myrgname"
eventhub_name = "myehname"
eventhub_consumer_group_name = "myehcgname"
servicebus_namespace = "myehnamespacename"
shared_access_policy_name = "RootManageSharedAccessKey"
shared_access_policy_key = "ehnamespacekey"
serialization {
type = "Json"
encoding = "UTF8"
}
depends_on = [azurerm_stream_analytics_job.test_saj]
}
Below is my tfvars input file:
query = <<EOT
myqueryhere
EOT
saj_jk_blob_output_name  = "outputtoblob01"
saj_joker_event_hub_name = "inputventhub01"
The creation works fine. Now my problem is: when I want to create a new input and a new output for the same Stream Analytics job, I change only the name values in the tfvars file and run terraform apply (in the same directory as the first apply, with the same state file).
Terraform replaces the existing input and output with the new ones, which is not what I want; I want to keep both the old and the new. This use case is satisfied when I import the existing Stream Analytics job with terraform import in a completely different folder, reusing the same code. But is there a way to do this without terraform import? Can it be done with the single state file itself?
State lets Terraform know which Azure resources to add, update, or delete. Unless you deploy resources with different names directly in the configuration files, you cannot achieve what you want with a single state file itself.
For example, if you want to create two virtual networks, you can either declare the resources directly like this, or use the count argument at the resource level to loop (a count sketch follows the example below).
resource "azurerm_virtual_network" "example" {
name = "examplevnet1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_virtual_network" "example" {
name = "examplevnet2"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.2.0.0/16"]
}
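As a minimal sketch of the count approach mentioned above (the naming scheme and address prefixes are illustrative assumptions), the same two networks could be declared with a single block:

resource "azurerm_virtual_network" "example" {
  # Creates examplevnet1 and examplevnet2 with 10.1.0.0/16 and 10.2.0.0/16.
  count               = 2
  name                = "examplevnet${count.index + 1}"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.${count.index + 1}.0.0/16"]
}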
When using Terraform in a team, you can use remote state to write the state data to a remote data store, which can then be shared between all members of the team. It's recommended to store Terraform state in Azure Storage.
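A minimal sketch of an azurerm backend block for keeping state in Azure Storage; the resource group, storage account, container, and key names below are placeholders, and the storage account must already exist:

terraform {
  backend "azurerm" {
    # Placeholder names; point these at an existing storage account and container.
    resource_group_name  = "tfstate-rg"
    storage_account_name = "mytfstatestorage"
    container_name       = "tfstate"
    key                  = "streamanalytics.terraform.tfstate"
  }
}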
For more details, you can review the Terraform workflow in this blog.
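Applied to the original question, one hedged option is to keep the existing output block and add a second azurerm_stream_analytics_output_blob resource with a different Terraform label and its own name variable, so Terraform creates a new output instead of replacing the old one (the label and variable name below are assumptions; the Event Hub input can be handled the same way):

resource "azurerm_stream_analytics_output_blob" "mpl_saj_op_jk_blob_02" {
  # Second output for the same job; only the Terraform label and the name differ.
  name                      = var.saj_jk_blob_output_name_02
  stream_analytics_job_name = "test-stj"
  resource_group_name       = "myrgname"
  storage_account_name      = "mystaname"
  storage_account_key       = "mystakey"
  storage_container_name    = "testupload"
  path_pattern              = "myfolder/{day}"
  date_format               = "yyyy-MM-dd"
  time_format               = "HH"
  depends_on                = [azurerm_stream_analytics_job.test_saj]

  serialization {
    type     = "Json"
    encoding = "UTF8"
    format   = "LineSeparated"
  }
}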