Terraform plan shows differences after importing the aws resources
This is a follow-up to my original question:
Some background: We have Terraform code that creates various AWS resources. Some of these resources are created once per AWS account and were therefore structured to live in an account-scope folder of our project. That was when we had only one AWS region. Our application is now multi-region, so these resources will be created for every AWS account in every region.
To do that we have moved these TF scripts into a region-scope folder that is run once per region. Since these resources are no longer part of the account scope, we removed them from the account-scope Terraform state. Now I am trying to import these resources into the region-scope state.
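(The removal from the account-scope state would typically be done with terraform state rm; a minimal sketch, assuming the same resource addresses as the imports below and run from the old account-scope directory. state rm only drops the entries from the state, it does not touch anything in AWS:)
terraform state rm module.buckets.random_id.cloudtrail_bucket_suffix
terraform state rm module.buckets.aws_s3_bucket.cloudtrail_logging_bucket
terraform state rm module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket
terraform state rm module.buckets.module.access_logging_bucket.aws_s3_bucket.default
terraform state rm module.buckets.module.access_logging_bucket.random_id.bucket_suffix
terraform state rm module.encryption.module.data_key.aws_iam_policy.decrypt
terraform state rm module.encryption.module.data_key.aws_iam_policy.encrypt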
My imports (run from the xyz-region-scope directory) and terraform plan:
terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
terraform import module.buckets.aws_s3_bucket.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import module.buckets.module.access_logging_bucket.aws_s3_bucket.default "ab-xyz-stage-access-logging-9d8e94ff"
terraform import module.buckets.module.access_logging_bucket.random_id.bucket_suffix nY6U_w
terraform import module.encryption.module.data_key.aws_iam_policy.decrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt"
terraform import module.encryption.module.data_key.aws_iam_policy.encrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt"
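(A quick sanity check that the imports landed in the new region-scope state, before planning; a sketch that assumes the same working directory and TF_DATA_DIR setup as the wrapper script below, and only reads the state:)
terraform state list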
mycompanymachine:xyz-region-scope kuldeepjain$ ../scripts/terraform.sh xyz-stage plan -no-color
+ set -o posix
+ IFS='
'
++ blhome
+ BASH_LIB_HOME=/usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT
+ source /usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT/s3/bucket.sh
+ main xyz-stage plan -no-color
+ '[' 3 -lt 2 ']'
+ local env=xyz-stage
+ shift
+ local command=plan
+ shift
++ get_region xyz-stage
++ local env=xyz-stage
++ shift
+++ aws --profile xyz-stage configure get region
++ local region=us-west-2
++ '[' -z us-west-2 ']'
++ echo us-west-2
+ local region=us-west-2
++ _get_bucket xyz-stage xyz-stage-tfstate
++ local env=xyz-stage
++ shift
++ local name=xyz-stage-tfstate
++ shift
+++ _get_bucket_list xyz-stage xyz-stage-tfstate
+++ local env=xyz-stage
+++ shift
+++ local name=xyz-stage-tfstate
+++ shift
+++ aws --profile xyz-stage --output json s3api list-buckets --query 'Buckets[?contains(Name, `xyz-stage-tfstate`) == `true`].Name'
++ local 'bucket_list=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ _count_buckets_in_json '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ local 'json=[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ shift
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq '. | length'
++ local number_of_buckets=1
++ '[' 1 == 0 ']'
++ '[' 1 -gt 1 ']'
+++ echo '[
"ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq -r '.[0]'
++ local bucket_name=ab-xyz-stage-tfstate-5b8873b8
++ echo ab-xyz-stage-tfstate-5b8873b8
+ local tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8
++ get_config_file xyz-stage us-west-2
++ local env=xyz-stage
++ shift
++ local region=us-west-2
++ shift
++ local config_file=config/us-west-2/xyz-stage.tfvars
++ '[' '!' -f config/us-west-2/xyz-stage.tfvars ']'
++ config_file=config/us-west-2/default.tfvars
++ echo config/us-west-2/default.tfvars
+ local config_file=config/us-west-2/default.tfvars
+ export TF_DATA_DIR=state/xyz-stage/
+ TF_DATA_DIR=state/xyz-stage/
+ terraform get
+ terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
module.encryption.module.data_key.data.null_data_source.key: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.dependencies: Refreshing state...
module.buckets.data.template_file.dependencies: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_kms_alias.default: Refreshing state...
module.buckets.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_region.current: Refreshing state...
module.encryption.module.data_key.data.aws_caller_identity.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_kms_alias.encryption_key_alias: Refreshing state...
module.buckets.module.access_logging_bucket.random_id.bucket_suffix: Refreshing state... [id=nY6U_w]
module.buckets.module.access_logging_bucket.aws_s3_bucket.default: Refreshing state... [id=ab-xyz-stage-access-logging-9d8e94ff]
module.buckets.random_id.cloudtrail_bucket_suffix: Refreshing state... [id=cqLFzQ]
module.buckets.module.access_logging_bucket.data.template_file.encryption_configuration: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.encrypt: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.decrypt: Refreshing state...
module.encryption.module.data_key.aws_iam_policy.decrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt]
module.encryption.module.data_key.aws_iam_policy.encrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt]
module.buckets.aws_s3_bucket.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail: Refreshing state...
module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
<= read (data resources)
Terraform will perform the following actions:
# module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "restrict_access_cloudtrail" {
+ id = (known after apply)
+ json = (known after apply)
+ statement {
+ actions = [
+ "s3:GetBucketAcl",
]
+ effect = "Allow"
+ resources = [
+ "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd",
]
+ sid = "AWSCloudTrailAclCheck"
+ principals {
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
}
}
+ statement {
+ actions = [
+ "s3:PutObject",
]
+ effect = "Allow"
+ resources = [
+ "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd/*",
]
+ sid = "AWSCloudTrailWrite"
+ condition {
+ test = "StringEquals"
+ values = [
+ "bucket-owner-full-control",
]
+ variable = "s3:x-amz-acl"
}
+ principals {
+ identifiers = [
+ "cloudtrail.amazonaws.com",
]
+ type = "Service"
}
}
}
# module.buckets.aws_s3_bucket.cloudtrail_logging_bucket will be updated in-place
~ resource "aws_s3_bucket" "cloudtrail_logging_bucket" {
+ acl = "private"
arn = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd"
bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
bucket_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.amazonaws.com"
bucket_regional_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.us-west-2.amazonaws.com"
+ force_destroy = false
hosted_zone_id = "Z3BJ6K6RIION7M"
id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
region = "us-west-2"
request_payer = "BucketOwner"
tags = {
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Cloudtrail logging bucket"
}
lifecycle_rule {
abort_incomplete_multipart_upload_days = 0
enabled = true
id = "intu-lifecycle-s3-int-tier"
tags = {}
transition {
days = 32
storage_class = "INTELLIGENT_TIERING"
}
}
logging {
target_bucket = "ab-xyz-stage-access-logging-9d8e94ff"
target_prefix = "logs/cloudtrail-logging/"
}
versioning {
enabled = false
mfa_delete = false
}
}
# module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket will be updated in-place
~ resource "aws_s3_bucket_policy" "cloudtrail_logging_bucket" {
bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
id = "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
~ policy = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetBucketAcl"
- Effect = "Allow"
- Principal = {
- Service = "cloudtrail.amazonaws.com"
}
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd"
- Sid = "AWSCloudTrailAclCheck"
},
- {
- Action = "s3:PutObject"
- Condition = {
- StringEquals = {
- s3:x-amz-acl = "bucket-owner-full-control"
}
}
- Effect = "Allow"
- Principal = {
- Service = "cloudtrail.amazonaws.com"
}
- Resource = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd/*"
- Sid = "AWSCloudTrailWrite"
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
}
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default will be updated in-place
~ resource "aws_s3_bucket" "default" {
+ acl = "log-delivery-write"
arn = "arn:aws:s3:::ab-xyz-stage-access-logging-9d8e94ff"
bucket = "ab-xyz-stage-access-logging-9d8e94ff"
bucket_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.amazonaws.com"
bucket_regional_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.us-west-2.amazonaws.com"
+ force_destroy = false
hosted_zone_id = "Z3BJ6K6RIION7M"
id = "ab-xyz-stage-access-logging-9d8e94ff"
region = "us-west-2"
request_payer = "BucketOwner"
tags = {
"mycompany:finance:accountenvironment" = "xyz-stage"
"mycompany:finance:application" = "ab-platform"
"mycompany:finance:billablebusinessunit" = "my-dev"
"name" = "Access logging bucket"
}
- grant {
- permissions = [
- "READ_ACP",
- "WRITE",
] -> null
- type = "Group" -> null
- uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" -> null
}
- grant {
- id = "0343271a8c2f184152c171b223945b22ceaf5be5c9b78cf167660600747b5ad8" -> null
- permissions = [
- "FULL_CONTROL",
] -> null
- type = "CanonicalUser" -> null
}
- lifecycle_rule {
- abort_incomplete_multipart_upload_days = 0 -> null
- enabled = true -> null
- id = "intu-lifecycle-s3-int-tier" -> null
- tags = {} -> null
- transition {
- days = 32 -> null
- storage_class = "INTELLIGENT_TIERING" -> null
}
}
versioning {
enabled = false
mfa_delete = false
}
}
Plan: 0 to add, 3 to change, 0 to destroy.
As you can see, the terraform plan output shows Plan: 0 to add, 3 to change, 0 to destroy.
My questions are:
- Why does it plan a change to the aws_s3_bucket_policy of cloudtrail_logging_bucket (dropping the existing statements and replacing the policy with a value known only after apply) even though the policy has not changed? See the screenshot below and the TF code in cloudtrail_bucket.tf.
  Diff snippet of the old account-scope state (LEFT) vs my current remote TF state (RIGHT) for cloudtrail_bucket_suffix:
- For the resource that shows "module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail will be read during apply": it shows + markers, so does that mean it will modify something here, or will it only read it, as the message says?
- Why does it show module.buckets.module.access_logging_bucket.aws_s3_bucket.default will be updated in-place (~ resource "aws_s3_bucket" "default") with the grant and lifecycle_rule blocks being removed? See s3_bucket.tf below.
TF code:
cloudtrail_bucket.tf:
data "aws_caller_identity" "current" {}

resource "random_id" "cloudtrail_bucket_suffix" {
  keepers = {
    # Keep the suffix per account id / environment
    aws_account_id = "${data.aws_caller_identity.current.account_id}"
    env            = "${var.environment}"
  }
  byte_length = "4"
}

resource "aws_s3_bucket" "cloudtrail_logging_bucket" {
  bucket     = "ab-${var.environment}-cloudtrail-logging-${random_id.cloudtrail_bucket_suffix.hex}"
  acl        = "private"
  depends_on = [data.template_file.dependencies]

  tags = {
    name                                     = "Cloudtrail logging bucket"
    "mycompany:finance:accountenvironment"   = "${var.environment}"
    "mycompany:finance:application"          = "${module.constants.finance_application}"
    "mycompany:finance:billablebusinessunit" = "${module.constants.finance_billablebusinessunit}"
  }

  lifecycle {
    ignore_changes = ["server_side_encryption_configuration"]
  }

  logging {
    target_bucket = "${module.access_logging_bucket.name}"
    target_prefix = "logs/cloudtrail-logging/"
  }

  lifecycle_rule {
    enabled = "true"
    transition {
      days          = 32
      storage_class = "INTELLIGENT_TIERING"
    }
  }
}

resource "aws_s3_bucket_policy" "cloudtrail_logging_bucket" {
  bucket = "${aws_s3_bucket.cloudtrail_logging_bucket.id}"
  policy = "${data.aws_iam_policy_document.restrict_access_cloudtrail.json}"
}

data "aws_iam_policy_document" "restrict_access_cloudtrail" {
  statement {
    sid       = "AWSCloudTrailAclCheck"
    effect    = "Allow"
    actions   = ["s3:GetBucketAcl"]
    resources = ["${aws_s3_bucket.cloudtrail_logging_bucket.arn}"]
    principals {
      identifiers = ["cloudtrail.amazonaws.com"]
      type        = "Service"
    }
  }
  statement {
    sid       = "AWSCloudTrailWrite"
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.cloudtrail_logging_bucket.arn}/*"]
    principals {
      identifiers = ["cloudtrail.amazonaws.com"]
      type        = "Service"
    }
    condition {
      test     = "StringEquals"
      values   = ["bucket-owner-full-control"]
      variable = "s3:x-amz-acl"
    }
  }
}
s3_bucket.tf
resource "random_id" "bucket_suffix" {
  keepers = {
    # Keep the suffix per account id / environment
    aws_account_id = "${data.aws_caller_identity.current.account_id}"
    env            = "${var.environment}"
  }
  byte_length = "${var.byte_length}"
}

resource "aws_s3_bucket" "default" {
  bucket     = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
  acl        = "${var.acl}"
  depends_on = [data.template_file.dependencies]

  tags = {
    name                                     = "${var.name_tag}"
    "mycompany:finance:accountenvironment"   = "${var.environment}"
    "mycompany:finance:application"          = "${module.constants.finance_application}"
    "mycompany:finance:billablebusinessunit" = "${module.constants.finance_billablebusinessunit}"
  }

  lifecycle {
    ignore_changes = ["server_side_encryption_configuration"]
  }

  logging {
    target_bucket = "${lookup(var.logging, "target_bucket", "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}")}"
    target_prefix = "logs/${lookup(var.logging, "target_folder_name", "access-logging")}/"
  }
}
My environment:
Local machine: macOS v10.14.6
Terraform v0.12.29
+ provider.aws v3.14.1
+ provider.null v2.1.2
+ provider.random v2.3.1
+ provider.template v2.1.2
Such a difference can show up if the Terraform code differs from the existing resources that were imported, for example if someone changed a resource by clicking around in the AWS Management Console without editing/applying the code. terraform import only imports the resource into the tfstate; it does not create Terraform code.
In this example you can verify in the AWS console/CLI whether the S3 bucket "default" really has logging configured. According to the plan, the existing bucket is not configured for logging in AWS, but your TF code contains it, so it would be changed.
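(A sketch of such a check with the AWS CLI, assuming the same profile and bucket name as above; both commands only read the bucket configuration:)
aws s3api get-bucket-logging --bucket ab-xyz-stage-access-logging-9d8e94ff --profile xyz-stage
aws s3api get-bucket-acl --bucket ab-xyz-stage-access-logging-9d8e94ff --profile xyz-stage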
Are you sure your TF code exactly matches all attributes of the imported resources?
For further investigation you would also need to post the corresponding TF code.
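(One way to compare attribute by attribute is to dump what was actually imported and read it next to the .tf files; a sketch using terraform state show, which only reads the state:)
terraform state show module.buckets.module.access_logging_bucket.aws_s3_bucket.default
terraform state show module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket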
Just answering my own question so I can note what I did for each point:
For the 2nd question: it does not modify anything on terraform apply; it only reads the data source, as the message says.
For my 3rd question: I asked it in a separate SO thread here: and went ahead with the solution I mentioned in my answer there.
For the 1st question: it is still not clear why it showed the differences. I tried to compare against the existing state using terraform state pull and to check why the update was planned, but that did not help. However, running terraform apply went fine and it made no changes to the policy, which is what I expected.
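(For reference, the comparison mentioned above can be scripted roughly like this; a sketch assuming the old account-scope state was saved beforehand as account-scope.tfstate, a hypothetical filename. jq -S only sorts the keys so the diff is readable:)
terraform state pull > region-scope.tfstate
diff <(jq -S . account-scope.tfstate) <(jq -S . region-scope.tfstate)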