terraform copy/upload files to aws ec2 instance

We have cron jobs and shell scripts that we want to copy/upload to an AWS EC2 instance while creating the instance with Terraform.

We have tried:

  1. The file provisioner, but it does not work, and from what we have read this option is not available for all Terraform versions:
      provisioner "file" {
        source      = "abc.sh"
        destination = "/home/ec2-user/basic2.sh"
      }
  2. The data "template_file" option:
    data "template_file" "userdata_line" {
      template = <<EOF
    #!/bin/bash
    mkdir /home/ec2-user/files2
    cd /home/ec2-user/files2
    sudo touch basic2.sh
    sudo chmod 777 basic2.sh
    base64 basic.sh | base64 -d > basic2.sh
    EOF
    }

We tried all of these options, but none of them worked.
Can you help or suggest something?
I am new to Terraform, so I have been struggling with this for a long time.

I used provisioner "file" for exactly this, no problem...
But you have to provide a connection:

resource "aws_instance" "foo" {
...
  provisioner "file" {
    source      = "~/foobar"
    destination = "~/foobar"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/Downloads/AWS_keys/test.pem")
      host        = self.public_dns
    }
  }
...
}

Here is a code sample:
https://github.com/heldersepu/hs-scripts/blob/master/TerraForm/ec2_ubuntu.tf#L21

You have to use a file provisioner with the connection details for the EC2 instance. A sample configuration would look like this:

provisioner "file" {
  source      = "${path.module}/files/script.sh"
  destination = "/tmp/script.sh"

  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = var.host
  }
}

You can connect with a username/password, a private key, or even through a bastion host; a bastion connection is sketched below. More details: https://www.terraform.io/docs/provisioners/connection.html
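
For example, a connection tunnelled through a bastion host could look roughly like this (a sketch only; the user names, key paths, and var.bastion_public_ip are placeholder assumptions, not from the original answer):

provisioner "file" {
  # (inside an aws_instance resource, so self.private_ip is available)
  source      = "${path.module}/files/script.sh"
  destination = "/tmp/script.sh"

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/instance_key.pem")
    host        = self.private_ip

    # SSH to the instance is proxied through the bastion host.
    bastion_host        = var.bastion_public_ip
    bastion_user        = "ec2-user"
    bastion_private_key = file("~/.ssh/bastion_key.pem")
  }
}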

Starting from an AMI that has cloud-init installed (which is common in many official Linux distributions), we can use cloud-init's write_files module to place arbitrary files into the filesystem, as long as they are small enough to fit within the constraints of the user_data argument along with all of the other cloud-init data.

As with all cloud-init modules, we configure write_files using cloud-init's YAML-based configuration format, which begins with the special marker string #cloud-config on a line of its own, followed by a YAML data structure. Because JSON is a subset of YAML, we can use Terraform's jsonencode to produce a valid write_files value[1]:

locals {
  cloud_config_config = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/etc/example.txt"
          permissions = "0644"
          owner       = "root:root"
          encoding    = "b64"
          content     = filebase64("${path.module}/example.txt")
        },
      ]
    })}
  END
}

Because we set encoding = "b64", the write_files module will accept the data in base64 format, so we use that in conjunction with Terraform's filebase64 function to include the contents of an external file. Other approaches are possible here, such as producing a string dynamically using Terraform templates and base64encode to encode the file contents, as sketched below.
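
For example, a template-rendered file could be included like this (a sketch; example.conf.tpl and var.server_name are hypothetical names, not part of the original configuration):

locals {
  cloud_config_dynamic = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/etc/example.conf"
          permissions = "0644"
          owner       = "root:root"
          encoding    = "b64"
          # Render the template first, then base64-encode the result.
          content = base64encode(templatefile("${path.module}/example.conf.tpl", {
            server_name = var.server_name
          }))
        },
      ]
    })}
  END
}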

If you can express everything you want cloud-init to do in a single configuration file like the one above, you can assign local.cloud_config_config directly as your instance user_data, and cloud-init should recognize and process it on system boot:

  user_data = local.cloud_config_config
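
In context, that wiring might look like this (a minimal sketch; the AMI lookup and instance type are placeholder assumptions):

resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id # any cloud-init-enabled AMI
  instance_type = "t3.micro"

  # cloud-init reads this at first boot and applies the write_files directives.
  user_data = local.cloud_config_config
}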

If you need to combine creating the file with some other actions, like running a shell script, you can use cloud-init's multipart archive format to encode multiple "files" for cloud-init to process. Terraform has a cloudinit provider that contains a data source for easily constructing a multipart archive for cloud-init:

data "cloudinit_config" "example" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    filename     = "cloud-config.yaml"
    content      = local.cloud_config_config
  }

  part {
    content_type = "text/x-shellscript"
    filename     = "example.sh"
    content      = <<-EOF
      #!/bin/bash
      echo "Hello World"
    EOF
  }
}

This data source will produce a single string at data.cloudinit_config.example.rendered: a multipart archive suitable for use as user_data for cloud-init:

  user_data = data.cloudinit_config.example.rendered

EC2 imposes a maximum user-data size of 64 kilobytes, so all of the encoded data together must fit within that limit. If you need to place a large file that comes close to or exceeds that limit, it would probably be best to use an intermediate system to transfer the file instead, such as having Terraform write it into an Amazon S3 bucket and having the software in your instance retrieve that data using instance profile credentials. That shouldn't be necessary for small data files used for system configuration, though.

It's important to note that from the perspective of Terraform and EC2, the content of user_data is just an arbitrary string. Any issues in processing the string must be debugged within the target operating system itself, by reading the cloud-init logs to see how it interpreted the configuration and what happened when it tried to take those actions; see the commands sketched below.
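
For example, on most Linux distributions you can inspect the result on the instance itself (these are the standard cloud-init log locations; exact paths can vary by distribution):

  cloud-init status --long                   # did cloud-init finish, and with what result?
  sudo less /var/log/cloud-init.log          # cloud-init's own processing log
  sudo less /var/log/cloud-init-output.log   # stdout/stderr of the parts it ran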


[1]: We could also use yamlencode, but at the time of writing that function carries a warning that its exact formatting may change in future Terraform versions, which is undesirable for user_data because it would cause the instance to be replaced. If you are reading this in the future and that warning is no longer present in the yamlencode documentation, consider using yamlencode instead.

Somehow, none of these options worked within our corporate domain, but in the end we were able to copy/download the file using an S3 bucket.

Create s3.tf to upload this file, basic2.sh:

resource "aws_s3_bucket" "demo-s3" {

  bucket = "acom-demo-s3i-<bucketID>-us-east-1"
  acl    = "private"


  tags {
    Name = "acom-demo-s3i-<bucketID>-us-east-1"
    StackId = "demo-s3"
  }
}

resource "aws_s3_bucket_policy" "s3_policy" {

  bucket = "${aws_s3_bucket.demo-s3.id}"

  policy = <<EOF
{
    "Version": "2009-10-17",
    "Statement": [
            {
            "Sid": "Only allow specific role",
            "Effect": "allow",
            "Principal":{ "AWS": ["arn:aws:iam::<bucketID>:role/demo-s3i"]},
            "Action":  "s3:*",
            "Resource": [
          "arn:aws:s3:::acom-demo-s3i-<bucketID>-us-east-1",
          "arn:aws:s3:::acom-demo-s3i-<bucketID>-us-east-1/*"
            ]

        }
    ]
}
EOF
}


resource "aws_s3_bucket_object" "object" {
  bucket = "acom-demo-s3i-<bucketID>-us-east-1"
  key    = "scripts/basic2.sh"
  source = "scripts/basic2.sh"
  etag = "${filemd5("scripts/basic2.sh")}"
}

Then declare the file download step in the user-data .tpl file:

 aws s3 cp s3://acom-demo-s3i-<bucketID>-us-east-1/scripts/basic2.sh /home/ec2-user/basic2.sh
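
For that aws s3 cp call to be authorized, the instance has to run with the demo-s3i role referenced in the bucket policy above. A minimal sketch of that wiring (the AMI variable, instance type, and user-data path here are placeholder assumptions):

resource "aws_iam_role" "demo_s3i" {
  name = "demo-s3i"

  # Let EC2 instances assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_instance_profile" "demo_s3i" {
  name = "demo-s3i"
  role = aws_iam_role.demo_s3i.name
}

resource "aws_instance" "demo" {
  ami                  = var.ami_id
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.demo_s3i.name

  # The user data runs the aws s3 cp line shown above.
  user_data = file("${path.module}/scripts/userdata.tpl")
}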

Here is a simpler example of how to use write_files with cloud-init, as described by @martin-atkins.

Contents of templates/cloud-init.yml.tpl:

#cloud-config
package_update: true
package_upgrade: true

packages:
  - ansible

write_files:
  - content: |
      ${base64encode("${ansible_playbook}")}
    encoding: b64
    owner: root:root
    path: /opt/ansible-playbook.yml
    permissions: '0750'

runcmd:
  - ansible-playbook /opt/ansible-playbook.yml

Contents of the main.tf file:

data "template_file" "instance_startup_script" {
  template = file(format("%s/templates/cloud-init.yml.tpl", path.module))

  vars = {
    ansible_playbook = templatefile("${path.module}/templates/ansible-playbook.yml.tpl", {
      playbookvar = var.play_book_var
    })
    
    cloudinitvar = var.cloud_init_var
  }
}

Variable interpolation can be used for both the cloud-init and ansible-playbook templates; a hypothetical playbook template is sketched below.
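
For example, templates/ansible-playbook.yml.tpl might look something like this (a hypothetical sketch; only the playbookvar variable comes from the configuration above):

---
- hosts: localhost
  connection: local
  tasks:
    # Write a file whose contents come from the Terraform-supplied variable.
    - name: write playbook variable to a file
      copy:
        dest: /opt/playbook-var.txt
        content: "${playbookvar}"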

This worked for me:

resource "aws_instance" "myapp-server" {
  ami = data.aws_ami.ubuntu.id
  instance_type = xx
  subnet_id =  xx
  vpc_security_group_ids = xx
  availability_zone=xx
  associate_public_ip_address = true  
  key_name = xx  
  user_data = file(xx)

  connection {
    type     = "ssh"
    host     =  self.public_ip
    user     = "ubuntu"
    private_key     = file(xx) 
  }
 
   provisioner "file" {
    source      = "source-file"
    destination = "dest-file"
  }

}