How to route between two subnets in an AWS VPC w/ Terraform?
Update:
I've been working on this off and on and can't seem to get a working configuration with two subnets and an SSH bastion. Setting a bounty for a complete .tf file configuration that:
* Creates two private subnets
* Creates a bastion host
* Spins up one EC2 instance on each subnet, provisioned through the bastion (running arbitrary shell commands through the bastion)
* Has an internet gateway configured
* Has a NAT gateway for the hosts on the private subnets (see the sketch after this list)
* Has routes and security groups configured accordingly
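For the NAT gateway item, here is a minimal, untested sketch using the managed aws_nat_gateway resource rather than a NAT instance. The resource names (aws_vpc.vpc_poc, aws_subnet.dmz, aws_subnet.app) are assumptions that would need to match the VPC and subnets defined elsewhere in the config:
/* EIP for the managed NAT gateway */
resource "aws_eip" "nat" {
  vpc = true
}

/* the NAT gateway itself lives in a public subnet (here: dmz) */
resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.dmz.id}"
}

/* private route table sends outbound traffic through the NAT gateway */
resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.vpc_poc.id}"
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.nat.id}"
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = "${aws_subnet.app.id}"
  route_table_id = "${aws_route_table.private.id}"
}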
Original post:
I'm trying to learn Terraform and build a prototype. I have an AWS VPC provisioned via Terraform. In addition to a DMZ subnet, I have a public subnet 'web' that receives traffic from the internet. I have a private subnet 'app' that is not reachable from the internet. I'm trying to configure a bastion host so that Terraform can provision instances on the private 'app' subnet. I haven't been able to get this working yet.
When I SSH into the bastion, I can't SSH from the bastion host to any of the instances in the private subnet. I suspect a routing problem. I've been building this prototype from the various examples and docs available; many of them use slightly different techniques and Terraform route definitions via the AWS provider.
Can someone show me the ideal or correct way to define these three subnets (public 'web', public 'dmz' with the bastion, and private 'app') so that instances on the 'web' subnet can reach the 'app' subnet, and the bastion host in the DMZ can provision instances in the private 'app' subnet?
Snippets from my config are below:
resource "aws_subnet" "dmz" {
vpc_id = "${aws_vpc.vpc-poc.id}"
cidr_block = "${var.cidr_block_dmz}"
}
resource "aws_route_table" "dmz" {
vpc_id = "${aws_vpc.vpc-poc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gateway.id}"
}
}
resource "aws_route_table_association" "dmz" {
subnet_id = "${aws_subnet.dmz.id}"
route_table_id = "${aws_route_table.dmz.id}"
}
resource "aws_subnet" "web" {
vpc_id = "${aws_vpc.vpc-poc.id}"
cidr_block = "10.200.2.0/24"
}
resource "aws_route_table" "web" {
vpc_id = "${aws_vpc.vpc-poc.id}"
route {
cidr_block = "0.0.0.0/0"
instance_id = "${aws_instance.bastion.id}"
}
}
resource "aws_route_table_association" "web" {
subnet_id = "${aws_subnet.web.id}"
route_table_id = "${aws_route_table.web.id}"
}
resource "aws_subnet" "app" {
vpc_id = "${aws_vpc.vpc-poc.id}"
cidr_block = "10.200.3.0/24"
}
resource "aws_route_table" "app" {
vpc_id = "${aws_vpc.vpc-poc.id}"
route {
cidr_block = "0.0.0.0/0"
instance_id = "${aws_instance.bastion.id}"
}
}
resource "aws_route_table_association" "app" {
subnet_id = "${aws_subnet.app.id}"
route_table_id = "${aws_route_table.app.id}"
}
Unless the bastion host is also acting as a NAT (and I wouldn't recommend combining those roles on one instance), the web and app subnets won't have any outbound internet access, but otherwise this looks fine routing-wise, because a local route entry for the VPC is added automatically.
As long as you have that local route covering your VPC range, routing should be fine. Taking your Terraform config (and adding the minimum of necessary resources) let me create some basic instances in all three subnets and route between them successfully, so you are probably missing something else, such as security groups or NACLs.
You haven't shown your full Terraform, but you will need to allow SSH into your 'app' instances from the bastion's IP or the bastion host's CIDR block, so something like this:
resource "aws_security_group" "allow_ssh" {
name = "allow_ssh"
description = "Allow inbound SSH traffic"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${aws_instance.bastion.private_ip}/32"]
}
}
Then in your 'app' instance resource you need to add the security group:
...
vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
...
https://www.terraform.io/docs/providers/aws/r/security_group_rule.html
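The aws_security_group_rule resource linked above can also attach the rule separately from the group, which is handy when two groups need to reference each other. A sketch, assuming the same bastion instance and allow_ssh group names used above:
resource "aws_security_group_rule" "ssh_from_bastion" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${aws_instance.bastion.private_ip}/32"]
  security_group_id = "${aws_security_group.allow_ssh.id}"
}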
You should check for network problems with tcpdump and other debugging tools.
Please check that:
- the IPs are reachable and the network is set up correctly (e.g. 10.200.2.X can reach the bastion's IP)
- iptables or another firewall isn't blocking your traffic
- the SSH server is listening (ssh from those hosts to those hosts' IPs)
- you have the correct security groups on the hosts (you can see this in the EC2 instance's properties)
- try sniffing the traffic with tcpdump
I don't see the reason for a bastion host.
I have something similar using SaltStack: I control the rest from a master server inside the VPC and assign it a specific security group to allow access.
CIDR X/24
subnetX.0/26 - subnet for the control server. Master server IP EC2-subnet1/32
subnetX.64/26 - private minions
subnetX.128/26 - public minions
subnetX.192/26 - private minions
Then create one route table per subnet, to express your love of isolation, and attach each one to a single subnet, e.g.
rt-1 - subnetX.0/26
rt-2 - subnetX.64/26
rt-3 - subnetX.128/26
rt-4 - subnetX.192/26
Make sure each route table has an entry like this, so the rt-1 instance can route to everyone:
destination: CIDR X/24 Target: local
Then restrict connections with security group inbound rules, e.g. allow SSH from EC2-subnet1/32.
Once I've finished all the work from the control server, I can delete that specific "CIDR X/24 target: local" route from my public subnet, so it can no longer route traffic to my local CIDR.
I see no reason to build a complicated bastion when I've already given myself the power to delete the control server's route.
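A minimal Terraform sketch of that layout; the 10.0.0.0/24 CIDR, var.master_ip and the resource names are illustrative assumptions only:
variable "master_ip" {
  description = "private IP of the control/master server"
}

/* one of the /26 subnets, with its own route table holding only the implicit local route */
resource "aws_subnet" "minions_private_a" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.0.64/26"
}

resource "aws_route_table" "minions_private_a" {
  /* no routes declared: the implicit local route (10.0.0.0/24 -> local) is all it gets */
  vpc_id = "${aws_vpc.main.id}"
}

resource "aws_route_table_association" "minions_private_a" {
  subnet_id      = "${aws_subnet.minions_private_a.id}"
  route_table_id = "${aws_route_table.minions_private_a.id}"
}

/* allow SSH only from the control/master server's /32 */
resource "aws_security_group" "from_master" {
  name   = "from-master"
  vpc_id = "${aws_vpc.main.id}"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.master_ip}/32"]
  }
}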
Below is a snippet that may help you. It is untested, but it's pulled from one of my Terraform files where I provision VMs into a private subnet. I know this works with a single private subnet; I've tried to implement two here as in your original question.
I jump through my NAT instance to reach and provision the private subnet boxes with Terraform. It works if your security groups are set up right. I did some experimenting with it.
/* VPC creation */
resource "aws_vpc" "vpc_poc" {
cidr_block = "10.200.0.0/16"
}
/* Internet gateway for the public subnets */
resource "aws_internet_gateway" "gateway" {
vpc_id = "${aws_vpc.vpc_poc.id}"
}
/* DMZ subnet - public */
resource "aws_subnet" "dmz" {
vpc_id = "${aws_vpc.vpc_poc.id}"
cidr_block = "10.200.1.0/24"
/* may help to be explicit here */
map_public_ip_on_launch = true
/* this is recommended in the docs */
depends_on = ["aws_internet_gateway.gateway"]
}
resource "aws_route_table" "dmz" {
vpc_id = "${aws_vpc.vpc_poc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gateway.id}"
}
}
resource "aws_route_table_association" "dmz" {
subnet_id = "${aws_subnet.dmz.id}"
route_table_id = "${aws_route_table.dmz.id}"
}
/* Web subnet - public */
resource "aws_subnet" "web" {
vpc_id = "${aws_vpc.vpc_poc.id}"
cidr_block = "10.200.2.0/24"
map_public_ip_on_launch = true
depends_on = ["aws_internet_gateway.gateway"]
}
resource "aws_route_table" "web" {
vpc_id = "${aws_vpc.vpc_poc.id}"
route {
cidr_block = "0.0.0.0/0"
/* your public web subnet needs access to the gateway */
/* this was set to bastion before so you had a circular arg */
gateway_id = "${aws_internet_gateway.gateway.id}"
}
}
resource "aws_route_table_association" "web" {
subnet_id = "${aws_subnet.web.id}"
route_table_id = "${aws_route_table.web.id}"
}
/* App subnet - private */
resource "aws_subnet" "app" {
vpc_id = "${aws_vpc.vpc_poc.id}"
cidr_block = "10.200.3.0/24"
}
/* Create route for the app subnet via the NAT instance in the DMZ */
resource "aws_route_table" "app" {
vpc_id = "${aws_vpc.vpc_poc.id}"
route {
cidr_block = "0.0.0.0/0"
/* this sends traffic to the NAT instance to pass off */
instance_id = "${aws_instance.nat_dmz.id}"
}
}
/* Alternative: route the app subnet via the NAT instance in the web subnet instead.
   Resource names must be unique and only one route table can be associated with
   the subnet, so use either this or the one above. */
resource "aws_route_table" "app_via_web" {
vpc_id = "${aws_vpc.vpc_poc.id}"
route {
cidr_block = "0.0.0.0/0"
/* this sends traffic to the NAT instance to pass off */
instance_id = "${aws_instance.nat_web.id}"
}
}
resource "aws_route_table_association" "app" {
subnet_id = "${aws_subnet.app.id}"
route_table_id = "${aws_route_table.app.id}"
}
/* Default security group */
resource "aws_security_group" "default" {
name = "default-sg"
description = "Default security group that allows inbound and outbound traffic from all instances in the VPC"
vpc_id = "${aws_vpc.vpc_poc.id}"
ingress {
from_port = "0"
to_port = "0"
protocol = "-1"
self = true
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
self = true
}
}
/* Security group for the nat server */
resource "aws_security_group" "nat" {
name = "nat-sg"
description = "Security group for nat instances that allows SSH and VPN traffic from internet. Also allows outbound HTTP[S]"
vpc_id = "${aws_vpc.vpc_poc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
/* this is your private subnet cidr */
cidr_blocks = ["10.200.3.0/24"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
/* this is your private subnet cidr */
cidr_blocks = ["10.200.3.0/24"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 22
to_port = 22
protocol = "tcp"
/* this is the vpc cidr block */
cidr_blocks = ["10.200.0.0/16"]
}
egress {
from_port = -1
to_port = -1
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
/* Security group for the web */
resource "aws_security_group" "web" {
name = "web-sg"
description = "Security group for web that allows web traffic from internet"
vpc_id = "${aws_vpc.vpc_poc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
/* Install deploy key for use with all of our provisioners */
resource "aws_key_pair" "deployer" {
key_name = "deployer-key"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
/* Setup NAT in DMZ subnet */
resource "aws_instance" "nat_dmz" {
ami = "ami-67a54423"
availability_zone = "us-west-1a"
instance_type = "m1.small"
key_name = "${aws_key_pair.deployer.id}"
/* Notice we are assigning the security group here */
security_groups = ["${aws_security_group.nat.id}"]
/* this puts the instance in your public subnet, but it translates traffic for the private one */
subnet_id = "${aws_subnet.dmz.id}"
/* this is really important for nat instance */
source_dest_check = false
associate_public_ip_address = true
}
/* Give NAT EIP In DMZ */
resource "aws_eip" "nat_dmz" {
instance = "${aws_instance.nat_dmz.id}"
vpc = true
}
/* Setup NAT in Web subnet */
resource "aws_instance" "nat_web" {
ami = "ami-67a54423"
availability_zone = "us-west-1a"
instance_type = "m1.small"
key_name = "${aws_key_pair.deployer.id}"
/* Notice we are assigning the security group here */
security_groups = ["${aws_security_group.nat.id}"]
/* this puts the instance in your public subnet, but it translates traffic for the private one */
subnet_id = "${aws_subnet.web.id}"
/* this is really important for nat instance */
source_dest_check = false
associate_public_ip_address = true
}
/* Give NAT EIP in Web subnet */
resource "aws_eip" "nat_web" {
instance = "${aws_instance.nat_web.id}"
vpc = true
}
/* Install server in private subnet and jump host to it with terraform */
resource "aws_instance" "private_box" {
ami = "ami-d1315fb1"
instance_type = "t2.large"
key_name = "${aws_key_pair.deployer.id}"
subnet_id = "${aws_subnet.app.id}"
associate_public_ip_address = false
/* this is what gives the box access to talk to the nat */
security_groups = ["${aws_security_group.nat.id}"]
connection {
/* connect through the nat instance to reach this box */
bastion_host = "${aws_eip.nat_dmz.public_ip}"
bastion_user = "ec2-user"
bastion_private_key = "${file("keys/terraform_rsa")}"
/* connect to box here */
user = "ec2-user"
host = "${self.private_ip}"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
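To actually run the arbitrary shell commands asked for in the bounty, a remote-exec provisioner can be added to that resource, reusing the connection block above so the commands are tunnelled through the bastion/NAT host. The inline commands here are just placeholders:
/* add inside the aws_instance.private_box resource, after the connection block */
provisioner "remote-exec" {
  inline = [
    "echo provisioned-through-bastion | sudo tee /tmp/provisioned",
    "uname -a"
  ]
}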