How to Connect between VPCs in Different AWS Accounts and Regions 

June 26, 2024

In the previous article, we discussed working with Terraform across two (or more) AWS accounts simultaneously. In this article, I want to demonstrate how to connect two completely private and distinct networks located in different AWS accounts and regions.

To demonstrate the connection between two completely private networks located in different AWS accounts and regions, let me share a real-world example from a project I worked on with a client. This client is a large credit card processing company that handles hundreds of thousands of transactions daily. To enhance their systems, they initiated a business intelligence (BI) project to monitor and analyze their transactions. This BI project was hosted in a separate AWS account and required access to the real transaction data, which, naturally, included highly sensitive information. It was crucial to establish this connection in a secure and entirely private manner, without exposing the data externally.

For this example, we will create a VPC in each account with only private subnets (to ensure all traffic remains within the private network) and connect them using Transit Gateways. To test the connection, we will set up a database instance in one network and an EC2 instance in the other, which will connect to the database in the first network using MySQL commands.

We will accomplish all of this using Infrastructure as Code (IaC) with Terraform, step by step, including detailed examples. The complete code I wrote is available in this repository, and here we will see examples from it. So let’s get started!

Requirements

  • Access to two AWS accounts.
  • Set one of the accounts as a management account by creating the necessary permissions, as explained here. 

Step 1 – Networking

Let’s start by creating the networks. In each account, we will create a VPC, a private subnet, and a route table, and associate the route table with the subnet. (In the second account we will create two subnets in different Availability Zones, as this is the minimum requirement for the database instance that we will set up later.)

At this stage, it’s crucial to ensure there are no address conflicts. Therefore, we should choose different CIDR blocks for each VPC. (I selected 20.0.0.0/16 and 21.0.0.0/16 as seen in the variables file).
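As a sketch, the first account’s network could look like this (resource names and CIDR blocks here are illustrative, not necessarily the ones in the repository):

```hcl
# VPC with a single private subnet in the first account
resource "aws_vpc" "first" {
  provider   = aws.first
  cidr_block = "20.0.0.0/16"
}

resource "aws_subnet" "first" {
  provider   = aws.first
  vpc_id     = aws_vpc.first.id
  cidr_block = "20.0.1.0/24"
}

# A dedicated route table, associated with the private subnet
resource "aws_route_table" "first" {
  provider = aws.first
  vpc_id   = aws_vpc.first.id
}

resource "aws_route_table_association" "first" {
  provider       = aws.first
  subnet_id      = aws_subnet.first.id
  route_table_id = aws_route_table.first.id
}
```

The second account mirrors this, with its own CIDR block and two subnets.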

Additionally, we will create a security group in each account, each with a single ingress rule. In the first account, we will create a self-referencing ingress rule that allows traffic between resources that share the same security group; this is needed for the VPC endpoints later.

In the second account, we will create a rule that allows traffic on port 3306 only from the private IP of the EC2 instance created in the first account.

Apart from this, the networks will be completely closed to external traffic.
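A sketch of the two rules (resource names are illustrative; `aws_instance.first` stands in for the EC2 instance we will create later in the first account):

```hcl
# First account: a self-referencing rule, so resources that share this
# security group (the instance and the VPC endpoints) can reach each other
resource "aws_security_group" "first" {
  provider = aws.first
  vpc_id   = aws_vpc.first.id

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }
}

# Second account: allow MySQL (3306) only from the first instance's private IP
resource "aws_security_group" "second" {
  provider = aws.second
  vpc_id   = aws_vpc.second.id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["${aws_instance.first.private_ip}/32"]
  }
}
```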

Step 2 – Connecting the Networks to Each Other

It’s important to note that if we were creating both networks in the same region (even if in different accounts), we would have several options to connect them: AWS PrivateLink, VPC peering, or a single Transit Gateway with attachments to each VPC. However, since we chose to challenge ourselves by connecting different regions, we will create a Transit Gateway with a VPC attachment in each account, and set up peering between the two gateways.

In each account, we will create a Transit Gateway and attach it to the VPC using a Transit Gateway attachment.

resource "aws_ec2_transit_gateway" "first" {
  provider = aws.first

  tags = {
    Name = var.first_transit_gateway_name
  }
}

resource "aws_ec2_transit_gateway_vpc_attachment" "first" {
  provider           = aws.first
  subnet_ids         = [aws_subnet.first.id]
  transit_gateway_id = aws_ec2_transit_gateway.first.id
  vpc_id             = aws_vpc.first.id

  tags = {
    Name = var.first_vpc_attachment_name
  }
}

Do the same in the second account.

After that, we will set up peering between the gateways. This peering will be created in one of the accounts (I chose to create it in the second account) and we provide it with the account ID and region of the other account (in this case, the first account).

data "aws_caller_identity" "first" {
  provider = aws.first
}

resource "aws_ec2_transit_gateway_peering_attachment" "peer" {
  provider                = aws.second
  peer_account_id         = data.aws_caller_identity.first.account_id
  peer_region             = var.first_region
  peer_transit_gateway_id = aws_ec2_transit_gateway.first.id
  transit_gateway_id      = aws_ec2_transit_gateway.second.id

  tags = {
    Name = var.peering_attachment_name,
    Side = "Creator"
  }
}

Next, we return to the first account to approve the peering.

resource "aws_ec2_transit_gateway_peering_attachment_accepter" "peer" {
  provider                      = aws.first
  depends_on                    = [aws_ec2_transit_gateway_peering_attachment.peer]
  transit_gateway_attachment_id = aws_ec2_transit_gateway_peering_attachment.peer.id

  tags = {
    Name = var.peering_attachment_name,
    Side = "Acceptor"
  }
}

Step 3 – Configuring Routing

We’re almost done. For the connection to work, we need to configure the routes to direct traffic between the networks according to the CIDR blocks.

In practical terms, if an EC2 instance in the first account sends a request to an IP address within the CIDR block of the VPC in the second account, we need to ensure that it can reach the destination.

To achieve this, two steps are necessary:

  1. In the main route table, we specify that this traffic should be routed to the Transit Gateway.
  2. In the Transit Gateway’s route table, we configure the traffic to be routed to the peering connection, enabling it to safely proceed to the second VPC.

resource "aws_route" "first" {
  provider               = aws.first
  destination_cidr_block = aws_vpc.second.cidr_block
  route_table_id         = aws_route_table.first.id
  transit_gateway_id     = aws_ec2_transit_gateway.first.id
}

resource "aws_ec2_transit_gateway_route" "first" {
  provider                       = aws.first
  depends_on                     = [aws_ec2_transit_gateway_peering_attachment_accepter.peer]
  destination_cidr_block         = aws_vpc.second.cidr_block
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_peering_attachment.peer.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway.first.association_default_route_table_id
}

Of course, you can only route to the peering connection once it exists, which is why we’ve added the depends_on in the aws_ec2_transit_gateway_route resource. We will repeat the same process in the second account.
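For completeness, the mirrored routes in the second account would look like this (a sketch assuming symmetric resource naming):

```hcl
# Route traffic destined for the first VPC's CIDR to the second Transit Gateway
resource "aws_route" "second" {
  provider               = aws.second
  destination_cidr_block = aws_vpc.first.cidr_block
  route_table_id         = aws_route_table.second.id
  transit_gateway_id     = aws_ec2_transit_gateway.second.id
}

# From the gateway, route it across the peering connection
resource "aws_ec2_transit_gateway_route" "second" {
  provider                       = aws.second
  depends_on                     = [aws_ec2_transit_gateway_peering_attachment_accepter.peer]
  destination_cidr_block         = aws_vpc.first.cidr_block
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_peering_attachment.peer.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway.second.association_default_route_table_id
}
```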

Now everything is connected! The only task remaining is to verify the connection ourselves.

Step 4 – Setting Up the Instances for Testing

To test the setup, we will launch a MySQL database instance in the second account. During the setup of this machine, we will define the username, password, and database name (note the nice name I’ve chosen: develeap_we_can_take_you_there_demo_db). The password will be generated randomly using the random_password resource.
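A sketch of the database setup (the instance class, variable names, and subnet references here are illustrative, not necessarily the repository’s):

```hcl
# Randomly generated password for the database
resource "random_password" "db" {
  length  = 16
  special = false
}

# RDS requires a subnet group spanning at least two Availability Zones
resource "aws_db_subnet_group" "second" {
  provider   = aws.second
  subnet_ids = [aws_subnet.second_a.id, aws_subnet.second_b.id]
}

resource "aws_db_instance" "second" {
  provider               = aws.second
  engine                 = "mysql"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  db_name                = "develeap_we_can_take_you_there_demo_db"
  username               = var.db_username
  password               = random_password.db.result
  db_subnet_group_name   = aws_db_subnet_group.second.name
  vpc_security_group_ids = [aws_security_group.second.id]
  skip_final_snapshot    = true
}
```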

In the first account, we will launch an EC2 instance, install MySQL on it, and connect from this instance to the remote database on the db-instance in the second account. This will confirm that the connection is successful.

Handling a Completely Private Network

I am going to complicate things a bit because I want to show how this works in a network that is completely closed off from the internet. We have created two VPCs with only private subnets and no internet gateway (which are essentially two ways of saying the same thing). So, how do we install MySQL on the EC2 instance? And how do we access its terminal?

We could make this much simpler by setting up an internet gateway, allowing us to download and install MySQL and easily connect to our instance via port 22. However, I want to demonstrate a more secure method, which also gives us an opportunity to use two other useful tools.

Using Packer for Custom AMIs

To get an EC2 instance with MySQL already installed, I use another handy tool from HashiCorp called Packer. This tool is designed for creating custom images. I will create an AMI that includes MySQL and use it for our instance.

The process is straightforward: Packer launches an EC2 instance from the base image I specify, executes the commands I define, creates the image, and then deletes all the temporary resources it created. While I could do all this manually, Packer simplifies and streamlines the process.

variable "ami_name" {
  type    = string
  default = "mysql-ubuntu"
}

data "amazon-ami" "ubuntu" {
  filters = {
    virtualization-type = "hvm"
    name                = "ubuntu/images/*ubuntu-noble-*"
    root-device-type    = "ebs"
    architecture        = "x86_64"
  }
  owners      = ["amazon"]
  most_recent = true
  region      = var.aws_region
}

source "amazon-ebs" "example" {
  region          = var.aws_region
  source_ami      = data.amazon-ami.ubuntu.id
  instance_type   = var.instance_type
  ssh_username    = var.ssh_username
  ami_name        = var.ami_name
  ami_description = "AMI with MySQL installed"
}

build {
  sources = ["source.amazon-ebs.example"]

  provisioner "shell" {
    inline = [
      "sudo apt update -y",
      "sudo apt install -y mysql-server",
      "sudo service mysql start"
    ]
  }
}

To create the AMI, go to the packer directory and run the command packer build build.pkr.hcl.

So, we created an AMI called mysql-ubuntu, and now I will use it to create the instance. During the instance creation, I use user data to create a folder named /home/ssm-user/ (ssm-user is the default user when connecting with session manager), and write the required configuration of the db-instance (username, password, and host) into a file named .my.cnf. MySQL defaults to looking for this file and reading from it, so when the instance starts, we can connect to the remote database using the mysql command without any additional parameters.
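A sketch of what that user data might look like (the resource and variable names here are illustrative; the .my.cnf values come from the database resources created in the second account):

```hcl
resource "aws_instance" "first" {
  provider               = aws.first
  ami                    = data.aws_ami.mysql_ubuntu.id
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.first.id
  vpc_security_group_ids = [aws_security_group.first.id]
  iam_instance_profile   = aws_iam_instance_profile.ssm.name

  # Write the MySQL client config for the default Session Manager user,
  # so `mysql` connects to the remote database with no extra flags
  user_data = <<-EOF
    #!/bin/bash
    mkdir -p /home/ssm-user
    cat > /home/ssm-user/.my.cnf <<CNF
    [client]
    host=${aws_db_instance.second.address}
    user=${var.db_username}
    password=${random_password.db.result}
    CNF
  EOF
}
```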

Connecting to the Instance with Session Manager

How will we connect to the instance? With another useful AWS tool called Session Manager. It allows us to connect to the instance even if it is completely closed to external traffic.

To achieve this, we will:

  1. Create an IAM role and attach it to the instance with an instance profile.
  2. Attach the necessary policy to the role. I used the AWS managed policy called AmazonSSMManagedInstanceCore.
  3. Create VPC endpoints for our network.
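Sketched in Terraform, the Session Manager pieces might look like this (resource names are illustrative):

```hcl
# IAM role and instance profile so the instance can register with SSM
resource "aws_iam_role" "ssm" {
  provider = aws.first
  name     = "ssm-instance-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm" {
  provider   = aws.first
  role       = aws_iam_role.ssm.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm" {
  provider = aws.first
  role     = aws_iam_role.ssm.name
}

# Interface endpoints Session Manager needs in a VPC with no internet access
resource "aws_vpc_endpoint" "ssm" {
  for_each            = toset(["ssm", "ssmmessages", "ec2messages"])
  provider            = aws.first
  vpc_id              = aws_vpc.first.id
  service_name        = "com.amazonaws.${var.first_region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.first.id]
  security_group_ids  = [aws_security_group.first.id]
  private_dns_enabled = true
}
```

The endpoints use the self-referencing security group from Step 1, which is exactly why that rule was needed.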

Final Steps

With our environment ready, we will run terraform apply to set everything up. Once that is done, we will proceed to connect and verify the setup.

Now, let’s connect to our instance with AWS Session Manager. In the AWS console, navigate to Session Manager and click Start Session. A list of target instances will appear, including ours. Select it, click Start Session, and a terminal will open in a new tab. If I run the mysql command, a MySQL shell opens:

To verify that we are indeed connected to the db-instance in the other VPC, let’s run the SHOW DATABASES command and see the unmistakable name of the database (along with some other databases that MySQL creates by default).

OK! So we are connected! From the EC2 instance in the first account, in the us-east-1 region, we have connected to a database in another account and another region, and all traffic stays within the private network.

We demonstrated an end-to-end solution for connecting networks in different AWS accounts and regions. We also showed how to test the connection and implement all of this in a completely private network without internet access. Along the way, we demonstrated the use of Packer for simple and elegant AMI creation, and showed a very secure way to connect to an EC2 instance in a completely closed environment without exposing any of its ports. I hope you enjoyed and learned from this. Feel free to leave your comments!
