
Copying Files to EC2 in a Private Network
Deploying an EC2 instance in a private network brings security benefits, but it also creates a practical challenge: how do you copy files to it? Since the instance is not reachable from the internet, common methods such as Terraform file provisioners or SSH-based transfers will not work, because both require direct network access to the instance, which a private subnet blocks.
This guide will explore two effective methods for copying files to EC2 instances in a private network:
1. User Data
2. AWS Systems Manager (SSM)
Copy Files on Instance Launch (User Data)
Terraform’s user_data lets us copy files when the EC2 instance boots up, which is useful for initial configuration. The best way to manage files with user_data is the cloudinit_config data source with the write_files option.
EC2 Configuration with User Data
resource "aws_instance" "nginx" {
...
user_data = data.cloudinit_config.nginx.rendered
}
Writing Files with cloudinit_config
data "cloudinit_config" "nginx" {
for_each = var.mode_colors
gzip = true
base64_encode = true
part {
content_type = "text-cloud-config"
content = <<-EOF
#cloud-config
packages:
- nginx
# Files to be written
write_files:
- path: /etc/nginx/nginx.conf
permissions: '0644'
owner: root:root
encoding: 'b64'
content: ${filebase64("${path.module}/nginx/nginx.conf")}
- path: /usr/share/nginx/html/index.html
permissions: '0644'
owner: root:root
encoding: 'b64'
content: ${filebase64("${path.module}/nginx/index.html")}
EOF
}
# Second part: Start Nginx
part {
content_type = "text/x-shellscript"
content = <<-EOF
#!/bin/bash
systemctl start nginx
systemctl enable nginx
EOF
}
}
The main reasons to use cloud-init for copying files are:
- It ensures all files are written before the initialization script runs.
- It manages multiple files with the correct permissions and ownership.
Updating Files After Launch
If you update one of the source files, Terraform modifies the user_data. But will the files on the EC2 instance get updated? No.
Why? Because user_data only runs once, at launch time. Terraform will apply the change, but the running instance won’t pick up the new file.
To pick up the change, you have to recreate the instance. You can tell Terraform to recreate the instance whenever the user data changes:
user_data_replace_on_change = true
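As a minimal sketch, assuming the same aws_instance.nginx resource as above, the flag goes directly on the instance resource:

resource "aws_instance" "nginx" {
  # ...
  user_data                   = data.cloudinit_config.nginx.rendered
  # Destroy and recreate the instance whenever the rendered user data changes
  user_data_replace_on_change = true
}

Keep in mind this replaces the instance on every user data change, which may not be acceptable for long-lived servers.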
What if you don’t want to recreate the instance and just want to copy a file to an existing one?
Since we cannot use Terraform’s file provisioner (due to the lack of direct SSH access), we need a different approach. The best way to update files on an instance in a private network is to use AWS Systems Manager (SSM), which enables the remote execution of commands without requiring direct access.
Let’s see it in code.
Step 2.1: Attach an IAM Role to the EC2 Instance
To use AWS SSM, we must assign an aws_iam_role with SSM permissions to the EC2 instance.
resource "aws_iam_role" "ssm_role" {
name = "EC2SSMRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
})
}
resource "aws_iam_policy_attachment" "ssm_attach" {
name = "SSMAttach"
roles = [aws_iam_role.ssm_role.name]
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
resource "aws_iam_instance_profile" "ssm_profile" {
name = "EC2SSMProfile"
role = aws_iam_role.ssm_role.name
}
Now, attach this profile to your EC2 instance:
resource "aws_instance" "nginx" {
iam_instance_profile = aws_iam_instance_profile.ssm_profile.name
}
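Note that the SSM agent on the instance still needs a network path to the Systems Manager service. In a private subnet without a NAT gateway, that usually means VPC interface endpoints for ssm, ssmmessages, and ec2messages. A minimal sketch, where var.vpc_id, var.region, var.private_subnet_ids, and aws_security_group.endpoints (allowing HTTPS from the instance) are assumed to exist in your configuration:

resource "aws_vpc_endpoint" "ssm" {
  for_each = toset(["ssm", "ssmmessages", "ec2messages"])

  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${var.region}.${each.key}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}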
Step 2.2: Create an SSM document to copy files
Create an SSM document that rewrites nginx.conf on the instance and reloads nginx.
resource "aws_ssm_document" "update_conf" {
name = "UpdateConf"
document_type = "Command"
content = jsonencode({
schemaVersion = "2.2"
description = "Update conf in nginx instance"
mainSteps = [{
action = "aws:runShellScript"
name = "updateConf"
inputs = {
runCommand = [
"base64_conf='${base64encode(file("${path.module}/nginx/nginx.conf"))}'",
"echo $base64_conf | base64 -d > /etc/nginx/nginx.conf",
"nginx -s reload"
]
}
}]
})
}
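The same pattern works for any other file written at launch. As a hedged sketch for index.html, assuming the same nginx/index.html file as above (it would also need its own association, like the one shown in the next step):

resource "aws_ssm_document" "update_index" {
  name          = "UpdateIndex"
  document_type = "Command"

  content = jsonencode({
    schemaVersion = "2.2"
    description   = "Update index.html on the nginx instance"
    mainSteps = [{
      action = "aws:runShellScript"
      name   = "updateIndex"
      inputs = {
        runCommand = [
          "base64_index='${base64encode(file("${path.module}/nginx/index.html"))}'",
          "echo $base64_index | base64 -d > /usr/share/nginx/html/index.html"
        ]
      }
    }]
  })
}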
Step 2.3: Attach the SSM command to the instance
Create an aws_ssm_association that runs the document on the nginx instance:
# Attach the SSM document to the nginx server
resource "aws_ssm_association" "update_conf_association_nginx" {
  name = aws_ssm_document.update_conf.name

  targets {
    key    = "InstanceIds"
    values = [aws_instance.nginx.id]
  }

  depends_on = [aws_instance.nginx]
}
With this, every time Terraform applies a change to nginx.conf, the document gets a new version and the SSM association re-runs the command, updating the file on the instance without recreating it.
Step 2.4: Cleanup
Each change to the file applied with Terraform creates a new version of the aws_ssm_document.
The cleanup removes old versions and keeps only the latest one active.
resource "null_resource" "cleanup_ssm_old_versions" {
provisioner "local-exec" {
command = <<EOT
aws ssm list-document-versions \
--name "${aws_ssm_document.update_conf.name}" \
--query "DocumentVersions[*].DocumentVersion" \
--output json \
| jq -r '.[1:] | .[]' \
| xargs -I {} aws ssm delete-document \
--name "${aws_ssm_document.update_conf.name}" \
--document-version {}
EOT
}
triggers = {
document_version = aws_ssm_document.update_conf.document_version # Triggers only if the document changes
}
}
It is triggered automatically whenever the document is updated.
Conclusion
Deploying EC2 instances in a private network enhances security but requires alternative file management methods. By using Terraform’s user_data for initial file copies and AWS Systems Manager (SSM) for updates, we can manage files without SSH access. This approach ensures secure, automated file management and flexibility without requiring the recreation of instances.