Discover AWS Secrets in EKS — part I: How to combine an EKS cluster with other cloud services in your deployment?

June 22, 2023

Grant the cluster the permissions it needs by adding the appropriate IAM roles and policies to your Terraform module.

Companies that deploy their infrastructure on a managed Kubernetes cluster like EKS, GKE, or AKS usually want to consume additional services from their cloud provider beyond the cluster itself.

When it comes to our infrastructure in AWS, we often require additional services like AWS Secrets Manager, Route 53, EBS volumes, and more. In this article, I will primarily discuss AWS, although the principles I’ll cover can be applied to other providers as well. 

Granting permissions to the EKS cluster

To enable the cluster to utilize the extra services, we need to grant it the appropriate permissions. The AWS documentation offers several examples of how to do this using the console or CLI (for example, this tutorial about using an AWS Secrets Manager secret in an Amazon EKS pod).

Here, Terraform is the most practical approach. By deploying your EKS cluster through Terraform (which is highly recommended), you can provision it with all the necessary permissions from the start, eliminating extra manual settings and configuration.

First, in order to grant the cluster permissions to other resources in AWS, you need to ensure that your cluster has an OIDC provider. An OIDC (OpenID Connect) provider is a crucial component in establishing secure communication and authorization between an EKS cluster and AWS services.

If you are using the official EKS module, it already generates one for you automatically, so worry not. The attributes we need from the OIDC provider are its URL and ARN, and the EKS module exposes both as built-in outputs.
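
For reference, here is a minimal sketch of how the module might be declared; the inputs shown are illustrative, not a complete configuration, and assume the terraform-aws-modules/eks module:

# A minimal, illustrative sketch of the official EKS module
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-cluster"   # illustrative values
  cluster_version = "1.27"

  # Creates the IAM OIDC provider for the cluster (IRSA)
  enable_irsa = true
}

# Built-in outputs we will use below:
#   module.eks.oidc_provider      - the provider URL (issuer, without the scheme)
#   module.eks.oidc_provider_arn  - the provider ARN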

Now we can create the IAM role that grants permissions to the cluster.

resource "aws_iam_role" "cluster_role" {
  name = "my-cluster-role"
  assume_role_policy = data.aws_iam_policy_document.role_assume_policy_oidc.json
}
# Assume Policy
data "aws_iam_policy_document" "role_assume_policy_oidc" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"
    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }
    condition {
      test     = "StringLike"
      variable = "${module.eks.oidc_provider}:sub"
      values   = ["system:serviceaccount:*:*"]
    }
    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:aud"
      values   = ["sts.amazonaws.com"]
    }
  }
}

With this trust policy in place, the cluster’s service accounts can assume the role and receive the granted permissions in AWS.

In this example, every service account in the cluster can receive the permissions, but it is possible and, of course, safer to narrow down the permissions to a specific name or namespace, as shown in this example:

values = ["system:serviceaccount:kube-system:secret-reader"]

Now, only a service account named “secret-reader” in the kube-system namespace will be able to receive the permissions.

Once the role is created, the next step is to define the desired policy based on the specific capabilities we want the cluster to have. We then attach the policy to the role, resulting in a fully configured cluster with the necessary permissions.

Here is an example of a policy and policy attachment to retrieve secrets from AWS Secrets Manager:

resource "aws_iam_policy" "read_secret" {
  name        = "read-secrets"
  path        = "/"
  description = "Authorization for my cluster to read aws secrets"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = [
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
resource "aws_iam_role_policy_attachment" "SecretManagerPolicy" {
  policy_arn = aws_iam_policy.read_secret.arn
  role       = aws_iam_role.cluster_role.name
}

Here, too, the policy grants permission to all secrets, but it is possible and recommended to narrow it down to a specific secret, or to define a prefix so that the cluster can access only secrets whose names start with that prefix.

Instead of

Resource = "*"

I can write:

Resource = "arn:aws:secretsmanager:<your region>:<your account id>:secret:${var.secret_arn_prefix}*"

In the variable secret_arn_prefix, you can define whatever prefix you want (e.g., “staging”). The cluster will then have permission to access only secrets whose names start with that prefix, which tightens security and access control.
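
For illustration, the variable could be declared as follows; the default value is a hypothetical example:

variable "secret_arn_prefix" {
  description = "Name prefix of the Secrets Manager secrets the cluster may read"
  type        = string
  default     = "staging" # hypothetical example prefix
}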

Now the only thing left is to create a service account in the cluster and attach it to the pod or deployment that uses the desired AWS resources.

In the annotations of the service account, add the ARN of the IAM role we created:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<your account id>:role/my-cluster-role
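
Then reference the service account from the pod spec of the workload that needs the AWS resources. Here is a minimal sketch; the pod name and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # hypothetical workload
spec:
  serviceAccountName: secret-reader   # the service account created above
  containers:
    - name: app
      image: my-app:latest            # hypothetical image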

Conclusion

In many cases, the cluster is not our sole platform, and we want to combine other cloud services with it. The specific policies we incorporate may vary, but the principle is always the same: create an IAM role that grants the cluster the relevant permissions.

In this article, we demonstrated how to use Terraform to grant the cluster the required access to the relevant services, so that it goes live with every access it needs.

Would you like to know how to securely and easily pass secrets to the cluster? Read our next part and discover the simple, easy, and secure way to work with passwords in the cluster.
