AWS Secrets in EKS — Part II: How to securely and easily pass secrets to the cluster?
The million-dollar question is: How to securely and easily pass secrets to the cluster?
On one hand, the right approach is a GitOps mindset: all the cluster’s configurations and resources are stored in Git, making everything readable, dynamic, and recoverable. But on the other hand, we don’t want to expose our secrets in Git.
There are many solutions to this, but in this article, we’ll focus on one of them — the Secrets Store CSI Driver. It is a service that can be installed on the cluster as a Helm chart and allows us to consume secrets directly from our secret provider. This service can work with multiple providers, and in this article, we’ll concentrate on AWS Secrets Manager.
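For reference, installing the official chart looks roughly like this (a sketch using the repo URL and chart name documented by the project; we'll refine this setup at the end of the article):

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm repo update
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace kube-system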
The first step to working with AWS Secrets Manager is to configure the required permissions for the cluster. You can find the complete explanation of how to do it in the first part of my blog.
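In short, the pods that read secrets will run with a service account linked to an IAM role that is allowed to call secretsmanager:GetSecretValue. A minimal sketch, assuming IRSA is set up and using a placeholder role ARN (the secret-reader name is the one used in the deployment later in this article):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-reader
  namespace: applications
  annotations:
    # placeholder ARN of the IAM role created in part I
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/eks-secret-reader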
How does the Secrets Store CSI Driver work?
By default, the driver doesn't generate Kubernetes secrets; it simply mounts the secret into the pod as a volume, at the path you specify. This is the more secure approach, but it can be less convenient: if your application expects environment variables, you have to extract them from the mounted files yourself. Nonetheless, the driver can also be configured to generate Kubernetes secrets in addition to the volume.
In this blog, I will explain the basic mechanics of the Secrets Store CSI Driver, and then mention two basic things that are missing in the chart and how I address them.
So, let's see how it works in practice.
The Secrets Store CSI Driver introduces a custom resource called SecretProviderClass, which defines which secrets to pull from the secret provider and how to expose them inside the cluster.
Here is an example of how to create a SecretProviderClass:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-aws-secrets
  namespace: applications
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "staging-mysql-url" # the name of the secret in AWS Secrets Manager
        objectType: "secretsmanager"
Now, I create a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: applications
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: secret-reader
      volumes:
        - name: aws-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "db-aws-secrets"
      containers:
        - name: my-app
          image: ubuntu:latest
          volumeMounts:
            - name: aws-secrets
              mountPath: "/mnt/secrets"
              readOnly: true
In the volumes section we define a CSI volume that refers to the SecretProviderClass we've created.
In volumeMounts we define the path where the secrets will be mounted inside the pod.
Now, if we exec into the pod and look at /mnt/secrets, we'll find a file named "staging-mysql-url" whose content is the value of that secret in AWS Secrets Manager.
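For example, a quick way to check it (adjust the namespace and deployment name to your own):

kubectl exec -n applications deploy/my-app -- ls /mnt/secrets
kubectl exec -n applications deploy/my-app -- cat /mnt/secrets/staging-mysql-url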
Create Kubernetes Secret
To create a Kubernetes secret as well, we add a secretObjects section to our SecretProviderClass:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-aws-secrets
  namespace: applications
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "staging-mysql-url"
        objectType: "secretsmanager"
        objectAlias: db_url # an alias, which should be different from the object name
  secretObjects:
    - data:
        - key: DATABASE_URL
          objectName: db_url # the alias name
      secretName: db-url # the name of the Kubernetes secret that will be created
      type: Opaque
      labels:
        app: mysql
Now a secret named db-url will be created, and its data will include this:
DATABASE_URL: <value of the secret, base64>
Of course, it's possible to define more than one data entry, or more than one Kubernetes secret; both secretObjects and data are lists.
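A common way to use this synced secret is to expose it to the application as an environment variable. Here is a sketch of the containers section from the deployment above (note that the CSI volume still has to be mounted, since the driver only creates the Kubernetes secret while a pod mounts the SecretProviderClass volume):

containers:
  - name: my-app
    image: ubuntu:latest
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: db-url # the secretName defined in secretObjects
            key: DATABASE_URL # the key defined in the data list
    volumeMounts:
      - name: aws-secrets
        mountPath: "/mnt/secrets"
        readOnly: true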
Your next steps to finalize the process
So far, so good. Now, what is missing in the chart? Two simple things that, once added, will make it perfect.
First, in order to work with AWS Secrets Manager, you need to additionally install the AWS provider for the driver, a few manifest files that are located here.
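As a sketch, installing the provider by applying its installer manifest looks like this (the path below is the one documented in the aws/secrets-store-csi-driver-provider-aws repository at the time of writing, so verify it before use):

kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml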
The second thing, as I mentioned before, is that you can add a configuration that generates a Kubernetes secret. However, when I first tried it, it didn't work: inside the pod I could see the file with the secret value, but the Kubernetes secret was not created. It turns out (and for some reason it isn't mentioned in the documentation) that the chart's cluster role (named secretproviderclasses-role) does not have permission to create secrets. So, what do we do? We add that permission to the chart's cluster role, and it works perfectly.
kubectl edit clusterrole secretproviderclasses-role
and add this block to the cluster role:
- apiGroups:
    - ''
  resources:
    - secrets
  verbs:
    - get
    - list
    - watch
    - create
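Once the permission is in place, restart the deployment so the volume is mounted again and the secret gets synced, then verify that it was created (using the names from the examples above):

kubectl rollout restart deployment/my-app -n applications
kubectl get secret db-url -n applications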
But as we said, we want all the cluster configuration to be in Git without the need to run kubectl commands to make it work.
The simple solution is to create a custom Helm chart that uses the official chart as a dependency and also contains the AWS installation files, along with a cluster role that has permissions on secrets. This cluster role overrides the original cluster role of the chart. Here you can see an example that I created.
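As a sketch, the wrapper chart's Chart.yaml might look like this (the chart name and version here are illustrative, not the ones from my example):

apiVersion: v2
name: secrets-store-csi-driver-aws
version: 0.1.0
dependencies:
  - name: secrets-store-csi-driver
    version: 1.4.0 # pin to the chart version you actually use
    repository: https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

The chart's templates directory then holds the AWS provider manifests and a ClusterRole named secretproviderclasses-role that includes the secrets rule shown above.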
Now, when we install this chart, we have a simple and functional system for managing our secrets in Kubernetes.
Did you miss my first article, demonstrating how to use Terraform to grant the cluster the required access to the relevant services, so that it is ready with every access it needs when it goes live? You can read it here.