GitOps Configuration with ArgoCD

March 26, 2024

As part of my training at Develeap, and to showcase the implementation of modern DevOps methodologies, I developed a portfolio project that supports efficient development and deployment. It includes a robust CI/CD pipeline for a containerized application, infrastructure provisioning on AWS using Terraform, and a GitOps configuration for deploying the app on Kubernetes.

Task-it is a web-based task management application that allows users to create, manage, and track tasks with text-based content. Its stack consists of a user interface I built with React.js and a REST API backend I coded in Flask, all backed by a PostgreSQL database. To later deploy the application and serve it to end users, I rely on Docker to build a container image, which is stored in an artifact repository as part of a CI pipeline managed by Jenkins. This pipeline is triggered by commits pushed to the app's GitHub repository, and the produced image is consumed by a custom Helm chart I've built for Task-it.

In this article, I’ll be focusing on the deployment aspect of my project, and how I made use of ArgoCD while adhering to the GitOps workflow to deploy my application’s Helm chart and its accompanying infrastructure to a Kubernetes cluster. 

GitOps is a methodology for managing and automating infrastructure and application deployment through Git, taking advantage of version control as the single source of truth that defines the desired state of our production environment. In GitOps, we treat the code we use to define our infrastructure the same way we would our application's source code – by managing it through a designated repository. This approach differs from the traditional one, which relies on the CI process to trigger a push to production; instead, a tool monitors the repository that stores our configuration and pulls changes from it as they are detected. The configuration for my application's deployment works the same way: it's stored in its very own repository, and ArgoCD deploys and maintains the desired state of my cluster, whether for my own custom application Helm charts or for the publicly maintained charts my infrastructure relies on.

Installing ArgoCD

The first step for getting started with ArgoCD is to provision a Kubernetes cluster in which we’ll spin up our infrastructure. In my case, I’ve used the Elastic Kubernetes Service (EKS) offered by AWS. Once we have a cluster up and running and have configured kubectl to communicate with it, we can go ahead and install ArgoCD using its community-maintained Helm chart. 

$ helm repo add argo https://argoproj.github.io/argo-helm
$ helm install -n argocd --create-namespace argocd argo/argo-cd --version <VERSION>
Note the use of a separate namespace for ArgoCD, as per best practices, for a logical separation of concerns within our cluster.

Upon installation, we are greeted in the terminal with instructions on how to access the UI and log in with the initially generated admin password. Note that it's good practice to change this password and delete its secret after logging in. We'll be using port forwarding to access the web UI in order to monitor our deployed applications, though we'll primarily be using declarative configuration files to actually deploy them (more on that later). For the initial login, simply port-forward the argocd-server service to a port on your local machine, then access that same port on localhost in your browser. Log in as the admin user with the auto-generated password obtained from the argocd-initial-admin-secret secret.

# Obtain the password
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Port forward the UI
$ kubectl port-forward service/argocd-server -n argocd 8080:443
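The password rotation mentioned above can be scripted with the argocd CLI. Here's a sketch, assuming the CLI is installed and the port-forward from the previous step is active (the script name is my own):

```shell
# Sketch: log in with the initial password, rotate it, then delete its secret.
# Assumes the argocd CLI and an active port-forward on localhost:8080.
cat > rotate-admin-password.sh <<'EOF'
#!/bin/sh
PASS=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)
argocd login localhost:8080 --username admin --password "$PASS" --insecure
argocd account update-password   # prompts for the new password
kubectl -n argocd delete secret argocd-initial-admin-secret
EOF
sh -n rotate-admin-password.sh   # syntax check only; run it against your cluster
```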

Configuring credentials

We'll be leaving the user interface behind for now to focus on declaratively configuring ArgoCD to access our GitOps configuration repository. My configuration is hosted in a GitHub repository called taskit-gitops, which serves to deploy the Kubernetes infrastructure for my application; if the repository is private, it is necessary to provide ArgoCD with credentials to access it. The declarative way of doing so is to create a Secret with the argocd.argoproj.io/secret-type label set to repository, providing certain properties as its data, which include:

  • name – The name for our credential; make it meaningful.
  • type – Credential type, which we’ll set to git.
  • url – The SSH URL for the repository. 
  • sshPrivateKey – The SSH credential itself which grants access to the private repository.
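The SSH key itself can be a dedicated key pair whose public half is registered as a read-only deploy key on the GitHub repository. A sketch (the file path and comment are my own choices):

```shell
# Generate a dedicated ed25519 key pair for ArgoCD, without a passphrase
ssh-keygen -t ed25519 -C "argocd-taskit-gitops" -N "" -f ./argocd_deploy_key
# The public key goes to GitHub (repo Settings > Deploy keys);
# the private key goes into the sshPrivateKey field of the secret
cat ./argocd_deploy_key.pub
```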

Here's a working example:

apiVersion: "v1"
kind: "Secret"
metadata:
  name: "taskit-gitops-repo-cred"
  namespace: "argocd"
  labels:
    argocd.argoproj.io/secret-type: "repository"
stringData:
  name: "taskit-gitops-repo-cred"
  type: "git"
  url: "git@github.com:yuval2313/taskit-gitops.git"
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <YOUR-SECRET-KEY>
    -----END OPENSSH PRIVATE KEY-----
Secrets must exist within the same namespace as the resources consuming them, so be sure to create this one within the same namespace you used previously when installing the chart – or else ArgoCD won't see it!

Once we've created the secret in our cluster, we can navigate in the web UI to Settings > Repositories to confirm that our configuration was successful.

Deploying an application

Now we can move on to actually deploying our infrastructure by having ArgoCD deploy some resources, which is done using a custom resource definition (CRD) called an Application. This custom resource represents a deployed application managed by ArgoCD. It needs to know where to find the Kubernetes manifests defining our desired state, as well as the destination cluster that holds our actual state – two key pieces of information captured by the source and destination attributes. Beyond these, there are additional specifications we can provide to determine how and where to install our application on the cluster. Let's take a look at my example for deploying the Task-it application Helm chart hosted within my configuration repository:

apiVersion: "argoproj.io/v1alpha1"
kind: "Application"
metadata:
  name: "taskit-application"
  namespace: "argocd"
  finalizers:
    - "resources-finalizer.argocd.argoproj.io"
spec:
  project: "default" 
  source: 
    repoURL: "git@github.com:yuval2313/taskit-gitops.git"
    path: "taskit" 
    targetRevision: "HEAD"
  destination: 
    server: "https://kubernetes.default.svc"
    namespace: "taskit" 
  syncPolicy:
    syncOptions:
      - "CreateNamespace=true" 
    automated: 
      selfHeal: true
      prune: true

Let’s cover the specification for this application manifest.

  • project – For grouping ArgoCD applications into projects.
  • source – Reference to the Git configuration.
    • repoURL – Where to find the Kubernetes manifests – the URL of either a Git or a Helm repository.
    • path – Path within Git repository.
    • targetRevision – Target Git revision, for Helm repositories it’s the chart version.
  • destination – Reference to the target cluster and namespace.
    • server – Cluster API URL; in most cases we simply point to the cluster hosting ArgoCD itself, as in the example above.
    • namespace – Here we specify a custom namespace for our app.
  • syncPolicy
    • syncOptions – Setting the CreateNamespace option to true automatically creates our custom namespace if it doesn’t already exist.
  • automated – Most automation features are turned off by default as a safety mechanism.
    • selfHeal – Automatically sync when manual changes are applied to the cluster.
    • prune – Deletion of resources during synchronization is disabled by default; this enables it.

Additionally, you may notice the resources-finalizer.argocd.argoproj.io finalizer which enables cascading deletion of the application’s resources. This is useful when we want to delete the application resource itself and also all of the resources it manages.

We can create an application using kubectl as we would with any other Kubernetes manifest:

$ kubectl apply -f taskit-application.yml

After creating this application resource in our cluster, ArgoCD will begin to monitor our source and compare it with our destination, and because we’ve enabled automation features, it will automatically synchronize our cluster with its desired state. Initially, since the cluster doesn’t contain our app yet, synchronization would cause it to be deployed! In the future, as changes are committed to the GitOps configuration repository, ArgoCD will pick up on these changes and make the necessary adjustments to the actual state of the cluster to meet specifications. 

One common change to be committed to a GitOps configuration repository is to edit the image tag of some Kubernetes deployment resource which manages our application in the cluster. This change can be done as part of the CI/CD pipeline after pushing a new version of an image to an artifact repository, which would subsequently cause this new version to be reflected in our production environment thanks to ArgoCD synchronizing the state.
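As an illustration, such a pipeline step could look like the following – assuming the chart reads the tag from a values.yml with an image.tag field (the file layout and tag values here are hypothetical):

```shell
# Simulate the values file as it might exist in the GitOps repo (hypothetical layout)
cat > values.yml <<'EOF'
image:
  repository: myrepo/taskit
  tag: "1.0.3"
EOF

NEW_TAG="1.0.4"
# Bump the tag in place; a real pipeline would do this in a clone of the repo
sed -i.bak "s/^\(  tag: \).*/\1\"${NEW_TAG}\"/" values.yml
grep 'tag:' values.yml
# ...followed by: git commit -am "Bump taskit image to ${NEW_TAG}" && git push
```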

Deploying from a Helm Repository

Now we know how to deploy an application by pointing to its configuration within a Git repository, which is great! To follow up, let's discuss how to deploy a Helm chart hosted in a Helm repository. To do this, we need to change our specification a bit so we can target a chart at a specific version and supply it with custom values.

apiVersion: "argoproj.io/v1alpha1"
kind: "Application"
metadata:
  name: "ingress-nginx-application"
  namespace: "argocd"
  finalizers:
    - "resources-finalizer.argocd.argoproj.io"
spec:
  project: "default"
  sources:
    - repoURL: "https://kubernetes.github.io/ingress-nginx"
      targetRevision: "4.9.0"
      chart: "ingress-nginx"
      helm:
        valueFiles:
          - "$mycharts/values/ingress-nginx-values.yml"
    - repoURL: "git@github.com:yuval2313/taskit-gitops.git"
      targetRevision: "HEAD"
      ref: "mycharts"
  destination:
    server: "https://kubernetes.default.svc"
    namespace: "ingress-nginx"
  syncPolicy:
    syncOptions:
      - "CreateNamespace=true"
    automated:
      selfHeal: true
      prune: true

In this application manifest, we are deploying version 4.9.0 of the ingress-nginx Helm chart and supplying it with custom values. The first thing to notice is that the targetRevision property no longer specifies a Git revision such as "HEAD", but rather the chart's version. Another detail to note is the appearance of some additional properties – chart and helm, which are specific to Helm charts – and the use of sources instead of just source.

  • sources – Specify multiple sources. 
  • chart – Chart name, only relevant for charts hosted in Helm repositories.
  • helm
    • valueFiles – A list of paths to values files.
    • values – Values provided inline as a block string.
    • valuesObject – Values provided inline as a YAML object.

I've opted for the helm.valueFiles property instead of inserting my custom values directly within the application manifest. This property accepts a list of paths to values files, which can either be relative to the source's path directory, or a reference to a file hosted in a different source, as in my example. Because we aren't targeting a chart hosted in a Git repository, we can't specify a relative path to a values file in the same repo. Instead, we must use the sources property (note that it's plural) to specify multiple sources: the first source is the chart hosted in a Helm repository which we wish to deploy, and the second is the GitOps configuration repository containing the values file. The source intended for fetching the values uses the ref property to assign itself a referable name; the source for the chart we are deploying (ingress-nginx in this case) can then use this reference via a $ sign and specify the path to the values file within that source.
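For completeness, the referenced file is just an ordinary Helm values file. A minimal, hypothetical example of what values/ingress-nginx-values.yml might contain:

```yaml
# Example overrides for the ingress-nginx chart (illustrative values)
controller:
  replicaCount: 2
  service:
    type: "LoadBalancer"
```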

The app of apps pattern

We've learned how to deploy Helm charts hosted in Git and Helm repositories, and we may often wish to implement both types of applications in our cluster. To create multiple apps, we could simply apply each one individually, or perhaps use a custom script to deploy all of our required applications – but the recommended approach is to adhere to the app-of-apps pattern.

This pattern is based on a simple concept – to have one ArgoCD application which is responsible for deploying only other ArgoCD applications. Since applications are capable of applying Kubernetes manifests, be it straight-up YAML files or possibly Helm charts as we’ve already demonstrated, there’s no reason applications can’t apply other applications. With such an approach, we can organize all of our desired application YAML files within a single directory in our GitOps configuration repository, and point an app of apps application manifest to this particular directory, triggering a chain reaction of applications 🤯. 

Picking up from our examples above, we’ll demonstrate the deployment of both the taskit application chart as well as the ingress-nginx chart using a single app of apps application, and we’ll start by reviewing the GitOps repository’s structure:

├── taskit/
│   ├── templates/
│   └── Chart.yaml
├── infra-apps/
│   ├── taskit-application.yml
│   └── ingress-nginx-application.yml
├── values/
│   └── ingress-nginx-values.yml
└── infra-apps-application.yml

In this repository we have 3 key directories:

  • taskit/ – This is the custom Helm chart for my app.
  • infra-apps/ – This directory will hold all of the ArgoCD application manifests for my desired infrastructure.
  • values/ – Here we store our custom values files for different charts.

The app of apps application manifest, infra-apps-application.yml, sits at the root of our repository and looks like this:

apiVersion: "argoproj.io/v1alpha1"
kind: "Application"
metadata:
  name: "infra-apps-application"
  namespace: "argocd"
  finalizers:
    - "resources-finalizer.argocd.argoproj.io"
spec:
  project: "default"
  source:
    repoURL: "git@github.com:yuval2313/taskit-gitops.git"
    path: "infra-apps"
    targetRevision: "HEAD"
  destination:
    server: "https://kubernetes.default.svc"
    namespace: "argocd"
  syncPolicy:
    syncOptions:
      - "CreateNamespace=true"
    automated:
      selfHeal: true
      prune: true

All in all, it looks just like any other application pointing to a Git repository – only, as we already know, this one is responsible solely for other ArgoCD applications. Earlier we briefly mentioned the resources-finalizer.argocd.argoproj.io finalizer, which is also present here and enables cascade deletion; in this case, deleting this application resource would subsequently delete the applications under it as well – use with caution. With this example, we can deploy our app of apps application once, and it will take care of deploying both the taskit application chart and the ingress-nginx chart with custom values!

Now, let’s apply this resource in the cluster:

$ kubectl apply -f infra-apps-application.yml

My actual GitOps configuration repository contains a few more applications and custom charts, which I deploy using the very same infra-apps-application.yml manifest.

Conclusion

In this article, I’ve explored how we can utilize ArgoCD in conjunction with a GitOps configuration repository to deploy an application’s infrastructure to a Kubernetes cluster. We’ve introduced the basics of configuring ArgoCD, utilizing the Application CRD, and delved into the differences between specifying a chart hosted in a Git repository vs one in a Helm repository. Finally, we’ve also learned how to take advantage of the app of apps pattern to both structure our GitOps repository as well as deploy all of our desired ArgoCD applications in a single stroke.
