Deployment on Azure AKS

An example deployment of a Kubernetes cluster on Azure AKS is provided here. Clone this repository and modify the files at your convenience. In the repository, as in a standard Terraform module, you will find the following files:

  • terraform.tf - declaration of the Terraform providers used in this project;

  • locals.tf - local variables used by the DevOps Stack modules;

  • main.tf - definition of all the deployed modules;

  • storage.tf - creation of the Storage Account and Storage Container used by Loki and Thanos;

  • dns.tf - creation of the wildcard record for the ingresses of the DevOps Stack components;

  • oidc.tf - addition of the redirect URIs to the Azure AD Enterprise Application in order to use it to authenticate to the DevOps Stack components providing a web interface;

  • outputs.tf - the output variables of the DevOps Stack.

The requirements folder is not part of the Terraform code you execute directly. Its importance is explained in the next section.

Requirements

On your local machine, you need to have the following tools installed:

  • Azure CLI to log in to your Azure account and interact with your AKS cluster;

  • Terraform to provision the whole stack;

  • kubectl or k9s to interact with your cluster.

Other than that, you will require the following:

  • An active Azure account with an active subscription;

  • An Enterprise Application on Entra ID to use as an identity provider for the DevOps Stack components;

  • The Azure subscription needs to have a Key Vault to store the secrets used to pass the credentials of said application to the DevOps Stack components;

  • Your Azure account needs to be part of a user group that has been assigned the roles Owner, Key Vault Reader, and Key Vault Secrets User on the subscription;

  • Your Azure account also needs to be an Owner of the Enterprise Application in order to add the proper redirect URIs.

In this repository, you will find an example of Terraform code that can provision the required resources above. You can find this code here.

Note that this code needs to be executed by an administrator with the proper rights on the subscription as well as on Entra ID.
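As a rough illustration, a minimal sketch of the part of that code that stores the application credentials in the Key Vault could look like the following (the resource names, secret names, and the azuread_application resource referenced here are assumptions, not the actual contents of the requirements folder):

    # Minimal sketch, assuming an azuread_application named "oidc" and a Key Vault
    # named azurerm_key_vault.this are declared elsewhere in the same code.
    resource "azuread_application_password" "oidc" {
      application_object_id = azuread_application.oidc.object_id # azuread provider v2 syntax
    }

    resource "azurerm_key_vault_secret" "aad_application_client_id" {
      name         = "aad-application-client-id" # placeholder secret name
      value        = azuread_application.oidc.application_id
      key_vault_id = azurerm_key_vault.this.id
    }

    resource "azurerm_key_vault_secret" "aad_application_client_secret" {
      name         = "aad-application-client-secret" # placeholder secret name
      value        = azuread_application_password.oidc.value
      key_vault_id = azurerm_key_vault.this.id
    }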

An alternative to creating the required resources separately is to give your user an Application Developer role assignment on the Entra ID tenant the subscription is linked to.

This will allow you to create the Enterprise Application and add the redirect URIs directly with your code, without the need for an administrator.

Check the application.tf file from the tip above and adapt the Terraform resources to create the application yourself.

Or simply create the Enterprise Application and add the redirect URIs manually.
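For reference, a sketch of creating the application with its redirect URIs in Terraform could look roughly like this (assuming the azuread provider v2; the display name and callback URLs are placeholders, one per DevOps Stack component with a web interface):

    resource "azuread_application" "oidc" {
      display_name = "devops-stack-oidc" # placeholder name

      web {
        # Placeholder redirect URIs; adapt one per component exposing a web UI.
        redirect_uris = [
          "https://argocd.your.domain.here/auth/callback",
          "https://grafana.your.domain.here/login/generic_oauth",
        ]
      }
    }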

Specificities and explanations

Remote Terraform state

If you do not want to configure the remote Terraform state backend, you can simply remove the backend block from the terraform.tf file.

More information about remote backends is available in the official documentation.
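For reference, an azurerm backend block in terraform.tf typically looks like this (all names below are placeholders):

    terraform {
      backend "azurerm" {
        resource_group_name  = "YOUR_TFSTATE_RESOURCE_GROUP"
        storage_account_name = "yourtfstateaccount"
        container_name       = "tfstate"
        key                  = "devops-stack.tfstate"
      }
    }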

OIDC authentication

The DevOps Stack modules are developed with OIDC in mind. In production, you should have an identity provider that supports OIDC and use it to authenticate to the DevOps Stack applications.

In this example, we use an Enterprise Application as the OIDC provider.

You can use any other OIDC provider by adapting the oidc block in the locals.tf file with the proper values.
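For example, a sketch of the oidc local adapted to a hypothetical Keycloak realm (all URLs, the realm name, and the client credentials are placeholders):

    oidc = {
      issuer_url    = "https://keycloak.your.domain.here/realms/devops-stack"
      oauth_url     = "https://keycloak.your.domain.here/realms/devops-stack/protocol/openid-connect/auth"
      token_url     = "https://keycloak.your.domain.here/realms/devops-stack/protocol/openid-connect/token"
      api_url       = "https://keycloak.your.domain.here/realms/devops-stack/protocol/openid-connect/userinfo"
      client_id     = "devops-stack-applications" # placeholder client ID
      client_secret = var.oidc_client_secret      # avoid hardcoding secrets
    }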

Let’s Encrypt SSL certificates

By default, to avoid your domain being rate-limited by Let’s Encrypt, the example uses the letsencrypt-staging configuration of the cert-manager module to generate certificates. This uses the Let’s Encrypt staging environment, which has an invalid CA certificate.

If you feel ready to test with production certificates, you can simply edit the locals.tf file and change the cluster_issuer variable to letsencrypt-prod.
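In this example the value comes from the cert-manager module outputs, so the change amounts to something like:

    cluster_issuer = "letsencrypt-prod" # was module.cert-manager.cluster_issuers.staging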

Deployment

  1. Clone the repository and cd into the examples/aks folder;

  2. Log in to your Azure account with the Azure CLI, set the proper subscription, and verify you are connected:

    az login
    az account set --subscription <subscription_id>
    az account show
  3. Choose the modules you want to deploy in the main.tf file and comment out the others;

    You can also add your own Terraform modules in this file or in any other file in the root folder. A good place to start writing your own module is to clone the devops-stack-module-template repository and adapt it to your needs.
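    As a rough, hypothetical sketch (the source URL and inputs below are assumptions; check each module's documentation for its real variables), a module declaration in main.tf generally has this shape:

    module "my_module" {
      # Placeholder source; pin a real DevOps Stack module or your own fork here.
      source = "git::https://github.com/camptocamp/devops-stack-module-template.git?ref=main"

      cluster_name   = local.cluster_name
      base_domain    = local.base_domain
      cluster_issuer = local.cluster_issuer
      app_autosync   = local.app_autosync
    }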
  4. From the source of the example deployment, initialize the Terraform modules and providers:

    terraform init
  5. Configure the variables in locals.tf to your preference. TIP: the cluster module documentation can help you know what to put in kubernetes_version, for example:

    locals {
      # Parameters for the resources that are created outside this code, but still on the Azure subscription where the DevOps Stack will be deployed.
      default_resource_group         = "YOUR_DEFAULT_RESOURCE_GROUP" # The default resource group where the Key Vault with the Azure AD application credentials is located.
      default_key_vault              = "YOUR_KEY_VAULT_NAME"         # The name of the Key Vault with the Azure AD application credentials.
      oidc_application_name          = "YOUR_APPLICATION_NAME"       # The name of the Azure AD application that will be used for OIDC authentication.
      cluster_admins_group_object_id = "YOUR_CLUSTER_ADMINS_GROUP_OBJECT_ID"
    
      # Parameters used for this deployment of the DevOps Stack.
      common_resource_group    = "YOUR_COMMON_RESOURCE_GROUP" # The resource group where the common resources will reside. Must be unique for each DevOps Stack deployment in a single Azure subscription. 
      location                 = "YOUR_LOCATION"
      kubernetes_version       = "1.28"
      sku_tier                 = "Standard"
      cluster_name             = "YOUR_CLUSTER_NAME" # Must be unique for each DevOps Stack deployment in a single Azure subscription.
      base_domain              = "your.domain.here"  # Must match a DNS zone in the Azure subscription where you are deploying the DevOps Stack.
      activate_wildcard_record = true
      cluster_issuer           = module.cert-manager.cluster_issuers.staging
      letsencrypt_issuer_email = "YOUR_EMAIL_ADDRESS"
      enable_service_monitor   = false # Can be enabled after the first bootstrap.
      app_autosync             = true ? { allow_empty = false, prune = true, self_heal = true } : {}
    
      # The virtual network CIDR must be unique for each DevOps Stack deployment in a single Azure subscription.
      virtual_network_cidr = "10.1.0.0/16"
    
      # Automatic subnets IP range calculation, splitting the virtual_network_cidr above into 6 subnets.
      cluster_subnet = cidrsubnet(local.virtual_network_cidr, 8, 0)
    
      # Local containing all the OIDC definitions required by the DevOps Stack modules.
      oidc = {
        issuer_url    = format("https://login.microsoftonline.com/%s/v2.0", data.azuread_client_config.current.tenant_id)
        oauth_url     = format("https://login.microsoftonline.com/%s/oauth2/authorize", data.azuread_client_config.current.tenant_id)
        token_url     = format("https://login.microsoftonline.com/%s/oauth2/token", data.azuread_client_config.current.tenant_id)
        api_url       = format("https://graph.microsoft.com/oidc/userinfo")
        client_id     = data.azurerm_key_vault_secret.aad_application_client_id.value
        client_secret = data.azurerm_key_vault_secret.aad_application_client_secret.value
        oauth2_proxy_extra_args = local.cluster_issuer != "letsencrypt-prod" ? [
          "--insecure-oidc-skip-issuer-verification=true",
          "--ssl-insecure-skip-verify=true",
        ] : []
      }
    }
  6. Finally, run terraform apply and accept the proposed changes to create the Kubernetes nodes on Azure AKS and populate them with the DevOps Stack services:

    terraform apply
  7. After the first deployment (please note the troubleshooting step related to Argo CD), you can go to locals.tf and set the enable_service_monitor boolean to true to activate the Prometheus exporters that will send metrics to Prometheus;

    This flag needs to be set to false for the first bootstrap of the cluster; otherwise, the applications will fail to deploy because the Custom Resource Definitions of the kube-prometheus-stack are not yet created.
    You can either set the flag to true in the locals.tf file or simply delete the line from the modules' declarations, since this variable defaults to true in each module.
    Take note of the local called app_autosync. If you set the condition of the ternary operator to false, you will disable auto-sync for all the DevOps Stack modules. This allows you to choose when to manually sync each module in the Argo CD interface and is useful for troubleshooting purposes, as shown below.
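    For example, to disable auto-sync for all modules, flip the ternary condition in locals.tf:

    app_autosync = false ? { allow_empty = false, prune = true, self_heal = true } : {}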

Access the cluster and the DevOps Stack applications

To access your cluster, use the Azure CLI to retrieve a kubeconfig you can use:

az aks get-credentials --resource-group YOUR_CLUSTER_RESOURCE_GROUP_NAME --name YOUR_CLUSTER_NAME --file ~/.kube/NAME_TO_GIVE_YOUR_CONFIG.config

If you did not add your user’s or group’s object ID to the rbac_aad_admin_group_object_ids variable in main.tf, you will need to use the --admin flag on the command above. This will give you the privileged kubeconfig to access the cluster.
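In that case, the command becomes:

az aks get-credentials --resource-group YOUR_CLUSTER_RESOURCE_GROUP_NAME --name YOUR_CLUSTER_NAME --file ~/.kube/NAME_TO_GIVE_YOUR_CONFIG.config --admin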

Then you can use the kubectl or k9s command to interact with the cluster:

k9s --kubeconfig ~/.kube/NAME_TO_GIVE_YOUR_CONFIG.config

As for the DevOps Stack applications, you can access them through the ingress domain that you can find in the ingress_domain output. If you used the code from the example without modifying the outputs, you will see something like this in your terminal after terraform apply has done its job:

Outputs:

ingress_domain = "your.domain.here"

Or you can use kubectl to get all the ingresses and their respective URLs:

kubectl get ingress --all-namespaces --kubeconfig ~/.kube/NAME_TO_GIVE_YOUR_CONFIG.config

Stop the cluster

To definitively stop the cluster with a single command, you can simply use terraform destroy. This will destroy all the resources created by the Terraform code, including the Kubernetes cluster.
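
terraform destroy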

Troubleshooting

connection_error during the first deployment

In some cases, you could encounter an error like the following during the first deployment:

╷
│ Error: error while waiting for application argocd to be created
│
│   with module.argocd.argocd_application.this,
│   on .terraform/modules/argocd/main.tf line 55, in resource "argocd_application" "this":
│   55: resource "argocd_application" "this" {
│
│ error while waiting for application argocd to be synced and healthy: rpc error: code = Unavailable desc = error reading from server: EOF
╵

The error is due to the way we provision Argo CD in the final steps of the deployment. We use the bootstrap Argo CD to deploy the final Argo CD module, which causes a redeployment of Argo CD and consequently a momentary loss of connection between the Argo CD Terraform provider and the Argo CD server.

You can simply re-run terraform apply to finalize the bootstrap of the cluster whenever you encounter this error.

Argo CD interface reload loop when clicking on login

If you encounter a loop when clicking on the login button on the Argo CD interface, you can try to delete the Argo CD server pod and let it be recreated.
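
For example, assuming Argo CD runs in the argocd namespace with the standard argocd-server labels (adjust to your deployment):

kubectl delete pod --namespace argocd --selector app.kubernetes.io/name=argocd-server --kubeconfig ~/.kube/NAME_TO_GIVE_YOUR_CONFIG.config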

For more information about the Argo CD module, please refer to the respective documentation page.