Deploying the DevOps Stack to SKS
Prerequisites
- Access to API keys allowing you to create the required resources in Exoscale,
- Access to GitLab or GitHub (the only supported CI/CD platforms for now),
- Knowledge of Terraform basics.
Create your Terraform root module
Camptocamp’s DevOps Stack is instantiated using a Terraform composition module.
Here is a minimal working example:
# terraform/main.tf
locals {
  cluster_name = "my-cluster"
}

module "cluster" {
  source = "git::https://github.com/camptocamp/devops-stack.git//modules/sks/exoscale?ref=v0.47.0"

  cluster_name       = local.cluster_name
  zone               = "ch-gva-2"
  kubernetes_version = "1.21.4"

  nodepools = {
    "router-${local.cluster_name}" = {
      size          = 2
      instance_type = "standard.large"
    }
  }
}
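The nodepools map can hold several entries if you need more than one node pool. The second pool below is purely illustrative (its name, size and instance type are assumptions); it reuses the same keys as the example above:

```hcl
# Hypothetical sketch: two node pools in the same map.
# The "workers" pool name, size and instance type are placeholders.
nodepools = {
  "router-${local.cluster_name}" = {
    size          = 2
    instance_type = "standard.large"
  }
  "workers-${local.cluster_name}" = {
    size          = 3
    instance_type = "standard.large"
  }
}
```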
Terraform Outputs
Define outputs:
# terraform/outputs.tf
output "argocd_server_admin_password" {
  sensitive = true
  value     = module.cluster.argocd_server_admin_password
}

output "argocd_auth_token" {
  sensitive = true
  value     = module.cluster.argocd_auth_token
}

output "kubeconfig" {
  sensitive = true
  value     = module.cluster.kubeconfig
}

output "argocd_server" {
  value = module.cluster.argocd_server
}

output "grafana_admin_password" {
  sensitive = true
  value     = module.cluster.grafana_admin_password
}
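Once the stack is deployed, these outputs can be read back with the Terraform CLI. Outputs marked sensitive are redacted in the apply summary but can still be printed individually. A sketch, assuming terraform apply has already been run from the terraform/ directory:

```shell
cd terraform

# List all outputs; sensitive values are redacted in this listing
terraform output

# Print a single output, e.g. the Grafana admin password defined above
terraform output grafana_admin_password
```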
Terraform Backend
If you wish to collaborate, define a backend to store your state:
# terraform/versions.tf
terraform {
  backend "remote" {
    organization = "example_corp"

    workspaces {
      name = "my-app-prod"
    }
  }
}
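The remote backend above targets Terraform Cloud. If you prefer to keep the state in Exoscale itself, the standard s3 backend can be pointed at Exoscale's S3-compatible Object Storage (SOS); the bucket name, key and endpoint below are assumptions to adapt:

```hcl
# terraform/versions.tf -- alternative sketch: state stored in Exoscale
# SOS (S3-compatible). Bucket, key and zone are placeholders.
terraform {
  backend "s3" {
    bucket   = "my-terraform-state"
    key      = "my-cluster/terraform.tfstate"
    region   = "ch-gva-2"
    endpoint = "https://sos-ch-gva-2.exo.io"

    # SOS is not AWS, so skip the AWS-specific validations
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
```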
Deploying from your workstation
Although one of the purposes of the DevOps Stack is to do everything in pipelines, you can deploy your cluster from your workstation using the Terraform CLI:
$ cd terraform
$ terraform init
$ terraform apply
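For a local run, the Exoscale provider needs the same credentials the pipelines use; exporting them as environment variables (the same EXOSCALE_API_KEY / EXOSCALE_API_SECRET names used in the CI/CD sections below) avoids committing them to the repository. The values here are placeholders:

```shell
# Export Exoscale credentials for the provider (placeholder values)
export EXOSCALE_API_KEY="EXOxxxxxxxxxxxxxxxx"
export EXOSCALE_API_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

cd terraform
terraform init
terraform apply
```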
Deploying from pipelines
When using pipelines, the DevOps Stack runs a dry run on each merge request and applies the changes on every commit to a protected branch.
GitLab CI
Push your code in a new project on Gitlab
Create a new project on GitLab and push your Terraform files to it. You can use either gitlab.com or a self-hosted GitLab instance.
Protect your branch
The cluster-creation pipeline is triggered only on protected branches, so you have to protect every branch that defines a cluster (in Settings ⇒ Repository ⇒ Protected Branches).
Add CI / CD variables
There are multiple ways to configure the Exoscale Terraform provider. You could commit the credentials in your code, at a high risk of leaking secrets; a simpler and safer solution is to define the required environment variables as CI/CD variables.
In your project’s Settings → CI/CD → Variables, add variables for:
- EXOSCALE_API_KEY
- EXOSCALE_API_SECRET
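With the variables in place, the project only needs a .gitlab-ci.yml that pulls in the DevOps Stack pipeline definition. The include URL below is a hypothetical illustration; check the devops-stack repository for the actual path, and pin it to the same ref as the module source in main.tf:

```yaml
# .gitlab-ci.yml -- sketch only; the remote path is an assumption,
# pinned to the same ref (v0.47.0) as the Terraform module.
---
include:
  - remote: https://raw.githubusercontent.com/camptocamp/devops-stack/v0.47.0/.gitlab-ci/pipeline.yaml
```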
GitHub Actions
Add Actions secrets
There are multiple ways to configure the Exoscale Terraform provider. You could commit the credentials in your code, at a high risk of leaking secrets; a simpler and safer solution is to define the required environment variables as Actions secrets.
In your project’s Settings → Secrets → Actions, create secrets for these variables:
- EXOSCALE_API_KEY
- EXOSCALE_API_SECRET
Create GitHub Actions workflow for your project
Unfortunately, composite Actions have some limitations right now, so we can’t provide a single Action to declare in your workflow (as we do for the GitLab pipeline). Hence, you have to create a .github/workflows/terraform.yml file with the following content:
---
name: 'Terraform'

on:
  push:
    branches:
      - main
  pull_request:

jobs:
  terraform:
    name: Terraform
    runs-on: ubuntu-latest
    env:
      EXOSCALE_API_KEY: ${{ secrets.EXOSCALE_API_KEY }}
      EXOSCALE_API_SECRET: ${{ secrets.EXOSCALE_API_SECRET }}
      TF_ROOT: terraform
    defaults:
      run:
        working-directory: ${{ env.TF_ROOT }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 0.14.10

      - name: Terraform Format
        run: terraform fmt -check -diff -recursive

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate -no-color

      - name: Terraform Plan
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color -out plan

      - name: Install aws-iam-authenticator
        if: github.event_name == 'push'
        run: |
          mkdir -p ${{ github.workspace }}/bin
          curl -o ${{ github.workspace }}/bin/aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator
          chmod +x ${{ github.workspace }}/bin/aws-iam-authenticator
          echo "PATH=${{ github.workspace }}/bin:$PATH" >> $GITHUB_ENV

      - name: Terraform Apply
        if: github.event_name == 'push'
        run: terraform apply -auto-approve

      - name: Terraform Plan
        if: github.event_name == 'push'
        run: terraform plan -detailed-exitcode -no-color
Recovering the kubeconfig for SKS
Retrieve the kubeconfig using the following command:

$ exo sks kubeconfig <CLUSTER_NAME> -z <YOUR_REGION> <A_USER_NAME> > kubeconfig.yaml

<A_USER_NAME> is the name of the user (certificate CN) to create for the new account associated with the config file.
You can then point to the new kubeconfig.yaml file using export KUBECONFIG=$PWD/kubeconfig.yaml. Then, you should be able to use kubectl.
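Putting those steps together, a concrete (illustrative) run might look like this, assuming the exo CLI is installed and configured, and reusing the cluster name and zone from the Terraform example; the user name "kube-admin" is a placeholder:

```shell
# Create a kubeconfig for a "kube-admin" user (placeholder name) on the
# cluster defined in main.tf, then point kubectl at it
exo sks kubeconfig my-cluster -z ch-gva-2 kube-admin > kubeconfig.yaml
export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get nodes
```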
Inspect the DevOps Stack Applications
You can view the ingress routes for the various DevOps Stack Applications with:
$ kubectl get ingress --all-namespaces
Access the URLs over HTTPS and log in via OIDC/OAuth2, using the admin account with the password retrieved previously.
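For example, to reach the Argo CD UI you can combine two of the Terraform outputs defined earlier; a sketch, assuming the Terraform state is available locally in the terraform/ directory:

```shell
# URL and admin password for Argo CD, read from the Terraform outputs
terraform -chdir=terraform output argocd_server
terraform -chdir=terraform output argocd_server_admin_password
```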
Reference
See the Exoscale SKS reference page.