Deploying the DevOps Stack to OpenShift (IPI) on AWS

The example below is made to work on AWS; support for other cloud providers such as Azure will come later.

Prerequisites

  • AWS credentials (API keys) with permission to create the required resources,

  • The requirements for an OCP deployment, namely an SSH public key and a pull secret from Red Hat,

  • The openshift-install binary in your $PATH,

  • The AWS CLI binary in your $PATH (a quick sanity check for these tools follows below),

  • Basic knowledge of Terraform.
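
You can verify the tooling and credentials from a shell; a minimal sanity check, assuming your AWS credentials are already configured for the CLI:

$ aws sts get-caller-identity   # confirms the AWS credentials are usable
$ openshift-install version     # confirms the installer is in your $PATH
$ terraform version             # confirms Terraform is installed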

Create your Terraform root module

Camptocamp’s DevOps Stack is instantiated using a Terraform composition module.

Here is a minimal working example:

# terraform/main.tf

locals {
  install_config_path = "install-config.yaml"
  region              = "eu-west-1"
  base_domain         = "example.com"
  cluster_name        = "ocp"
}


module "cluster" {
  source              = "git::https://github.com/camptocamp/devops-stack.git//modules/openshift4/aws?ref=v0.40.0"
  install_config_path = local.install_config_path
  base_domain         = local.base_domain
  cluster_name        = local.cluster_name
  region              = local.region
}
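
Before applying, you can check the composition with the standard Terraform commands (terraform validate needs an initialized working directory, so run it after terraform init):

$ cd terraform
$ terraform fmt -check    # verify canonical formatting
$ terraform validate      # check syntax and internal consistency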

Terraform Outputs

Define outputs:

# terraform/outputs.tf

output "argocd_auth_token" {
  sensitive = true
  value     = module.cluster.argocd_auth_token
}

output "kubeconfig" {
  sensitive = true
  value     = module.cluster.kubeconfig
}

output "argocd_server" {
  value = module.cluster.argocd_server
}

output "grafana_admin_password" {
  sensitive = true
  value     = module.cluster.grafana_admin_password
}

output "console_url" {
  value = module.cluster.console_url
}

output "kubeadmin_password" {
  value = module.cluster.kubeadmin_password
  sensitive = true
}
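
After the first apply, these outputs are how you retrieve the stack's endpoints and credentials; for example, using jq the same way as in the kubeconfig example further down:

$ terraform output argocd_server
$ terraform output -json grafana_admin_password | jq -r .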

Terraform Backend

If you wish to collaborate, define a remote backend to store your Terraform state:

# terraform/versions.tf

terraform {
  backend "remote" {
    organization = "example_corp"

    workspaces {
      name = "my-app-prod"
    }
  }
}
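
This example backend stores the state in Terraform Cloud; assuming you have an account there, authenticate and re-run the initialization so the state is created (or migrated) remotely:

$ terraform login   # stores an API token for app.terraform.io
$ terraform init    # initializes or migrates the state in the remote backend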

Create your IPI Response File

Create the install-config.yaml file required to perform an unattended (silent) installation of OpenShift:

# install-config.yaml

apiVersion: v1
baseDomain: example.com
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      zones:
      - eu-west-1a
      - eu-west-1b
      - eu-west-1c
  replicas: 3
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      zones:
      - eu-west-1a
      - eu-west-1b
      - eu-west-1c
  replicas: 3
metadata:
  name: ocp
platform:
  aws:
    region: eu-west-1
pullSecret: '<PULL SECRET KEY>'
fips: false
sshKey: <SSH KEY>
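
The <PULL SECRET KEY> and <SSH KEY> placeholders must be replaced with real values: the pull secret is obtained from the Red Hat Hybrid Cloud Console, and sshKey takes the public half of an SSH key pair. One possible way to generate a key pair (the key type and path here are only an example):

$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp4   # new key pair without passphrase
$ cat ~/.ssh/ocp4.pub                          # paste this value into sshKey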

Deploying from your workstation

Even though one of the purposes of the DevOps Stack is to run everything in pipelines, you can deploy your cluster from your workstation using the Terraform CLI:

$ cd terraform
$ terraform init
$ terraform apply
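
In a pipeline, the same steps can run non-interactively; a minimal sketch using standard Terraform flags:

$ terraform init -input=false
$ terraform apply -input=false -auto-approve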

Deployment Process

OpenShift is deployed using the openshift-install binary, which embeds its own Terraform. While the DevOps Stack Terraform code is running, we cannot follow this child process directly.

Instead, we can track the deployment progress in terraform/<CLUSTER NAME>/.openshift_install.log:

$ tail -f terraform/<CLUSTER NAME>/.openshift_install.log

Get kubeconfig

Retrieve the kubeconfig file, either from the Terraform output:

$ terraform output -json kubeconfig | jq -r .

or, from the terraform directory, point directly at the generated file:

$ export KUBECONFIG=<CLUSTER NAME>/auth/kubeconfig

You should then be able to use oc.
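
For example, to write the kubeconfig to a file and confirm API access (the file path is arbitrary):

$ terraform output -json kubeconfig | jq -r . > kubeconfig.yaml
$ export KUBECONFIG=$PWD/kubeconfig.yaml
$ oc get nodes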

Get the Kubeadmin Password

Retrieve the kubeadmin password:

$ cat <CLUSTER NAME>/auth/kubeadmin-password
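
You can then log in as kubeadmin; a sketch assuming the default API endpoint form api.<CLUSTER NAME>.<BASE DOMAIN>:6443:

$ oc login -u kubeadmin -p "$(cat <CLUSTER NAME>/auth/kubeadmin-password)" \
    https://api.<CLUSTER NAME>.<BASE DOMAIN>:6443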

Inspect the DevOps Stack Applications

You can view the ingress routes for the various DevOps Stack Applications with:

$ oc get route --all-namespaces
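
To narrow the listing to a single application, query its namespace; the actual namespace names depend on your stack configuration (argocd here is only an assumption):

$ oc get route -n argocd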

Destroy the cluster

$ terraform destroy