SKS Variant

This folder contains the variant to use when deploying on Exoscale using an SKS cluster.

Usage

This module can be declared by adding the following block to your Terraform configuration:

module "kube-prometheus-stack" {
  source = "git::https://github.com/camptocamp/devops-stack-module-kube-prometheus-stack//sks?ref=<RELEASE>"

  cluster_name     = module.sks.cluster_name
  base_domain      = module.sks.base_domain
  cluster_issuer   = local.cluster_issuer
  argocd_namespace = module.argocd_bootstrap.argocd_namespace

  prometheus = {
    oidc = module.oidc.oidc
  }
  alertmanager = {
    oidc = module.oidc.oidc
  }
  grafana = {
    oidc = module.oidc.oidc
  }

  dependency_ids = {
    argocd       = module.argocd_bootstrap.id
    traefik      = module.traefik.id
    cert-manager = module.cert-manager.id
    keycloak     = module.keycloak.id
    oidc         = module.oidc.id
    longhorn     = module.longhorn.id
    loki-stack   = module.loki-stack.id
  }
}

When also deploying Thanos in the same cluster, you need to configure the metrics_storage variable with the values of the bucket created for the Thanos module. This automatically activates the Thanos sidecar in the Prometheus pods and sets Thanos as the default data source for Grafana.

module "kube-prometheus-stack" {
  source = "git::https://github.com/camptocamp/devops-stack-module-kube-prometheus-stack//sks?ref=<RELEASE>"

  cluster_name     = module.sks.cluster_name
  base_domain      = module.sks.base_domain
  cluster_issuer   = local.cluster_issuer
  argocd_namespace = module.argocd_bootstrap.argocd_namespace

  metrics_storage = {
    bucket_name = resource.aws_s3_bucket.this["thanos"].id
    region      = resource.aws_s3_bucket.this["thanos"].region
    access_key  = resource.exoscale_iam_access_key.s3_iam_key["thanos"].key
    secret_key  = resource.exoscale_iam_access_key.s3_iam_key["thanos"].secret
  }

  prometheus = {
    oidc = module.oidc.oidc
  }
  alertmanager = {
    oidc = module.oidc.oidc
  }
  grafana = {
    oidc = module.oidc.oidc
  }

  dependency_ids = {
    argocd       = module.argocd_bootstrap.id
    traefik      = module.traefik.id
    cert-manager = module.cert-manager.id
    keycloak     = module.keycloak.id
    oidc         = module.oidc.id
    longhorn     = module.longhorn.id
    loki-stack   = module.loki-stack.id
  }
}
Check the SKS deployment example to see how to create the S3 bucket and to better understand the values passed in the example above.
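
For orientation, a minimal sketch of what the bucket and IAM key creation could look like. The resource names, operations list, and bucket naming are assumptions taken from the references in the example above, not the actual SKS deployment example; also note that the aws_s3_bucket resource only works against Exoscale SOS if the AWS provider is configured with the SOS endpoint. Check the deployment example and the Exoscale provider documentation before relying on this:

```hcl
# Hypothetical sketch: IAM key restricted to the Thanos bucket on Exoscale SOS.
# Attribute names follow the exoscale_iam_access_key resource; verify against
# the Exoscale provider documentation for the version you use.
resource "exoscale_iam_access_key" "s3_iam_key" {
  for_each = toset(["thanos"])

  name       = "${module.sks.cluster_name}-${each.key}"
  operations = ["getObject", "putObject", "deleteObject", "listBucket"]
  resources  = ["sos/bucket:${module.sks.cluster_name}-${each.key}"]
}

# The AWS provider must be pointed at the Exoscale SOS endpoint for this to
# create the bucket in SOS rather than in AWS S3.
resource "aws_s3_bucket" "this" {
  for_each = toset(["thanos"])

  bucket        = "${module.sks.cluster_name}-${each.key}"
  force_destroy = true
}
```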

OIDC

This module was developed with OIDC in mind.

There is an OIDC proxy container deployed as a sidecar on the pods of Prometheus and Alertmanager. As such, the prometheus and alertmanager variables are expected to contain an oidc map with at least the issuer URL, the client ID, and the client secret.

As for Grafana, the OIDC configuration is done through the grafana variable. The oidc map is expected to contain the same values as for Prometheus and Alertmanager, but also the oauth_url, token_url and api_url values.

You can pass these values by pointing an output from another module (as above), or by defining them explicitly:

module "kube-prometheus-stack" {
  ...
  prometheus | alertmanager = {
    oidc = {
      issuer_url    = "<URL>"
      client_id     = "<ID>"
      client_secret = "<SECRET>"
    }
  }
  grafana = {
    oidc = {
      issuer_url    = "<URL>"
      client_id     = "<ID>"
      client_secret = "<SECRET>"
      oauth_url     = "<URL>"
      token_url     = "<URL>"
      api_url       = "<URL>"
    }
  }
  ...
}

Technical Reference

Dependencies

module.argocd_bootstrap.id

The module depends on Argo CD already running in the cluster in order for the Application to be created.

module.traefik.id and module.cert-manager.id

This module has multiple ingresses and consequently must be deployed after the Traefik and cert-manager modules.

module.keycloak.id and module.oidc.id

When using Keycloak as the OIDC provider for Prometheus, Alertmanager, and Grafana, you need to add Keycloak and the OIDC module as dependencies.

module.longhorn.id

This module requires a Persistent Volume, so it needs to be deployed after the Longhorn module.

module.loki-stack.id

In order to be able to check the logs collected by Loki in the Grafana interface, this module must be deployed after the Loki module, so that Grafana can detect Loki as a data source.

Requirements

The following requirements are needed by this module:

Modules

The following Modules are called:

kube-prometheus-stack

Source: ../

Version:

Required Inputs

The following input variables are required:

cluster_name

Description: Name given to the cluster. Value used for naming some of the resources created by the module.

Type: string

base_domain

Description: Base domain of the cluster. Value used to build the ingress URLs of the applications.

Type: string

Optional Inputs

The following input variables are optional (have default values):

metrics_storage

Description: Exoscale SOS bucket configuration values for the bucket where the archived metrics will be stored.

Type:

object({
    bucket_name = string
    region      = string
    access_key  = string
    secret_key  = string
  })

Default: null

argocd_namespace

Description: Namespace used by Argo CD where the Application and AppProject resources should be created.

Type: string

Default: "argocd"

argocd_project

Description: Name of the Argo CD AppProject where the Application should be created. If not set, the Application will be created in a new AppProject only for this Application.

Type: string

Default: null

argocd_labels

Description: Labels to attach to the Argo CD Application resource.

Type: map(string)

Default: {}

destination_cluster

Description: Destination cluster where the application should be deployed.

Type: string

Default: "in-cluster"

target_revision

Description: Override of target revision of the application chart.

Type: string

Default: "v8.0.2"

cluster_issuer

Description: SSL certificate issuer to use. Usually you would configure this value as letsencrypt-staging or letsencrypt-prod on your root *.tf files.

Type: string

Default: "ca-issuer"

namespace

Description: Namespace where the application's Kubernetes resources should be created. The namespace will be created if it does not exist.

Type: string

Default: "kube-prometheus-stack"

helm_values

Description: Helm chart value overrides. They should be passed as a list of HCL structures.

Type: any

Default: []
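
As an illustration of the expected shape, a hedged sketch of a helm_values override (the nesting under a kube-prometheus-stack key and the grafana value shown are assumptions based on the wrapped upstream chart; check the chart's values for the exact keys):

```hcl
module "kube-prometheus-stack" {
  # ... other variables as in the usage examples above ...

  # List of HCL structures deep-merged into the chart values.
  helm_values = [{
    kube-prometheus-stack = {
      grafana = {
        # Illustrative override; verify the key against the upstream chart.
        defaultDashboardsTimezone = "Europe/Zurich"
      }
    }
  }]
}
```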

deep_merge_append_list

Description: A boolean flag to enable/disable appending lists instead of overwriting them.

Type: bool

Default: false

app_autosync

Description: Automated sync options for the Argo CD Application resource.

Type:

object({
    allow_empty = optional(bool)
    prune       = optional(bool)
    self_heal   = optional(bool)
  })

Default:

{
  "allow_empty": false,
  "prune": true,
  "self_heal": true
}

dependency_ids

Description: n/a

Type: map(string)

Default: {}

grafana

Description: Grafana settings

Type: any

Default: {}

prometheus

Description: Prometheus settings

Type: any

Default: {}

alertmanager

Description: Object containing Alertmanager settings. The following attributes are supported:

  • enabled: whether Alertmanager is deployed or not (default: true).

  • domain: domain name configured in the Ingress (default: alertmanager.apps.${var.cluster_name}.${var.base_domain}).

  • oidc: OIDC configuration to be used by OAuth2 Proxy in front of Alertmanager (required).

  • deadmanssnitch_url: URL of a Dead Man’s Snitch service Alertmanager should report to (this reporting is disabled by default).

  • slack_routes: list of objects configuring routing of alerts to Slack channels, with the following attributes:

  • name: name of the configured route.

  • channel: channel where the alerts will be sent (with '#').

  • api_url: Slack webhook URL you received when configuring the webhook integration.

  • matchers: list of strings for filtering which alerts will be sent.

  • continue: whether an alert should continue matching subsequent sibling nodes.

Type: any

Default: {}
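
The attributes above can be combined as in the following sketch; the channel, matcher, and webhook values are placeholders, and the Dead Man's Snitch URL is hypothetical:

```hcl
module "kube-prometheus-stack" {
  # ... other variables as in the usage examples above ...

  alertmanager = {
    oidc = module.oidc.oidc

    # Hypothetical Dead Man's Snitch endpoint.
    deadmanssnitch_url = "https://nosnch.in/<TOKEN>"

    # Route critical alerts to a Slack channel via a webhook integration.
    slack_routes = [{
      name     = "critical-alerts"
      channel  = "#alerts"
      api_url  = "https://hooks.slack.com/services/<WEBHOOK_PATH>"
      matchers = ["severity=\"critical\""]
      continue = false
    }]
  }
}
```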

metrics_storage_main

Description: Storage settings for the Thanos sidecar. Needs to be of type any because the structure is different depending on the provider used.

Type: any

Default: {}

Outputs

The following outputs are exported:

id

Description: ID to pass to other modules in order to refer to this module as a dependency.

grafana_admin_password

Description: The admin password for Grafana.
