Using Same Terraform Code For Oracle (OKE) And AWS (EKS) Kubernetes Clusters

by Marta Kowalska

Hey guys! Ever found yourself juggling multiple Terraform configurations for different Kubernetes clusters? It's a pain, right? Especially when you're aiming for consistency and efficiency. In this article, we'll dive deep into how you can use the same Terraform code to manage both Oracle Kubernetes Engine (OKE) and Amazon Elastic Kubernetes Service (EKS) clusters. We’ll also cover deploying Argo CD using Helm. Let's make your infrastructure management a breeze!

Understanding the Challenge

Before we jump into the solution, let's break down the problem. You want to use the same Terraform code for both OKE and EKS. This means dealing with the nuances of different cloud providers while keeping your infrastructure as code (IaC) DRY (Don't Repeat Yourself). Plus, you need to deploy Argo CD, which adds another layer of complexity. So, how do we tackle this?

The Core Issues

  • Provider Differences: OKE and EKS have different resource naming conventions, authentication mechanisms, and networking setups. Terraform needs to handle these variations gracefully.
  • Helm Chart Management: Deploying Argo CD via Helm requires ensuring Helm is properly configured and the chart is deployed with the correct values.
  • State Management: Keeping Terraform state consistent across multiple environments is crucial for avoiding conflicts and ensuring smooth deployments.

Solution Overview

Our approach will involve a combination of Terraform modules, variables, and potentially Terragrunt to manage the complexity. Here’s a high-level plan:

  1. Modularize Terraform Code: Break your Terraform code into reusable modules for Kubernetes cluster creation, networking, and Helm deployments.
  2. Use Variables for Customization: Leverage Terraform variables to handle differences between OKE and EKS, such as region names, instance sizes, and networking configurations.
  3. Implement Terragrunt (Optional): Terragrunt can help manage Terraform state and DRY up your configurations further, especially in multi-environment setups.
  4. Dynamic Provider Configuration: Configure Terraform providers (OCI and AWS) dynamically based on variables.
  5. Helm Chart Deployment: Use the helm_release resource in Terraform to deploy Argo CD.

Step-by-Step Implementation

Let's walk through the implementation step by step. This will involve creating modules, setting up variables, and configuring providers.

1. Modularizing Terraform Code

Modularity is key to reusability. We’ll create modules for the core components:

  • Kubernetes Cluster Module: This module will handle the creation of either an OKE or EKS cluster based on input variables.
  • Networking Module: This will set up the necessary networking resources (VPCs, subnets, security groups) for the cluster.
  • Helm Deployment Module: This will deploy Helm charts, in our case, Argo CD.

Kubernetes Cluster Module

This module will contain the core logic for creating a Kubernetes cluster. Let's start with a basic structure:

modules/
  kubernetes-cluster/
    main.tf
    variables.tf
    outputs.tf

modules/kubernetes-cluster/main.tf:

# This is a simplified example. Actual implementation will vary.

resource "null_resource" "cluster" {
  # Placeholder for OKE/EKS cluster creation logic
}
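To make this more concrete, here is a minimal sketch of how the module could branch between the two providers using count on var.cloud_provider. The kubernetes_version, oke_vcn_id, eks_cluster_role_arn, and eks_subnet_ids inputs are hypothetical additions for illustration; they are not declared in the variables.tf shown below.

# Sketch: create an OKE cluster only when cloud_provider is "oke"
resource "oci_containerengine_cluster" "oke" {
  count              = lower(var.cloud_provider) == "oke" ? 1 : 0
  compartment_id     = var.oke_compartment_id
  name               = var.cluster_name
  kubernetes_version = var.kubernetes_version # hypothetical input
  vcn_id             = var.oke_vcn_id         # hypothetical input
}

# Sketch: create an EKS cluster only when cloud_provider is "eks"
resource "aws_eks_cluster" "eks" {
  count    = lower(var.cloud_provider) == "eks" ? 1 : 0
  name     = var.cluster_name
  role_arn = var.eks_cluster_role_arn # hypothetical input

  vpc_config {
    subnet_ids = var.eks_subnet_ids # hypothetical input
  }
}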

modules/kubernetes-cluster/variables.tf:

variable "cluster_name" {
  type = string
  description = "The name of the Kubernetes cluster."
}

variable "cloud_provider" {
  type = string
  description = "The cloud provider (oke or eks)."
  validation {
    condition     = contains(["oke", "eks"], lower(var.cloud_provider))
    error_message = "The cloud_provider value must be either 'oke' or 'eks'."
  }
}

# Additional variables for OKE
variable "oke_compartment_id" {
  type    = string
  default = null
  description = "The OCI compartment ID. Required for OKE."
}

variable "oke_region" {
  type    = string
  default = null
  description = "The OCI region. Required for OKE."
}

# Additional variables for EKS
variable "eks_region" {
  type    = string
  default = null
  description = "The AWS region. Required for EKS."
}

variable "eks_vpc_id" {
  type    = string
  default = null
  description = "The AWS VPC ID. Required for EKS."
}

modules/kubernetes-cluster/outputs.tf:

output "cluster_name" {
  value = var.cluster_name
  description = "The name of the Kubernetes cluster."
}

# Add more outputs as needed, such as kubeconfig, etc.
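If you adopt the count-based sketch above, you can also expose a provider-agnostic endpoint output with a conditional and one(), for example:

output "cluster_endpoint" {
  description = "The Kubernetes API endpoint for whichever cluster was created."
  # Attribute paths can differ between provider versions; check the oracle/oci
  # and hashicorp/aws provider docs for the versions you pin.
  value = lower(var.cloud_provider) == "eks" ? one(aws_eks_cluster.eks[*].endpoint) : one(oci_containerengine_cluster.oke[*].endpoints[0].kubernetes)
}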

Networking Module

Similarly, the networking module will set up the required network infrastructure:

modules/
  networking/
    main.tf
    variables.tf
    outputs.tf

The contents of these files will depend on your specific networking requirements. For example, you might create a VPC for EKS or a VCN for OKE.
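As a rough illustration, assuming the module receives the same cloud_provider, oke_compartment_id, and cluster_name inputs plus a hypothetical network_cidr variable, the two networks could be created conditionally in the same way:

# Sketch: OCI VCN for OKE
resource "oci_core_vcn" "oke" {
  count          = lower(var.cloud_provider) == "oke" ? 1 : 0
  compartment_id = var.oke_compartment_id
  cidr_blocks    = [var.network_cidr] # hypothetical input
  display_name   = "${var.cluster_name}-vcn"
}

# Sketch: AWS VPC for EKS
resource "aws_vpc" "eks" {
  count      = lower(var.cloud_provider) == "eks" ? 1 : 0
  cidr_block = var.network_cidr # hypothetical input

  tags = {
    Name = "${var.cluster_name}-vpc"
  }
}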

Helm Deployment Module

This module will deploy Helm charts to your cluster:

modules/
  helm-deployment/
    main.tf
    variables.tf
    outputs.tf

modules/helm-deployment/main.tf:

resource "helm_release" "argocd" {
  name       = var.release_name
  chart      = var.chart_name
  repository = var.chart_repository
  namespace  = var.namespace

  values = [
    jsonencode(var.values)
  ]
}

modules/helm-deployment/variables.tf:

variable "release_name" {
  type        = string
  description = "The name of the Helm release."
}

variable "chart_name" {
  type        = string
  description = "The name of the Helm chart."
}

variable "chart_repository" {
  type        = string
  description = "The Helm chart repository URL."
}

variable "namespace" {
  type        = string
  description = "The Kubernetes namespace to deploy to."
}

variable "values" {
  type        = map(any)
  description = "The values to pass to the Helm chart."
  default     = {}
}

2. Using Variables for Customization

Variables are the key to making your Terraform code adaptable. We’ll use variables to specify cloud-specific settings.

Create a variables.tf file in your root Terraform directory:

variable "cloud_provider" {
  type = string
  description = "The cloud provider (oke or eks)."
  validation {
    condition     = contains(["oke", "eks"], lower(var.cloud_provider))
    error_message = "The cloud_provider value must be either 'oke' or 'eks'."
  }
}

# OKE Variables
variable "oke_compartment_id" {
  type    = string
  default = null
  description = "The OCI compartment ID. Required for OKE."
}

variable "oke_region" {
  type    = string
  default = null
  description = "The OCI region. Required for OKE."
}

# EKS Variables
variable "eks_region" {
  type    = string
  default = null
  description = "The AWS region. Required for EKS."
}

variable "eks_vpc_id" {
  type    = string
  default = null
  description = "The AWS VPC ID. Required for EKS."
}
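The provider configuration in step 4 also references cluster connection details (var.kube_cluster_host, var.kube_token, var.kube_ca_cert) that aren't declared in the snippets above. One way to close that gap is to declare them as optional variables here; in practice you would more likely wire them from the cluster module's outputs (see step 4):

# Cluster connection details consumed by the kubernetes/helm providers in step 4
variable "kube_cluster_host" {
  type        = string
  default     = null
  description = "The Kubernetes API server endpoint."
}

variable "kube_token" {
  type        = string
  default     = null
  sensitive   = true
  description = "An authentication token for the cluster."
}

variable "kube_ca_cert" {
  type        = string
  default     = null
  description = "The base64-encoded cluster CA certificate."
}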

3. Implementing Terragrunt (Optional)

Terragrunt can simplify managing Terraform state and configurations across multiple environments. It helps you avoid repeating configurations and keeps your code DRY.

If you choose to use Terragrunt, your directory structure might look like this:

terraform/
  modules/
    kubernetes-cluster/
    networking/
    helm-deployment/
  live/
    oke/
      terragrunt.hcl
    eks/
      terragrunt.hcl

Each terragrunt.hcl file will specify the Terraform configuration for that environment.

live/oke/terragrunt.hcl:

include {
  path = find_in_parent_folders("terragrunt.hcl")
}

terraform {
  source = "git::your-repo-url//terraform?ref=main"
}

inputs = {
  cloud_provider    = "oke"
  oke_compartment_id = "ocid1.compartment.oc1..your-compartment-id"
  oke_region        = "us-phoenix-1"
  cluster_name      = "my-oke-cluster"
}

live/eks/terragrunt.hcl:

include {
  path = find_in_parent_folders("terragrunt.hcl")
}

terraform {
  source = "git::your-repo-url//terraform?ref=main"
}

inputs = {
  cloud_provider = "eks"
  eks_region     = "us-east-1"
  eks_vpc_id     = "vpc-xxxxxxxxxxxxxxxxx"
  cluster_name   = "my-eks-cluster"
}
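Both files include a parent terragrunt.hcl, which is where shared settings such as remote state usually live. A minimal root file, assuming an S3 backend (the bucket, region, and lock table below are placeholders), could look like this:

terraform/terragrunt.hcl:

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state-bucket" # placeholder
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"                 # placeholder
    encrypt        = true
    dynamodb_table = "terraform-locks"           # placeholder
  }
}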

4. Dynamic Provider Configuration

To handle different cloud providers, we’ll configure providers dynamically based on the cloud_provider variable.

Create a providers.tf file in your root directory:

terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = "~> 4.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

# OCI Provider Configuration
provider "oci" {
  region              = var.oke_region
  alias               = "oke"
  config_file_profile = "DEFAULT" # Profile in ~/.oci/config; compartments are set per resource, not on the provider

  # You may need to use environment variables or instance principals for authentication in production
}

# AWS Provider Configuration
provider "aws" {
  region = var.eks_region
  alias  = "eks"
  # Ensure your AWS credentials are configured correctly
}

# Kubernetes Provider
provider "kubernetes" {
  host                   = var.kube_cluster_host
  token                  = var.kube_token
  cluster_ca_certificate = base64decode(var.kube_ca_cert)

  # Alternative configuration:
  # config_path = "~/.kube/config" # Ensure kubeconfig is properly set

  # Note: provider blocks do not support depends_on; pass the cluster's
  # connection details in (e.g. from the cluster module's outputs) instead.
}

provider "helm" {
  kubernetes {
    host                   = var.kube_cluster_host
    token                  = var.kube_token
    cluster_ca_certificate = base64decode(var.kube_ca_cert)
  }
  depends_on = [kubernetes.cluster] # Ensure you use the correct resource name for your cluster
}
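In practice, the kube_cluster_host, kube_token, and kube_ca_cert values usually come from the cluster itself rather than from static variables. For EKS, for example, a common pattern is to replace the variable-based kubernetes block above with data sources (assuming the cluster module exports the cluster name, as in the outputs.tf shown earlier):

# Hypothetical EKS wiring: look up the cluster endpoint and an auth token
data "aws_eks_cluster" "this" {
  name = module.kubernetes_cluster.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = module.kubernetes_cluster.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  token                  = data.aws_eks_cluster_auth.this.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
}

Keep in mind that these data sources need the cluster to exist before the lookup succeeds, which is a well-known chicken-and-egg issue when creating the cluster and deploying Helm charts in the same Terraform run.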

5. Helm Chart Deployment (Argo CD)

Now, let's deploy Argo CD using our Helm deployment module.

In your main Terraform file (e.g., main.tf), call the helm-deployment module:

module "argocd" {
  source = "./modules/helm-deployment"

  release_name   = "argocd"
  chart_name     = "argo-cd"
  chart_repository = "https://argoproj.github.io/argo-helm"
  namespace      = "argocd"

  values = {
    # Customize Argo CD values as needed
  }
}
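For example, to expose the Argo CD server through a load balancer and disable the bundled Dex instance, you could pass nested values like these (check the argo-cd chart's values.yaml for the authoritative keys):

  values = {
    server = {
      service = {
        type = "LoadBalancer"
      }
    }
    dex = {
      enabled = false
    }
  }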

Make sure to include the necessary provider and Kubernetes cluster configurations in your main Terraform file. Here’s an example structure:

# main.tf

module "kubernetes_cluster" {
  source = "./modules/kubernetes-cluster"

  cloud_provider = var.cloud_provider
  # OKE Variables
  oke_compartment_id = var.oke_compartment_id
  oke_region        = var.oke_region
  # EKS Variables
  eks_region = var.eks_region
  eks_vpc_id = var.eks_vpc_id
  cluster_name = "example-cluster"
}

data "kubernetes_service_versions" "current" {
  provider = kubernetes
}


module "argocd" {
  source = "./modules/helm-deployment"

  release_name   = "argocd"
  chart_name     = "argo-cd"
  chart_repository = "https://argoproj.github.io/argo-helm"
  namespace      = "argocd"
  depends_on = [module.kubernetes_cluster]

  values = {
    # Customize Argo CD values as needed
  }
}

Putting It All Together

Your Terraform project structure might look something like this:

terraform/
  modules/
    kubernetes-cluster/
      main.tf
      variables.tf
      outputs.tf
    networking/
      main.tf
      variables.tf
      outputs.tf
    helm-deployment/
      main.tf
      variables.tf
      outputs.tf
  main.tf
  variables.tf
  outputs.tf
  providers.tf
  terragrunt.hcl (optional)
  live/
    oke/
      terragrunt.hcl (optional)
    eks/
      terragrunt.hcl (optional)

To deploy, you’ll run terraform init, terraform plan, and terraform apply. If you’re using Terragrunt, run the equivalent terragrunt init, terragrunt plan, and terragrunt apply commands from within the live/oke or live/eks directories.
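For example, to bring up the EKS environment with Terragrunt:

cd live/eks
terragrunt init
terragrunt plan
terragrunt apply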

Troubleshooting Common Issues

  • Provider Configuration Errors: Double-check your OCI and AWS provider configurations. Ensure you have the correct credentials and regions set.
  • Helm Deployment Failures: Verify the Helm chart repository URL and chart name. Check the Helm values for any syntax errors.
  • Kubernetes Context Issues: Ensure your kubeconfig is properly configured and the correct context is selected.

Conclusion

Using the same Terraform code for both OKE and EKS is totally achievable with a modular approach, proper variable usage, and dynamic provider configuration. By breaking down your infrastructure into reusable modules and leveraging Terraform’s features, you can manage multiple Kubernetes clusters with ease. Don't forget, tools like Terragrunt can further streamline your workflow. So, go ahead, give it a try, and simplify your Kubernetes management!

Remember, this guide provides a solid foundation, but your specific implementation might require adjustments based on your needs. Happy Terraforming, guys!