Provision OKE on Oracle Cloud - Step by Step Guide

```mermaid
flowchart LR
    subgraph Step1["1. Provision"]
        TF[terraform apply]
    end

    subgraph Step2["2. Push"]
        Git[git push]
    end

    subgraph Step3["3. Wait"]
        Boot[Cloud-init<br/>Bootstrap]
    end

    subgraph Step4["4. Verify"]
        Check[kubectl get nodes]
    end

    TF --> Git --> Boot --> Check
```

After creating terraform.tfvars, run Terraform to provision the infrastructure:

```sh
cd tf-oke
terraform init
terraform apply
```

```mermaid
sequenceDiagram
    participant You as Developer
    participant TF as Terraform
    participant OCI as OCI API
    participant OKE as OKE Cluster

    You->>TF: terraform apply
    TF->>OCI: Create VCN
    TF->>OCI: Create Subnets
    TF->>OCI: Create OKE Cluster
    TF->>OCI: Create Node Pool
    OCI->>OKE: Provision Control Plane
    OCI->>OKE: Provision Worker Nodes
    TF->>You: Output Cluster Details
    Note over OKE: Cluster creation takes ~10-15 minutes
```

Terraform creates the OCI networking, OKE cluster, and node pool, then generates Kubernetes manifests in the argocd/ directory.
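The kubeconfig step later in this guide reads two Terraform outputs, `cluster_id` and `region`. In `tf-oke` these would be declared roughly as follows; the resource name `oci_containerengine_cluster.oke` is an assumption about the module layout, not taken from the guide:

```hcl
# Sketch of the outputs the kubeconfig command depends on.
# The resource address "oci_containerengine_cluster.oke" is hypothetical.
output "cluster_id" {
  value = oci_containerengine_cluster.oke.id
}

output "region" {
  value = var.region
}
```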

The generated manifests must be committed to your repository for Argo CD to sync them:

```sh
cd ..
git add argocd/
git commit -m "Configure cluster manifests"
git push
```

```mermaid
flowchart LR
    TF[Terraform] -->|generates| Manifests[argocd/]
    Manifests -->|git push| GH[GitHub]
    GH -->|syncs| Argo[Argo CD]
    Argo -->|deploys| Cluster[OKE Cluster]
```
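The manifests in `argocd/` are Argo CD `Application` resources. Their exact contents depend on your Terraform configuration, but a typical one for the `docs-app` application seen later might look roughly like this (the `repoURL` and `path` values are placeholders, not from the guide):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: docs-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git  # placeholder
    targetRevision: main
    path: argocd/docs-app                               # placeholder
  destination:
    server: https://kubernetes.default.svc
    namespace: docs
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```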

The OKE cluster control plane is managed by Oracle. Once Terraform completes, the cluster is active, but we need to configure kubectl and install Argo CD.

  1. Configure kubectl:

    ```sh
    oci ce cluster create-kubeconfig \
      --cluster-id $(terraform output -raw cluster_id) \
      --file $HOME/.kube/config \
      --region $(terraform output -raw region) \
      --token-version 2.0.0 \
      --kube-endpoint PUBLIC_ENDPOINT
    ```

  2. Install Argo CD:

    ```sh
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    kubectl apply -f ../argocd/applications.yaml
    ```

Allow approximately five minutes for Argo CD to initialize and begin syncing applications.
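Rather than waiting a fixed five minutes, you can poll until the Argo CD server comes up. The `wait_for` helper below is a generic retry sketch of my own, not part of the guide's tooling; the deployment name `argocd-server` comes from the upstream install manifest:

```shell
# wait_for TIMEOUT_SECONDS CMD...: retry CMD every 5s until it succeeds,
# giving up (exit 1) once TIMEOUT_SECONDS have elapsed.
wait_for() {
  local timeout=$1 elapsed=0
  shift
  until "$@"; do
    sleep 5
    elapsed=$((elapsed + 5))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
  done
}

# Example (assumes a working kubeconfig):
#   wait_for 300 kubectl -n argocd get deploy argocd-server
```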

```sh
kubectl get nodes
```

Expected output:

```
NAME        STATUS   ROLES   AGE   VERSION
10.0.10.x   Ready    node    5m    v1.32.1
10.0.10.y   Ready    node    5m    v1.32.1
```

```sh
kubectl get applications -n argocd
```

Expected output:

```
NAME                  SYNC STATUS   HEALTH STATUS
argocd-ingress        Synced        Healthy
argocd-self-managed   Synced        Healthy
cert-manager          Synced        Healthy
docs-app              Synced        Healthy
envoy-gateway         Synced        Healthy
external-dns          Synced        Healthy
external-secrets      Synced        Healthy
gateway-api-crds      Synced        Healthy
managed-secrets       Synced        Healthy
root-app              Synced        Healthy
```

```sh
kubectl get pods -A
```

All pods should be Running except for completed Job pods.
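A quick way to surface anything unhealthy is to filter the pod list down to pods that are neither `Running` nor `Completed`. This small awk filter is a convenience sketch, not part of the guide's tooling:

```shell
# Print pods whose STATUS column is neither Running nor Completed.
# Expects `kubectl get pods -A --no-headers` on stdin; with -A the
# columns are NAMESPACE NAME READY STATUS ..., so STATUS is field 4.
not_ready() {
  awk '$4 != "Running" && $4 != "Completed"'
}

# Usage (assumes a working kubeconfig):
#   kubectl get pods -A --no-headers | not_ready
```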

After a few minutes, test the deployed application:

```sh
dig +short k8s.yourdomain.com
curl -I https://k8s.yourdomain.com
```

If Argo CD applications remain in Unknown status after the initial deploy:

Check whether kustomize.buildOptions is set:

```sh
kubectl -n argocd get cm argocd-cm -o jsonpath='{.data.kustomize\.buildOptions}'
```

If empty, patch it and restart the repo server:

```sh
kubectl -n argocd patch cm argocd-cm --type=merge -p '{"data":{"kustomize.buildOptions":"--enable-helm"}}'
kubectl -n argocd rollout restart deploy argocd-repo-server
```

Sync applications in dependency order:

```sh
for app in gateway-api-crds external-dns cert-manager external-secrets envoy-gateway managed-secrets argocd-self-managed argocd-ingress docs-app; do
  kubectl -n argocd patch application $app --type=merge -p '{"operation":{"sync":{}}}'
  sleep 10
done
```

After all applications are synced, verify HTTPS works:

```sh
curl -I https://k8s.yourdomain.com
curl -I https://cd.k8s.yourdomain.com
```

Both should return HTTP/2 200. Plain HTTP requests should redirect to HTTPS with a 301:

```sh
curl -I http://k8s.yourdomain.com
```
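These checks are easy to script. The helper below compares the status code curl reports against an expected value; it is a sketch of my own, and the hostnames in the usage comments are the same placeholders used throughout this guide:

```shell
# check_status URL EXPECTED: fetch only headers and compare the
# HTTP status code curl reports against the expected value.
check_status() {
  local url=$1 expected=$2 code
  code=$(curl -sI -o /dev/null -w '%{http_code}' "$url")
  [ "$code" = "$expected" ]
}

# Usage:
#   check_status https://k8s.yourdomain.com 200   # TLS endpoint serves OK
#   check_status http://k8s.yourdomain.com 301    # plain HTTP redirects
```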

See Common Issues for more solutions.