The Control Plane Kubernetes Operator enables you to manage Control Plane resources directly from your Kubernetes cluster using custom resource definitions (CRDs). It bridges the gap between Kubernetes-native workflows and Control Plane infrastructure.

What you’ll achieve

By the end of this guide, you will have:
  1. A Kubernetes cluster with the Control Plane operator installed
  2. Authentication configured between your cluster and Control Plane
  3. The ability to deploy and manage Control Plane resources using Kubernetes CRDs
  4. Optional ArgoCD integration for GitOps workflows

When to use this

  • GitOps workflows: Manage Control Plane resources alongside your Kubernetes manifests in Git
  • ArgoCD integration: Deploy Control Plane resources through ArgoCD applications
  • Kubernetes-native experience: Manage GVCs, workloads, secrets, and more using CRDs
  • Infrastructure as Code: Define your entire Control Plane infrastructure declaratively

Prerequisites

A running Kubernetes cluster (v1.19+). This can be:
  • A managed cluster (EKS, GKE, AKS)
  • A local cluster (kind, minikube, Docker Desktop)
  • Any conformant Kubernetes distribution
If you don’t have a cluster, see Quick start (local cluster) to set one up.
Install Helm v3.0+ for deploying the operator:
# macOS
brew install helm

# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Windows
choco install kubernetes-helm
For other installation methods, see the Helm installation guide.
Ensure kubectl is configured and can communicate with your cluster:
kubectl cluster-info
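To confirm the cluster meets the v1.19+ requirement, you can also check the reported server version:
# The Server Version line should show v1.19 or newer
kubectl version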
You need to be signed up to Control Plane and have access to an org. If you don’t have an account yet, sign up and create (or join) an org first. You also need permission to create service accounts within your org.
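If you already have the cpln CLI installed, you can sanity-check org access before proceeding. The command below follows the cpln <kind> get pattern used later in this guide and is shown as a sketch; adjust it if your CLI version differs:
# Should return the org details if your profile has access
cpln org get YOUR_ORG_NAME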

Quick start (local cluster)

If you don’t have a Kubernetes cluster, you can set up a local one for testing.
Docker must be installed and running on your machine for local Kubernetes clusters.
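If kind is not installed yet, a typical installation looks like the following (shown as a sketch; see the kind documentation for your platform):
# macOS
brew install kind

# Any platform with Go installed
go install sigs.k8s.io/kind@latest

# Windows
choco install kind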
With kind installed, create a cluster:
kind create cluster --name cpln-operator

# Verify
kubectl cluster-info
Then continue with the installation steps below.

Installation

Step 1: Install cert-manager

The operator requires cert-manager for webhook certificate management.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
Wait for cert-manager to be ready:
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=300s

Step 2: Install the operator

Add the Control Plane Helm repository and install the operator:
# Add the Helm repository
helm repo add cpln https://controlplane-com.github.io/k8s-operator
helm repo update

# Install the operator in the controlplane namespace
helm install cpln-operator cpln/cpln-operator \
  -n controlplane \
  --create-namespace
Verify the operator is running:
kubectl get pods -n controlplane -l app=operator
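You can also confirm the Helm release and check that the operator's custom resource definitions were registered (they belong to the cpln.io API group used throughout this guide):
helm list -n controlplane

# List the operator's CRDs
kubectl get crds | grep cpln.io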

Step 3: Configure authentication

The operator needs credentials to communicate with the Control Plane API. You have two options: run the cpln operator install command, which creates a service account, adds it to the superusers group, and stores its credentials as a Kubernetes secret named after your org in the controlplane namespace; or create that secret manually using a service account you manage yourself.
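A minimal sketch of the CLI route follows. The --org flag is an assumption that mirrors the cpln operator uninstall command shown later; confirm the exact flags with cpln operator install --help:
# Creates a service account, adds it to the superusers group, and stores its
# credentials as a secret named after your org in the controlplane namespace.
# The --org flag shown here is an assumption; verify with --help.
cpln operator install --org YOUR_ORG_NAME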

Deploying resources

All Control Plane resources are managed through CRDs. Each resource requires:
  • org: The target Control Plane organization
  • gvc: The target GVC (for GVC-scoped resources like workloads and identities)
We recommend organizing resources by namespace:
  • One namespace per GVC for GVC-scoped resources (workloads, identities, volumesets)
  • One namespace per org for org-scoped resources (GVCs, secrets, policies)
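For example, with a hypothetical org named my-org and a GVC named my-gvc, that layout could be created like this:
# Namespace for org-scoped resources (GVCs, secrets, policies)
kubectl create namespace my-org

# Namespace for GVC-scoped resources (workloads, identities, volumesets)
kubectl create namespace my-gvc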
The CRD structure differs from standard Kubernetes resources. Fields like org, gvc, and description are at the top level, not inside spec. Always use the export feature to generate accurate manifests.

Apply with kubectl

You can apply CRD manifests directly using kubectl. Save the following GVC manifest to a file (e.g., gvc.yaml):
apiVersion: cpln.io/v1
kind: gvc
metadata:
  name: my-gvc
  namespace: default
org: YOUR_ORG_NAME  # Replace with your Control Plane org name
description: my-gvc
spec:
  staticPlacement:
    locationLinks:
      - //location/aws-eu-central-1
Apply it to your cluster:
kubectl apply -f gvc.yaml
Verify the resource was created:
kubectl get gvcs
The operator syncs the CRD to Control Plane. You can verify the GVC exists:
cpln gvc get my-gvc --org YOUR_ORG_NAME
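To see how the operator reconciled the object, including any status or events it reports, you can also describe the Kubernetes resource:
kubectl describe gvc my-gvc -n default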

ArgoCD integration

The operator integrates seamlessly with ArgoCD for GitOps workflows. Once the operator is installed, you can point ArgoCD at a Git repository containing YAML manifests or a Helm chart.
This section assumes ArgoCD is already installed on your cluster. See the ArgoCD installation guide if you haven’t set it up yet.

Defining ArgoCD applications

An ArgoCD Application defines what to deploy (source) and where to deploy it (destination). For Control Plane CRDs, you can use either a Helm chart or raw YAML manifests stored in a Git repository. Save the manifest to a file (e.g., app.yaml) and update the placeholder values before applying.
Point ArgoCD at a Helm repository containing your Control Plane CRD templates:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-helm-app
  namespace: argocd  # This is usually where ArgoCD is installed
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc  # Cluster API server URL
    namespace: your-namespace  # Target namespace in your cluster
  source:
    repoURL: https://your-org.github.io/your-repo/  # URL of your Helm repository
    chart: my-cpln-chart  # Name of your Helm chart
    targetRevision: 0.1.0  # Chart version
    helm:
      values: |
        org: your-org-name
  syncPolicy:
    automated:
      prune: true  # Automatically delete resources no longer defined in the chart
      selfHeal: true  # Automatically sync drifted resources
Apply your ArgoCD Application:
kubectl -n argocd apply -f app.yaml
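You can watch the Application's sync and health status with kubectl (or in the ArgoCD UI, covered below):
kubectl -n argocd get application my-helm-app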

Example application

The k8s-operator repository includes a ready-to-use example that deploys a GVC, workload, identity, and other resources. Copy the following manifest and save it to a file (e.g., example-app.yaml):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-helm-app
  namespace: argocd
spec:
  project: default
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: fresh
  source:
    repoURL: 'https://cuppojoe.github.io/argo-example/'
    chart: argo-example
    targetRevision: 0.2.3
    helm:
      values: |
        org: YOUR_ORG_NAME  # Replace with your Control Plane org name
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Replace YOUR_ORG_NAME with your actual Control Plane org name before applying.
Apply the manifest:
kubectl apply -f example-app.yaml
The example Helm chart creates the following Control Plane resources:
  • GVC named fresh in aws-eu-central-1
  • Workload with a serverless container
  • Identity for cloud access
  • Policy for permissions
  • Secret for credentials
  • Additional resources (agent, domain, ipset)
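After the Application syncs, you can confirm the example GVC reached Control Plane with the same CLI check used earlier:
cpln gvc get fresh --org YOUR_ORG_NAME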

Connecting to the ArgoCD UI

To access the ArgoCD UI, retrieve the admin password and port-forward the service:
# Print the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d && echo

# Port-forward to the ArgoCD UI
kubectl -n argocd port-forward service/argocd-server 18081:443
Open a browser and navigate to https://localhost:18081. Accept the self-signed certificate and log in with username admin and the password from the command above.
Store your Control Plane CRD manifests in a Git repository and point ArgoCD at it. Changes merged to your main branch will automatically sync to Control Plane.

Uninstalling

Remove operator credentials

Remove the authentication secret:
cpln operator uninstall --org YOUR_ORG_NAME
Or manually:
kubectl delete secret YOUR_ORG_NAME -n controlplane

Remove the operator

helm uninstall cpln-operator -n controlplane

Remove cert-manager (optional)

If you no longer need cert-manager:
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
Uninstalling the operator does not delete Control Plane resources that were created by it. Resources in Control Plane will continue to exist.

Preventing resource deletion

Deleting a custom resource from your cluster while the operator is running will also remove the corresponding resource from Control Plane. To prevent this, add the cpln.io/resource-policy: keep annotation:
kind: gvc
apiVersion: cpln.io/v1
org: your-org-name
metadata:
  name: production
  annotations:
    cpln.io/resource-policy: keep
spec:
  # ...
With this annotation, deleting the Kubernetes resource will not delete the Control Plane resource.
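For a resource that already exists in the cluster, you can add the annotation in place with kubectl (shown here for the gvc example above, assuming it lives in the default namespace):
kubectl annotate gvc production cpln.io/resource-policy=keep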

Troubleshooting

Check cert-manager is running:
kubectl get pods -n cert-manager
Verify webhook certificates:
kubectl get certificates -n controlplane
Check operator logs:
kubectl logs -n controlplane -l app=operator -f
Verify the authentication secret exists (look for a secret named after your org):
kubectl get secrets -n controlplane
If the secret for your org doesn’t exist, run the cpln operator install command to create it.
Ensure the service account has appropriate permissions in Control Plane: check which group and/or policy it belongs to and verify it grants the necessary permissions. To reconfigure authentication, run the cpln operator install command, which creates a service account and adds it to the superusers group.
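If you want to inspect the group directly, the cpln CLI can fetch it. The command below is an assumption that follows the cpln <kind> get pattern used elsewhere in this guide; verify membership in the Control Plane console if the output doesn't show it:
cpln group get superusers --org YOUR_ORG_NAME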
View the full resource spec available for each CRD:
kubectl explain gvc.spec
kubectl explain workload.spec
If the controlplane namespace doesn't exist yet, create it:
kubectl create namespace controlplane
Secrets use native Kubernetes Secret objects with special labels, not CRDs. See the secrets handling documentation for the correct format and required labels.

Next steps