The GCP provider lets you deploy worker nodes in your GCP project. The only prerequisites are an existing network and a subnet.


Step 1 - Preparing the GCP project

Ensure your GCP project has:
  • A VPC with a subnet in the desired region
  • A service account with these roles:

    Role                               Description
    roles/compute.instanceAdmin.v1     Instances, instance groups, templates
    roles/compute.loadBalancerAdmin    Forwarding rules, target pools, addresses
    roles/compute.storageAdmin         Disks and snapshots
    roles/compute.securityAdmin        Firewall rules
    roles/compute.networkViewer        Networks/subnets discovery (for validation)
Create a JSON key for the service account and store it in Control Plane as a secret of type GCP.
Permissions can be further restricted using IAM conditions and by limiting the scope of each role, as long as Control Plane is still able to manage the resources listed above.
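If the service account and key do not exist yet, the following sketch shows one way to create them with gcloud. The service account name mk8s-nodes and project my-project are placeholders, not values from this guide; upload the resulting JSON key file as the secret of type GCP.

# Create the service account (names here are examples)
gcloud iam service-accounts create mk8s-nodes --project my-project

# Grant the roles listed above
for role in roles/compute.instanceAdmin.v1 roles/compute.loadBalancerAdmin \
            roles/compute.storageAdmin roles/compute.securityAdmin roles/compute.networkViewer; do
  gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:mk8s-nodes@my-project.iam.gserviceaccount.com" \
    --role "$role"
done

# Create a JSON key to store in Control Plane as a secret of type GCP
gcloud iam service-accounts keys create mk8s-nodes-key.json \
  --iam-account mk8s-nodes@my-project.iam.gserviceaccount.com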

Step 2 - Create a Managed Kubernetes Cluster Using a Manifest File

  1. Update the manifest: Modify the gcp-mk8s-template.yaml manifest below with actual values, replacing the placeholders ${NAME}, ${PROJECT_ID}, ${REGION}, ${NETWORK}, ${SUBNET}, ${SECRET}, ${MACHINE_TYPE}, and ${ZONE}. Customize the file as needed.
kind: mk8s
name: ${NAME}
spec:
  provider:
    gcp:
      projectId: ${PROJECT_ID}
      region: ${REGION}
      image:
        recommended: ubuntu/noble-24.04 # COS linux not yet supported
      saKeyLink: //secret/${SECRET} # points to the secret with the SA JSON key
      network: ${NETWORK}       # name of an existing VPC network, e.g. mk8s
      labels:                   # labels to attach to created GCP resources
        my-google-label: x123
      metadata:                 # metadata to attach to created instances
        my-google-meta1: hello world
      tags:                     # tags to attach to created instances
        - my-google-tag1
        - my-google-tag2
      nodePools:
        - name: general
          bootDiskSize: 30
          machineType: ${MACHINE_TYPE}
          localPersistentDisks: 1
          minSize: 1
          maxSize: 2
          subnet: ${SUBNET}
          zone: ${ZONE}
  addOns:
    headlamp: {}
  version: 1.32.9
This example creates a managed Kubernetes cluster in your project with the following configurations:
  • Kubernetes Version: 1.32.9.
  • Add-ons: Only the headlamp add-on is enabled
  • Node Pool: A single general node pool, scaling on-demand between 1 and 2 nodes.
  • Server Image: ubuntu/noble-24.04.
  2. Create the Cluster: Deploy the gcp-mk8s-example cluster by applying the manifest.
    • Console: Apply the gcp-mk8s-template.yaml file using the cpln apply >_ option in the upper right corner.
    • CLI: Execute cpln apply -f gcp-mk8s-template.yaml --org YOUR_ORG_HERE.
    Wait until the cluster is initialized.

Step 3 - Accessing the Cluster

1. Using the Terminal

  1. Obtain the Cluster’s Kubeconfig File: Execute the command cpln mk8s kubeconfig gcp-mk8s-example -f /tmp/gcp-mk8s-example-conf.
  2. Access the Cluster with kubectl: Use the obtained kubeconfig file by running export KUBECONFIG=/tmp/gcp-mk8s-example-conf for the current shell session.
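Once the kubeconfig is in place, standard kubectl commands can be used to confirm access, for example:

export KUBECONFIG=/tmp/gcp-mk8s-example-conf
# Worker nodes from the "general" node pool are listed once they have joined the cluster
kubectl get nodes
# System pods and enabled add-ons (e.g. headlamp)
kubectl get pods -A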

2. Using Kubernetes Dashboard

  1. Navigate to Control Plane Console: Visit the Control Plane Console.
  2. Access the Dashboard: In the Control Plane Console, navigate to Kubernetes in the left sidebar panel and click on Open under Headlamp for the cluster gcp-mk8s-example.

Advanced Configuration Options