
Overview

CockroachDB is a distributed SQL database that provides automatic replication, horizontal scalability, and built-in fault tolerance across multiple regions. This template deploys a multi-region CockroachDB cluster on Control Plane as a stateful workload with replica-direct load balancing. Each location runs a configurable number of replicas that discover and join one another using Control Plane’s internal DNS. On first deployment, the cluster initializes itself, creates a database and user, registers all regions, and sets the survival goal to SURVIVE REGION FAILURE.

What Gets Created

  • GVC — A dedicated GVC across the specified locations.
  • Stateful CockroachDB Workload — CockroachDB (v25.4.0) with per-location replica scaling and replica-direct load balancing.
  • Volume Set — Persistent ext4 storage (general-purpose-ssd) with final snapshot creation and 7-day retention.
  • Identity & Policy — An identity bound to the workload with reveal access to the startup and user secrets, and cloud storage access when backup is enabled.
  • Secrets — A startup script for cluster join/initialization and an opaque secret for the database user credential.
  • Backup Cron Workload (optional) — A scheduled job that triggers a CockroachDB BACKUP SQL command to stream data directly to AWS S3 or GCS.

Architecture

CockroachDB uses the Raft consensus protocol to replicate data across nodes. Each Control Plane location maps to a CockroachDB locality region, and replicas advertise their address via internal DNS (replica-N.WORKLOAD.LOCATION.GVC.cpln.local). With 3 or more regions and the SURVIVE REGION FAILURE survival goal, the cluster tolerates the complete loss of one region without impacting availability.
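As a concrete illustration, the hostnames a location advertises can be derived mechanically from the pattern above. This is only a naming sketch; the workload, location, and GVC names below are the template defaults, used here as assumptions:

```shell
#!/bin/sh
# Derive the per-replica internal DNS names for one location, following
# the replica-N.WORKLOAD.LOCATION.GVC.cpln.local pattern described above.
replica_dns() {
  # $1 = replica count, $2 = workload name, $3 = location, $4 = GVC name
  i=0
  while [ "$i" -lt "$1" ]; do
    echo "replica-${i}.${2}.${3}.${4}.cpln.local"
    i=$((i + 1))
  done
}

# Default values.yaml: 3 replicas in aws-us-west-2, GVC cockroach-gvc.
replica_dns 3 cockroach aws-us-west-2 cockroach-gvc
```

Each replica resolves the others through these names to discover and join the cluster.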

Installation

This template has no external prerequisites unless backup is enabled. To install, follow the instructions for your preferred method:

UI

Browse, install, and manage templates visually

CLI

Manage templates from your terminal

Terraform

Declare templates in your Terraform configurations

Pulumi

Declare templates in your Pulumi programs

Configuration

The default values.yaml for this template:
gvc:
  name: cockroach-gvc
  locations:
    - name: aws-us-west-2
      replicas: 3
    - name: aws-us-east-2
      replicas: 3
    - name: aws-eu-central-1
      replicas: 3

image: cockroachdb/cockroach:v25.4.0

resources:
  cpu: 2
  memory: 4Gi

database:
  name: mydb
  user: myuser

volumeset:
  capacity: 10 # initial capacity in GiB (minimum is 10)
  autoscaling:
    enabled: false # Set to true to enable autoscaling
    maxCapacity: 100 # Maximum capacity in GiB when autoscaling is enabled
    minFreePercentage: 10 # Minimum free percentage to trigger scaling when autoscaling is enabled
    scalingFactor: 1.2 # Scaling factor to determine how much to scale up when autoscaling is triggered

internal_access:
  type: same-gvc # options: same-gvc, same-org, workload-list
  workloads: # Note: can only be used if type is same-gvc or workload-list
    #- //gvc/GVC_NAME/workload/WORKLOAD_NAME

backup:
  enabled: false
  image: controlplanecorporation/cockroach-backup:1.0
  schedule: "0 2 * * *"
  activeDeadlineSeconds: 14400   # hard kill after 4 hours if backup hangs
  location: aws-us-east-2        # run the backup job in the same region as your storage bucket
  resources:
    cpu: 500m
    memory: 512Mi
  provider: aws  # options: aws, gcp
  aws:
    bucket: my-backup-bucket
    region: us-east-1
    cloudAccountName: my-backup-cloudaccount
    policyName: my-backup-policy
    prefix: cockroach/backups
  gcp:
    bucket: my-backup-bucket
    cloudAccountName: my-backup-cloudaccount
    prefix: cockroach/backups

Locations and Replicas

Configure the gvc.locations section to control which regions the cluster spans and how many replicas run in each.
While CockroachDB can run in two locations, a minimum of 3 locations with 3 replicas per location is recommended; this is the minimum required for CockroachDB to survive a full region failure.
Setting a location’s replicas to 0 suspends the workload in that location without removing it from the configuration.
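For example, suspending one location while keeping it in the configuration is just a matter of zeroing its replica count (location names below are the template defaults):

```yaml
gvc:
  name: cockroach-gvc
  locations:
    - name: aws-us-west-2
      replicas: 3
    - name: aws-us-east-2
      replicas: 3
    - name: aws-eu-central-1
      replicas: 0   # suspended here, but still part of the configuration
```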

Database Initialization

The database section specifies a database and user to create automatically when the cluster first initializes:
database:
  name: mydb
  user: myuser
The created user is granted full access to the specified database.
These values are only applied on the first initialization. If the cluster has already been initialized, they are skipped on restart or upgrade. To change credentials or the database name on an existing cluster, use CockroachDB's native SQL statements (e.g. ALTER USER, ALTER DATABASE ... RENAME TO).
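For example, rotating the user's credential and renaming the database on a live cluster might look like the following. These are standard CockroachDB SQL statements; the names are the template defaults and the new values are placeholders:

```sql
-- Rotate the application user's password.
ALTER USER myuser WITH PASSWORD 'new-password';

-- Rename the database created at first initialization.
ALTER DATABASE mydb RENAME TO newdb;
```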

Resources and Storage

  • resources.cpu and resources.memory set the CPU and memory allocated to each CockroachDB replica.
  • volumeset.capacity sets the initial persistent volume size in GiB (minimum 10).
  • volumeset.autoscaling.enabled — Enable automatic volume expansion as data grows.
  • volumeset.autoscaling.maxCapacity — Maximum volume size in GiB.
  • volumeset.autoscaling.minFreePercentage — Triggers a scale-up when free space falls below this percentage.
  • volumeset.autoscaling.scalingFactor — Multiplier applied to the current capacity on each scale-up.
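A quick sketch of how capacity would grow under the defaults, assuming each scale-up simply multiplies the current capacity by scalingFactor and caps the result at maxCapacity (the platform's exact rounding behavior is not specified here):

```shell
#!/bin/sh
# Illustrative volumeset autoscaling growth.
# Assumption: new capacity = current * scalingFactor, capped at maxCapacity.
next_capacity() {
  # $1 = current capacity (GiB), $2 = scalingFactor, $3 = maxCapacity (GiB)
  awk -v c="$1" -v f="$2" -v m="$3" 'BEGIN {
    n = c * f
    if (n > m) n = m
    printf "%d", n   # truncated to whole GiB for display
  }'
}

# Defaults: capacity 10, scalingFactor 1.2, maxCapacity 100.
cap=10
for step in 1 2 3; do
  cap=$(next_capacity "$cap" 1.2 100)
  echo "scale-up $step: ${cap} GiB"
done
```

So a 10 GiB volume grows to roughly 12 GiB on the first scale-up, and growth stops once maxCapacity is reached.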

Internal Access

The internal_access section controls which workloads can reach the CockroachDB cluster internally:
  • same-gvc — Allow access from all workloads in the same GVC
  • same-org — Allow access from all workloads in the same organization
  • workload-list — Allow access only from specific workloads listed in workloads (can be combined with same-gvc)
When using workload-list, specify each workload using its full link format:
internal_access:
  type: workload-list
  workloads:
    - //gvc/GVC_NAME/workload/WORKLOAD_NAME

Connecting to CockroachDB

Once deployed, the SQL interface is available on port 26257. Connect from a workload within the same GVC using:
cockroach sql --insecure --host=RELEASE_NAME-cockroach.GVC_NAME.cpln.local:26257
This template deploys CockroachDB in insecure mode (no TLS). It is intended for internal workloads that connect through Control Plane’s internal network.
The DB Console (HTTP UI) runs on port 8080 for monitoring cluster health, query performance, and node status. It is not exposed externally — access it via port forward and open http://localhost:8080 in your browser.
This template creates a GVC with a default name defined in the values file. If you plan to deploy multiple instances, you must assign a unique GVC name for each deployment.

Backup

Backup is disabled by default. When enabled, a cron workload triggers a CockroachDB BACKUP SQL command on the configured schedule. CockroachDB nodes stream backup data directly to cloud storage using their own workload identity — the backup job only issues the SQL command and does not transfer data itself.
  • backup.enabled — Enable scheduled backups.
  • backup.schedule — Cron expression for backup frequency (default: daily at 2am UTC).
  • backup.provider — Storage provider: aws or gcp.
  • backup.location — The Control Plane location where the backup cron job runs. Set this to the region closest to your storage bucket to minimize cross-region transfer latency.
  • backup.activeDeadlineSeconds — Hard timeout for the backup job in seconds (default: 14400 / 4 hours). The job is killed if it exceeds this limit.
  • backup.resources.cpu / backup.resources.memory — Resources for the backup cron container.
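Conceptually, the cron job issues a BACKUP INTO statement pointed at the configured bucket. A hand-run equivalent for the AWS defaults might look like this (the exact statement the backup image executes may differ):

```sql
-- Full-cluster backup streamed directly from the nodes to S3.
-- AUTH=implicit makes the nodes authenticate with their workload identity.
BACKUP INTO 's3://my-backup-bucket/cockroach/backups?AUTH=implicit&AWS_REGION=us-east-1';
```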

AWS S3

Before enabling backup with provider: aws, complete the following in your AWS account:
  1. Create an S3 bucket. Set backup.aws.bucket to its name and backup.aws.region to its region.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.aws.cloudAccountName to its name.
  3. Create an IAM policy with the following JSON, replacing YOUR_BUCKET_NAME:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME",
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ]
        }
    ]
}
  4. Set backup.aws.policyName to the name of the policy created in step 3.
  5. Set backup.aws.prefix to the folder path where backups will be stored.
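Putting the AWS steps together, an enabled backup section would look like the following (bucket, Cloud Account, and policy names are the placeholders from the default values; substitute your own):

```yaml
backup:
  enabled: true
  provider: aws
  schedule: "0 2 * * *"        # daily at 2am UTC
  location: aws-us-east-2      # run the job near the bucket's region
  aws:
    bucket: my-backup-bucket
    region: us-east-1
    cloudAccountName: my-backup-cloudaccount
    policyName: my-backup-policy
    prefix: cockroach/backups
```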

GCS

Before enabling backup with provider: gcp, complete the following in your GCP account:
  1. Create a GCS bucket. Set backup.gcp.bucket to its name.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.gcp.cloudAccountName to its name.
  3. Add the Storage Admin role to the GCP service account associated with the Cloud Account.
  4. Set backup.gcp.prefix to the folder path where backups will be stored.

Restoring a Backup

Backups are stored at BUCKET/PREFIX/. Run cockroach sql from a machine with network access to the cluster.

AWS S3
cockroach sql --insecure \
  --host="RELEASE_NAME-cockroach.GVC_NAME.cpln.local:26257" \
  --execute="RESTORE FROM LATEST IN 's3://BUCKET_NAME/PREFIX?AUTH=implicit&AWS_REGION=BUCKET_REGION';"
GCS
cockroach sql --insecure \
  --host="RELEASE_NAME-cockroach.GVC_NAME.cpln.local:26257" \
  --execute="RESTORE FROM LATEST IN 'gs://BUCKET_NAME/PREFIX?AUTH=implicit';"

External References

CockroachDB Documentation

Official CockroachDB documentation

Multi-Region Overview

Learn about multi-region deployments

Survive Region Failure

Configure region failure survival goals

Backup Image Source

Source code for the CockroachDB backup container image

CockroachDB Template

View the source files, default values, and chart definition