
Overview

PostgreSQL is a powerful open-source relational database. This template deploys a single-replica PostgreSQL instance with persistent storage and optional scheduled backups to AWS S3 or GCS.
PostgreSQL on Control Plane operates as a single-replica deployment. Do not scale up the replica count, as this would result in multiple isolated instances rather than a replicated cluster. For a highly available setup, use the PostgreSQL Highly Available template instead.

What Gets Created

  • Stateful Workload — A single-replica PostgreSQL container with configurable resources.
  • Volume Set — Persistent storage for database data, with optional autoscaling.
  • Secret — A dictionary secret storing the database username and password, injected into the container at startup.
  • Identity & Policy — An identity bound to the workload with reveal access to the database credentials secret, and cloud storage access when backup is enabled.
  • Cron Workload (optional) — A scheduled pg_dump backup job that writes compressed SQL dumps to AWS S3 or GCS.
This template does not create a GVC. You must deploy it into an existing GVC.

Installation

This template has no external prerequisites unless backup is enabled. To install, follow the instructions for your preferred method.

Configuration

The default values.yaml for this template:
image: postgres:18  # versions before postgres:17 do not support the backup feature

resources:
  minCpu: 200m
  minMemory: 128Mi
  maxCpu: 500m
  maxMemory: 256Mi

config:
  username: username
  password: password
  database: test

volumeset:
  capacity: 10 # initial capacity in GiB (minimum is 10)
  autoscaling:
    enabled: false
    maxCapacity: 100 # Maximum capacity in GiB
    minFreePercentage: 10 # Trigger scaling when free space drops below this percentage
    scalingFactor: 1.2 # Multiply current capacity by this factor when scaling up

internalAccess:
  type: same-gvc # options: none, same-gvc, same-org, workload-list
  workloads: # Note: can only be used if type is same-gvc or workload-list
    #- //gvc/GVC_NAME/workload/WORKLOAD_NAME

backup: # compatible with Postgres 17+ only
  enabled: false
  image: controlplanecorporation/pg-backup:18.1.0 # 18.1.0 = Postgres 18, 17.1.0 = Postgres 17
  schedule: "0 2 * * *" # cron schedule, default is daily at 2am UTC

  resources:
    cpu: 100m
    memory: 128Mi

  provider: aws # Options: aws or gcp

  aws:
    bucket: my-backup-bucket
    region: us-east-1
    cloudAccountName: my-backup-cloudaccount
    policyName: my-backup-policy
    prefix: postgres/backups

  gcp:
    bucket: my-backup-bucket
    cloudAccountName: my-backup-cloudaccount
    prefix: postgres/backups

Credentials

  • config.username — PostgreSQL username. Change before deploying to production.
  • config.password — PostgreSQL password. Change before deploying to production.
  • config.database — Name of the database created on startup.
These values are only applied on first startup when the data directory is empty. Updating them after the initial deployment will have no effect on the running database. To change credentials or the database name on an existing instance, use PostgreSQL’s native commands (e.g. ALTER USER, ALTER DATABASE).
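As a minimal sketch, a password change on a live instance might look like this (the new password and connection details are placeholders; the psql call is commented because it needs access to the running database):

```shell
# 'username' and 'test' match the defaults in config; NEW_PASSWORD is a placeholder.
NEW_PASSWORD='replace-me'
SQL="ALTER USER username WITH PASSWORD '${NEW_PASSWORD}';"

# Apply from a workload with internal access to the instance:
#   PGPASSWORD=old-password psql --host=RELEASE_NAME-postgres.GVC_NAME.cpln.local \
#       --port=5432 --username=username --dbname=test -c "$SQL"
echo "$SQL"
```

Note that the dictionary secret created by the template still holds the old password after such a change; keeping it in sync avoids confusion, even though it is only read on first startup.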

Resources

  • resources.minCpu / resources.minMemory — Minimum CPU and memory guaranteed to the workload.
  • resources.maxCpu / resources.maxMemory — Maximum CPU and memory the workload can use.

Storage

  • volumeset.capacity — Initial volume size in GiB (minimum 10).
  • volumeset.autoscaling.enabled — Automatically expand the volume as it fills. When enabled:
    • maxCapacity — Maximum volume size in GiB.
    • minFreePercentage — Trigger a scale-up when free space drops below this percentage.
    • scalingFactor — Multiply the current capacity by this factor when scaling up.
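As a worked example of the defaults above (an illustration; the platform's exact rounding behavior is not specified here): a 10 GiB volume with minFreePercentage: 10 triggers a scale-up once less than 1 GiB is free, and scalingFactor: 1.2 targets a new size of about 12 GiB:

```shell
# Compute the scale-up target from the default autoscaling values.
capacity=10        # current size in GiB
scalingFactor=1.2  # growth multiplier per scale-up
awk -v c="$capacity" -v f="$scalingFactor" 'BEGIN { printf "%.0f\n", c * f }'
```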

Internal Access

  • internalAccess.type — Controls which workloads can connect to PostgreSQL on port 5432:
    • none — No internal access allowed
    • same-gvc — Allow access from all workloads in the same GVC
    • same-org — Allow access from all workloads in the same organization
    • workload-list — Allow access only from the specific workloads listed in workloads
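For example, to grant access only to a single application workload (GVC and workload names below are placeholders), a workload-list configuration would look like:

```yaml
internalAccess:
  type: workload-list
  workloads:
    - //gvc/my-gvc/workload/my-app
```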

Connecting to PostgreSQL

Once deployed, connect to the database from within the same GVC using:
RELEASE_NAME-postgres.GVC_NAME.cpln.local:5432
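For example, a release named my-release deployed into a GVC named my-gvc (placeholder names) would be reachable at the host below; the psql invocation is shown commented because it only works from a workload with internal access:

```shell
# Build the in-cluster hostname from the release and GVC names (placeholders).
RELEASE_NAME=my-release
GVC_NAME=my-gvc
HOST="${RELEASE_NAME}-postgres.${GVC_NAME}.cpln.local"

# From a workload with internal access you would connect with:
#   PGPASSWORD=password psql --host="$HOST" --port=5432 \
#       --username=username --dbname=test
echo "$HOST"
```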

Backup

Backup is disabled by default. When enabled, a cron workload runs pg_dump on the configured schedule and uploads compressed SQL dumps to AWS S3 or GCS.
Backup requires PostgreSQL 17 or later. Set backup.image to match your PostgreSQL version: controlplanecorporation/pg-backup:18.1.0 for Postgres 18, or controlplanecorporation/pg-backup:17.1.0 for Postgres 17.
  • backup.enabled — Enable scheduled backups.
  • backup.schedule — Cron expression for backup frequency (default: daily at 2am UTC).
  • backup.provider — aws or gcp.
  • backup.resources.cpu / backup.resources.memory — Resources for the backup cron container.

Backup Prerequisites

AWS S3

Before enabling backup with provider: aws, complete the following in your AWS account:
  1. Create an S3 bucket. Set backup.aws.bucket to its name and backup.aws.region to its region.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.aws.cloudAccountName to its name.
  3. Create an IAM policy with the following JSON, replacing YOUR_BUCKET_NAME:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME",
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ]
        }
    ]
}
  4. Set backup.aws.policyName to the name of the policy created in step 3.
  5. Set backup.aws.prefix to the folder path where backups will be stored.
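The bucket and policy steps can be sketched with the AWS CLI (a sketch, assuming the CLI is installed and authenticated; bucket, region, and policy names are placeholders, and the cloud calls are commented because they require AWS credentials):

```shell
# Write the IAM policy JSON for the backup bucket, then create the bucket
# and policy. Names below are placeholders.
BUCKET=my-backup-bucket
REGION=us-east-1

cat > /tmp/pg-backup-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
        "s3:ListBucket", "s3:GetObjectVersion", "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}",
        "arn:aws:s3:::${BUCKET}/*"
      ]
    }
  ]
}
EOF

# aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
#     --create-bucket-configuration LocationConstraint="$REGION"
# aws iam create-policy --policy-name my-backup-policy \
#     --policy-document file:///tmp/pg-backup-policy.json
grep -c "arn:aws:s3" /tmp/pg-backup-policy.json
```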

GCS

Before enabling backup with provider: gcp, complete the following in your GCP account:
  1. Create a GCS bucket. Set backup.gcp.bucket to its name.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.gcp.cloudAccountName to its name.
  3. Add the Storage Admin role to the GCP service account associated with the Cloud Account.
  4. Set backup.gcp.prefix to the folder path where backups will be stored.
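The bucket and role steps can be sketched with gsutil (assuming it is authenticated; bucket and service-account names are placeholders, and the cloud calls are commented because they require GCP credentials):

```shell
# Sketch of the GCS prerequisites; replace the placeholder names with your own.
BUCKET=my-backup-bucket
SA=my-cloudaccount-sa@my-project.iam.gserviceaccount.com

# Create the bucket:
#   gsutil mb "gs://${BUCKET}"
# Grant the Cloud Account's service account the Storage Admin role on it:
#   gsutil iam ch "serviceAccount:${SA}:roles/storage.admin" "gs://${BUCKET}"
echo "gs://${BUCKET}"
```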

Restoring a Backup

Run the following from a client with access to the backup bucket.

AWS S3:
export PGPASSWORD="PASSWORD"

aws s3 cp "s3://BUCKET_NAME/PREFIX/BACKUP_FILE.sql.gz" - \
  | gunzip \
  | psql \
      --host=RELEASE_NAME-postgres.GVC_NAME.cpln.local \
      --port=5432 \
      --username=USERNAME \
      --dbname=postgres

unset PGPASSWORD

GCS:
export PGPASSWORD="PASSWORD"

gsutil cp "gs://BUCKET_NAME/PREFIX/BACKUP_FILE.sql.gz" - \
  | gunzip \
  | psql \
      --host=RELEASE_NAME-postgres.GVC_NAME.cpln.local \
      --port=5432 \
      --username=USERNAME \
      --dbname=postgres

unset PGPASSWORD
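To locate the most recent dump before restoring, list the backup prefix first (a sketch; the cloud calls are commented because they need bucket access, and the names are placeholders from the backup config):

```shell
# Substitute your bucket and prefix from the backup configuration.
BUCKET=my-backup-bucket
PREFIX=postgres/backups

# AWS S3:
#   aws s3 ls "s3://${BUCKET}/${PREFIX}/"
# GCS:
#   gsutil ls "gs://${BUCKET}/${PREFIX}/"
echo "s3://${BUCKET}/${PREFIX}/"
```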

External References