## Overview
PostgreSQL is a powerful open-source relational database. This template deploys a single-replica PostgreSQL instance with persistent storage and optional scheduled backups to AWS S3 or GCS.

PostgreSQL on Control Plane operates as a single-replica deployment. Do not scale up the replica count: doing so would create multiple isolated instances rather than a replicated cluster. For a highly available setup, use the PostgreSQL Highly Available template instead.
## What Gets Created
- Stateful Workload — A single-replica PostgreSQL container with configurable resources.
- Volume Set — Persistent storage for database data, with optional autoscaling.
- Secret — A dictionary secret storing the database username and password, injected into the container at startup.
- Identity & Policy — An identity bound to the workload with `reveal` access to the database credentials secret, and cloud storage access when backup is enabled.
- Cron Workload (optional) — A scheduled `pg_dump` backup job that writes compressed SQL dumps to AWS S3 or GCS.
This template does not create a GVC. You must deploy it into an existing GVC.
## Installation
This template has no external prerequisites unless backup is enabled. To install, follow the instructions for your preferred method:

- UI — Browse, install, and manage templates visually
- CLI — Manage templates from your terminal
- Terraform — Declare templates in your Terraform configurations
- Pulumi — Declare templates in your Pulumi programs
## Configuration
The template ships with a default `values.yaml`; its settings are described below.
### Credentials
- `config.username` — PostgreSQL username. Change before deploying to production.
- `config.password` — PostgreSQL password. Change before deploying to production.
- `config.database` — Name of the database created on startup.
These values are only applied on first startup, when the data directory is empty. Updating them after the initial deployment will have no effect on the running database. To change credentials or the database name on an existing instance, use PostgreSQL's native commands (e.g. `ALTER USER`, `ALTER DATABASE`).

### Resources
- `resources.minCpu` / `resources.minMemory` — Minimum CPU and memory guaranteed to the workload.
- `resources.maxCpu` / `resources.maxMemory` — Maximum CPU and memory the workload can use.
### Storage
- `volumeset.capacity` — Initial volume size in GiB (minimum 10).
- `volumeset.autoscaling.enabled` — Automatically expand the volume as it fills. When enabled:
  - `maxCapacity` — Maximum volume size in GiB.
  - `minFreePercentage` — Trigger a scale-up when free space drops below this percentage.
  - `scalingFactor` — Multiply the current capacity by this factor when scaling up.
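For example, to start at 20 GiB and let the volume grow in 1.5× steps up to 100 GiB whenever free space falls below 20% (the numbers here are illustrative; the key names are the ones documented above):

```yaml
volumeset:
  capacity: 20            # initial size in GiB (minimum 10)
  autoscaling:
    enabled: true
    maxCapacity: 100      # never grow past 100 GiB
    minFreePercentage: 20 # scale up when less than 20% is free
    scalingFactor: 1.5    # new size = current size x 1.5
```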
### Internal Access
`internalAccess.type` — Controls which workloads can connect to PostgreSQL on port `5432`:
| Type | Description |
|---|---|
| `none` | No internal access allowed |
| `same-gvc` | Allow access from all workloads in the same GVC |
| `same-org` | Allow access from all workloads in the same organization |
| `workload-list` | Allow access only from specific workloads listed in `workloads` |
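For example, to restrict access to two named workloads (the workload names below are hypothetical; the keys are the ones documented above):

```yaml
internalAccess:
  type: workload-list
  workloads:
    - api-server      # only these workloads may reach port 5432
    - reporting-job
```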
## Connecting to PostgreSQL
Once deployed, connect to the database from within the same GVC. Control Plane workloads are typically reachable internally at `<workload>.<gvc>.cpln.local`, so the connection string takes the form `postgresql://<username>:<password>@<workload>.<gvc>.cpln.local:5432/<database>`, using the credentials from the secret.

## Backup
Backup is disabled by default. When enabled, a cron workload runs `pg_dump` on the configured schedule and uploads compressed SQL dumps to AWS S3 or GCS.
Backup requires PostgreSQL 17 or later. Set `backup.image` to match your PostgreSQL version: `controlplanecorporation/pg-backup:18.1.0` for Postgres 18, or `controlplanecorporation/pg-backup:17.1.0` for Postgres 17.

- `backup.enabled` — Enable scheduled backups.
- `backup.schedule` — Cron expression for backup frequency (default: daily at 2am UTC).
- `backup.provider` — `aws` or `gcp`.
- `backup.resources.cpu` / `backup.resources.memory` — Resources for the backup cron container.
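A minimal backup configuration might look like the following sketch. The schedule shown is the standard cron expression for the documented default (daily at 2am UTC); the resource values are illustrative assumptions, not template defaults:

```yaml
backup:
  enabled: true
  schedule: "0 2 * * *"   # daily at 02:00 UTC
  provider: aws           # or gcp
  image: controlplanecorporation/pg-backup:17.1.0  # match your Postgres major version
  resources:
    cpu: 250m       # illustrative value
    memory: 256Mi   # illustrative value
```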
## Backup Prerequisites
### AWS S3
Before enabling backup with `provider: aws`, complete the following in your AWS account:
1. Create an S3 bucket. Set `backup.aws.bucket` to its name and `backup.aws.region` to its region.
2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set `backup.aws.cloudAccountName` to its name.
3. Create an IAM policy with the following JSON, replacing `YOUR_BUCKET_NAME`:
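The original policy JSON is not reproduced here; the following is a minimal sketch assuming the backup job only needs to list the bucket and write dump objects into it (`s3:PutObject`, `s3:ListBucket`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```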
4. Set `backup.aws.policyName` to the name of the policy created in step 3.
5. Set `backup.aws.prefix` to the folder path where backups will be stored.
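Putting the AWS steps together, the backup values look like this sketch (the bucket, region, Cloud Account, policy, and prefix values are placeholders for your own):

```yaml
backup:
  enabled: true
  provider: aws
  aws:
    bucket: YOUR_BUCKET_NAME
    region: us-east-1                     # your bucket's region
    cloudAccountName: my-aws-cloud-account
    policyName: pg-backup-policy          # IAM policy from step 3
    prefix: postgres/backups              # folder path inside the bucket
```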
### GCS
Before enabling backup with `provider: gcp`, complete the following in your GCP account:
1. Create a GCS bucket. Set `backup.gcp.bucket` to its name.
2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set `backup.gcp.cloudAccountName` to its name.
3. Add the Storage Admin role to the GCP service account associated with the Cloud Account.
4. Set `backup.gcp.prefix` to the folder path where backups will be stored.
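The corresponding GCS values, as a sketch (bucket, Cloud Account, and prefix values are placeholders for your own):

```yaml
backup:
  enabled: true
  provider: gcp
  gcp:
    bucket: YOUR_BUCKET_NAME
    cloudAccountName: my-gcp-cloud-account
    prefix: postgres/backups   # folder path inside the bucket
```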