
Overview

TiDB is a distributed, MySQL-compatible database designed for horizontal scalability and high availability. It separates compute from storage across three components: a SQL processing layer (TiDB Server), a distributed key-value store (TiKV), and a placement driver (PD) that manages cluster metadata and scheduling. This template deploys a production-ready TiDB cluster across multiple Control Plane locations using PingCAP’s official images.

What Gets Created

  • GVC — A new GVC spanning the configured locations.
  • Stateful PD Workload — (RELEASE_NAME-pd): placement driver cluster distributed across locations according to pdReplicas. Uses replicaDirect addressing so each PD node is individually reachable.
  • Stateful TiKV Workload — (RELEASE_NAME-tikv): distributed storage nodes. Replica count per location is controlled by gvc.locations[].replicas.
  • Standard TiDB Server Workload — (RELEASE_NAME-server): MySQL-compatible SQL layer on port 4000. Per-location replica count follows gvc.locations[].replicas.
  • DB Init Workload (optional) — (RELEASE_NAME-tidb-db-init): a one-time initialization job that sets the root password and creates the application database and user. Disable after first deployment.
  • Backup Cron Workload (optional) — A scheduled backup job that uses TiDB’s br tool to write a full cluster snapshot to AWS S3 or GCS.
  • Volume Set — PD storage (RELEASE_NAME-tidb-pd-vs): 10 GiB fixed, ext4, general-purpose-ssd, with 7-day snapshot retention.
  • Volume Set — TiKV storage (RELEASE_NAME-tidb-tikv-vs): configurable capacity with optional autoscaling, ext4, general-purpose-ssd, with 7-day snapshot retention.
  • Secrets — Opaque secrets containing startup scripts for PD, TiKV, and TiDB Server, plus an optional dictionary secret with database credentials.
  • Identity & Policy — A shared identity bound to all workloads with reveal access to all secrets, and cloud storage access when backup is enabled.

Installation

This template has no external prerequisites unless backup is enabled. To install, follow the instructions for your preferred method:

  • UI — Browse, install, and manage templates visually.
  • CLI — Manage templates from your terminal.
  • Terraform — Declare templates in your Terraform configurations.
  • Pulumi — Declare templates in your Pulumi programs.

Configuration

The default values.yaml for this template:
devMode: false # WARNING: For development/testing only. Bypasses the 3-location HA requirement. Do NOT enable in production.

gvc:
  name: tidb-gvc
  locations: # Replica count applies to TiKV and TiDB Server workloads; PD uses pdReplicas
    - name: aws-us-east-2
      replicas: 1
    - name: aws-us-west-2
      replicas: 1
    - name: aws-us-east-1
      replicas: 1
  pdReplicas: 3 # options: 3, 5, 7

images:
  server: pingcap/tidb:v8.5.3
  tikv: pingcap/tikv:v8.5.3
  pd: pingcap/pd:v8.5.3

resources:
  pd:
    cpu: 2
    memory: 4Gi
  server:
    cpu: 2
    memory: 2Gi
  tikv:
    cpu: 2
    memory: 4Gi

autoCreateDatabase: # Enable to automatically create the database on initialization
  enabled: true
  deployInitWorkload: true # Set to false after the DB has been initialized to remove the init workload and save resources
  database: # Set database values
    rootPassword: myrootpw
    user: myuser
    password: mypw
    db: mydb

volumeset:
  tikv:
    capacity: 10 # initial capacity in GiB (minimum is 10)
    autoscaling:
      enabled: false # Set to true to enable autoscaling
      maxCapacity: 100 # Maximum capacity in GiB
      minFreePercentage: 10 # Minimum free percentage before scaling triggers
      scalingFactor: 1.2 # Multiplier applied to current capacity when scaling
  pd:
    capacity: 10 # initial capacity in GiB (minimum is 10)

exposeServer: true # Set to true to expose TiDB server publicly

external_access: # Set if client is outside the GVC or in another location
  server_outboundAllowCIDR: []
  tikv_outboundAllowCIDR: [] # Note: when backup.enabled is true, the template automatically allows outbound access (0.0.0.0/0) regardless of this value.
  pd_outboundAllowCIDR: []

internal_access:
  server:
    type: same-gvc # options: same-gvc, same-org, workload-list
    workloads: # Note: only used when type is workload-list
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME
  tikv:
    type: same-gvc # options: same-gvc, same-org, workload-list
    workloads: # Note: only used when type is workload-list
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME
  pd:
    type: same-gvc # options: same-gvc, same-org, workload-list
    workloads: # Note: only used when type is workload-list
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME

backup:
  enabled: false
  image: controlplanecorporation/tidb-backup:v8.5.3
  schedule: "0 2 * * *"  # daily at 2am UTC
  activeDeadlineSeconds: 14400  # 4 hours max per backup job
  location: aws-us-east-1  # Run backup in the location closest to your storage bucket/region

  resources:
    cpu: 1
    memory: 1Gi

  provider: aws # Options: aws or gcp

  aws:
    bucket: my-backup-bucket
    region: us-east-1
    cloudAccountName: my-backup-cloudaccount
    policyName: my-backup-policy
    prefix: tidb/backups

  gcp:
    bucket: my-backup-bucket
    cloudAccountName: my-backup-cloudaccount
    prefix: tidb/backups

Locations

  • gvc.name — Name of the GVC to create. Must be unique within your organization if deploying multiple instances.
  • gvc.locations — List of Control Plane locations. At least 3 locations are required unless devMode is enabled.
  • locations[].replicas — Number of TiKV and TiDB Server replicas per location. Set to 0 to suspend a component in that location without removing it from the configuration.
  • gvc.pdReplicas — Total number of PD replicas across all locations. Must be 3, 5, or 7. When set to 3, exactly 3 locations are required. Replicas are distributed evenly across locations.
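For illustration, an even spread can be sketched as a simple round-robin over the configured locations (this is an assumption for illustration, not the template's documented placement algorithm; the function name is hypothetical):

```python
# Sketch (hypothetical, not the template's actual placement logic): one way
# to spread pdReplicas evenly across the configured locations, round-robin.
def distribute_pd(pd_replicas: int, locations: list[str]) -> dict[str, int]:
    counts = {loc: 0 for loc in locations}
    for i in range(pd_replicas):
        counts[locations[i % len(locations)]] += 1
    return counts

# 5 PD replicas over 3 locations -> 2 + 2 + 1
print(distribute_pd(5, ["aws-us-east-2", "aws-us-west-2", "aws-us-east-1"]))
```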

Development Mode

Set devMode: true to bypass the 3-location requirement and deploy with 1 or 2 locations for development and testing purposes.
devMode provides no fault tolerance. If the location(s) become unavailable, the cluster will halt. Never enable this in production.
Even in dev mode, PD still requires 3 replicas (pdReplicas: 3) and TiKV still needs at least 3 total instances across all locations. Configure replicas per location accordingly.
1 location — run all 3 TiKV instances in a single location:
devMode: true
gvc:
  locations:
    - name: aws-us-east-2
      replicas: 3
  pdReplicas: 3
2 locations — at least 3 total TiKV instances across both:
devMode: true
gvc:
  locations:
    - name: aws-us-east-2
      replicas: 2
    - name: aws-us-east-1
      replicas: 1
  pdReplicas: 3
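These constraints can be checked mechanically before deploying; a minimal sketch (the helper is hypothetical, not part of the template):

```python
# Sketch (hypothetical helper, not part of the template): sanity-check a
# dev-mode layout against the constraints above -- pdReplicas must be 3 and
# the locations must hold at least 3 TiKV instances in total.
def valid_dev_layout(replicas_per_location: list[int], pd_replicas: int) -> bool:
    return pd_replicas == 3 and sum(replicas_per_location) >= 3

print(valid_dev_layout([3], 3))     # single location holding all 3 TiKV
print(valid_dev_layout([2, 1], 3))  # two locations, 3 TiKV total
print(valid_dev_layout([1, 1], 3))  # only 2 TiKV total -> invalid
```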

Database Initialization

  • autoCreateDatabase.enabled — Creates a dictionary secret with database credentials used by the TiDB Server and init workload.
  • autoCreateDatabase.deployInitWorkload — Deploys a one-time init job that sets the root password and creates the application database and user. The job checks whether the database already exists before running — if it does, it exits immediately.
After the cluster is initialized, set autoCreateDatabase.deployInitWorkload to false and upgrade the template to remove the init workload and free up resources.

Credentials

  • autoCreateDatabase.database.rootPassword — MySQL root password. Change before deploying to production.
  • autoCreateDatabase.database.user — Application database username.
  • autoCreateDatabase.database.password — Application database password.
  • autoCreateDatabase.database.db — Name of the application database to create.
These values are only applied on first initialization. If the database already exists, the init workload exits without making changes. To modify credentials on an existing cluster, use MySQL’s native commands (e.g. ALTER USER, ALTER DATABASE).

Resources

  • resources.pd.cpu / resources.pd.memory — CPU and memory per PD replica.
  • resources.server.cpu / resources.server.memory — CPU and memory per TiDB Server replica.
  • resources.tikv.cpu / resources.tikv.memory — CPU and memory per TiKV replica.

Storage

TiKV storage (configurable):
  • volumeset.tikv.capacity — Initial volume size in GiB (minimum 10).
  • volumeset.tikv.autoscaling.enabled — Automatically expand volumes as they fill. When enabled:
    • maxCapacity — Maximum volume size in GiB.
    • minFreePercentage — Trigger a scale-up when free space drops below this percentage.
    • scalingFactor — Multiply current capacity by this factor when scaling up.
PD storage (fixed):
  • volumeset.pd.capacity — Initial volume size in GiB for PD metadata (minimum 10).
Both volume sets retain snapshots for 7 days and create a final snapshot on deletion.
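The autoscaling parameters can be illustrated with a small worked example (assumed behavior based on the parameter descriptions above, not the platform's exact implementation):

```python
# Sketch of the volume autoscaling rule described above (assumed behavior,
# not the platform's exact implementation): when free space falls below
# minFreePercentage, grow capacity by scalingFactor, capped at maxCapacity.
def next_capacity(current_gib: float, used_gib: float,
                  min_free_pct: float, scaling_factor: float,
                  max_capacity_gib: float) -> float:
    free_pct = (current_gib - used_gib) / current_gib * 100
    if free_pct < min_free_pct:
        return min(current_gib * scaling_factor, max_capacity_gib)
    return current_gib

# 10 GiB volume with 9.5 GiB used (5% free) and the defaults above:
print(next_capacity(10, 9.5, 10, 1.2, 100))  # grows to 12 GiB
# Near the cap, growth stops at maxCapacity:
print(next_capacity(90, 86, 10, 1.2, 100))   # 90 * 1.2 capped at 100
```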

Access

Internal access — configured per component (server, tikv, pd):
  • same-gvc — Allow access from all workloads in the same GVC (recommended).
  • same-org — Allow access from all workloads in the same organization.
  • workload-list — Allow access only from specific workloads.
External access:
  • exposeServer — Set to true to allow external connections to the TiDB MySQL port (4000) from any IP (0.0.0.0/0).
  • external_access.server_outboundAllowCIDR / tikv_outboundAllowCIDR / pd_outboundAllowCIDR — Outbound CIDR allowlists for each component, for reaching external services. When backup.enabled is true, TiKV outbound access is automatically set to 0.0.0.0/0 so nodes can upload directly to cloud storage, regardless of tikv_outboundAllowCIDR.
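CIDR allowlists follow standard IP-prefix matching; a generic illustration (plain Python, not Control Plane tooling):

```python
import ipaddress

# Illustration of CIDR allowlist matching (generic networking logic, not
# Control Plane's implementation): an address passes if it falls inside
# any CIDR block in the list.
def allowed(ip: str, cidrs: list[str]) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c) for c in cidrs)

print(allowed("10.0.1.7", ["10.0.0.0/16"]))     # inside 10.0.0.0/16
print(allowed("192.168.1.1", ["10.0.0.0/16"]))  # outside the allowlist
print(allowed("192.168.1.1", ["0.0.0.0/0"]))    # 0.0.0.0/0 matches everything
```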

Connecting to TiDB

TiDB Server is MySQL-compatible. Connect using any MySQL client from within the same GVC:
RELEASE_NAME-server.GVC_NAME.cpln.local:4000
Use the user / password credentials from autoCreateDatabase.database, or connect as root with rootPassword.
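If you script connections, the endpoint can be assembled from the release and GVC names (a hypothetical helper for illustration; substitute your actual names):

```python
# Hypothetical helper: build the in-GVC TiDB endpoint from the release and
# GVC names. The values below are placeholders, not real deployments.
def tidb_endpoint(release_name: str, gvc_name: str) -> str:
    return f"{release_name}-server.{gvc_name}.cpln.local:4000"

print(tidb_endpoint("mytidb", "tidb-gvc"))
```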

Ports

  • TiDB Server — 4000/TCP — MySQL-compatible SQL port
  • TiDB Server — 10080/HTTP — TiDB status and metrics
  • PD — 2379/TCP — PD client port
  • PD — 2380/TCP — PD peer (Raft) port
  • TiKV — 20160/TCP — TiKV data port
  • TiKV — 20180/TCP — TiKV status port

Backup

Backup is disabled by default. When enabled, a cron workload uses TiDB’s br tool to take a full cluster snapshot on the configured schedule and upload it to AWS S3 or GCS.
  • backup.enabled — Enable scheduled backups.
  • backup.schedule — Cron expression for backup frequency (default: daily at 2am UTC).
  • backup.provider — aws or gcp.
  • backup.location — The Control Plane location where the backup job runs. Set to the location closest to your storage bucket to minimize cross-region transfer latency and costs.
  • backup.activeDeadlineSeconds — Maximum time allowed per backup job in seconds (default: 14400 / 4 hours).
  • backup.resources.cpu / backup.resources.memory — Resources for the backup cron container.
When backup.enabled is true, the template automatically grants TiKV outbound access to 0.0.0.0/0 so nodes can upload data directly to cloud storage. This overrides external_access.tikv_outboundAllowCIDR.

AWS S3

Before enabling backup with provider: aws, complete the following in your AWS account:
  1. Create an S3 bucket. Set backup.aws.bucket to its name and backup.aws.region to its region.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.aws.cloudAccountName to its name.
  3. Create an IAM policy with the following JSON, replacing YOUR_BUCKET_NAME:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME",
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ]
        }
    ]
}
  4. Set backup.aws.policyName to the name of the policy created in step 3.
  5. Set backup.aws.prefix to the folder path where backups will be stored.

GCS

Before enabling backup with provider: gcp, complete the following in your GCP account:
  1. Create a GCS bucket. Set backup.gcp.bucket to its name.
  2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set backup.gcp.cloudAccountName to its name.
  3. Add the Storage Admin role to the GCP service account associated with the Cloud Account.
  4. Set backup.gcp.prefix to the folder path where backups will be stored.

Restoring a Backup

Backups are stored at BUCKET/PREFIX/tidb-TIMESTAMP/. To restore, run br restore full from a machine with network access to the PD endpoint.
AWS S3:
br restore full \
  --pd="RELEASE_NAME-pd.GVC_NAME.cpln.local:2379" \
  --storage="s3://BUCKET_NAME/PREFIX/tidb-TIMESTAMP" \
  --s3.region="BUCKET_REGION"
GCS:
br restore full \
  --pd="RELEASE_NAME-pd.GVC_NAME.cpln.local:2379" \
  --storage="gcs://BUCKET_NAME/PREFIX/tidb-TIMESTAMP"
The br binary version must match your TiDB cluster version. Download it from the TiDB Community Toolkit.

External References

  • TiDB Documentation — Official TiDB documentation.
  • TiDB Community Toolkit — Download br and other TiDB ecosystem tools.
  • Backup Image Source — Source code for the TiDB backup container image.
  • TiDB Template — View the source files, default values, and chart definition.