Overview
TiDB is a distributed, MySQL-compatible database designed for horizontal scalability and high availability. It separates compute from storage across three components: a SQL processing layer (TiDB Server), a distributed key-value store (TiKV), and a placement driver (PD) that manages cluster metadata and scheduling. This template deploys a production-ready TiDB cluster across multiple Control Plane locations using PingCAP's official images.

What Gets Created
- GVC — A new GVC spanning the configured locations.
- Stateful PD Workload (`RELEASE_NAME-pd`) — Placement driver cluster distributed across locations according to `pdReplicas`. Uses `replicaDirect` addressing so each PD node is individually reachable.
- Stateful TiKV Workload (`RELEASE_NAME-tikv`) — Distributed storage nodes. Replica count per location is controlled by `gvc.locations[].replicas`.
- Standard TiDB Server Workload (`RELEASE_NAME-server`) — MySQL-compatible SQL layer on port 4000. Per-location replica count follows `gvc.locations[].replicas`.
- DB Init Workload (optional) (`RELEASE_NAME-tidb-db-init`) — A one-time initialization job that sets the root password and creates the application database and user. Disable after the first deployment.
- Backup Cron Workload (optional) — A scheduled backup job that uses TiDB's `br` tool to write a full cluster snapshot to AWS S3 or GCS.
- Volume Set — PD storage (`RELEASE_NAME-tidb-pd-vs`): 10 GiB fixed, ext4, general-purpose-ssd, with 7-day snapshot retention.
- Volume Set — TiKV storage (`RELEASE_NAME-tidb-tikv-vs`): configurable capacity with optional autoscaling, ext4, general-purpose-ssd, with 7-day snapshot retention.
- Secrets — Opaque secrets containing startup scripts for PD, TiKV, and TiDB Server, plus an optional dictionary secret with database credentials.
- Identity & Policy — A shared identity bound to all workloads with `reveal` access to all secrets, and cloud storage access when backup is enabled.
Installation
This template has no external prerequisites unless backup is enabled. To install, follow the instructions for your preferred method:

- UI — Browse, install, and manage templates visually
- CLI — Manage templates from your terminal
- Terraform — Declare templates in your Terraform configurations
- Pulumi — Declare templates in your Pulumi programs
Configuration
The default `values.yaml` for this template is described in the sections below.
Locations
- `gvc.name` — Name of the GVC to create. Must be unique within your organization if deploying multiple instances.
- `gvc.locations` — List of Control Plane locations. At least 3 locations are required unless `devMode` is enabled.
- `locations[].replicas` — Number of TiKV and TiDB Server replicas per location. Set to `0` to suspend a component in that location without removing it from the configuration.
- `gvc.pdReplicas` — Total number of PD replicas across all locations. Must be `3`, `5`, or `7`. When set to `3`, exactly 3 locations are required. Replicas are distributed evenly across locations.
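As a sketch of how these keys fit together, a three-location layout might look like the following. This assumes each location entry is a name plus replica count; the location names are illustrative only.

```yaml
gvc:
  name: tidb                  # must be unique within your org
  pdReplicas: 3               # total PD replicas: 3, 5, or 7
  locations:
    - name: aws-us-west-2     # illustrative location names
      replicas: 1             # TiKV and TiDB Server replicas in this location
    - name: aws-us-east-2
      replicas: 1
    - name: aws-eu-central-1
      replicas: 1
```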
Development Mode
Set `devMode: true` to bypass the 3-location requirement and deploy with 1 or 2 locations for development and testing purposes.

Even in dev mode, PD still requires 3 replicas (`pdReplicas: 3`) and TiKV still needs at least 3 total instances across all locations. Configure replicas per location accordingly:

- 1 location — all 3 TiKV instances in a single location, as in the sketch below:
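A minimal sketch of that configuration, assuming the same value layout as above (the location and GVC names are illustrative):

```yaml
devMode: true
gvc:
  name: tidb-dev              # illustrative
  pdReplicas: 3               # PD still requires 3 replicas in dev mode
  locations:
    - name: aws-us-west-2     # illustrative single dev location
      replicas: 3             # all 3 TiKV (and TiDB Server) replicas run here
```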
Database Initialization
- `autoCreateDatabase.enabled` — Creates a dictionary secret with database credentials used by the TiDB Server and init workload.
- `autoCreateDatabase.deployInitWorkload` — Deploys a one-time init job that sets the root password and creates the application database and user. The job checks whether the database already exists before running; if it does, it exits immediately.
After the cluster is initialized, set `autoCreateDatabase.deployInitWorkload` to `false` and upgrade the template to remove the init workload and free up resources.

Credentials
- `autoCreateDatabase.database.rootPassword` — MySQL root password. Change before deploying to production.
- `autoCreateDatabase.database.user` — Application database username.
- `autoCreateDatabase.database.password` — Application database password.
- `autoCreateDatabase.database.db` — Name of the application database to create.
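Putting these keys together, an initialization block might look like the following sketch (all values are placeholders):

```yaml
autoCreateDatabase:
  enabled: true
  deployInitWorkload: true        # set to false after the first successful init
  database:
    rootPassword: change-me-root  # placeholder; use a strong password in production
    user: app_user                # placeholder application username
    password: change-me-app       # placeholder application password
    db: app_db                    # placeholder database name
```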
These values are only applied on first initialization. If the database already exists, the init workload exits without making changes. To modify credentials on an existing cluster, use MySQL's native commands (e.g. `ALTER USER`, `ALTER DATABASE`).

Resources
- `resources.pd.cpu` / `resources.pd.memory` — CPU and memory per PD replica.
- `resources.server.cpu` / `resources.server.memory` — CPU and memory per TiDB Server replica.
- `resources.tikv.cpu` / `resources.tikv.memory` — CPU and memory per TiKV replica.
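As an illustration only, a resources block might be shaped like this (the numbers are placeholders, not the template defaults):

```yaml
resources:
  pd:
    cpu: 500m        # placeholder values, not the template defaults
    memory: 1Gi
  server:
    cpu: 1000m
    memory: 2Gi
  tikv:
    cpu: 2000m
    memory: 4Gi
```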
Storage
TiKV storage (configurable):

- `volumeset.tikv.capacity` — Initial volume size in GiB (minimum 10).
- `volumeset.tikv.autoscaling.enabled` — Automatically expand volumes as they fill. When enabled:
  - `maxCapacity` — Maximum volume size in GiB.
  - `minFreePercentage` — Trigger a scale-up when free space drops below this percentage.
  - `scalingFactor` — Multiply current capacity by this factor when scaling up.

PD storage:

- `volumeset.pd.capacity` — Initial volume size in GiB for PD metadata (minimum 10).
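For example, a configuration with TiKV volume autoscaling enabled might look like this (capacities and factors are illustrative):

```yaml
volumeset:
  pd:
    capacity: 10              # GiB, minimum 10
  tikv:
    capacity: 50              # GiB, initial size (illustrative)
    autoscaling:
      enabled: true
      maxCapacity: 500        # GiB ceiling
      minFreePercentage: 20   # scale up when free space falls below 20%
      scalingFactor: 1.5      # grow capacity by 1.5x on each scale-up
```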
Access
Internal access — configured per component (server, tikv, pd):
| Type | Description |
|---|---|
| same-gvc | Allow access from all workloads in the same GVC (recommended) |
| same-org | Allow access from all workloads in the same organization |
| workload-list | Allow access only from specific workloads |
- `exposeServer` — Set to `true` to allow external connections to the TiDB MySQL port (4000) from any IP (0.0.0.0/0).
- `external_access.server_outboundAllowCIDR` / `tikv_outboundAllowCIDR` / `pd_outboundAllowCIDR` — Outbound CIDR allowlists for each component, for reaching external services. When `backup.enabled` is `true`, TiKV outbound access is automatically set to `0.0.0.0/0` so nodes can upload directly to cloud storage, regardless of `tikv_outboundAllowCIDR`.
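A hedged sketch of the external access keys follows; whether the CIDR allowlists are given as a YAML list is an assumption here, so check the template's default values for the exact format:

```yaml
exposeServer: false               # keep the MySQL port internal
external_access:
  server_outboundAllowCIDR: []    # no outbound access (illustrative format)
  tikv_outboundAllowCIDR: []      # overridden to 0.0.0.0/0 when backup.enabled is true
  pd_outboundAllowCIDR: []
```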
Connecting to TiDB
TiDB Server is MySQL-compatible. Connect using any MySQL client from within the same GVC, either with the `user` / `password` credentials from `autoCreateDatabase.database` or as `root` with `rootPassword`.
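For example, from a workload in the same GVC. The hostname below is a placeholder for the TiDB Server workload's internal endpoint, and the user and database names follow the placeholder credentials above:

```sh
# <tidb-server-endpoint> is a placeholder for the TiDB Server workload's internal endpoint.
mysql --comments -h <tidb-server-endpoint> -P 4000 -u app_user -p app_db
```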
Ports
| Workload | Port | Protocol | Description |
|---|---|---|---|
| TiDB Server | 4000 | TCP | MySQL-compatible SQL port |
| TiDB Server | 10080 | HTTP | TiDB status and metrics |
| PD | 2379 | TCP | PD client port |
| PD | 2380 | TCP | PD peer (Raft) port |
| TiKV | 20160 | TCP | TiKV data port |
| TiKV | 20180 | TCP | TiKV status port |
Backup
Backup is disabled by default. When enabled, a cron workload uses TiDB's `br` tool to take a full cluster snapshot on the configured schedule and upload it to AWS S3 or GCS.
- `backup.enabled` — Enable scheduled backups.
- `backup.schedule` — Cron expression for backup frequency (default: daily at 2am UTC).
- `backup.provider` — `aws` or `gcp`.
- `backup.location` — The Control Plane location where the backup job runs. Set to the location closest to your storage bucket to minimize cross-region transfer latency and costs.
- `backup.activeDeadlineSeconds` — Maximum time allowed per backup job in seconds (default: `14400` / 4 hours).
- `backup.resources.cpu` / `backup.resources.memory` — Resources for the backup cron container.
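For example, a daily backup to AWS S3 might be configured as in the sketch below. The bucket, region, account, policy, and resource values are placeholders; the provider sections that follow describe the setup steps behind them.

```yaml
backup:
  enabled: true
  schedule: "0 2 * * *"            # daily at 2am UTC (the documented default)
  provider: aws
  location: aws-us-west-2          # placeholder; pick the location nearest your bucket
  activeDeadlineSeconds: 14400     # 4-hour limit per backup job
  resources:
    cpu: 1000m                     # placeholder
    memory: 2Gi                    # placeholder
  aws:
    bucket: my-tidb-backups        # placeholder bucket name
    region: us-west-2              # placeholder region
    cloudAccountName: my-aws-account   # placeholder Cloud Account name
    policyName: tidb-backup-policy     # placeholder IAM policy name
    prefix: backups/tidb           # folder path inside the bucket
```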
When `backup.enabled` is `true`, the template automatically grants TiKV outbound access to 0.0.0.0/0 so nodes can upload data directly to cloud storage. This overrides `external_access.tikv_outboundAllowCIDR`.

AWS S3

Before enabling backup with `provider: aws`, complete the following in your AWS account:
1. Create an S3 bucket. Set `backup.aws.bucket` to its name and `backup.aws.region` to its region.
2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set `backup.aws.cloudAccountName` to its name.
3. Create an IAM policy with the JSON shown after this list, replacing `YOUR_BUCKET_NAME`.
4. Set `backup.aws.policyName` to the name of the policy created in step 3.
5. Set `backup.aws.prefix` to the folder path where backups will be stored.
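The exact policy JSON referenced in step 3 is defined in the template source; as a hedged example, a typical S3 read/write policy for `br` looks like the following (replace `YOUR_BUCKET_NAME`; the precise action list required by the template may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```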
GCS
Before enabling backup with `provider: gcp`, complete the following in your GCP account:

1. Create a GCS bucket. Set `backup.gcp.bucket` to its name.
2. If you do not have a Cloud Account set up, refer to the docs to Create a Cloud Account. Set `backup.gcp.cloudAccountName` to its name.
3. Add the Storage Admin role to the GCP service account associated with the Cloud Account.
4. Set `backup.gcp.prefix` to the folder path where backups will be stored.
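The corresponding GCS configuration might look like this sketch (bucket, account, and prefix values are placeholders):

```yaml
backup:
  enabled: true
  provider: gcp
  gcp:
    bucket: my-tidb-backups            # placeholder bucket name
    cloudAccountName: my-gcp-account   # placeholder Cloud Account name
    prefix: backups/tidb               # folder path inside the bucket
```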
Restoring a Backup
Backups are stored at `BUCKET/PREFIX/tidb-TIMESTAMP/`. To restore, run `br restore full` from a machine with network access to the PD endpoint.
AWS S3:
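A hedged sketch of the restore command; the PD endpoint, bucket, prefix, timestamp, and region are placeholders, and AWS credentials are assumed to come from the standard environment variables:

```sh
# Placeholders: <pd-endpoint>, BUCKET, PREFIX, TIMESTAMP, and the region.
# AWS credentials are read from the standard AWS environment variables.
br restore full \
  --pd "<pd-endpoint>:2379" \
  --storage "s3://BUCKET/PREFIX/tidb-TIMESTAMP" \
  --s3.region "us-west-2"
```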
The `br` binary version must match your TiDB cluster version. Download it from the TiDB Community Toolkit.

External References
- TiDB Documentation — Official TiDB documentation
- TiDB Community Toolkit — Download `br` and other TiDB ecosystem tools
- Backup Image Source — Source code for the TiDB backup container image
- TiDB Template — View the source files, default values, and chart definition