Overview
TiDB is a distributed, MySQL-compatible database designed for horizontal scalability and high availability. It separates compute from storage across three components: a SQL processing layer (TiDB Server), a distributed key-value store (TiKV), and a placement driver (PD) that manages cluster metadata and scheduling. This template deploys a production-ready TiDB cluster across multiple Control Plane locations using PingCAP’s official images.

What Gets Created
- GVC — A new GVC spanning the configured locations.
- Stateful Workload — PD (`RELEASE_NAME-pd`): placement driver cluster distributed across locations according to `pdReplicas`. Uses `replicaDirect` addressing so each PD node is individually reachable.
- Stateful Workload — TiKV (`RELEASE_NAME-tikv`): distributed storage nodes. Replica count per location is controlled by `gvc.locations[].replicas`.
- Standard Workload — TiDB Server (`RELEASE_NAME-server`): MySQL-compatible SQL layer on port `4000`. Per-location replica count follows `gvc.locations[].replicas`.
- Standard Workload (optional) — DB Init (`RELEASE_NAME-tidb-db-init`): a one-time initialization job that sets the root password and creates the application database and user. Disable after first deployment.
- Volume Set — PD storage (`RELEASE_NAME-tidb-pd-vs`): 10 GiB fixed, ext4, general-purpose-ssd, with 7-day snapshot retention.
- Volume Set — TiKV storage (`RELEASE_NAME-tidb-tikv-vs`): configurable capacity with optional autoscaling, ext4, general-purpose-ssd, with 7-day snapshot retention.
- Secrets — Opaque secrets containing startup scripts for PD, TiKV, and TiDB Server, plus an optional dictionary secret with database credentials.
- Identity & Policy — A shared identity bound to all workloads with `reveal` access to all secrets.
Installation
To install, follow the instructions for your preferred method:

- UI — Browse, install, and manage templates visually.
- CLI — Manage templates from your terminal.
- Terraform — Declare templates in your Terraform configurations.
- Pulumi — Declare templates in your Pulumi programs.
Configuration
The sections below describe the settings available in this template’s default `values.yaml`.
Locations
- `gvc.name` — Name of the GVC to create. Must be unique within your organization if deploying multiple instances.
- `gvc.locations` — List of Control Plane locations. At least 3 locations are required.
- `locations[].replicas` — Number of TiKV and TiDB Server replicas per location. Set to `0` to suspend a component in that location without removing it from the configuration.
- `gvc.pdReplicas` — Total number of PD replicas across all locations. Must be `3`, `5`, or `7`. When set to `3`, exactly 3 locations are required. Replicas are distributed evenly across locations.
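As a sketch, the location settings described above might be expressed in `values.yaml` like this. The GVC name, location names, and the exact nesting of the `replicas` key are illustrative assumptions, not values from this page:

```yaml
gvc:
  name: tidb-demo            # assumption: any org-unique name
  pdReplicas: 3              # must be 3, 5, or 7
  locations:                 # at least 3 locations required
    - name: aws-us-east-2    # location names are illustrative
      replicas: 1
    - name: aws-us-west-2
      replicas: 1
    - name: aws-eu-central-1
      replicas: 1
```

With `pdReplicas: 3` and three locations, the PD replicas are spread one per location.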
Database Initialization
- `autoCreateDatabase.enabled` — Creates a dictionary secret with database credentials used by the TiDB Server and init workload.
- `autoCreateDatabase.deployInitWorkload` — Deploys a one-time init job that sets the root password and creates the application database and user. The job checks whether the database already exists before running; if it does, it exits immediately.
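In `values.yaml` terms, a first deployment might enable both flags (a sketch; only the two flag names are taken from this page):

```yaml
autoCreateDatabase:
  enabled: true              # keep the credentials secret in place
  deployInitWorkload: true   # set to false after the first successful init
```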
After the cluster is initialized, set `autoCreateDatabase.deployInitWorkload` to `false` and upgrade the template to remove the init workload and free up resources.

Credentials
- `autoCreateDatabase.database.rootPassword` — MySQL root password. Change before deploying to production.
- `autoCreateDatabase.database.user` — Application database username.
- `autoCreateDatabase.database.password` — Application database password.
- `autoCreateDatabase.database.db` — Name of the application database to create.
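The init job is described as setting the root password and creating the application database and user; in MySQL-compatible SQL that amounts to statements along these lines. This is a sketch, not the template’s actual script, and `app_db`, `app_user`, and the password literals are placeholders for the `autoCreateDatabase.database.*` values:

```sql
-- Placeholders map to autoCreateDatabase.database.* values.
ALTER USER 'root'@'%' IDENTIFIED BY 'rootPassword-value';
CREATE DATABASE IF NOT EXISTS app_db;
CREATE USER IF NOT EXISTS 'app_user'@'%' IDENTIFIED BY 'password-value';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
```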
These values are only applied on first initialization. If the database already exists, the init workload exits without making changes. To modify credentials on an existing cluster, use MySQL’s native commands (e.g. `ALTER USER`, `ALTER DATABASE`).

Resources
- `resources.pd.cpu` / `resources.pd.memory` — CPU and memory per PD replica.
- `resources.server.cpu` / `resources.server.memory` — CPU and memory per TiDB Server replica.
- `resources.tikv.cpu` / `resources.tikv.memory` — CPU and memory per TiKV replica.
Storage
TiKV storage (configurable):

- `volumeset.tikv.capacity` — Initial volume size in GiB (minimum 10).
- `volumeset.tikv.autoscaling.enabled` — Automatically expand volumes as they fill. When enabled:
  - `maxCapacity` — Maximum volume size in GiB.
  - `minFreePercentage` — Trigger a scale-up when free space drops below this percentage.
  - `scalingFactor` — Multiply current capacity by this factor when scaling up.
PD storage:

- `volumeset.pd.capacity` — Initial volume size in GiB for PD metadata (minimum 10).
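To make the TiKV autoscaling knobs concrete, here is a sketch of the scale-up rule as described above (trigger when free space falls below `minFreePercentage`, multiply capacity by `scalingFactor`, cap at `maxCapacity`). All numeric values are illustrative, and the evaluation cadence is not specified on this page:

```shell
# Illustrative values only; integer GiB arithmetic for readability.
capacity=50          # current volumeset.tikv.capacity (GiB)
max_capacity=500     # volumeset.tikv.autoscaling.maxCapacity
min_free_pct=20      # volumeset.tikv.autoscaling.minFreePercentage
scaling_factor=2     # volumeset.tikv.autoscaling.scalingFactor
used=42              # GiB currently in use

# Free space is 8 of 50 GiB here, i.e. 16%, which is below the 20% trigger.
free_pct=$(( (capacity - used) * 100 / capacity ))
if [ "$free_pct" -lt "$min_free_pct" ]; then
  next=$(( capacity * scaling_factor ))
  if [ "$next" -gt "$max_capacity" ]; then next=$max_capacity; fi
  echo "scale up: ${capacity}GiB -> ${next}GiB"
fi
```

With these numbers the volume would grow from 50 GiB to 100 GiB; growth stops once `maxCapacity` is reached.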
Access
Internal access — configured per component (`server`, `tikv`, `pd`):

| Type | Description |
|---|---|
| `same-gvc` | Allow access from all workloads in the same GVC (recommended) |
| `same-org` | Allow access from all workloads in the same organization |
| `workload-list` | Allow access only from specific workloads |
- `exposeServer` — Set to `true` to allow external connections to the TiDB MySQL port (4000) from any IP (`0.0.0.0/0`).
- `external_access.server_outboundAllowCIDR` / `tikv_outboundAllowCIDR` / `pd_outboundAllowCIDR` — Outbound CIDR allowlists for each component, for reaching external services.
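As a sketch, the external-access keys named above might appear in `values.yaml` like this. The key names come from this page, but the exact nesting and default values are assumptions:

```yaml
exposeServer: false                      # keep the SQL port internal (recommended)
external_access:
  server_outboundAllowCIDR: "0.0.0.0/0"  # tighten these CIDRs in production
  tikv_outboundAllowCIDR: "0.0.0.0/0"
  pd_outboundAllowCIDR: "0.0.0.0/0"
```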
Connecting to TiDB
TiDB Server is MySQL-compatible. Connect using any MySQL client from within the same GVC, using the `user` / `password` credentials from `autoCreateDatabase.database`, or connect as `root` with `rootPassword`.
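A minimal connection sketch with the standard `mysql` client, run from a workload inside the same GVC. The hostname form, user, and database name are placeholder assumptions; substitute your workload’s actual internal endpoint and the values from `autoCreateDatabase.database`:

```shell
# All values below are placeholders — substitute your own.
HOST="tidb-server.my-gvc.cpln.local"   # assumed internal endpoint form
PORT=4000                              # MySQL-compatible SQL port
USER="app_user"                        # autoCreateDatabase.database.user
DB="app_db"                            # autoCreateDatabase.database.db

# Print the client command for inspection; run it interactively to connect
# (the client will prompt for the password).
echo "mysql --host=${HOST} --port=${PORT} --user=${USER} --password ${DB}"
```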
Ports
| Workload | Port | Protocol | Description |
|---|---|---|---|
| TiDB Server | 4000 | TCP | MySQL-compatible SQL port |
| TiDB Server | 10080 | HTTP | TiDB status and metrics |
| PD | 2379 | TCP | PD client port |
| PD | 2380 | TCP | PD peer (Raft) port |
| TiKV | 20160 | TCP | TiKV data port |
| TiKV | 20180 | TCP | TiKV status port |