Overview

TiDB is a distributed, MySQL-compatible database designed for horizontal scalability and high availability. It separates compute from storage across three components: a SQL processing layer (TiDB Server), a distributed key-value store (TiKV), and a placement driver (PD) that manages cluster metadata and scheduling. This template deploys a production-ready TiDB cluster across multiple Control Plane locations using PingCAP’s official images.

What Gets Created

  • GVC — A new GVC spanning the configured locations.
  • Stateful Workload — PD (RELEASE_NAME-pd): placement driver cluster distributed across locations according to pdReplicas. Uses replicaDirect addressing so each PD node is individually reachable.
  • Stateful Workload — TiKV (RELEASE_NAME-tikv): distributed storage nodes. Replica count per location is controlled by gvc.locations[].replicas.
  • Standard Workload — TiDB Server (RELEASE_NAME-server): MySQL-compatible SQL layer on port 4000. Per-location replica count follows gvc.locations[].replicas.
  • Standard Workload (optional) — DB Init (RELEASE_NAME-tidb-db-init): a one-time initialization job that sets the root password and creates the application database and user. Disable after first deployment.
  • Volume Set — PD storage (RELEASE_NAME-tidb-pd-vs): 10 GiB fixed, ext4, general-purpose-ssd, with 7-day snapshot retention.
  • Volume Set — TiKV storage (RELEASE_NAME-tidb-tikv-vs): configurable capacity with optional autoscaling, ext4, general-purpose-ssd, with 7-day snapshot retention.
  • Secrets — Opaque secrets containing startup scripts for PD, TiKV, and TiDB Server, plus an optional dictionary secret with database credentials.
  • Identity & Policy — A shared identity bound to all workloads with reveal access to all secrets.

Installation

To install, follow the instructions for your preferred installation method.

Configuration

The default values.yaml for this template:
gvc:
  name: tidb-gvc
  locations: # Replica count applies to TiKV and TiDB Server workloads; PD uses pdReplicas
    - name: aws-us-east-2
      replicas: 1
    - name: aws-us-west-2
      replicas: 1
    - name: aws-us-east-1
      replicas: 1
  pdReplicas: 3 # options: 3, 5, 7

images:
  server: pingcap/tidb:v8.5.3
  tikv: pingcap/tikv:v8.5.3
  pd: pingcap/pd:v8.5.3

resources:
  pd:
    cpu: 2
    memory: 4Gi
  server:
    cpu: 2
    memory: 2Gi
  tikv:
    cpu: 2
    memory: 4Gi

autoCreateDatabase:
  enabled: true
  deployInitWorkload: true # Set to false after initialization to remove the init workload
  database:
    rootPassword: myrootpw
    user: myuser
    password: mypw
    db: mydb

volumeset:
  tikv:
    capacity: 10 # initial capacity in GiB (minimum is 10)
    autoscaling:
      enabled: false
      maxCapacity: 100 # Maximum capacity in GiB
      minFreePercentage: 10
      scalingFactor: 1.2
  pd:
    capacity: 10 # initial capacity in GiB (minimum is 10)

exposeServer: false # Set to true to expose the TiDB MySQL port publicly

external_access:
  server_outboundAllowCIDR: []
  tikv_outboundAllowCIDR: []
  pd_outboundAllowCIDR: []

internal_access:
  server:
    type: same-gvc # options: same-gvc, same-org, workload-list
    workloads:
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME
  tikv:
    type: same-gvc
    workloads:
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME
  pd:
    type: same-gvc
    workloads:
      #- //gvc/GVC_NAME/workload/WORKLOAD_NAME

Locations

  • gvc.name — Name of the GVC to create. Must be unique within your organization if deploying multiple instances.
  • gvc.locations — List of Control Plane locations. At least 3 locations are required.
  • locations[].replicas — Number of TiKV and TiDB Server replicas per location. Set to 0 to suspend a component in that location without removing it from the configuration.
  • gvc.pdReplicas — Total number of PD replicas across all locations. Must be 3, 5, or 7. When set to 3, exactly 3 locations are required. Replicas are distributed evenly across locations.
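As a sketch, a five-voter PD quorum spread over five locations could be configured as follows (the two EU location names are illustrative placeholders; the constraint from the source is that pdReplicas must be 3, 5, or 7):

```yaml
gvc:
  name: tidb-gvc
  locations:
    - name: aws-us-east-1
      replicas: 1
    - name: aws-us-east-2
      replicas: 1
    - name: aws-us-west-2
      replicas: 1
    - name: aws-eu-west-1      # illustrative location name
      replicas: 1
    - name: aws-eu-central-1   # illustrative location name
      replicas: 1
  pdReplicas: 5                # distributed evenly: one PD replica per location
```

With an odd voter count, the cluster tolerates the loss of two PD replicas (or two locations, in this layout) while keeping a Raft majority.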

Database Initialization

  • autoCreateDatabase.enabled — Creates a dictionary secret with database credentials used by the TiDB Server and init workload.
  • autoCreateDatabase.deployInitWorkload — Deploys a one-time init job that sets the root password and creates the application database and user. The job checks whether the database already exists before running — if it does, it exits immediately.
After the cluster is initialized, set autoCreateDatabase.deployInitWorkload to false and upgrade the template to remove the init workload and free up resources.
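For example, after the first successful deployment, the upgrade values would look like this (keep enabled set to true so the credentials secret used by TiDB Server remains in place):

```yaml
autoCreateDatabase:
  enabled: true             # keep the credentials secret for TiDB Server
  deployInitWorkload: false # remove the one-time init workload
```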

Credentials

  • autoCreateDatabase.database.rootPassword — MySQL root password. Change before deploying to production.
  • autoCreateDatabase.database.user — Application database username.
  • autoCreateDatabase.database.password — Application database password.
  • autoCreateDatabase.database.db — Name of the application database to create.
These values are applied only on first initialization. If the database already exists, the init workload exits without making changes. To change credentials on an existing cluster, use MySQL's native statements instead (e.g. ALTER USER 'myuser'@'%' IDENTIFIED BY 'new-password';).

Resources

  • resources.pd.cpu / resources.pd.memory — CPU and memory per PD replica.
  • resources.server.cpu / resources.server.memory — CPU and memory per TiDB Server replica.
  • resources.tikv.cpu / resources.tikv.memory — CPU and memory per TiKV replica.

Storage

TiKV storage (configurable):
  • volumeset.tikv.capacity — Initial volume size in GiB (minimum 10).
  • volumeset.tikv.autoscaling.enabled — Automatically expand volumes as they fill. When enabled:
    • maxCapacity — Maximum volume size in GiB.
    • minFreePercentage — Trigger a scale-up when free space drops below this percentage.
    • scalingFactor — Multiply current capacity by this factor when scaling up.
PD storage (fixed):
  • volumeset.pd.capacity — Initial volume size in GiB for PD metadata (minimum 10).
Both volume sets retain snapshots for 7 days and create a final snapshot on deletion.
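As a worked example, the fragment below enables TiKV volume autoscaling. Assuming the scaling factor is applied to the current capacity on each trigger, a 10 GiB volume whose free space falls below 10% would grow to 10 × 1.2 = 12 GiB, then 12 × 1.2 = 14.4 GiB, and so on, never exceeding maxCapacity:

```yaml
volumeset:
  tikv:
    capacity: 10            # initial size in GiB (minimum 10)
    autoscaling:
      enabled: true
      maxCapacity: 100      # hard ceiling in GiB
      minFreePercentage: 10 # scale up when free space drops below 10%
      scalingFactor: 1.2    # grow capacity by 20% per scale-up
```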

Access

Internal access — configured per component (server, tikv, pd):
  • same-gvc — Allow access from all workloads in the same GVC (recommended)
  • same-org — Allow access from all workloads in the same organization
  • workload-list — Allow access only from the workloads listed under workloads
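For example, to restrict TiDB Server access to a single application workload (the GVC and workload names below are placeholders):

```yaml
internal_access:
  server:
    type: workload-list
    workloads:
      - //gvc/my-app-gvc/workload/my-api  # placeholder names
```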
External access:
  • exposeServer — Set to true to allow external connections to the TiDB MySQL port (4000) from any IP (0.0.0.0/0).
  • external_access.server_outboundAllowCIDR / tikv_outboundAllowCIDR / pd_outboundAllowCIDR — Outbound CIDR allowlists for each component, for reaching external services.

Connecting to TiDB

TiDB Server is MySQL-compatible, so any MySQL client can connect from within the same GVC:
RELEASE_NAME-server.GVC_NAME.cpln.local:4000
For example, using the mysql CLI with the default application credentials:
mysql -h RELEASE_NAME-server.GVC_NAME.cpln.local -P 4000 -u myuser -p mydb
Use the user / password credentials from autoCreateDatabase.database, or connect as root with rootPassword.

Ports

Workload      Port   Protocol  Description
TiDB Server   4000   TCP       MySQL-compatible SQL port
TiDB Server   10080  HTTP      TiDB status and metrics
PD            2379   TCP       PD client port
PD            2380   TCP       PD peer (Raft) port
TiKV          20160  TCP       TiKV data port
TiKV          20180  TCP       TiKV status port

External References