Overview
Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant publish-subscribe messaging. This template deploys a Kafka cluster in KRaft mode (no ZooKeeper required), using built-in Raft consensus for distributed coordination. Optional components include a Kafbat UI for cluster management, a REST Proxy for HTTP-based access, Kafka Connect for sink and source connectors, and Prometheus exporters for metrics.

What Gets Created
- Kafka Cluster Workload — A stateful Kafka broker cluster with configurable replicas (minimum 3). Each replica runs as a combined controller and broker. Optional sidecar containers for Kafka Exporter and JMX Exporter can be co-located on the same workload.
- Volume Set — Persistent storage for Kafka log directories with autoscaling, snapshot retention, and optional AWS KMS encryption.
- Secrets — Three opaque secrets: the controller configuration (KRaft quorum and SASL settings), the cluster initialization script (startup logic, listener setup, and log directory initialization), and the Kafka credentials (KRaft cluster ID, inter-broker password, controller password, and listener admin credentials). A JMX exporter configuration secret is also created when JMX exporter is enabled.
- Identity & Policy — An identity bound to the cluster workload with `reveal` access to the Kafka secrets.
- Kafbat UI Workload (optional) — A web-based Kafka management interface with its own identity, policy, and optional domain routing. Enabled when `kafbat_ui.enabled: true`.
- Kafka REST Proxy Workload (optional) — An HTTP API gateway for producing and consuming Kafka messages without a native client. Enabled when `kafka_rest_proxy.enabled: true`.
- Kafka Connect Workload (optional) — A worker cluster for running sink and source connectors. Configured via the `kafka_connectors` section (commented out by default).
- Kafka Client Workload (optional) — A lightweight client workload for testing and debugging cluster connectivity from within the GVC. Enabled when `kafka_client` is uncommented in the values file.
- Domain (optional) — Public listener routing for external Kafka access. Created when the public listener is configured, using either direct replica routing or multi-port routing.
This template does not create a GVC. You must deploy it into an existing GVC.
Prerequisites
Prerequisites are only required if you plan to use the public listener or Kafbat UI. Skip the relevant section if those features are not needed.

Public Listener (External Access)

To expose Kafka brokers to clients outside the GVC, you need:
- A registered domain (e.g., `kafka.example.com`).
- A Dedicated Load Balancer enabled on the GVC. This is required for external Kafka access and reduces cross-zone traffic costs when `kafka.multiZone` is enabled.
- DNS configured as described in the Configure Domain guide.
Kafbat UI
Kafbat UI requires a pre-created Control Plane secret containing its YAML configuration before the template is installed. The secret name must match the value of `kafbat_ui.configuration_secret` (default: `kafka-kafbat-ui-config`).
Create the secret with a `config.yaml` key containing your Kafbat UI configuration. A minimal example connecting to the Kafka cluster with SASL authentication:
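A sketch of such a `config.yaml`, assuming the cluster workload resolves as `my-release-cluster` and the admin user defined under `kafka.listeners.client.sasl.admin` (substitute your own names and credentials):

```yaml
kafka:
  clusters:
    - name: kafka
      bootstrapServers: my-release-cluster:9092
      properties:
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: PLAIN
        sasl.jaas.config: >-
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="admin" password="<admin-password>";
```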
Installation
To install, follow the instructions for your preferred method:

UI
Browse, install, and manage templates visually
CLI
Manage templates from your terminal
Terraform
Declare templates in your Terraform configurations
Pulumi
Declare templates in your Pulumi programs
Configuration
The default `values.yaml` for this template:
Kafka Cluster
- `kafka.name` — Suffix used to form the workload name (`{release}-{kafka.name}`).
- `kafka.replicas` — Number of broker replicas. Must be 3 or more. At most 5 replicas act as combined controller+broker; replicas beyond 5 are broker-only.
- `kafka.multiZone` — When `true`, distributes replicas across availability zones. Enable the Dedicated Load Balancer on the GVC to reduce cross-zone traffic costs.
- `kafka.resources` — CPU and memory bounds for each Kafka broker (`cpu`, `memory`, `minCpu`, `minMemory`).
- `kafka.overrideHeapOpts` — Override the default JVM heap settings. If unset, the heap is derived from the configured memory.
- `kafka.logDirs` — Comma-separated paths for Kafka log directories. Each path maps to a separate volume.
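An illustrative fragment of the `kafka` section (values are examples, not the template defaults; check the shipped `values.yaml` for the actual defaults):

```yaml
kafka:
  name: cluster          # workload becomes {release}-cluster
  replicas: 3            # minimum 3; replicas beyond 5 are broker-only
  multiZone: true        # spread replicas across availability zones
  resources:
    cpu: "2"
    memory: 4Gi
    minCpu: "1"
    minMemory: 2Gi
  logDirs: /var/lib/kafka/data   # comma-separate paths for multiple volumes
```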
Storage
- `kafka.volumes.logs.initialCapacity` — Initial volume size in GB per replica (minimum 10).
- `kafka.volumes.logs.performanceClass` — Storage class: `general-purpose-ssd` or `high-throughput-ssd` (minimum 1000 GB for high-throughput).
- `kafka.volumes.logs.fileSystemType` — Filesystem type: `ext4` or `xfs`.
- `kafka.volumes.logs.snapshots` — Snapshot configuration: `createFinalSnapshot` (taken on workload deletion), `retentionDuration` (e.g., `7d`), and optional `schedule` (cron, UTC).
- `kafka.volumes.logs.autoscaling` — Automatically expand the volume as it fills:
  - `maxCapacity` — Maximum volume size in GB.
  - `minFreePercentage` — Trigger scale-up when free space drops below this percentage.
  - `scalingFactor` — Multiply current capacity by this factor when scaling up.
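Putting those keys together, a sketch of a storage configuration (capacities and thresholds are illustrative):

```yaml
kafka:
  volumes:
    logs:
      initialCapacity: 20              # GB per replica, minimum 10
      performanceClass: general-purpose-ssd
      fileSystemType: ext4
      snapshots:
        createFinalSnapshot: true
        retentionDuration: 7d
      autoscaling:
        maxCapacity: 200               # never grow beyond 200 GB
        minFreePercentage: 20          # scale up below 20% free
        scalingFactor: 1.5             # grow by 50% each time
```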
Custom Encryption (AWS KMS)
To encrypt Kafka data volumes with a customer-managed key, uncomment `kafka.volumes.customEncryption`:
- `region` — AWS region where the KMS key is located (e.g., `aws-us-east-2`).
- `keyId` — The full ARN of the AWS KMS key.
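For example (the account ID and key ID in the ARN are placeholders):

```yaml
kafka:
  volumes:
    customEncryption:
      region: aws-us-east-2
      keyId: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
```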
Custom encryption can only be applied when the volume is first created. Existing volumes cannot be re-encrypted after creation.
Firewall
- `kafka.firewall.internal_inboundAllowType` — Controls which workloads can reach the Kafka cluster internally (`same-gvc` recommended, or `same-org`).
- `kafka.firewall.external_inboundAllowCIDR` — CIDR ranges allowed to reach Kafka from the internet. Commented out by default.
- `kafka.firewall.inboundAllowWorkload` — Explicit workload links allowed to connect (use with `same-gvc` or `workload-list`).
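A sketch of the firewall block (the CIDR and workload link are placeholders; the `//gvc/{gvc}/workload/{name}` link format is assumed here):

```yaml
kafka:
  firewall:
    internal_inboundAllowType: same-gvc
    # external_inboundAllowCIDR: ["203.0.113.0/24"]
    # inboundAllowWorkload: ["//gvc/my-gvc/workload/my-app"]
```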
Listeners
Client Listener (Internal)
The client listener is the primary access point for producers and consumers within the GVC.
- `kafka.listeners.client.protocol` — Security protocol: `PLAINTEXT` or `SASL_PLAINTEXT`.
- `kafka.listeners.client.containerPort` — Port for client connections (default `9092`). Automatically overridden to the range `3000–3004` when the public listener with direct replica routing is enabled.
- `kafka.listeners.client.sasl.admin` — Admin username and password. The admin user is also added as an ACL superuser. Change the default password before deploying to production.
- `kafka.listeners.client.sasl.users` / `.passwords` — Comma-separated lists of additional usernames and their passwords for client connections.
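For instance, a client listener with an admin user plus two application users (the exact shape of `sasl.admin` and all credentials below are illustrative; replace them with your own):

```yaml
kafka:
  listeners:
    client:
      protocol: SASL_PLAINTEXT
      containerPort: 9092
      sasl:
        admin:
          username: admin
          password: change-me        # rotate before production
        users: app1,app2             # comma-separated, paired by position
        passwords: app1-pass,app2-pass
```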
Public Listener (Optional, External Access)
Uncomment `kafka.listeners.public` to expose Kafka brokers to clients outside the GVC. Two routing approaches are supported:
Direct Replica Routing (Recommended)
Generates a subdomain per replica, enabling zone-aware routing and minimizing cross-zone traffic. Requires a dedicated load balancer on the GVC.
For external access, Kafka clients should use `SASL_SSL` as the security protocol, since TLS is enforced at the load balancer level.

ACL
- `kafka.acl.superUsers` — Semicolon-separated list of Kafka superusers (e.g., `User:admin;User:connectors`). Superusers bypass ACL checks.
- `kafka.acl.allowEveryoneIfNoAclFound` — When `true`, allows all operations on topics or groups that have no ACL defined. Set to `false` for strict access control.
Secrets
These values are stored as a Control Plane secret and injected into the cluster at startup. Change all three before deploying to production.
- `kafka.secrets.kraft_cluster_id` — A unique identifier for the KRaft cluster. Generate with `kafka-storage.sh random-uuid` or any base64-encoded random string.
- `kafka.secrets.inter_broker_password` — Password used for inter-broker communication (SASL).
- `kafka.secrets.controller_password` — Password used for controller-to-broker communication (SASL).
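One way to generate all three values before the first deploy, using `openssl` as a stand-in when `kafka-storage.sh` is not at hand (variable names are illustrative):

```shell
# KRaft cluster ID: kafka-storage.sh random-uuid is canonical, but per the
# docs any base64-encoded random string works; strip padding and URL-unsafe chars.
KRAFT_CLUSTER_ID=$(openssl rand -base64 16 | tr -d '=+/')

# Strong random passwords for inter-broker and controller SASL auth.
INTER_BROKER_PASSWORD=$(openssl rand -base64 24)
CONTROLLER_PASSWORD=$(openssl rand -base64 24)

echo "kraft_cluster_id:      $KRAFT_CLUSTER_ID"
echo "inter_broker_password: $INTER_BROKER_PASSWORD"
echo "controller_password:   $CONTROLLER_PASSWORD"
```

Paste the generated values into `kafka.secrets` in your values file.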
Extra Configurations
- `kafka.extra_configurations.default.replication.factor` — Default replication factor for new topics. Cannot exceed the number of replicas.
- `kafka.extra_configurations.auto.create.topics.enable` — When `true`, topics are automatically created when first produced to or consumed from.
- `kafka.extra_configurations.log.retention.hours` — How long Kafka retains log segments before deletion (default `168` hours = 7 days).
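These map onto standard Kafka broker properties, so a sketch of the section might look like (values illustrative):

```yaml
kafka:
  extra_configurations:
    default.replication.factor: 3      # must not exceed kafka.replicas
    auto.create.topics.enable: "false" # require explicit topic creation
    log.retention.hours: 168           # 7 days
```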
Exporters
Both exporters run as sidecar containers on the Kafka cluster workload.

Kafka Exporter (`kafka_exporter`) — Exposes consumer group lag and topic metrics in Prometheus format.
- `kafka_exporter.cpu` / `kafka_exporter.memory` — Resources for the exporter sidecar.
- `kafka_exporter.listener` — Listener name to connect to (default: `client`).
- `kafka_exporter.dropMetrics` — List of metric name patterns to exclude (e.g., `["kafka_consumergroup.*"]`).

JMX Exporter (`jmx_exporter`) — Exposes Kafka JMX metrics in Prometheus format.
- `jmx_exporter.kafkaJmxPort` — JMX port on the Kafka broker (default `5557`).
- `jmx_exporter.exporterPort` — Port where the exporter serves metrics (default `5556`).
- `jmx_exporter.cpu` / `jmx_exporter.memory` — Resources for the JMX exporter sidecar.
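A sketch of both exporter sections together (resource values are illustrative; ports match the documented defaults):

```yaml
kafka_exporter:
  cpu: 100m
  memory: 128Mi
  listener: client
  dropMetrics:
    - kafka_consumergroup.*

jmx_exporter:
  kafkaJmxPort: 5557
  exporterPort: 5556
  cpu: 100m
  memory: 256Mi
```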
Kafbat UI (Optional)
Kafbat UI is a web-based interface for browsing topics, viewing consumer groups, and managing the cluster.
- `kafbat_ui.enabled` — Enable or disable the Kafbat UI workload (default: `true`).
- `kafbat_ui.configuration_secret` — Name of the pre-created Control Plane secret containing the Kafbat configuration YAML (see Prerequisites).
- `kafbat_ui.domain` — Optional custom domain for the Kafbat UI. Requires DNS configuration pointing to the GVC's load balancer.
- `kafbat_ui.replicas` — Number of Kafbat UI replicas (default: `1`).
- `kafbat_ui.resources` — CPU and memory for the UI workload.
- `kafbat_ui.firewall` — Firewall rules for the UI workload. By default, external access is open to `0.0.0.0/0`.
Kafka REST Proxy (Optional)
The REST Proxy provides an HTTP API for producing, consuming, and managing Kafka resources without a native client.
- `kafka_rest_proxy.enabled` — Enable or disable the REST Proxy workload (default: `true`).
- `kafka_rest_proxy.domain` — Optional custom domain for the REST Proxy.
- `kafka_rest_proxy.properties.bootstrap.servers` — Kafka cluster bootstrap address (update to match your release name and cluster name).
- `kafka_rest_proxy.jaas_conf` — JAAS configuration for the Kafka client and REST Proxy HTTP authentication.
- `kafka_rest_proxy.password_properties` — Users and passwords for REST Proxy BASIC authentication.
Update `kafka_rest_proxy.properties.bootstrap.servers` to match your release name and cluster workload name before deploying (e.g., `SASL_PLAINTEXT://my-release-cluster:9092`).

Kafka Connect (Optional)
Kafka Connect workers for running sink and source connectors are configured via the `kafka_connectors` list (commented out by default). Each entry defines a Connect worker with connector plugins and their configurations.
Supported plugin artifact types: `jar`, `zip`, `tgz`. Plugins are downloaded and extracted into the `plugins_folder` on startup.
See the template source for full connector configuration examples including MirrorMaker 2, Camel S3 Sink, ClickHouse Sink, and Snowflake Sink.
Kafka Client (Optional)
The Kafka client workload provides a persistent container running in the GVC that you can connect to via the Control Plane UI or CLI for testing and debugging. To enable it, uncomment the `kafka_client` section in your values file. Once deployed, connect via the workload terminal and use the Kafka CLI tools:
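For example, from the client workload's terminal (a sketch: `my-release-cluster` stands for your actual cluster workload name, and `/tmp/client.properties` is an assumed path holding the SASL client settings from `kafka.listeners.client.sasl`):

```sh
# List topics on the cluster
kafka-topics.sh --bootstrap-server my-release-cluster:9092 \
  --command-config /tmp/client.properties --list

# Produce a test message, then read it back
kafka-console-producer.sh --bootstrap-server my-release-cluster:9092 \
  --producer.config /tmp/client.properties --topic smoke-test
kafka-console-consumer.sh --bootstrap-server my-release-cluster:9092 \
  --consumer.config /tmp/client.properties --topic smoke-test --from-beginning
```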
Connecting to Kafka
Internal (same GVC)

Connect using the cluster workload name as the bootstrap server:

For external clients, use `SASL_SSL` as the security protocol, since TLS is enforced at the load balancer.
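A sketch of a client properties file for the internal listener, assuming `SASL_PLAINTEXT` with the `PLAIN` mechanism and the admin user from `kafka.listeners.client.sasl` (substitute your own credentials):

```properties
# client.properties — settings for the internal client listener
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" password="<admin-password>";
```

Pass it to the Kafka CLI tools via `--command-config` (admin tools) or `--producer.config` / `--consumer.config`.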