
Overview

Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant publish-subscribe messaging. This template deploys a Kafka cluster in KRaft mode (no ZooKeeper required), using built-in Raft consensus for distributed coordination. Optional components include a Kafbat UI for cluster management, a REST Proxy for HTTP-based access, Kafka Connect for sink and source connectors, and Prometheus exporters for metrics.

What Gets Created

  • Kafka Cluster Workload — A stateful Kafka broker cluster with configurable replicas (minimum 3). Each replica runs as a combined controller and broker. Optional sidecar containers for Kafka Exporter and JMX Exporter can be co-located on the same workload.
  • Volume Set — Persistent storage for Kafka log directories with autoscaling, snapshot retention, and optional AWS KMS encryption.
  • Secrets — Three opaque secrets: the controller configuration (KRaft quorum and SASL settings), the cluster initialization script (startup logic, listener setup, and log directory initialization), and the Kafka credentials (KRaft cluster ID, inter-broker password, controller password, and listener admin credentials). A JMX exporter configuration secret is also created when JMX exporter is enabled.
  • Identity & Policy — An identity bound to the cluster workload with reveal access to the Kafka secrets.
  • Kafbat UI Workload (optional) — A web-based Kafka management interface with its own identity, policy, and optional domain routing. Enabled when kafbat_ui.enabled: true.
  • Kafka REST Proxy Workload (optional) — An HTTP API gateway for producing and consuming Kafka messages without a native client. Enabled when kafka_rest_proxy.enabled: true.
  • Kafka Connect Workload (optional) — A worker cluster for running sink and source connectors. Configured via the kafka_connectors section (commented out by default).
  • Kafka Client Workload (optional) — A lightweight client workload for testing and debugging cluster connectivity from within the GVC. Enabled when kafka_client is uncommented in the values file.
  • Domain (optional) — Public listener routing for external Kafka access. Created when the public listener is configured, using either direct replica routing or multi-port routing.
This template does not create a GVC. You must deploy it into an existing GVC.

Prerequisites

Prerequisites apply only if you plan to use the public listener or Kafbat UI. If you are not using those features, skip the corresponding section.

Public Listener (External Access)

To expose Kafka brokers to clients outside the GVC, you need:
  1. A registered domain (e.g., kafka.example.com).
  2. A Dedicated Load Balancer enabled on the GVC. This is required for external Kafka access and reduces cross-zone traffic costs when kafka.multiZone is enabled.
  3. DNS configured as described in the Configure Domain guide.

Kafbat UI

Kafbat UI requires a pre-created Control Plane secret containing its YAML configuration before the template is installed. The secret name must match the value of kafbat_ui.configuration_secret (default: kafka-kafbat-ui-config). Create the secret with a config.yaml key containing your Kafbat UI configuration. A minimal example connecting to the Kafka cluster with SASL authentication:
kafka:
  clusters:
    - name: local
      bootstrapServers: RELEASE_NAME-cluster:9092
      properties:
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: PLAIN
        sasl.jaas.config: >-
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="admin"
          password="your-admin-password";

Installation

To install, follow the instructions for your preferred installation method.

Configuration

The default values.yaml for this template:
kafka:
  name: cluster
  image: apache/kafka:3.9.1
  suspend: false
  deletionProtection: false
  replicas: 3 # must not be 2
  minReadySeconds: 0
  debug: false
  multiZone: false # If true, it's recommended to enable multi-zone on the GVC's Dedicated Load Balancer to reduce cross-zone traffic
  logDirs: /opt/kafka/logs-0,/opt/kafka/logs-1
  env: [] # If you need to set environment variables, add them here
  volumes:
    logs:
      initialCapacity: 10 # In GB
      performanceClass: general-purpose-ssd # general-purpose-ssd / high-throughput-ssd (Min 1000GB)
      fileSystemType: ext4 # ext4 / xfs
      snapshots:
        createFinalSnapshot: true
        retentionDuration: 7d
        schedule: 0 0 * * * # UTC
      autoscaling:
        maxCapacity: 1000 # In GB
        minFreePercentage: 20
        scalingFactor: 1.2
    # customEncryption:
    #   enabled: false
    #   region: aws-us-east-2 # Replace with the appropriate region
    #   keyId: arn:aws:kms:us-east-2:1234567890:key/d411f35a-1d31-4515-9934-4f193e042d80 # Replace with your AWS KMS key ARN
  cpu: 1000m
  memory: 2000Mi
  minCpu: 250m
  minMemory: 2000Mi
  # overrideHeapOpts: "-Xmx1024m -Xms1024m"
  firewall:
    internal_inboundAllowType: "same-gvc" # Options: same-org / same-gvc (Recommended)
    # external_inboundAllowCIDR: 0.0.0.0/0
    # inboundAllowWorkload:
    #   - //gvc/main-kafka/workload/main-kafka-kafbat-ui
    #   - //gvc/client-gvc/workload/client
    # external_outboundAllowCIDR: "203.0.113.0/24,198.51.100.7/32" # Replace with your CIDR ranges
  listeners:
    client:
      protocol: SASL_PLAINTEXT
      name: CLIENT
      containerPort: 9092
      sasl:
        admin:
          username: admin
          password: "your-admin-password"
        users: "user"
        passwords: "your-user-password"
    # public:
    #   protocol: SASL_PLAINTEXT
    #   name: PUBLIC
    #   directReplicaRouting:
    #     enabled: true
    #     containerPort: 9095
    #     publicAddress: kafka.example.com # Dedicated Load Balancer must be enabled on the GVC
    #   sasl:
    #     users: "public-user"
    #     passwords: "your-public-user-password"
  acl:
    superUsers: "User:admin"
    allowEveryoneIfNoAclFound: false
  secrets:
    kraft_cluster_id: your-kraft-cluster-id # Example: bkdDtS1Rsf536si7BGM0JY
    inter_broker_password: your-inter-broker-password
    controller_password: your-controller-password
  extra_configurations:
    default.replication.factor: 3
    auto.create.topics.enable: true
    log.retention.hours: 168

kafka_exporter:
  name: exporter
  image: danielqsj/kafka-exporter:v1.9.0
  debug: false
  cpu: 50m
  memory: 128Mi
  listener: client
  env: []
  dropMetrics: []

jmx_exporter:
  name: jmx-exporter
  image: ghcr.io/controlplane-com/bitnami/jmx-exporter
  kafkaJmxPort: 5557
  exporterPort: 5556
  debug: false
  cpu: 250m
  memory: 256Mi
  minCpu: 80m
  minMemory: 125Mi
  listener: client
  dropMetrics: []

kafbat_ui:
  enabled: true
  deletionProtection: false
  name: kafbat-ui
  image: ghcr.io/kafbat/kafka-ui
  cpu: 300m
  memory: 1000Mi
  minCpu: 100m
  minMemory: 400Mi
  replicas: 1
  timeoutSeconds: 30
  configuration_secret: kafka-kafbat-ui-config # Pre-create this secret before installing
  # domain: kafbat-ui.example.com
  firewall:
    external_inboundAllowCIDR: "0.0.0.0/0"
    external_outboundAllowCIDR: "0.0.0.0/0"

kafka_rest_proxy:
  enabled: true
  deletionProtection: false
  name: rest-proxy
  image: confluentinc/cp-kafka-rest:latest
  cpu: 500m
  memory: 1000Mi
  capacityAI:
    enabled: true
    minCpu: 125m
    minMemory: 200Mi
  replicas: 1
  timeoutSeconds: 15
  # domain: kafka-rest.example.com
  firewall:
    external_inboundAllowCIDR: 0.0.0.0/0
    external_outboundAllowCIDR: "0.0.0.0/0"
  properties:
    bootstrap.servers: SASL_PLAINTEXT://kafka-dev-cluster:9092
    resource.extension: ALL
    api.v3.enable: true
    api.v2.enable: true
    client.sasl.mechanism: PLAIN
    api.compatibility.mode: BOTH
    listeners: http://0.0.0.0:8082
    authentication.realm: KafkaRest
    authentication.method: BASIC
    authentication.roles: user
    client.security.protocol: SASL_PLAINTEXT
  jaas_conf: |
    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="admin"
      password="your-admin-password";
    };
    KafkaRest {
      org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
      debug="true"
      file="/etc/kafka-rest/password.properties";
    };
  password_properties:
    user: your-user-password,user

kafka_client:
  name: client
  image: apache/kafka:3.9.1
  cpu: 500m
  memory: 1000Mi
  firewall:
    external_outboundAllowCIDR: "0.0.0.0/0"

# kafka_connectors:  # Uncomment and configure to deploy Kafka Connect workers
#   - name: cluster
#     image: apache/kafka:3.9.1
#     replicas: 1
#     ...  # See template source for full connector configuration and plugin examples

Kafka Cluster

  • kafka.name — Suffix used to form the workload name ({release}-{kafka.name}).
  • kafka.replicas — Number of broker replicas. Must be 3 or more (never exactly 2). At most 5 replicas act as combined controller+broker; replicas beyond 5 are broker-only.
  • kafka.multiZone — When true, distributes replicas across availability zones. Enable the Dedicated Load Balancer on the GVC to reduce cross-zone traffic costs.
  • kafka.resources — CPU and memory bounds for each Kafka broker (cpu, memory, minCpu, minMemory).
  • kafka.overrideHeapOpts — Override the default JVM heap settings. If unset, the heap is derived from the configured memory.
  • kafka.logDirs — Comma-separated paths for Kafka log directories. Each path maps to a separate volume.
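The replica rules above can be sketched as follows. This is an illustrative sketch only; the function name is hypothetical and the template enforces these rules itself:

```python
def controller_broker_count(replicas: int) -> int:
    """Validate kafka.replicas and return how many replicas run as
    combined controller+broker (the rest are broker-only)."""
    if replicas < 3:
        raise ValueError("kafka.replicas must be 3 or more (and never 2)")
    # At most 5 replicas act as combined controller+broker.
    return min(replicas, 5)
```

For example, a 7-replica cluster runs 5 combined controller+broker replicas and 2 broker-only replicas.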

Storage

  • kafka.volumes.logs.initialCapacity — Initial volume size in GB per replica (minimum 10).
  • kafka.volumes.logs.performanceClass — Storage class: general-purpose-ssd or high-throughput-ssd (minimum 1000 GB for high-throughput).
  • kafka.volumes.logs.fileSystemType — Filesystem type: ext4 or xfs.
  • kafka.volumes.logs.snapshots — Snapshot configuration: createFinalSnapshot (taken on workload deletion), retentionDuration (e.g., 7d), and optional schedule (cron, UTC).
  • kafka.volumes.logs.autoscaling — Automatically expand the volume as it fills:
    • maxCapacity — Maximum volume size in GB.
    • minFreePercentage — Trigger scale-up when free space drops below this percentage.
    • scalingFactor — Multiply current capacity by this factor when scaling up.
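The autoscaling arithmetic can be sketched as follows, using the default values above (a simplified illustration of the documented semantics; the function name is hypothetical):

```python
def next_capacity(current_gb: float, free_pct: float,
                  min_free_percentage: float = 20,
                  scaling_factor: float = 1.2,
                  max_capacity: float = 1000) -> float:
    """Return the new volume size in GB after an autoscaling check."""
    if free_pct >= min_free_percentage:
        return current_gb  # enough free space, no scale-up
    # Multiply current capacity by the scaling factor, capped at maxCapacity.
    return min(current_gb * scaling_factor, max_capacity)
```

With the defaults, a 100 GB volume that drops below 20% free space grows to 120 GB, and a 900 GB volume is capped at the 1000 GB maximum rather than growing to 1080 GB.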

Custom Encryption (AWS KMS)

To encrypt Kafka data volumes with a customer-managed key, uncomment kafka.volumes.customEncryption:
  • region — AWS region where the KMS key is located (e.g., aws-us-east-2).
  • keyId — The full ARN of the AWS KMS key.
Custom encryption can only be applied when the volume is first created. Existing volumes cannot be re-encrypted after creation.

Firewall

  • kafka.firewall.internal_inboundAllowType — Controls which workloads can reach the Kafka cluster internally (same-gvc recommended, or same-org).
  • kafka.firewall.external_inboundAllowCIDR — CIDR ranges allowed to reach Kafka from the internet. Commented out by default.
  • kafka.firewall.inboundAllowWorkload — Explicit workload links allowed to connect (use with same-gvc or workload-list).

Listeners

Client Listener (Internal)

The client listener is the primary access point for producers and consumers within the GVC.
  • kafka.listeners.client.protocol — Security protocol: PLAINTEXT or SASL_PLAINTEXT.
  • kafka.listeners.client.containerPort — Port for client connections (default 9092). Automatically overridden to the range 3000–3004 when the public listener with direct replica routing is enabled.
  • kafka.listeners.client.sasl.admin — Admin username and password. The admin user is also added as an ACL superuser. Change the default password before deploying to production.
  • kafka.listeners.client.sasl.users / .passwords — Comma-separated lists of additional usernames and their passwords for client connections.

Public Listener (Optional, External Access)

Uncomment kafka.listeners.public to expose Kafka brokers to clients outside the GVC. Two routing approaches are supported.

Direct Replica Routing (Recommended)

Generates a subdomain per replica, enabling zone-aware routing and minimizing cross-zone traffic. Requires a Dedicated Load Balancer on the GVC.
public:
  protocol: SASL_PLAINTEXT
  name: PUBLIC
  directReplicaRouting:
    enabled: true
    containerPort: 9095
    publicAddress: kafka.example.com
  sasl:
    users: "public-user"
    passwords: "your-public-user-password"
Each broker becomes reachable at a replica-specific subdomain, for example:
kafka-0-aws-us-east-1.kafka.example.com:9095
kafka-1-aws-us-east-1.kafka.example.com:9095
Multi-Port Routing

Assigns one port per replica starting at 3000. Does not require per-replica subdomains but is not recommended for multi-zone deployments due to potential cross-zone routing charges.
For external access, Kafka clients should use SASL_SSL as the security protocol since TLS is enforced at the load balancer level.

ACL

  • kafka.acl.superUsers — Semicolon-separated list of Kafka superusers (e.g., User:admin;User:connectors). Superusers bypass ACL checks.
  • kafka.acl.allowEveryoneIfNoAclFound — When true, allows all operations on topics or groups that have no ACL defined. Set to false for strict access control.

Secrets

These values are stored as a Control Plane secret and injected into the cluster at startup. Change all three before deploying to production.
  • kafka.secrets.kraft_cluster_id — A unique identifier for the KRaft cluster. Generate with kafka-storage.sh random-uuid or any base64-encoded random string.
  • kafka.secrets.inter_broker_password — Password used for inter-broker communication (SASL).
  • kafka.secrets.controller_password — Password used for controller-to-broker communication (SASL).
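If you don't have the Kafka CLI at hand, a valid cluster ID can also be generated with a short script. This is a sketch equivalent to kafka-storage.sh random-uuid, which emits a random 128-bit UUID base64url-encoded without padding (22 characters):

```python
import base64
import uuid

def random_cluster_id() -> str:
    """Generate a KRaft cluster ID: 16 random bytes,
    base64url-encoded with the padding stripped (22 chars)."""
    raw = uuid.uuid4().bytes  # 16 random bytes
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

print(random_cluster_id())  # e.g. bkdDtS1Rsf536si7BGM0JY
```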

Extra Configurations

  • kafka.extra_configurations.default.replication.factor — Default replication factor for new topics. Cannot exceed the number of replicas.
  • kafka.extra_configurations.auto.create.topics.enable — When true, topics are automatically created when first produced to or consumed from.
  • kafka.extra_configurations.log.retention.hours — How long Kafka retains log segments before deletion (default 168 hours = 7 days).

Exporters

Both exporters run as sidecar containers on the Kafka cluster workload.

Kafka Exporter (kafka_exporter) — Exposes consumer group lag and topic metrics in Prometheus format.
  • kafka_exporter.cpu / kafka_exporter.memory — Resources for the exporter sidecar.
  • kafka_exporter.listener — Listener name to connect to (default: client).
  • kafka_exporter.dropMetrics — List of metric name patterns to exclude (e.g., ["kafka_consumergroup.*"]).
JMX Exporter (jmx_exporter) — Exposes Kafka JMX metrics in Prometheus format.
  • jmx_exporter.kafkaJmxPort — JMX port on the Kafka broker (default 5557).
  • jmx_exporter.exporterPort — Port where the exporter serves metrics (default 5556).
  • jmx_exporter.cpu / jmx_exporter.memory — Resources for the JMX exporter sidecar.
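Assuming dropMetrics entries are regular expressions matched against metric names, as the ["kafka_consumergroup.*"] example suggests (the exact matching semantics are an assumption here), the filtering amounts to something like:

```python
import re

def keep_metric(name: str, drop_patterns: list[str]) -> bool:
    """Return False if the metric name matches any drop pattern."""
    return not any(re.fullmatch(p, name) for p in drop_patterns)

drops = ["kafka_consumergroup.*"]
print(keep_metric("kafka_consumergroup_lag", drops))   # False (dropped)
print(keep_metric("kafka_topic_partitions", drops))    # True (kept)
```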

Kafbat UI (Optional)

Kafbat UI is a web-based interface for browsing topics, viewing consumer groups, and managing the cluster.
  • kafbat_ui.enabled — Enable or disable the Kafbat UI workload (default: true).
  • kafbat_ui.configuration_secret — Name of the pre-created Control Plane secret containing the Kafbat configuration YAML (see Prerequisites).
  • kafbat_ui.domain — Optional custom domain for the Kafbat UI. Requires DNS configuration pointing to the GVC’s load balancer.
  • kafbat_ui.replicas — Number of Kafbat UI replicas (default: 1).
  • kafbat_ui.resources — CPU and memory for the UI workload.
  • kafbat_ui.firewall — Firewall rules for the UI workload. By default, external access is open to 0.0.0.0/0.

Kafka REST Proxy (Optional)

The REST Proxy provides an HTTP API for producing, consuming, and managing Kafka resources without a native client.
  • kafka_rest_proxy.enabled — Enable or disable the REST Proxy workload (default: true).
  • kafka_rest_proxy.domain — Optional custom domain for the REST Proxy.
  • kafka_rest_proxy.properties.bootstrap.servers — Kafka cluster bootstrap address (update to match your release name and cluster name).
  • kafka_rest_proxy.jaas_conf — JAAS configuration for the Kafka client and REST Proxy HTTP authentication.
  • kafka_rest_proxy.password_properties — Users and passwords for REST Proxy BASIC authentication.
Update kafka_rest_proxy.properties.bootstrap.servers to match your release name and cluster workload name before deploying (e.g., SASL_PLAINTEXT://my-release-cluster:9092).

Kafka Connect (Optional)

Kafka Connect workers for running sink and source connectors are configured via the kafka_connectors list (commented out by default). Each entry defines a Connect worker with connector plugins and their configurations. Supported plugin artifact types: jar, zip, tgz. Plugins are downloaded and extracted into the plugins_folder on startup. See the template source for full connector configuration examples including MirrorMaker 2, Camel S3 Sink, ClickHouse Sink, and Snowflake Sink.

Kafka Client (Optional)

The Kafka client workload provides a persistent container running in the GVC that you can connect to via the Control Plane UI or CLI for testing and debugging. To enable it, uncomment the kafka_client section in your values file. Once deployed, connect via the workload terminal and use the Kafka CLI tools:
# Create a client properties file for SASL authentication
cat > /tmp/client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="your-admin-password";
EOF

# Produce a message
kafka-console-producer.sh \
  --bootstrap-server RELEASE_NAME-cluster:9092 \
  --producer.config /tmp/client.properties \
  --topic test-topic

# Consume messages
kafka-console-consumer.sh \
  --bootstrap-server RELEASE_NAME-cluster:9092 \
  --consumer.config /tmp/client.properties \
  --topic test-topic \
  --from-beginning

Connecting to Kafka

Internal (same GVC)

Connect using the cluster workload name as the bootstrap server:
RELEASE_NAME-cluster:9092
To target a specific replica:
RELEASE_NAME-cluster-0.RELEASE_NAME-cluster:9092
RELEASE_NAME-cluster-1.RELEASE_NAME-cluster:9092
External (public listener)

When the public listener is configured with direct replica routing, each broker is reachable at a replica-specific subdomain:
kafka-0-aws-us-east-1.kafka.example.com:9095
kafka-1-aws-us-east-1.kafka.example.com:9095
kafka-2-aws-us-east-1.kafka.example.com:9095
Use SASL_SSL as the security protocol for external clients since TLS is enforced at the load balancer.
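For example, an external client could use a properties file like the following (a sketch; adjust the credentials to match your public listener's SASL users):

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="public-user" \
  password="your-public-user-password";
```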

External References