The cpln stack command deploys Docker Compose projects directly to Control Plane, automatically converting services, volumes, secrets, and networks to their Control Plane equivalents.

When to use this

• Migrate from Docker Compose: Move existing Compose projects to Control Plane without rewriting configuration.
• Local-to-cloud workflow: Develop locally with Compose, deploy to Control Plane for production.
• Multi-service apps: Deploy interconnected services as a cohesive stack.
• Preview deployments: Generate and inspect Control Plane manifests before deploying.

Prerequisites

• Install the Control Plane CLI. See Installation.
• A docker-compose.yml or compose.yaml file.
• Permissions to create workloads, secrets, volumesets, and identities, and to push images.

Deploy a project

cpln stack deploy
This reads your Compose file and deploys all services to Control Plane. See the stack deploy command reference for all options.
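For example, running the command from the directory that contains your Compose file deploys every service it defines (the directory name here is illustrative):

cd my-project          # contains docker-compose.yml
cpln stack deploy      # converts and deploys every service in the file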

Delete a project

Remove all resources created by a Compose deployment:
cpln stack rm
See the stack rm command reference for details.

Preview the generated manifests

Generate Control Plane specs without deploying:
cpln stack manifest
This outputs the converted YAML so you can inspect or modify it before deployment.
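Because the converted YAML is written to standard output, you can capture it in a file for review before deploying (the filename is illustrative):

cpln stack manifest > cpln-manifest.yaml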

Customize workloads with x-cpln

Add an x-cpln block to any service to override the generated workload spec. Each top-level key in x-cpln replaces the corresponding section in the workload spec:
docker-compose.yml
services:
  api:
    image: ghcr.io/example/api:1.2.3
    ports:
      - '8080:8080'

    x-cpln:
      type: standard                    # Override workload type
      containers:                        # Replace entire containers array
        - name: api
          cpu: 250m
          memory: 256Mi
      defaultOptions:                    # Replace defaultOptions
        capacityAI: false
        autoscaling:
          minScale: 0
          maxScale: 3
          metric: concurrency
          target: 50
      firewallConfig:                    # Replace firewallConfig
        external:
          inboundAllowCIDR:
            - 0.0.0.0/0
        internal:
          inboundAllowType: same-gvc
The x-cpln block replaces entire spec sections rather than merging them. If you override containers, you must include the complete container configuration.

Available overrides

| Key | Description |
| --- | --- |
| type | Workload type: serverless, standard, or stateful |
| containers | Complete container specifications |
| defaultOptions | Autoscaling, capacity AI, timeouts, suspend |
| firewallConfig | External and internal firewall rules |
| identityLink | Link to a specific identity |
| supportDynamicTags | Enable dynamic image tag detection |
| loadBalancer | Load balancer configuration |
| rolloutOptions | Deployment rollout strategy |
| securityOptions | Security context settings |
| localOptions | Location-specific overrides |

Service-to-service communication

Update service URLs to use the Control Plane local syntax:
- http://service2:8080
+ http://service2.{GVC}.cpln.local:8080
Replace {GVC} with your actual GVC name.
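For example, if the api service listens on port 8080 in a GVC named my-gvc (both names illustrative), another service would reference it like this:

services:
  web:
    image: ghcr.io/example/web:1.2.3
    environment:
      # was API_URL=http://api:8080 under Compose
      - API_URL=http://api.my-gvc.cpln.local:8080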
See the Service-to-Service guide for the full endpoint syntax.

How workload type is determined

The converter analyzes your service definition to select the appropriate workload type:
| Condition | Workload Type |
| --- | --- |
| Service has volumes attached | stateful |
| Service exposes exactly one port | serverless |
| Service exposes multiple ports or no ports | standard |
Use x-cpln to override the automatically determined type if needed.
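The sketch below shows how each rule applies (service names and images are illustrative):

services:
  db:                                    # has a volume attached → stateful
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  api:                                   # exposes exactly one port → serverless
    image: ghcr.io/example/api:1.2.3
    ports:
      - '8080:8080'
  worker:                                # no ports → standard
    image: ghcr.io/example/worker:1.2.3

volumes:
  pgdata: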

Translation reference

| Compose Feature | Control Plane Resource |
| --- | --- |
| Services | Single-container workloads |
| Networks | Internal firewall configuration |
| Named volumes | Volumesets (10GB, ext4, general-purpose-ssd) |
| Secrets | Secrets + policies + identities |
| Configs | Secrets (treated identically to secrets) |
| File bind mounts | Secrets |

Resource mapping

| Compose Field | Control Plane Equivalent |
| --- | --- |
| deploy.resources.limits.cpus | Container CPU (multiplied by 1000 for millicores) |
| deploy.resources.limits.memory | Container memory |
| deploy.replicas | minScale and maxScale (set to the same value) |
| healthcheck | Readiness probe |
| ports / expose | Container ports |
| environment / env_file | Environment variables (env_file processed first; environment takes precedence) |
| working_dir | Container working directory |
| command | Container args |
| entrypoint | Container command |
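An illustrative sketch of these mappings (the values are examples, not recommendations):

services:
  api:
    image: ghcr.io/example/api:1.2.3
    deploy:
      replicas: 2                      # → minScale: 2, maxScale: 2
      resources:
        limits:
          cpus: '0.25'                 # → 250m container CPU
          memory: 256M                 # → container memory
    entrypoint: ["/app/server"]        # → container command
    command: ["--port", "8080"]        # → container args
    working_dir: /app                  # → container working directory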

Port protocol

Specify the protocol directly in the port string:
ports:
  - "8080:80/http"      # HTTP protocol
  - "50051:50051/grpc"  # gRPC protocol
  - "9000:9000/http2"   # HTTP/2 protocol
  - "5432:5432/tcp"     # TCP protocol (default if not specified)

Defaults

| Property | Default Value |
| --- | --- |
| CPU | 42m |
| Memory | 128Mi |
| External inbound | Allowed if ports are defined |
| External outbound | Allowed (unless network_mode: none) |
| Capacity AI | Enabled only if: reservations < limits, no GPU, and not stateful |
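For instance, a service that defines no resources and no ports picks up the defaults above. A rough sketch of the resulting container section (field placement follows the x-cpln example earlier and is illustrative):

services:
  worker:
    image: ghcr.io/example/worker:1.2.3   # no deploy.resources, no ports

# resulting container spec (approximate)
containers:
  - name: worker
    cpu: 42m          # default CPU
    memory: 128Mi     # default memory
# no ports defined → no external inbound access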

Secrets and configs

Both secrets and configs are converted to Control Plane secrets:
services:
  api:
    image: myapp:latest
    secrets:
      - db_password           # Simple reference
      - source: api_key       # With custom target path
        target: /app/api.key

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt
  • Default mount path: /run/secrets/{name} (if no absolute path specified)
  • Identities and policies are automatically created for workloads using secrets

Healthcheck conversion

Docker healthchecks are converted to readiness probes:
| Compose Field | Control Plane Field |
| --- | --- |
| test | exec.command |
| interval | periodSeconds |
| timeout | timeoutSeconds |
| start_period | initialDelaySeconds |
| retries | failureThreshold |
Supported test formats:
  • String: "curl http://localhost/health" → /bin/sh -c wrapper
  • CMD: ['CMD', 'curl', 'http://localhost'] → direct execution
  • CMD-SHELL: ['CMD-SHELL', 'curl http://localhost'] → /bin/sh -c wrapper
  • NONE: Disables the readiness probe
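Putting the field mappings together, here is a typical Compose healthcheck and the readiness probe it would roughly produce (the probe layout is illustrative; field names follow the table above):

healthcheck:
  test: ['CMD', 'curl', '-f', 'http://localhost:8080/health']
  interval: 30s
  timeout: 5s
  start_period: 10s
  retries: 3

# resulting readiness probe (approximate)
exec:
  command: ['curl', '-f', 'http://localhost:8080/health']
periodSeconds: 30
timeoutSeconds: 5
initialDelaySeconds: 10
failureThreshold: 3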

Network modes

| Mode | Behavior |
| --- | --- |
| Default (no networks) | All services can reach each other |
| Named networks | Only services in the same network can communicate |
| network_mode: host | External inbound traffic allowed |
| network_mode: none | No outbound traffic allowed |
| network_mode: service:other | Shares network with another service |
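For example, with named networks only services that share a network can talk to each other (names are illustrative):

services:
  api:
    image: ghcr.io/example/api:1.2.3
    networks: [backend]
  worker:
    image: ghcr.io/example/worker:1.2.3
    networks: [backend]          # can reach api (same network)
  web:
    image: ghcr.io/example/web:1.2.3
    networks: [frontend]         # cannot reach api or worker

networks:
  backend:
  frontend: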

GPU support

Services with GPU devices are automatically configured:
deploy:
  resources:
    reservations:
      devices:
        - capabilities: ['gpu']
          count: 1
GPU workloads receive:
  • NVIDIA T4 GPU
  • Minimum CPU: 2000m (overrides default)
  • Minimum Memory: 7168Mi (overrides default)
  • Capacity AI: Disabled

Service inheritance

Extend services from the same or a different Compose file:
services:
  api:
    extends:
      service: base
      file: base-compose.yml
    environment:
      - API_KEY=secret  # Overrides parent
Child values take precedence over parent values.
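A minimal base-compose.yml to pair with the example above might look like this (contents are illustrative):

# base-compose.yml
services:
  base:
    image: ghcr.io/example/api:1.2.3
    environment:
      - API_KEY=default       # replaced by the child's API_KEY
      - LOG_LEVEL=info        # inherited unchanged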

Limitations

The following Docker Compose features are not supported:
| Feature | Status | Alternative |
| --- | --- | --- |
| Directory bind mounts | Not supported | Use named volumes or file bind mounts (see the sketch below) |
| depends_on ordering | Ignored | Services start independently; use health checks |
| links | Ignored | Use network-based service discovery |
| Dynamic GPU model/quantity | Limited | Hardcoded to NVIDIA T4, quantity 1 |
| Multiple containers per service | Not supported | Create separate services |
| privileged mode | Not supported | Use securityOptions via x-cpln |
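For the first row, a directory bind mount can usually be rewritten as a named volume. A sketch (paths and names are illustrative):

services:
  db:
    image: postgres:16
    volumes:
      # - ./data:/var/lib/postgresql/data   # directory bind mount: not supported
      - db_data:/var/lib/postgresql/data    # named volume: converted to a volumeset

volumes:
  db_data: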

Troubleshooting

• Services can't reach each other: Update hostnames to use the Control Plane local syntax, http://service-name.{GVC}.cpln.local:{port}, and ensure both services are in the same network or that no networks are defined (global network).
• No external access to a service: Ensure all ports are explicitly listed in ports or expose in your Compose file. Only services with ports defined get external inbound access.
• Secrets aren't available: Secrets referenced in Compose are converted to Control Plane secrets, and the converter automatically creates identities and policies. Check that the secret file exists at the specified path and that you have permissions to create secrets, identities, and policies.
• Directory bind mounts fail: Directory bind mounts are not supported. Convert them to named volumes (for persistent data) or file bind mounts (for configuration files, which are converted to secrets).
• image and build both specified: A service cannot have both image and build. Use image to pull from a registry, or build to build and push to the Control Plane registry.
• Services isolated by named networks: By default, all services can reach each other. If you're using named networks, ensure communicating services are in the same network.
