A workload represents a backend application such as a microservice. It consists of one or more containers. The containers making up a workload communicate freely over localhost.

Workloads run in Control Plane's AWS, Azure, and GCP accounts, where the clouds and regions are determined by the GVC definition. Your workload may run in a single region of one cloud or across many regions of all three clouds, depending entirely on the GVC definition. Requests are routed to the nearest healthy location.

Workloads are managed through a common interface, regardless of cloud provider. Workload log data is consolidated for easy retrieval and analysis. This means a workload can be running on AWS, Azure, and GCP simultaneously, yet its logs, across all instances and providers, are accessible through a single API/CLI/UI/Grafana operation.


Auto Scaling

The number of workload replicas is automatically scaled up and down based on the workload's scaling strategy.

Selectable Scaling Strategies:

  • Concurrent Requests Quantity
  • Requests Per Second
  • Percentage of CPU Utilization

The minimum and maximum number of replicas that can be deployed are configurable. Workloads can be scaled down to 0 when there is no traffic and can scale up immediately to fulfill new requests.
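The scaling strategies above share a common shape: pick a target value per replica, then size the replica count to meet it within the configured bounds. The sketch below is a hypothetical illustration of a concurrency-based strategy, not Control Plane's actual algorithm; the function name and target value are assumptions.

```python
import math

def desired_replicas(concurrent_requests: int,
                     target_per_replica: int,
                     min_replicas: int,
                     max_replicas: int) -> int:
    """Size the deployment so each replica handles at most
    `target_per_replica` concurrent requests, clamped to the
    configured min/max bounds (illustrative only)."""
    if concurrent_requests == 0 and min_replicas == 0:
        return 0  # no traffic: scale to zero
    needed = math.ceil(concurrent_requests / target_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(0, 50, 0, 10))    # 0  (no traffic, scale to zero)
print(desired_replicas(120, 50, 1, 10))  # 3  (ceil(120 / 50))
print(desired_replicas(900, 50, 1, 10))  # 10 (capped at the maximum)
```

The Requests Per Second and CPU Utilization strategies would follow the same clamp-to-bounds pattern with a different input metric.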


Capacity AI is not available if CPU Utilization is selected, because CPU resources cannot be dynamically allocated while replicas are being scaled based on CPU usage.

Capacity AI

A workload can leverage intelligent allocation of its container's resources (CPU and Memory) by using Capacity AI.

Capacity AI uses an analysis of historical usage to adjust these resources up to a configured maximum.

This can significantly reduce cost but may cause temporary performance issues with sudden spikes in usage.

If Capacity AI is disabled, the configured amount of resources is fully allocated.
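To make the trade-off concrete, here is a hypothetical sketch of the kind of right-sizing Capacity AI performs: allocate based on observed historical usage plus headroom, never exceeding the configured maximum. The function, headroom factor, and millicore units are illustrative assumptions, not Control Plane's actual model.

```python
def recommend_cpu(usage_samples_millicores: list[int],
                  configured_max: int,
                  headroom: float = 1.2) -> int:
    """Allocate the peak observed CPU usage plus headroom,
    capped at the configured maximum (illustrative only)."""
    if not usage_samples_millicores:
        return configured_max  # no history yet: allocate the full amount
    peak = max(usage_samples_millicores)
    return min(configured_max, int(peak * headroom))

# Historical usage well below the 1000m maximum: allocate far less, saving cost.
print(recommend_cpu([120, 180, 150], 1000))  # 216
# Usage near the maximum: allocation is capped, so a sudden spike
# beyond it may cause temporary performance issues.
print(recommend_cpu([900, 950], 1000))       # 1000
```

This also shows why disabling Capacity AI simply allocates `configured_max`: with no historical analysis, the full configured amount is reserved.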

Location Override

By default, both Capacity AI and Auto Scaling settings are applied to all deployments in each location enabled in the GVC. Each location can have these settings overridden to increase performance for a particular audience.

This allows for granular control over how your workload scales in a particular location. If a majority of your users are in Europe, you can configure the European locations with higher scaling limits than the rest of the world.

Setting location-specific options ensures that your target users are served quickly while lowering costs for resources that won't be used.


Probes

Probes are a feature of Kubernetes used to monitor the health of an application running inside a container.

Each container can have a:

  • Readiness Probe

    • A configured endpoint that can be queried to determine whether the workload is available and ready to receive requests
  • Liveness Probe

    • A configured endpoint that can be queried to determine whether the workload is healthy or needs to be restarted
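The application container is responsible for exposing the endpoints those probes query. Below is a minimal sketch of what such endpoints might look like; the paths `/livez` and `/readyz` and the port are illustrative conventions, not values required by Control Plane.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ready = True  # an app would flip this to False while warming up or draining

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/livez":
            # Liveness: a failure here signals the container should be restarted.
            self.respond(200, b"alive")
        elif self.path == "/readyz":
            # Readiness: traffic is routed here only while this returns 200.
            if ready:
                self.respond(200, b"ready")
            else:
                self.respond(503, b"not ready")
        else:
            self.respond(404, b"not found")

    def respond(self, status: int, body: bytes):
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve the endpoints (blocking), an app would run something like:
# HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Keeping the liveness check cheap and dependency-free avoids restart loops caused by a slow downstream service, while the readiness check can safely reflect such transient conditions.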


Visit the workload reference page for additional information.

Copyright © 2022 Control Plane Corporation. All rights reserved. Revision 68a8865b