
Overview

A volume set is a collection of storage volumes, which may be linked to one or more workloads running on Control Plane. The behavior varies greatly depending on your choice of filesystem.

Traditional File Systems (ext4, xfs)

  • Control Plane provisions a unique volume for each replica in the linked workload.
  • Each volume set can be used by at most one stateful workload.
  • Each volume in the set is bound to a single workload replica.
  • Data is not replicated between volumes. If you require data sharing or replication, this must be accomplished at the application level (e.g., by using WAL streaming between two PostgreSQL instances).

Shared File System

  • Control Plane provisions a single volume per location
  • The volume set can be attached to any number of workloads
  • All attached workload replicas in a given location use the same volume

Caveats

The shared filesystem does not support:
  • snapshots
  • certain commands:
    • createVolumeSnapshot
    • deleteVolumeSnapshot
    • restoreVolume
    • deleteVolume
    • shrinkVolume

Capacity and Billing

  • When a volume is created, it will have an initial capacity defined by the spec of your volume set.
  • Volume capacity can be increased by sending an expandVolume command.
  • Volume capacity can be decreased using the shrinkVolume command, but this causes data loss (see Volume Shrinkage below).
  • The bill for a volume set is calculated by summing the reserved GB of all volumes.
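The billing rule above is simple enough to state as code. A minimal sketch (the function name is illustrative, not part of any API; the rate applied to the total is not modeled):

```python
def billed_gb(volume_capacities_gb):
    # The bill sums the reserved GB of every volume in the set; the rate
    # applied to that total is provider/plan specific and not modeled here.
    return sum(volume_capacities_gb)

# Three volumes of 10, 10, and 25 reserved GB bill for 45 GB in total.
total_gb = billed_gb([10, 10, 25])  # 45
```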

Autoscaling

One Volume Per Replica

This only applies to volume sets using traditional file systems (e.g., ext4 or xfs).
  • Like workloads, volume sets scale horizontally. Every replica in the linked workload is automatically assigned a volume.
  • When the linked workload scales down, the volumes are preserved. Volumes are only deleted when you send a deleteVolume command, or when the volume set itself is deleted.

Automatic Expansion (Reactive)

The spec.autoscaling object allows you to specify rules for automatically expanding volumes in the set. When a volume’s free space drops below a threshold, Control Plane expands it. The available options are:
  • maxCapacity: The largest allowable size (in GB) for any volume in the set.
  • minFreePercentage: This must be a number between 1 and 100. When the free percentage on any volume drops below this threshold, Control Plane will issue an expandVolume command automatically.
  • scalingFactor: This must be a number greater than or equal to 1. Applied as a multiplier when calculating the new capacity, providing headroom beyond the minimum needed to restore minFreePercentage.
When a volume needs expansion, the new capacity is calculated as:
newCapacity = usedGB / (1 - minFreePercentage/100) × scalingFactor
The result is capped at maxCapacity.

Example calculation: Suppose a 10 GB volume has 8 GB used (2 GB free, 20%), and the configuration is minFreePercentage: 20, scalingFactor: 2.

Step | Calculation | Result
Current state | 8 GB used, 2 GB free | 20% free (threshold met)
Minimum capacity for 20% free | 8 / (1 - 0.20) | 10 GB
Apply scalingFactor | 10 × 2 | 20 GB

The volume expands to 20 GB (provided this does not exceed maxCapacity).
Volume Set with Reactive Autoscaling
{
  ...
  "spec": {
    ...
    "autoscaling": {
       "maxCapacity": 100,
       "minFreePercentage": 20,
       "scalingFactor": 2
    }
  }
}
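The expansion formula above can be sketched in a few lines. The helper name is illustrative, and rounding the result up to a whole GB is an assumption (the predictive example later in this page rounds up with ⌈⌉):

```python
import math

def reactive_target(used_gb, min_free_percentage, scaling_factor, max_capacity):
    # newCapacity = usedGB / (1 - minFreePercentage/100) * scalingFactor,
    # capped at maxCapacity. Rounding up to a whole GB is an assumption here.
    minimum = used_gb / (1 - min_free_percentage / 100)
    target = math.ceil(round(minimum * scaling_factor, 9))  # round() guards float noise
    return min(target, max_capacity)

# The worked example above: 8 GB used, minFreePercentage 20, scalingFactor 2.
new_capacity = reactive_target(8, 20, 2, max_capacity=100)  # 20
```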

Predictive Scaling

Reactive scaling responds to low free space after it happens. In most cases this is sufficient, but workloads that experience rapid surges in storage consumption can outpace reactive scaling — the volume fills up faster than the expansion can complete. Predictive scaling addresses this by projecting future usage and expanding volumes before they run low. It is a supplement to reactive scaling, not a replacement. On each evaluation cycle, Control Plane computes both a reactive target and a predictive target, then uses whichever is larger (capped at maxCapacity). This means reactive scaling always acts as a safety net.

How It Works

When predictive scaling is enabled, Control Plane:
  1. Queries volume usage over a configurable lookback window.
  2. Fits a linear regression to compute the average growth rate.
  3. Projects usage forward over a configurable projection window.
  4. Calculates a predictive target capacity using the same formula as reactive scaling, but substituting projected usage for current usage:
predictiveCapacity = projectedUsedGB / (1 - minFreePercentage/100) × predictiveScalingFactor
  5. Compares the predictive and reactive targets, and uses the larger of the two.
Predictive scaling is skipped (falling back to reactive only) when any of the following are true:
  • The volume’s growth rate is below minGrowthRateGBPerHour.
  • Fewer than minDataPoints data points are available in the lookback window, meaning the projection is not yet considered reliable.
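The growth-rate step above is an ordinary least-squares fit. A self-contained sketch, assuming usage samples arrive as (hour, GB used) pairs; this illustrates the calculation, not Control Plane's actual implementation:

```python
def growth_rate_gb_per_hour(samples):
    # Ordinary least-squares slope of (hour, used_gb) points.
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# A volume growing steadily by 0.5 GB/hour over a 6-hour window:
samples = [(h, 40 + 0.5 * h) for h in range(7)]
rate = growth_rate_gb_per_hour(samples)  # 0.5
```

If the fitted rate falls below minGrowthRateGBPerHour, or fewer than minDataPoints samples are available, predictive scaling is skipped as described above.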

Configuration

Add a predictive object inside autoscaling to enable predictive scaling:
Field | Type | Default | Description
enabled | boolean | false | Enable predictive scaling based on historical growth rates.
lookbackHours | number | 24 | Hours of historical data to analyze for the growth rate calculation. Min: 1, Max: 168 (1 week).
projectionHours | number | 6 | Hours into the future to project storage needs. Min: 1, Max: 72.
minDataPoints | number | 10 | Minimum number of historical data points required before a projection is considered reliable. Prevents premature scaling decisions when insufficient data is available. Min: 2, Max: 100.
minGrowthRateGBPerHour | number | 0.01 | Minimum growth rate (GB/hour) to trigger predictive expansion. Volumes growing slower than this rate will not trigger predictive scaling.
scalingFactor | number | (inherited) | Scaling factor for predictive expansion. If not set, uses the parent autoscaling.scalingFactor. A lower value (e.g., 1.2) is recommended for gentler proactive scaling. Min: 1.1.
When predictive.enabled is true, the parent autoscaling.minFreePercentage and autoscaling.scalingFactor fields are required.

Example

Volume Set with Predictive Scaling
{
  ...
  "spec": {
    ...
    "autoscaling": {
       "maxCapacity": 100,
       "minFreePercentage": 20,
       "scalingFactor": 2,
       "predictive": {
         "enabled": true,
         "lookbackHours": 48,
         "projectionHours": 12,
         "minDataPoints": 10,
         "minGrowthRateGBPerHour": 0.1,
         "scalingFactor": 1.2
       }
    }
  }
}
Example calculation: Suppose a 50 GB volume currently has 40 GB used (10 GB free), and its growth rate over the past 48 hours is 0.5 GB/hour.

Reactive target:

Step | Calculation | Result
Current state | 40 GB used, 10 GB free | 20% free (threshold met)
Minimum capacity for 20% free | 40 / (1 - 0.20) | 50 GB
Apply scalingFactor (2.0) | 50 × 2.0 | 100 GB

Predictive target:

Step | Calculation | Result
Project usage 12 hours ahead | 40 + (0.5 × 12) | 46 GB
Minimum capacity for 20% free | 46 / (1 - 0.20) | 57.5 GB
Apply predictive scalingFactor (1.2) | 57.5 × 1.2 | ⌈69⌉ = 69 GB
Control Plane compares the two targets: reactive (100 GB) vs. predictive (69 GB). Since reactive is larger, the volume expands to 100 GB (which also equals maxCapacity in this configuration). In a different scenario — say the volume still has plenty of free space but is growing rapidly — the predictive target could exceed the reactive target. The key principle is: whichever target is larger gets used, ensuring the volume is sized for both current needs and anticipated growth.
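The comparison in this example can be checked with a short calculation (helper and variable names are illustrative; the round() calls only guard against floating-point noise):

```python
import math

def target(used_gb, min_free_pct, factor):
    # Same formula for both targets; only the inputs differ.
    return used_gb / (1 - min_free_pct / 100) * factor

used_gb = 40
growth_rate, projection_hours = 0.5, 12

reactive = round(target(used_gb, 20, 2.0), 6)                 # 100.0
projected = used_gb + growth_rate * projection_hours          # 46.0
predictive = math.ceil(round(target(projected, 20, 1.2), 9))  # 69

# Whichever target is larger wins, capped at maxCapacity (100 in this config).
new_capacity = min(max(reactive, predictive), 100)            # 100.0
```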

Snapshots

Snapshots can be taken at any time and (optionally) on a regular schedule. To set up automatic snapshotting, you may use the spec.snapshots object. Options include:
  • retentionDuration: The length of time to retain a newly created snapshot. This should be a floating point number followed by d, h, or m (for days, hours, or minutes).
  • schedule: A cron expression describing the snapshot frequency. Snapshots cannot be taken more frequently than once per hour.
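As a sketch, the retentionDuration format described above can be checked with a regular expression. The pattern is inferred from the prose description and is not an official grammar:

```python
import re

# Pattern inferred from the description: a floating point number followed by
# d, h, or m. Not an official grammar.
RETENTION_RE = re.compile(r"\d+(\.\d+)?[dhm]")

def is_valid_retention(value: str) -> bool:
    return RETENTION_RE.fullmatch(value) is not None

valid = [is_valid_retention(v) for v in ("7d", "1.5h", "30m")]  # all True
invalid = [is_valid_retention(v) for v in ("30s", "7", "h")]    # all False
```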

File System Type

Currently supported file systems are:
  • ext4
  • xfs
  • shared

Mount Resources

For the shared file system only, you are charged CPU and memory per mount point. Shared volumes are mounted once per node, so there will be at most one mount point per replica. You can control the minimum and maximum resource allocations per mount point using the mountOptions property, e.g.:
{
  "kind": "volumeset",
  "name": "my-shared-volumeset",
  "description": "",
  "tags": {},
  "spec": {
    "fileSystemType": "shared",
    "initialCapacity": 10,
    "mountOptions": {
      "resources": {
        "maxCpu": "200m",
        "maxMemory": "128Mi",
        "minCpu": "100m",
        "minMemory": "128Mi"
      }
    },
    "performanceClass": "shared"
  }
}

Resource Constraints

  • minCpu and maxCpu can be at most 4000m apart
  • The ratio between minCpu and maxCpu must be at least 1:4
  • minMemory and maxMemory can be at most 4096Mi apart
  • The ratio between minMemory and maxMemory must be at least 1:4
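The four constraints above can be expressed as a pair of checks. This sketch works in raw millicores and Mi, and reads "ratio at least 1:4" as max ≤ 4 × min; both that interpretation and the helper names are assumptions:

```python
def check_cpu(min_cpu_m: int, max_cpu_m: int) -> bool:
    # millicores: at most 4000m apart, and maxCpu <= 4 * minCpu.
    return (max_cpu_m - min_cpu_m) <= 4000 and max_cpu_m <= 4 * min_cpu_m

def check_memory(min_mem_mi: int, max_mem_mi: int) -> bool:
    # Mi: at most 4096Mi apart, and maxMemory <= 4 * minMemory.
    return (max_mem_mi - min_mem_mi) <= 4096 and max_mem_mi <= 4 * min_mem_mi

# The mountOptions example above (100m-200m CPU, 128Mi min and max memory):
cpu_ok = check_cpu(100, 200)      # True
mem_ok = check_memory(128, 128)   # True
too_wide = check_cpu(1000, 6000)  # False: 5000m apart and a 1:6 ratio
```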

Performance Classes

Each volume set has a single, immutable, performance class. The performance class determines:
  • How many Megabytes per second can be transferred to and from the volume (MB/second)
  • How many I/O operations can be processed per second. (IOPS)
  • Read/write latency
Because these drives are served over the network, IOPS/throughput is limited per VM. The performance of individual drives will vary.
Volume performance varies widely by cloud service provider.

General-Purpose SSD

Name: general-purpose-ssd
Minimum Capacity: 10 GB
Maximum Capacity: 65536 GB

Service Provider | Max Throughput | Max IOPS
AWS | 125 MB/s | 3000
GCP | 1200 MB/s | 80000
Azure | 125 MB/s | 3000

High-Throughput SSD

In general, IOPS/throughput capacity varies linearly with storage capacity. The values shown below are the maximum possible values, and may only be achievable with large volume sizes.
Name: high-throughput-ssd
Minimum Capacity: 200 GB
Maximum Capacity: 65536 GB

Service Provider | Max Throughput | Max IOPS
AWS | 400 MB/s | 4600
GCP | 1200 MB/s | 100000
Azure | 1200 MB/s | 15500

Custom Encryption (AWS)

Control Plane encrypts all volumes by default. The custom encryption feature allows you to specify your own AWS KMS keys for encrypting volumes in a volume set, giving you complete control over the encryption keys used for your data.

When to Use Custom Encryption

Use custom encryption when you need to:
  • Meet specific compliance or regulatory requirements that mandate customer-managed keys
  • Maintain control over key lifecycle management (rotation, revocation, etc.)
  • Use separate encryption keys for different environments or applications
  • Integrate with existing key management workflows
Custom encryption is currently only available for AWS. Support for other cloud providers is not yet available.

Prerequisites

Before configuring custom encryption, you need:
  1. An AWS KMS key in each region where you want to use custom encryption
  2. Appropriate KMS key policies configured for the KMS key to allow volume encryption

Required Key Policies

Your KMS key policy must permit Control Plane to use the key for volume encryption. The policies below grant the necessary permissions. You must add these policies to any KMS key used with Control Plane volume sets.
[
  {
    "Sid": "GrantAccessToControlPlane",
    "Effect": "Allow",
    "Principal": {
      "AWS": [
        "arn:aws:iam::957753459089:root"
      ]
    },
    "Action": [
      "kms:Decrypt",
      "kms:DescribeKey",
      "kms:Encrypt",
      "kms:GenerateDataKey",
      "kms:GenerateDataKeyWithoutPlaintext",
      "kms:ReEncrypt*",
      "kms:CreateGrant",
      "kms:RetireGrant",
      "kms:RevokeGrant"
    ],
    "Resource": "*"
  },
  {
    "Sid": "GrantAccessToControlPlaneEC2",
    "Effect": "Allow",
    "Principal": {
      "Service": "ec2.amazonaws.com"
    },
    "Action": [
      "kms:CreateGrant",
      "kms:Decrypt",
      "kms:DescribeKey",
      "kms:GenerateDataKey"
    ],
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "kms:ViaService": "ec2.{REGION}.amazonaws.com",
        "kms:CallerAccount": "957753459089"
      }
    }
  }
]

Configuration Schema

Custom encryption is configured in the volume set spec using the customEncryption object:
{
  "spec": {
    "customEncryption": {
      "regions": {
        "{cloud-provider}-{region}": {
          "keyId": "arn:aws:kms:region:account:key/key-id"
        }
      }
    }
  }
}
Field Descriptions:
  • customEncryption.regions: An object mapping region names to encryption configurations
  • {cloud-provider}-{region}: The region identifier in Control Plane format (e.g., aws-us-east-1, aws-eu-west-1)
  • keyId: The full ARN of the AWS KMS key to use for volumes in that region

Region Naming Format

Region names must follow the format: aws-{aws-region-name} Examples:
  • aws-us-east-1 for US East (N. Virginia)
  • aws-us-west-2 for US West (Oregon)
  • aws-eu-west-1 for EU (Ireland)
  • aws-ap-southeast-1 for Asia Pacific (Singapore)
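Since every region name is just the AWS region prefixed with aws-, the customEncryption.regions object can be assembled mechanically. A sketch (the helper name is illustrative):

```python
def custom_encryption_regions(keys_by_aws_region: dict) -> dict:
    # Prefix each AWS region with "aws-" to get the Control Plane region name.
    return {
        f"aws-{region}": {"keyId": arn}
        for region, arn in keys_by_aws_region.items()
    }

regions = custom_encryption_regions({
    "us-east-1": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012",
})
# {"aws-us-east-1": {"keyId": "arn:aws:kms:us-east-1:..."}}
```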

Complete Example

Here’s a full example of a volume set with custom encryption configured for two AWS regions:
Volume Set with Custom Encryption
{
  "kind": "volumeset",
  "name": "my-encrypted-volumeset",
  "description": "Production database volumes with custom encryption",
  "tags": {
    "app": "database",
    "env": "production"
  },
  "spec": {
    "fileSystemType": "ext4",
    "initialCapacity": 100,
    "performanceClass": "high-throughput-ssd",
    "autoscaling": {
      "maxCapacity": 500,
      "minFreePercentage": 20,
      "scalingFactor": 1.5
    },
    "snapshots": {
      "retentionDuration": "7d",
      "schedule": "0 2 * * *"
    },
    "customEncryption": {
      "regions": {
        "aws-us-east-1": {
          "keyId": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
        },
        "aws-eu-west-1": {
          "keyId": "arn:aws:kms:eu-west-1:123456789012:key/abcdefab-abcd-abcd-abcd-abcdefabcdef"
        }
      }
    }
  }
}

Important Constraints

Be aware of the following constraints when using custom encryption:
Key Immutability
  • Once a volume is created with a specific KMS key, the encryption key cannot be changed
  • To use a different key, you must create new volumes
Regional Configuration
  • Encryption keys are configured per region, not globally
  • Each region can have its own KMS key
  • If a region is not specified in customEncryption.regions, volumes in that region will use AWS default encryption
File System Support
  • Custom encryption only works with traditional file systems: ext4 and xfs
  • The shared file system does not support custom encryption
BYOK Clusters
  • This feature is for Control Plane-managed clusters only
  • For BYOK clusters you have full control over your storage classes, and therefore the encryption method as well.

Commands

Volume sets support imperative operations on individual volumes and snapshots. To issue a command, send a POST to the volume set’s -command endpoint. e.g. POST https://api.cpln.io/org/my-org/gvc/my-gvc/volumeset/my-volume-set/-command. These commands can also be created using the Control Plane console at https://console.cpln.io
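The endpoint shape above can be captured in a small helper. This sketch only builds the URL; authentication headers and the POST itself are out of scope, and the helper name is illustrative:

```python
def command_url(org: str, gvc: str, volume_set: str) -> str:
    # Mirrors the example endpoint in the text; auth headers are out of scope.
    return f"https://api.cpln.io/org/{org}/gvc/{gvc}/volumeset/{volume_set}/-command"

url = command_url("my-org", "my-gvc", "my-volume-set")
# "https://api.cpln.io/org/my-org/gvc/my-gvc/volumeset/my-volume-set/-command"
```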

Volume Expansion

Volumes can be expanded on-demand by issuing an expandVolume command. If the volume set is in-use by a workload, the corresponding workload replica will be restarted.
You can only expand a volume once every six hours. Please plan accordingly.
Volumes cannot be “expanded” to a smaller size.

expandVolume

Spec:
  • location
  • volumeIndex
  • newStorageCapacity
For example:
expandVolume Command
{
  "type": "expandVolume",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0,
    "newStorageCapacity": 11
  }
}

Volume Shrinkage

Volumes can be shrunk on-demand by issuing a shrinkVolume command. If the volume set is in-use by a workload, the corresponding workload replica will be restarted.
CRITICAL: Shrinking a volume causes PERMANENT DATA LOSS. The shrinkVolume command provisions a new volume at the smaller size. Existing data is not migrated: the old volume and all of its data are permanently deleted. Only use shrinkVolume for applications with built-in data redundancy:
  • Apache Kafka (with proper replication factor)
  • Distributed databases with replication (Cassandra, CockroachDB, etc.)
  • Applications where data can be rebuilt from other replicas
Do not use shrinkVolume for:
  • Single-replica stateful workloads
  • Databases without replication
  • Any application where data loss is unacceptable
The shrinkVolume command is only available for ext4 and xfs filesystems. It is not supported for shared filesystems.

shrinkVolume

Spec:
  • location
  • volumeIndex
  • newStorageCapacity
For example:
shrinkVolume Command
{
  "type": "shrinkVolume",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0,
    "newStorageCapacity": 5
  }
}

Volume Deletion

To delete a volume, issue a deleteVolume command.

deleteVolume

This command deletes the specified volume’s storage device. Note: the metadata for the volume at the specified index will not be removed from the volume set. Only your data will be deleted.

Deleting an in-use volume

If the volume set is in-use by a workload, a new storage device may be immediately created. e.g. if the volume set is in-use by a workload with one replica, and you delete the volume at index 0, Control Plane will:
  1. Create an empty volume to service the workload
  2. Delete the old volume as requested
  3. Restart the workload replica, binding it to the volume created in step 1.
For example:
deleteVolume Command
{
  "type": "deleteVolume",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0
  }
}

Snapshots

Each volume in a set has its own list of snapshots. You manipulate snapshots by issuing commands to the volume set.

createVolumeSnapshot

Take a snapshot for a given volume (specified by location and volume index). snapshotName must be unique for the target volume. Spec:
  • location
  • volumeIndex
  • snapshotName
  • snapshotExpirationDate
  • tags
    • Specify any key/value pair here.
For example:
createVolumeSnapshot Command
{
  "type": "createVolumeSnapshot",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0,
    "snapshotName": "snap-0",
    "snapshotExpirationDate": "2025-01-01T00:00:00Z",
    "tags": {
      "my-tag-key": "my-tag-value"
    }
  }
}

deleteVolumeSnapshot

Delete the specified snapshot. Spec:
  • location
  • volumeIndex
  • snapshotName
For example:
deleteVolumeSnapshot Command
{
  "type": "deleteVolumeSnapshot",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0,
    "snapshotName": "snap-0"
  }
}

restoreVolume

Restore the specified volume to one of its snapshots. If this volume set is in-use by a workload, the corresponding workload replica will restart.
This operation creates an entirely new volume using the given snapshot. All unsaved data on the original volume will be lost.
Spec:
  • location
  • volumeIndex
  • snapshotName
For example:
restoreVolume Command
{
  "type": "restoreVolume",
  "spec": {
    "location": "aws-sa-east-1",
    "volumeIndex": 0,
    "snapshotName": "snap-0"
  }
}

BYOK Support

Volume sets are supported in BYOK locations as long as the following prerequisites are met:
  1. The cluster must have a CSI-compatible storage driver installed.
  2. You must create storage classes which use the CSI-compatible provisioner, with the following names:
    • general-purpose-ssd-ext4
    • general-purpose-ssd-xfs
    • premium-low-latency-ssd-ext4
    • premium-low-latency-ssd-xfs
    • general-purpose-ssd-ext4-command
    • general-purpose-ssd-xfs-command
    • premium-low-latency-ssd-ext4-command
    • premium-low-latency-ssd-xfs-command

Planned Features

  • Automatic volume expansion.

Permissions

The permissions below are used to define policies together with one or more of the four principal types:
Permission | Description | Implies
create | Create new volumesets |
delete | Delete existing volumesets |
edit | Modify existing volumesets | view
exec | Execute commands | exec.restoreVolume, exec.createVolumeSnapshot, exec.expandVolume, exec.deleteVolume, exec.deleteVolumeSnapshot, exec.shrinkVolume
exec.createVolumeSnapshot | Create a snapshot of a volume |
exec.deleteVolume | Delete a volume |
exec.deleteVolumeSnapshot | Delete a volume snapshot |
exec.expandVolume | Increase the storage capacity of a volume |
exec.restoreVolume | Restore a volume to a snapshot |
exec.shrinkVolume | Shrink a volume (causes data loss) |
manage | Full access | create, delete, edit, exec, exec.createVolumeSnapshot, exec.deleteVolume, exec.deleteVolumeSnapshot, exec.expandVolume, exec.restoreVolume, exec.shrinkVolume, manage, view
view | Read-only access |

Access Report

Displays the permissions granted to principals for the volume set.

CLI

The CLI documentation for Volume Sets is available in the cpln CLI reference.