Capacity Planning
Validate that target infrastructure can handle the workloads you are migrating into it. Identify over-subscribed clusters before you finalise wave schedules — not after go-live.
Overview
The Capacity Planning module helps you determine whether your target infrastructure has sufficient CPU, memory, and storage to accommodate the workloads you are planning to migrate into it. It combines current utilisation data from the target environment with the planned resource demand from your migration wave schedule to produce a projected utilisation figure for each cluster or environment.
Capacity failures during migration are among the most disruptive — and most avoidable — migration incidents. A target cluster that becomes CPU-saturated during cut-over causes performance degradation across all newly migrated workloads simultaneously. Capacity Planning in Clarity Migrate is designed to surface these risks during the planning phase, when you can still adjust.
The module has three primary screens: the Capacity Planning Dashboard (overview of all target clusters), Cluster Detail (per-cluster breakdown by resource type), and Capacity by Move Group (demand analysis per planned migration wave).
When to Use
- Before finalising move group assignments — check whether the target cluster can absorb the workloads you are planning to put on it before committing to the assignment.
- When planning wave schedules — ensure that waves scheduled close together don't collectively over-commit any single cluster.
- When stakeholders ask "can the target handle this?" — the Projected Utilisation figures and threshold indicators give a clear, data-driven answer.
- After adjusting wave assignments — re-run capacity planning to confirm that your changes have resolved the over-commitment.
- Periodically during planning — as more devices are added to move groups, the demand figures change. Re-check capacity whenever significant additions are made.
Step-by-Step Instructions
Go to Migrations → Capacity Planning. The Capacity Planning Dashboard loads, showing all target clusters with their current utilisation summary. If no clusters appear, your CMDB may not have target cluster records populated — contact your system administrator.
Use the filter panel on the left to select a specific target cluster, data centre, or environment. The dashboard updates to show capacity figures for the selected scope. For a full programme view, leave the filter set to "All Clusters" to see all targets simultaneously.
The Current Utilisation column shows the resource usage of workloads already running on the target cluster — machines that have already been migrated or that were born in the target environment. This is the baseline before any planned migration waves are applied.
The Planned Addition column shows the combined resource demand of all workloads assigned to move groups targeting this cluster. This represents the total new load that your migration plan will add when all planned waves complete.
The Projected Utilisation column shows Current + Planned. This is the estimated utilisation after all planned migrations to this cluster have completed. Look for cells shaded amber (above the warning threshold) or red (above the critical threshold). Default thresholds: CPU warning at 80%, memory warning at 85%, critical at 90% for both.
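The projection and banding logic in this step amounts to per-resource addition plus a threshold check. A minimal sketch — the function names are hypothetical, not part of Clarity Migrate; the CPU and memory thresholds are the documented defaults, while the storage warning value is an assumption for illustration:

```python
# Documented defaults: CPU warning 80%, memory warning 85%, critical 90%.
# The storage warning figure (85) is an assumption, not a documented
# default -- configure your own under Administration -> Settings.
WARNING = {"cpu": 80.0, "memory": 85.0, "storage": 85.0}
CRITICAL = 90.0

def project(current: dict, planned: dict) -> dict:
    """Projected Utilisation = Current Utilisation + Planned Addition."""
    return {res: current[res] + planned[res] for res in current}

def band(resource: str, projected_pct: float) -> str:
    """Classify a projected figure: 'red' (critical), 'amber' (warning), 'green'."""
    if projected_pct > CRITICAL:
        return "red"
    if projected_pct > WARNING[resource]:
        return "amber"
    return "green"
```

For example, a cluster at 62% current CPU taking on a 32% planned addition projects to 94%, which lands in the red band.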
For any cluster where projected utilisation exceeds thresholds, click into the Cluster Detail view and then the Capacity by Move Group tab. Identify which move groups contribute the most demand. Consider: (a) redistributing some move groups to a different target cluster, (b) splitting a large move group across two waves on different clusters, or (c) requesting additional capacity on the target cluster before the migration.
After reassigning move groups or splitting waves, return to the Capacity Planning Dashboard. The figures update to reflect your new assignment. Confirm that projected utilisation on all affected clusters is now within acceptable thresholds before proceeding to finalise the migration wave schedule.
If your organisation's operational standards differ from the defaults, navigate to Administration → Settings → Capacity Thresholds. Update the warning and critical thresholds for CPU, memory, and storage to match your standards. These changes apply globally to all capacity planning views.
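As a mental model, the settings under Administration → Settings → Capacity Thresholds amount to a per-resource table of warning and critical percentages. The values below are hypothetical examples for an organisation with stricter standards than the defaults, not product values:

```python
# Hypothetical threshold configuration -- example values only, chosen by
# an organisation that wants earlier warnings than the product defaults.
capacity_thresholds = {
    "cpu":     {"warning": 70, "critical": 80},
    "memory":  {"warning": 75, "critical": 85},
    "storage": {"warning": 80, "critical": 90},
}
```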
Dashboard Sections
The Capacity Planning module is a read-only analysis view. The following describes each section and metric.
| Section / Metric | Description | Action if Above Threshold |
|---|---|---|
| Current Utilisation | Resource usage of workloads already running on the target cluster. Expressed as a percentage of cluster capacity for CPU, memory, and storage separately. | If already above 60–70%, the cluster has limited headroom. Consider requesting additional cluster resources before planning any migrations to it. |
| Planned Addition | Combined resource demand of all workloads assigned to move groups targeting this cluster. Derived from CMDB resource records for each device in the move groups. | If the planned addition alone is large, consider distributing move groups across multiple target clusters to spread the load. |
| Projected Utilisation | Current Utilisation + Planned Addition. The estimated cluster utilisation after all planned migrations to this cluster complete. | Above 80% CPU or 85% memory = warning (amber). Above 90% = critical (red). Redistribute workloads or add cluster capacity. |
| Cluster Detail | Per-cluster breakdown showing the three metrics above for each resource type (CPU, Memory, Storage) in a drill-down view. | Identify which resource type is the binding constraint — memory is typically the limiting factor in virtualised environments. |
| Capacity by Move Group | Shows the resource demand contributed by each individual move group assigned to a cluster. Sorted by demand to identify the biggest contributors. | Identify the heaviest move groups and consider redistributing them to less-utilised clusters. |
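The Capacity by Move Group breakdown in the last row is, conceptually, a group-by-and-sort over the planned demand of individual devices. A sketch with made-up device records — every name and figure here is hypothetical:

```python
from collections import defaultdict

# Hypothetical device records: (move_group, cpu_demand_pct, memory_demand_pct),
# as they might be derived from CMDB resource records.
devices = [
    ("MG-003 Databases", 6.0, 2.5),
    ("MG-003 Databases", 12.0, 4.0),
    ("MG-001 Web", 5.0, 3.0),
    ("MG-002 App", 9.0, 8.5),
]

# Sum demand per move group.
demand = defaultdict(lambda: {"cpu": 0.0, "memory": 0.0})
for group, cpu, mem in devices:
    demand[group]["cpu"] += cpu
    demand[group]["memory"] += mem

# Sort descending by CPU demand so the biggest contributors surface first.
ranked = sorted(demand.items(), key=lambda kv: kv[1]["cpu"], reverse=True)
```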
Example Workflow
An infrastructure engineer is about to finalise the wave schedule for Wave 2. Before doing so, she opens Migrations → Capacity Planning and selects the target Nutanix cluster NTX-PROD-CLU-02.
The dashboard shows: Current Utilisation = 62% CPU, 71% memory, 55% storage. Planned Addition from Wave 2 = 32% CPU, 18% memory, 22% storage. Projected Utilisation = 94% CPU (above the 90% critical threshold, shown in red), 89% memory (above the 85% warning threshold, shown in amber), 77% storage (within limits).
She clicks into Capacity by Move Group and sees that MG-003 Databases contributes 18 percentage points of the 32% planned CPU addition — by far the largest single contributor. The databases have high CPU reservation values in the CMDB.
She speaks with the architect, who suggests migrating MG-003 Databases to a different target cluster (NTX-PROD-CLU-03), which currently has only 41% CPU utilisation. She reassigns MG-003 in Clarity Migrate.
Back in Capacity Planning, she refreshes the view. Projected Utilisation on NTX-PROD-CLU-02 is now 76% CPU (within threshold). She also checks NTX-PROD-CLU-03: 59% CPU projected — well within limits. Both clusters are now safe for the planned migration.
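The figures in this workflow can be reproduced with the same Current + Planned arithmetic; a quick check using the numbers from the example above:

```python
# NTX-PROD-CLU-02 before reassignment (percentages from the example).
current = {"cpu": 62, "memory": 71, "storage": 55}
wave2   = {"cpu": 32, "memory": 18, "storage": 22}
projected = {r: current[r] + wave2[r] for r in current}
# cpu 94 (critical), memory 89 (warning), storage 77 (within limits)

# MG-003 Databases contributes 18 percentage points of planned CPU demand.
# Reassigning it to NTX-PROD-CLU-03 (41% current CPU) changes both clusters:
clu02_cpu_after = projected["cpu"] - 18   # 94 -> 76, back under the 80% warning
clu03_cpu_after = 41 + 18                 # 41 -> 59, well within limits
```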
Tips
Run capacity planning before finalising any move group assignments. It is far easier to reassign a device to a different move group or target cluster before the schedule is locked than to discover an over-commitment the night before cut-over.
Memory is usually the binding constraint. In virtualised environments, memory reservations are typically tighter than CPU. Pay close attention to the memory column — a cluster can appear healthy on CPU while being critically over-committed on memory.
Account for HA overhead. Most virtualised environments reserve 10–25% of cluster capacity for high-availability failover. Your operational utilisation target should account for this overhead, meaning your effective threshold may be lower than the raw cluster capacity number suggests. Factor this into your threshold configuration in Administration → Settings.
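One simple way to fold HA overhead into your threshold configuration is to scale each raw threshold by the fraction of capacity that remains after the HA reserve. A sketch, assuming a flat percentage reserve (the function name is illustrative, not a product setting):

```python
def effective_threshold(raw_threshold_pct: float, ha_reserve_pct: float) -> float:
    """Scale a raw utilisation threshold down by the HA failover reserve.

    With a 20% HA reserve, only 80% of rated cluster capacity is safely
    usable, so an 80% CPU warning threshold becomes 80 * 0.8 = 64%.
    """
    usable_fraction = 1.0 - ha_reserve_pct / 100.0
    return raw_threshold_pct * usable_fraction
```

Across the 10–25% reserve range mentioned above, an 80% warning threshold translates to an effective 60–72%.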
Common Mistakes
Running capacity planning after all move groups are finalised. If the schedule is locked and stakeholders have been briefed on the dates, discovering a capacity issue at this point creates significant pressure to "just proceed anyway." Run capacity planning early and often — ideally as a gate check before any wave schedule is socialised externally.
Ignoring storage capacity. Teams often focus on CPU and memory and overlook storage. Databases and application datastores can have significant storage footprints. A target cluster with sufficient compute but insufficient storage cannot complete the migration. Always check all three resource dimensions.
Not accounting for HA overhead when assessing cluster capacity. A cluster rated at 100% capacity is not safely usable at 100% utilisation in a production environment. Ensure your thresholds reflect your operational standards including HA overhead, not just raw cluster capacity.