Overview

Migration planning is the process of taking your enriched CMDB data and building an executable plan that your entire team can follow. The plan isn't a spreadsheet or a project schedule — it's a structured set of objects inside Clarity Migrate that drive every action from the first stakeholder notification through to post-migration monitoring. A complete migration plan consists of four building blocks: Migration Events (the container for the overall migration), Move Groups (waves of assets that move together), Runbooks (the task lists that govern how each wave is executed), and the T-Minus Runbook (the countdown timeline anchored to your go-live date).
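The relationships between these four building blocks can be sketched as a simple data model. This is an illustrative sketch only — the class and field names are assumptions for explanation, not Clarity Migrate's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Task:
    name: str
    owner: str
    status: str = "Not Started"  # e.g. Not Started / Complete / Blocked

@dataclass
class Runbook:
    name: str
    tasks: list[Task] = field(default_factory=list)

@dataclass
class MoveGroup:
    name: str
    go_live: date                  # each wave has its own go-live date
    owner: str
    runbooks: list[Runbook] = field(default_factory=list)

@dataclass
class MigrationEvent:
    name: str
    go_live: date                  # anchors the T-Minus countdown
    owner: str
    move_groups: list[MoveGroup] = field(default_factory=list)
    t_minus_tasks: list[Task] = field(default_factory=list)

# One event contains many move groups; each move group carries its runbooks.
event = MigrationEvent("DC1 to Azure Migration 2025", date(2025, 3, 15), "Sarah")
event.move_groups.append(MoveGroup("Wave 1 — Dev Servers", date(2025, 2, 1), "Wave Lead"))
```

The key structural point is containment: runbooks belong to move groups, move groups belong to the event, and the T-Minus timeline hangs off the event itself.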

The discipline of migration planning exists to convert chaos into controlled execution. Without a formal plan, cut-over nights become improvised marathons where everyone is making decisions under pressure and nobody is sure what's been done and what hasn't. With a plan in Clarity Migrate, every task has an owner, every dependency is visible, and the Command Centre gives the PM a real-time view of progress across the entire event. Blockers surface early rather than at 2am when a critical workload is halfway through migration.

This workflow has three phases. In the Scope phase you decide what to migrate and group assets into waves. In the Design phase you create Migration Events, Move Groups, Runbooks, and the T-Minus timeline. In the Validate phase you check capacity, review the RAID log for risks, and confirm the Command Centre shows everything correctly linked before go-live.

When to Use This Workflow

  • After CMDB completeness reaches 80%+ — planning before this threshold guarantees constant rework as missing data surfaces. Complete the Infrastructure Discovery workflow first.
  • When planning a new migration event — data centre consolidation, cloud migration, hardware refresh, or any other initiative that requires moving assets from one environment to another.
  • When re-planning after scope changes — if the business adds or removes assets from scope mid-project, this workflow helps you restructure move groups and update runbooks accordingly.
  • For each new wave in a multi-wave migration — while the Migration Event container is created once, Move Groups and Runbooks are created per wave. Return to this workflow at the start of each new wave.

Key Features

Migration Events Manager
Create and manage migration events with go-live dates, owners, and full lifecycle tracking from planning through post-migration review.
Move Groups & Wave Design
Organise assets into logical migration waves with individual go-live dates, owners, and runbooks per group.
Asset Assignment (Bulk)
Assign hundreds of assets to move groups in seconds using Bulk Edit from the Device or Application list view.
Capacity Planning Integration
Check target cluster resource headroom (CPU, memory, storage) before finalising move group assignments.
Runbook Builder (Task Library)
Build detailed runbooks by importing pre-built task templates from the Task Library and adding custom tasks specific to each move group.
T-Minus Countdown Runbook
A countdown timeline anchored to your go-live date, automatically calculating due dates for T-30, T-14, T-7, T-1, T-0, and T+1 tasks.
RAID Log & Risk Register Integration
Log risks, assumptions, issues, and dependencies against the migration event to maintain a live risk register throughout the project.
Command Centre Visibility
Real-time dashboard showing task completion status across all move groups and runbooks during live migration execution.

Step-by-Step Workflow

1
Confirm CMDB completeness
Open Inventory → Completeness. Your target before beginning migration planning is 80%+ overall completeness, with Owner and Environment populated at 90%+ each. If you're below this threshold, stop and complete the enrichment steps from the Infrastructure Discovery workflow first. Beginning migration planning with a low-completeness CMDB means you'll constantly be revising your wave design as new information about assets comes to light — it's far more efficient to invest the extra days in enrichment than to restart planning repeatedly.
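The thresholds in this step are simple percentages, which the Completeness view computes for you. As a rough sketch of the arithmetic (record and field names here are illustrative, not the product's schema):

```python
def field_completeness(records, field_name):
    """Percentage of records where the given field is populated."""
    if not records:
        return 0.0
    populated = sum(1 for r in records if r.get(field_name))
    return 100.0 * populated / len(records)

def planning_ready(records, fields,
                   overall_min=80.0,
                   critical=("Owner", "Environment"), critical_min=90.0):
    """True when overall completeness is 80%+ and the critical
    fields (Owner, Environment) are each 90%+ populated."""
    overall = sum(field_completeness(records, f) for f in fields) / len(fields)
    return overall >= overall_min and all(
        field_completeness(records, f) >= critical_min for f in critical
    )

cmdb = [
    {"Owner": "App Team", "Environment": "Prod", "Location": "DC1"},
    {"Owner": "DBA Team", "Environment": "Dev",  "Location": ""},
]
print(planning_ready(cmdb, ["Owner", "Environment", "Location"]))
```

Here Owner and Environment are 100% populated and Location is 50%, giving roughly 83% overall — just over the line, so planning can begin.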
2
Define migration scope
Navigate to Inventory → Device List and Inventory → Application List. Use the filter controls to identify assets that should be in scope for this migration event — typically filtered by Environment, Location, Business Unit, or a combination. For each in-scope asset, set the Scope field to In Scope using Bulk Edit (select all filtered records → Bulk Edit → Scope = In Scope). This lets you clearly separate in-scope assets from out-of-scope ones in all subsequent views and reports. Consider using tags (e.g. "In Scope Wave 1", "In Scope Wave 2") if you want more granular pre-assignment before creating Move Groups.
3
Create the Migration Event
Navigate to Migrations → Events → Add Event. Complete all required fields: Event Name (descriptive and unique — e.g. "DC1 to Azure Migration 2025"), Description (brief summary of scope and objectives), Start Date (when planning activities begin), and Go-Live Date (the date of the final cut-over — this is critical because it anchors all T-Minus task due dates). Assign an Owner (the PM responsible for the overall event). Save the event. You will return to this record to add Move Groups, Runbooks, and the T-Minus Runbook in subsequent steps.
4
Design migration waves
Before creating Move Groups in the application, spend time designing your wave structure on paper or in a spreadsheet. Consider four factors: Dependencies — assets in the same dependency chain must be in the same move group (an application server cannot move without its database server). Environment — migrate non-production environments before production; this gives teams practice runs and builds confidence. Risk — migrate least-critical assets first; early waves are learning experiences. Volume — don't make move groups too large; 50–100 devices is a manageable size that allows rollback if something goes wrong. Map out your waves with asset counts, go-live dates, and owners before proceeding.
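The first factor — keeping dependency chains together — amounts to finding connected components in the dependency graph: any two assets joined by a chain of dependencies must land in the same move group. A minimal sketch (asset names are hypothetical):

```python
from collections import defaultdict

def dependency_groups(dependencies):
    """Group assets that must migrate together.
    dependencies: (asset_a, asset_b) pairs meaning 'a depends on b'.
    Returns the connected components of the undirected dependency graph."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first walk to collect everything reachable from this asset.
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            stack.extend(graph[n] - seen)
        groups.append(component)
    return groups

deps = [("app-server-1", "db-server-1"),
        ("app-server-2", "db-server-1"),
        ("web-3", "app-3")]
groups = dependency_groups(deps)
# Both app servers share db-server-1, so all three form one group;
# web-3 and app-3 form a second, independent group.
```

Each resulting group is an indivisible unit: it can be merged into a larger wave, but never split across waves.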
5
Create Move Groups
Navigate to Migrations → Move Groups → Add Move Group for each wave in your design. For each Move Group, enter: Name (descriptive — e.g. "Wave 1 — Dev Servers", "Wave 2 — Non-Critical Production"), Migration Event (select the event you created in Step 3 from the dropdown), Go-Live Date (the specific date this wave migrates), Owner (the technical lead responsible for this wave), and Description (what's in this wave and any specific constraints). Create all move groups before assigning assets — it's easier to bulk-assign all Wave 1 assets at once than to assign them piecemeal.
6
Assign assets to move groups
In Inventory → Device List, filter to the assets for Wave 1 (use Environment, tag, Business Unit, or any combination of filters that isolates your Wave 1 assets). Select all filtered records using the header checkbox, then click Bulk Edit → Move Group and select Wave 1 from the dropdown. Confirm and save. Repeat this process for each wave. After assigning all waves, navigate to each Move Group and review the asset list to confirm the assignment looks correct. Check that no assets are missing a move group assignment if they should be in scope.
7
Check Capacity Planning
Navigate to Migrations → Capacity Planning. For each target cluster or resource pool, review the projected post-migration utilisation for each wave: CPU should retain at least 20% headroom, Memory at least 20% headroom, and Storage at least 15% headroom plus the space needed for snapshots during migration. If any cluster is projected to exceed safe thresholds, split the corresponding move group into smaller waves, or move some assets to a different target cluster. Do not skip this step — resource exhaustion on the target cluster is one of the most common causes of migration failures.
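The headroom rules above are straightforward arithmetic. A sketch of the check, assuming hypothetical cluster figures (the dictionary keys are illustrative, not the product's schema):

```python
def headroom_ok(capacity, projected_use, min_headroom_pct):
    """True if projected post-migration use leaves at least min_headroom_pct free."""
    headroom_pct = 100.0 * (capacity - projected_use) / capacity
    return headroom_pct >= min_headroom_pct

def cluster_safe(cluster, snapshot_gb=0):
    """Apply this step's thresholds: 20% CPU, 20% memory,
    15% storage plus snapshot space during migration."""
    return (
        headroom_ok(cluster["cpu_ghz"], cluster["cpu_used_ghz"], 20)
        and headroom_ok(cluster["mem_gb"], cluster["mem_used_gb"], 20)
        and headroom_ok(cluster["storage_gb"],
                        cluster["storage_used_gb"] + snapshot_gb, 15)
    )

target = {"cpu_ghz": 400, "cpu_used_ghz": 300,
          "mem_gb": 2048, "mem_used_gb": 1536,
          "storage_gb": 100_000, "storage_used_gb": 80_000}
print(cluster_safe(target, snapshot_gb=5_000))
```

With these figures CPU and memory each sit at 25% headroom and storage at exactly 15% once snapshot space is counted — a pass, but with no margin: a slightly larger snapshot estimate would force the wave to be split.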
8
Build Runbooks
For each Move Group, navigate to the Move Group detail page → Runbooks tab → Add Runbook. Start by importing tasks from the Task Library — pre-built task templates cover common migration activities such as "Pre-Migration Backup", "Snapshot VMs", "Update DNS", "Validate Application Functionality", and "Notify Stakeholders". After importing standard tasks, add any custom tasks specific to this move group (e.g. application-specific validation scripts, custom notifications, third-party tool steps). Assign each task to an owner and set task dependencies where the order matters. Review the final runbook with the technical team who will execute it — they often identify missing steps or incorrect sequencing.
9
Create the T-Minus Runbook
Open the Migration Event detail → T-Minus tab and click Import from Template. The T-Minus Runbook is the countdown timeline that governs activities in the weeks and days leading up to go-live. Key milestones include: T-30 (notify stakeholders, confirm scope finalised, begin change freeze planning), T-14 (change freeze begins, final runbook reviews, pre-migration testing complete), T-7 (final capacity checks, confirm all task owners briefed), T-1 (pre-migration briefing, final backup confirmation, comms to end users), T-0 (go-live execution), and T+1 (post-migration monitoring, issue triage, stakeholder confirmation). After importing the template, verify that every task has an assigned owner and that due dates are correctly calculated from your go-live date. T-30 tasks in particular tend to arrive faster than teams expect — assign them immediately.
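The due-date calculation the T-Minus Runbook performs is simply an offset in days from the go-live anchor. A sketch of the equivalent arithmetic:

```python
from datetime import date, timedelta

# Days relative to go-live, matching the milestones above.
T_MINUS_OFFSETS = {"T-30": -30, "T-14": -14, "T-7": -7,
                   "T-1": -1, "T-0": 0, "T+1": 1}

def t_minus_due_dates(go_live):
    """Calculate each milestone's due date from the go-live anchor."""
    return {label: go_live + timedelta(days=offset)
            for label, offset in T_MINUS_OFFSETS.items()}

for label, due in t_minus_due_dates(date(2025, 3, 15)).items():
    print(f"{label}: {due.isoformat()}")
```

This is also why setting (and promptly updating) the Go-Live date matters so much: every milestone date is derived from that single anchor, so moving it shifts the entire countdown.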
10
Log initial risks to the RAID Log
Open the Risk Register and RAID Log for the Migration Event. At the start of planning, log everything your team already knows might go wrong or might not be true. Common early risks include: network bandwidth limitations during the migration window, storage array performance degradation under heavy vMotion load, application teams being unavailable during cut-over, and database replication lag exceeding acceptable thresholds. For each risk, set a Probability, Impact, Owner, and Mitigation Plan. Log assumptions (e.g. "Target cluster will have 30% headroom") as separate RAID items — assumptions that turn out to be false are a leading cause of migration delays.
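Probability and Impact together give each risk a score the PM can triage by. A common approach is a probability × impact matrix; the 1–3 scale and field names below are assumptions for illustration, not necessarily how Clarity Migrate scores risks:

```python
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(probability, impact):
    """Probability x impact on a 1-3 scale (assumed scale, for illustration)."""
    return LEVELS[probability] * LEVELS[impact]

raid_log = [
    {"risk": "Network bandwidth limits during migration window",
     "probability": "Medium", "impact": "High",
     "owner": "Network Lead",
     "mitigation": "Schedule bulk copies outside business hours"},
    {"risk": "Database replication lag exceeds acceptable threshold",
     "probability": "Low", "impact": "High",
     "owner": "DBA Lead",
     "mitigation": "Pre-seed replicas and monitor lag from T-1"},
]

# Triage highest-score first so the PM reviews the worst risks early.
for item in sorted(raid_log,
                   key=lambda r: risk_score(r["probability"], r["impact"]),
                   reverse=True):
    print(risk_score(item["probability"], item["impact"]), "-", item["risk"])
```

Whatever scale your organisation uses, the point is the same: every RAID item carries enough structure (probability, impact, owner, mitigation) to be sorted, assigned, and reviewed rather than sitting as free text.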
11
Review the Command Centre
Navigate to Migrations → Command Centre. This is the real-time operations view that your team will use during live migration execution. Before go-live, use it to verify that your planning is correctly structured: all Move Groups are linked to the Migration Event, all Runbooks are linked to their Move Groups, T-Minus tasks are showing with correct due dates, and all task owners appear correctly. If anything appears missing or incorrectly linked, navigate back to the relevant record to fix it. The Command Centre is only useful during execution if the data was set up correctly during planning — this is your pre-flight checklist.
12
Execute the migration
On go-live day, the team uses the Runbook view for each Move Group as their primary task tracking tool. As each task is completed, the assigned owner marks it as Complete. Tasks that hit problems are marked as Blocked, with a note explaining the blocker — this immediately surfaces on the Command Centre for the PM. The PM monitors the Command Centre view throughout the migration window and escalates blocked tasks proactively, without waiting to be told about them. At the end of the migration window, the T+1 tasks in the T-Minus Runbook guide post-migration monitoring and stakeholder sign-off. Once all T+1 tasks are complete and stakeholders have confirmed success, close the Migration Event.

Real-World Example

Sarah's DC-to-Azure Migration — Acme Corp

Sarah is the PM for a major datacentre-to-Azure migration. The CMDB contains 850 in-scope devices across development, non-critical production, critical production, and DR tiers. Completeness is 84% — Owner and Environment are fully populated. Sarah creates the Migration Event "DC1 to Azure Migration" with a Go-Live date of March 15, then imports the T-Minus Runbook template. It immediately calculates that T-30 stakeholder notification tasks are due in 2 days, so she assigns those tasks to her team before the date slips.

Sarah designs four waves based on environment and criticality. Wave 1 — Dev/Test servers (200 devices, go-live February 1). Wave 2 — Non-critical Production (300 devices, go-live February 22). Wave 3 — Critical Production (250 devices, go-live March 8). Wave 4 — DR Systems (100 devices, go-live March 15 — the final go-live date for the event). She creates all four Move Groups in Clarity Migrate, then uses Bulk Edit in the Device List to assign all 850 assets. Capacity Planning confirms all target Azure resource groups have 30%+ headroom post-migration for each wave.

Sarah builds runbooks for each wave using the Task Library, adding application-specific validation tasks for the 12 Tier 1 applications in Wave 3. Wave 1 executes on February 1 without incident — a useful rehearsal that reveals two tasks that need additional detail in the runbook. Sarah updates the Wave 2 and 3 runbooks accordingly before those waves execute. On March 15, Wave 4 completes by 4am, T+1 monitoring tasks are completed by 9am, and stakeholder sign-off is obtained by noon. The migration event is closed the same day.

Tips & Best Practices

Don't assign assets to move groups until capacity planning confirms headroom. Wave design and asset assignment feel productive, but they're wasted work if Capacity Planning forces you to split waves afterwards. Run a quick capacity check for each target environment before finalising your wave assignments — it takes 10 minutes and could save hours of rework.
Build the T-Minus Runbook as early as possible. T-30 tasks (stakeholder notifications, scope freeze, change advisory board bookings) arrive before most PMs expect. Build the T-Minus Runbook on the same day you create the Migration Event and immediately assign the T-30 task owners. Calendar the key milestone dates for your own awareness as soon as the go-live date is set.
Run a dry run with a small non-critical move group before tackling critical workloads. If your migration includes a development or test environment, treat Wave 1 as a formal dress rehearsal. Execute it using exactly the same process you'll use for production — the same runbook template, the same Command Centre monitoring, the same post-migration sign-off process. Lessons learned in Wave 1 are extremely valuable and arrive while the stakes are low.
Keep move groups to 50–100 assets for manageable rollback. A move group of 400 assets that partially fails mid-migration creates an enormous rollback challenge. Smaller move groups mean a smaller blast radius for any issues, faster rollback decisions, and shorter cut-over windows that are easier to schedule and staff. It's better to have more waves than to have each wave be too large to safely manage.

Common Mistakes to Avoid

Skipping capacity planning entirely. Resource exhaustion on the target cluster during a live migration is catastrophic — VMs fail to power on, storage runs out mid-copy, or the cluster becomes unresponsive under load. Always verify CPU, memory, and storage headroom against your projected post-migration utilisation for every target environment before a single asset moves.
Having no runbook for a move group. "We know what we're doing" is not a runbook. Without a formal task list in Clarity Migrate, there's no shared visibility of what's been done, no way for the PM to track progress, and no audit trail if something goes wrong and you need to understand the sequence of events. Every move group needs a runbook — even a simple one.
Forgetting to set the Go-Live date on the Migration Event. The Go-Live date is the anchor for the entire T-Minus countdown. Without it, T-Minus tasks have no due dates, the countdown doesn't function, and the Command Centre can't show timeline progress. Set the Go-Live date when you create the event, and update it immediately if the date changes — every task due date recalculates automatically.
Putting all assets into one giant move group. A single move group with 500+ assets means if anything goes wrong during migration, your only options are "push through" or "roll back 500 assets". Both are high-risk. Phased wave design exists precisely to limit this scenario. Divide and conquer — smaller waves mean safer, more controlled migrations with genuine rollback options at each stage.