MKhub / SQL Server migration guide
SQL Server migrations are predictable until they are not. Usually, it is a 10-year-old linked server or a missing Agent job that turns a four-hour window into an all-nighter.
This guide is for estates with scheduled jobs, cross-server dependencies, vendor quirks, and cutover steps that look shorter on slides than they do in production. If version support is part of the trigger, keep the SQL Server update guide and the live updates tracker open with it.
Related
Bring in SQL Server consulting when the cutover plan still depends on tribal knowledge or the rollback path feels too optimistic. Use the SQL Server backup guide when copy and validation strategy need work, and keep the SQL Server recovery guide open when outage timing and restore order are part of the same move.
Critical path
Forget the pretty project plan. Most migration work still comes down to the same four stages: inventory, dry run, the real window, and proof that the new box is actually live.
01 / The inventory
Do not stop at the MDF files. Check the jobs, logins, linked servers, credentials, and connection strings before you say the environment is understood.
Reality check
If this instance moves, what else breaks?
02 / The dry run
Restore it on the new hardware. Time it. If the restore takes six hours and the window is four, you need a different plan.
Reality check
Does the dry run fit the real window?
03 / The real window
Follow the checklist. No improvisation. If validation fails halfway through, pull the plug and roll back instead of arguing at 2 AM.
Reality check
When do we stop and roll back?
04 / Proof it is live
Make sure the applications are on the new instance, the jobs are running, the backups are writing, and monitoring can see the new hardware.
Reality check
What has to be true before we call it done?
1 / Why move
Greenfield moves are easy to describe. Real migrations are not. They involve agent jobs nobody has looked at for years, linked servers that still matter, login and permission history, application assumptions, reporting dependencies, and batch work that only breaks after business hours.
That is why planning has to start with the environment you really have, not the target box on the slide. If the source has age, drift, or unknown dependencies, that baggage is part of the move whether anyone likes it or not.
| Why are we moving? | What it really means |
|---|---|
| End of support or upgrade pressure | You are balancing version risk with production change risk. |
| Platform move or consolidation | You need deeper inventory and dependency mapping before design. |
| Performance or stability problems | You may be mixing migration goals with cleanup work. |
| Data-center or hosting change | Network paths, security, jobs, and integrations matter as much as the database copy. |
2 / Scope
Teams often jump straight to backup and restore, log shipping, replication, or some external migration tool. Those are method choices. The earlier question is simpler: what exactly is moving, what cannot break, and what level of downtime is acceptable?
Skip that step and teams usually pick the wrong method for the wrong problem. Then the cutover plan turns into a pile of tooling choices instead of a move the business can survive.
Start here
| If the project is driven by | Start by clarifying |
|---|---|
| Support deadlines | Version target, compatibility, and downtime tolerance. |
| Infrastructure change | Dependency map, networking, and security flows. |
| Performance complaints | Whether migration is being used to hide unresolved workload issues. |
| Cost pressure | What can be simplified without creating new operational risk. |
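Before arguing about methods, it helps to know how much data actually has to move. A minimal sketch, run on the source instance, that sums the on-disk footprint per database so copy and restore time can be sanity-checked against the stated downtime tolerance:

```sql
-- Sketch: total data and log footprint per database on the source instance.
-- sys.master_files reports size in 8 KB pages; convert to MB.
-- Read-only query; compare the totals against realistic copy throughput.
SELECT
    d.name                                           AS database_name,
    d.recovery_model_desc,
    CAST(SUM(mf.size) * 8 / 1024.0 AS decimal(12,1)) AS size_mb
FROM sys.databases AS d
JOIN sys.master_files AS mf
    ON mf.database_id = d.database_id
GROUP BY d.name, d.recovery_model_desc
ORDER BY size_mb DESC;
```

If the largest databases alone exceed what the network or restore hardware can move inside the window, that is a method decision made early instead of at 2 AM.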
3 / Discovery
The obvious databases are not usually the main surprise. The surprises are jobs, operators, linked servers, certificates, maintenance logic, application connection strings, SSIS or report dependencies, and security objects that nobody listed because they seemed "outside the migration."
A decent inventory is what turns the project from hopeful into controlled. It also tells you whether the migration is one move or three separate pieces of work hiding under one label.
| Area | What to check |
|---|---|
| Data layer | Databases, recovery model, size, growth, and special features in use. |
| Instance layer | Jobs, alerts, linked servers, credentials, and server-level configuration. |
| Access layer | Logins, service accounts, application connection paths, and network assumptions. |
| Operational layer | Backups, monitoring, maintenance, and restore expectations before and after cutover. |
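The instance-layer and access-layer rows above are the ones most often skipped, and they are all queryable. A sketch of the discovery queries, all read-only, run on the source instance:

```sql
-- Agent jobs, including disabled ones that may still matter.
SELECT j.name, j.enabled, js.last_run_date, js.last_run_outcome
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobservers AS js
    ON js.job_id = j.job_id;

-- Linked servers and where they actually point.
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;

-- Logins and Windows principals that must exist on the target.
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')   -- SQL logins, Windows users, Windows groups
  AND name NOT LIKE '##%';      -- skip internal certificate-based principals
```

None of this finds application connection strings or SSIS packages; those still need the application teams in the room.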
4 / Risk review
Version changes affect more than syntax. Compatibility level behavior, deprecated features, query plans, drivers, agent steps, and external tooling can all shift enough to matter. The clean-looking target environment does not reduce that risk by itself.
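Two quick read-only checks make the starting point concrete. Databases restored over the years often sit several compatibility levels behind the engine, and that gap is part of the upgrade risk:

```sql
-- Current engine version and patch level on the source.
SELECT SERVERPROPERTY('ProductVersion') AS engine_version,
       SERVERPROPERTY('ProductLevel')   AS patch_level,
       SERVERPROPERTY('Edition')        AS edition;

-- Per-database compatibility level; old levels mean old optimizer behavior.
SELECT name, compatibility_level, state_desc
FROM sys.databases
ORDER BY compatibility_level, name;
```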
This is usually the point where the plan stops being one plan. You either upgrade in place, build a new box, move it in phases, or split the work because the estate is messier than the slide deck said.
Compatibility checks
Reality check
These are the real options people end up choosing between. The main difference is how much downtime, rollback pain, and failure scope you are willing to carry.
Decision point
Do you just need the new version on the same box?
Option
In-place upgrade
Fast, but high risk. If it goes bad, there is no easy way back. Only worth it when the estate is small and the rollback story is still acceptable.
Decision point
Do you want the old server there as a safety net?
Option
Side-by-side new build
Usually the safest way. Build the new environment, sync the data, and flip the switch when the checks are green. The old box stays there if you need to back out.
Decision point
Is the estate too big to move in one shot?
Option
Phased move
Move one application, one database group, or one dependency lane at a time so the scope of failure stays small and the rollback story stays believable.
Decision point
Are you also changing host, cloud, or network layout?
Option
Platform shift
At that point the data copy is the easy part. Networking, latency, security flow, and connection switching are usually what bite first.
5 / Test run
The dry run tells you how long the job actually takes and what breaks on the way. The rollback plan tells you exactly when to stop and how to get back to the old state without making the outage worse.
Teams that skip this usually rely on confidence instead of evidence. That works until the first hidden dependency or validation failure shows up mid-cutover.
| Document | What it should answer |
|---|---|
| Rehearsal plan | How long each step takes and what fails in a realistic dry run. |
| Cutover checklist | Who does what, in what order, with what sign-off points. |
| Rollback checklist | What forces rollback and how the old state is safely restored. |
| Validation sheet | What must be true before the migration is called complete. |
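The rehearsal plan row above is where the timing evidence comes from. A sketch of a timed rehearsal restore on the new hardware; the database name, paths, and logical file names (`MigDemo_data`, `MigDemo_log`) are placeholders, and the real logical names come from `RESTORE FILELISTONLY`:

```sql
-- Step 1: read the logical file names out of the backup.
RESTORE FILELISTONLY
FROM DISK = N'\\backupshare\MigDemo_full.bak';

-- Step 2: timed restore onto the target layout.
RESTORE DATABASE MigDemo
FROM DISK = N'\\backupshare\MigDemo_full.bak'
WITH MOVE N'MigDemo_data' TO N'E:\Data\MigDemo.mdf',
     MOVE N'MigDemo_log'  TO N'F:\Log\MigDemo.ldf',
     NORECOVERY,   -- stay in restoring state so log backups can follow
     STATS = 5;    -- progress output every 5%, useful for timing the window
```

Write the wall-clock duration into the rehearsal plan. If the restore takes six hours and the window is four, the method changes now, not on the night.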
6 / Production window
The real cutover question is not whether you can move the data. It is whether the team knows the sequence, the freeze point, the validation checks, and the exact point where rollback becomes the only sane option.
No improvisation. No invented steps. If the window depends on people making up the plan live, the plan is not ready.
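For a log-backup based cutover, the freeze point is a concrete command, not a meeting. A sketch of the freeze-and-flip step; the database and path names are placeholders:

```sql
-- On the source: back up the tail of the log and leave the database
-- in RESTORING state, so nothing can write to the old copy after this point.
BACKUP LOG MigDemo
TO DISK = N'\\backupshare\MigDemo_tail.trn'
WITH NORECOVERY;

-- On the target: apply the tail and bring the database online.
RESTORE LOG MigDemo
FROM DISK = N'\\backupshare\MigDemo_tail.trn'
WITH RECOVERY;
```

Everything before the `BACKUP LOG ... WITH NORECOVERY` is reversible; everything after it is the new instance. That line belongs in the cutover checklist as the explicit point of no return.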
Common cutover traps
| Trap | What it causes |
|---|---|
| Assuming application switching is trivial | Unexpected downtime after the database move itself succeeds. |
| No agreed rollback point | Longer outages while the team debates what to do. |
| Validation too vague | False confidence and late discovery of broken flows. |
| Too many manual steps discovered on the night | Operator error exactly when tolerance is lowest. |
7 / Sign-off
After the switch, nobody cares that the restore finished. They care whether the applications connect, the jobs run, the monitoring sees the new host, the backups write, and the workload behaves.
This is where migrations often expose weak ownership. If nobody owns the validation list up front, important checks slip into the hours after the change, when the business already assumes the move is finished.
| Validation area | What to confirm |
|---|---|
| Application behavior | Connection paths, critical transactions, and user-facing workflows. |
| Operational controls | Jobs, alerts, monitoring, backups, and maintenance tasks. |
| Security | Logins, permissions, service access, and network boundaries. |
| Performance | Expected workload behavior, query stability, and resource pressure. |
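Part of the security and operational rows can be verified with queries instead of optimism. A sketch of two post-cutover spot checks on the target instance:

```sql
-- Orphaned users: database users whose SID has no matching login on the
-- new server. Run inside each migrated database.
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON sp.sid = dp.sid
WHERE dp.type = 'S'            -- SQL users
  AND dp.principal_id > 4     -- skip dbo, guest, INFORMATION_SCHEMA, sys
  AND sp.sid IS NULL;

-- Proof the new backup chain is writing: latest backup per database.
SELECT database_name, MAX(backup_finish_date) AS last_backup
FROM msdb.dbo.backupset
GROUP BY database_name
ORDER BY last_backup;
```

An empty first result and fresh timestamps in the second are evidence; a green status icon in a dashboard is not.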
8 / What goes wrong
| Mistake | What it leads to |
|---|---|
| Treating migration as a pure tooling project | Weak handling of dependencies, cutover, and rollback. |
| Inventorying only the main databases | Missed jobs, security objects, or integration failures. |
| Skipping rehearsal | Timing surprises and procedural gaps during the real window. |
| Using vague validation | Late discovery of breakage after the move is declared done. |
| Assuming rollback is obvious | Slow, improvised recovery if the target state is not acceptable. |
9 / Outside review
When a team has been looking at the same migration plan for months, they stop seeing the holes. I am not there to replace the team. I am there to read the plan cold before it hits production.
That usually means finding the gaps in the rollback logic, the missed dependencies in the inventory, and the weak parts of the validation checklist before the window starts.
| Good time to bring help in | Reason |
|---|---|
| Before the plan is locked | There is still time to change the path instead of defending a bad one. |
| After the first dry run | The mistakes are finally visible. |
| Before a high-risk cutover | That is when rollback and validation logic matter most. |
| After a bad migration | The first job is to stabilize the environment and work out what actually failed. |
Conclusion
The hard part is rarely copying data. It is knowing what matters before the production window, rehearsing the real move, and validating the new state without self-deception.
If the migration is tied to version change, go next to the SQL Server update guide or the live updates tracker. If restore confidence and incident timing are part of the same work, continue to the SQL Server recovery guide. If you want the other SQL reference pages, go back to the hub.
Next step
If the cutover plan needs one more hard review before real users and real timing are involved, use SQL Server consulting.
Next useful reads: the SQL Server backup guide for restore confidence, the SQL Server recovery guide for incident readiness, and the SQL Server update guide for version and support context.