Most modernization programs fail for a boring reason: teams pick a single path, then try to force every workload through it.
That is risky because the portfolio is never uniform. A payroll batch job, a claims API, and a report generator may sit in the same repo, but they behave like different products. And rising cloud spend makes the trade-offs more visible than ever: Gartner forecast worldwide public cloud end-user spending at $723.4B in 2025.
Also, most organizations do not modernize in one big jump. Red Hat’s application modernization report found that 85% of applications are modernized in two or three iterative steps, such as rehosting, replatforming, and refactoring. That lines up with what seasoned teams see in the field: you earn your way into deeper change.
This is where Cloud modernization services should act less like a factory line and more like a triage unit. You do not “modernize the portfolio.” You modernize one workload at a time, based on evidence.
Overview of common cloud modernization paths
There are three mainstream paths. They are not “better or worse.” They are different tools.
Rehost (lift-and-shift)
- Fastest path to move a workload.
- Often used when time is tight, or the app is near end-of-life.
- Biggest risk: carrying old cost and old operational friction into a new runtime.
Replatform (lift-tinker-and-shift)
- Move the app but change the operating surface.
- Examples: managed database, managed cache, container runtime, standard logging.
- Often the best “first upgrade” when code change budget is limited.
Refactor (re-architect)
- Change the design to fit modern runtime patterns.
- Useful when the workload needs frequent releases, better resilience, or long-term cost control.
- Biggest risk: the hidden dependency graph, plus test debt.
A practical note: if you are buying Cloud modernization services, ask for a plan that assumes mixed paths inside the same program. If a provider sells a single method, they are selling their comfort zone, not your outcomes.
Assessing each workload’s constraints and opportunities
Before you argue rehost or refactor, you need to learn what the workload really is.
I use six lenses. You can capture these in a one-page brief per workload.
- Business criticality
- What breaks if this goes down for 30 minutes?
- Who complains first: customers, finance, or a back-office team?
- Change profile
- How often does it change?
- Is it “set and forget” or is it a feature factory?
- Technical surface
- Runtime and language.
- Database type.
- OS dependencies.
- Middleware.
- Scheduled jobs and batch windows.
- Risk profile
- Regulated data.
- Audit needs.
- Known incident history.
- Cost and capacity drivers
- Spiky traffic, steady traffic, or seasonal traffic.
- Storage growth.
- License constraints.
- Team reality
- Who can support it at 2 a.m.?
- Do you have tests, or only hero debugging?
This is also where I bring in a simple “waste meter.” Sonar research estimated that for a codebase around one million lines of code, attributed technical debt can cost about $306,000 per year, tied to thousands of developer hours. You do not need to accept that number as universal. Use it as a forcing function to ask: “Is this workload already taxing us every year?”
That question changes many rehost vs refactor decisions. If the app is already draining engineering time, a fast move that preserves the drain may not be the win it looks like on a migration dashboard.
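To make the waste meter concrete, here is a back-of-envelope sketch that scales Sonar's estimate linearly by codebase size. The linear scaling is an assumption for illustration only; substitute your own rework and incident data when you have it.

```python
# Back-of-envelope "waste meter", scaling Sonar's ~$306k/year estimate for a
# ~1M-LOC codebase linearly by size. Linear scaling is an assumption, not a
# Sonar claim -- use your own numbers where you have them.

SONAR_ANNUAL_COST = 306_000      # USD per year, per Sonar's estimate
SONAR_BASELINE_LOC = 1_000_000   # codebase size behind that estimate

def annual_debt_tax(loc: int) -> float:
    """Rough annual cost of technical debt for a codebase of `loc` lines."""
    return SONAR_ANNUAL_COST * (loc / SONAR_BASELINE_LOC)

print(round(annual_debt_tax(250_000)))  # 76500 -> ~$76.5k/year for 250k LOC
```

Even a rough figure like this turns "the app is painful" into a number you can weigh against the cost of a deeper path.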
This is exactly why Cloud modernization services should include assessment, not just execution.
Using an assessment matrix to choose between rehost, replatform and refactor
A workload assessment matrix is simply a repeatable way to avoid opinion wars. It makes teams show their reasoning.
Here is a version you can use tomorrow. Score each row 1 to 5, then read the decision hints.
Workload assessment matrix (example)
| Dimension | What to look for | Rehost fits when… | Replatform fits when… | Refactor fits when… |
| --- | --- | --- | --- | --- |
| Release frequency | How often you ship | Rare releases, low churn | Moderate churn | Frequent releases, fast learning loop |
| Runtime risk | End-of-life OS, brittle deploy | You need speed first | You can change platform layer safely | You need a new deploy model |
| Dependency complexity | Point-to-point calls, shared DB | Dependencies are limited | Dependencies exist but are known | Dependencies are messy and must be redesigned |
| Performance profile | Latency and throughput needs | Current performance is acceptable | You can improve with managed services | You need new performance patterns |
| Data gravity | Data size, movement cost | Data stays mostly put | Managed data services help | Domain redesign helps data ownership |
| Test maturity | Coverage, CI, automation | Minimal tests | Some tests, stable smoke suite | Strong tests or plan to build them early |
| Cost pressure | Licensing, idle capacity | Short-term move is priority | Targeted cost fixes | Cost structure needs redesign |
How to interpret quickly
- If “test maturity” is 1 or 2, refactor is possible, but the first milestone must be test enablement.
- If “dependency complexity” is 4 or 5, plan time for integration redesign, regardless of the path.
- If “runtime risk” is 5 (end-of-life), prioritize removing the risk first, then deepen modernization later.
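The matrix and interpretation notes above can be captured in a small script, so scores are recorded the same way across workshops. This is a minimal sketch; the dimension names and decision thresholds are illustrative assumptions, not a standard scoring model.

```python
# Minimal sketch of the assessment matrix as code. Dimension names and
# thresholds are illustrative assumptions -- tune them in your own workshops.

DIMENSIONS = [
    "release_frequency", "runtime_risk", "dependency_complexity",
    "performance_profile", "data_gravity", "test_maturity", "cost_pressure",
]

def recommend(scores: dict) -> str:
    """Return a path hint from 1-5 scores per dimension."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing scores: {missing}")
    # Hard rules first, mirroring the interpretation notes above.
    if scores["runtime_risk"] >= 5:
        return "rehost first (remove end-of-life risk), deepen later"
    if scores["release_frequency"] >= 4 and scores["test_maturity"] >= 3:
        return "refactor"
    if scores["release_frequency"] >= 4:
        return "refactor, but first milestone is test enablement"
    if scores["cost_pressure"] >= 3 or scores["performance_profile"] >= 3:
        return "replatform"
    return "rehost"

claims_api = {
    "release_frequency": 4, "runtime_risk": 2, "dependency_complexity": 3,
    "performance_profile": 3, "data_gravity": 2, "test_maturity": 2,
    "cost_pressure": 3,
}
print(recommend(claims_api))  # refactor, but first milestone is test enablement
```

The point is not the code; it is that the rules are written down, so a disagreement becomes "change the threshold" instead of "I have a feeling."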
A second table helps when stakeholders want a crisp recommendation. This one ties your scores to a simple call.
Simple decision guide
| Pattern you see | Likely path | Why |
| --- | --- | --- |
| Stable workload, low churn, deadline-driven | Rehost | You buy time and reduce infra risk first |
| Stable workload but ops pain is high | Replatform | You reduce toil with managed building blocks |
| High churn product, roadmap pressure, reliability targets | Refactor | You need design changes to support frequent delivery |
| Unclear ownership, unknown dependencies | Replatform first, refactor later | You reduce unknowns before deep redesign |
Use this matrix in a workshop, not in a document. The value is in the debate it forces.
This is where Cloud modernization services can shine, if they provide an assessor who can ask uncomfortable questions and still keep the room moving.
Handling dependencies and integration points during modernization
Dependencies are the trap door under most modernization budgets.
Start by naming the dependency types. Teams miss entire classes of coupling.
Common dependency types
- Database coupling (shared schemas, shared stored procedures)
- File drops (SFTP jobs, nightly CSV exports)
- Message brokers and queues
- Identity and access flows
- Reporting tools that read production tables directly
- External vendors with fixed IP allowlists
- Time-based coupling (batch windows, end-of-day close)
Now, handle them with two artifacts that stay current.
1) Dependency map that is brutally simple
- Producer
- Consumer
- Protocol
- Data shape
- Frequency
- Owner
- Failure mode
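The dependency map above can live as structured records in version control, with a CI check that keeps it honest. A minimal sketch, with field names mirroring the list and hypothetical example rows:

```python
# Minimal sketch: the dependency map as structured records you can keep in
# version control and validate in CI. Field names mirror the list above;
# the example row is hypothetical.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Dependency:
    producer: str
    consumer: str
    protocol: str      # e.g. "jdbc", "sftp", "amqp", "https"
    data_shape: str    # e.g. "csv: policy_id,amount", "json: ClaimEvent v2"
    frequency: str     # e.g. "nightly 02:00", "per request"
    owner: str         # a named team, never "TBD"
    failure_mode: str  # what consumers see when this breaks

DEPS = [
    Dependency("billing-batch", "claims-api", "jdbc",
               "shared table BILLING_RUNS", "nightly 02:00",
               "payments-team", "claims shows stale balances"),
]

def unowned(deps):
    """CI gate: every dependency must have a real owner."""
    return [asdict(d) for d in deps if d.owner.lower() in ("", "tbd", "unknown")]

assert unowned(DEPS) == []  # fail the build when an owner goes missing
```

A map that breaks the build when it rots is worth ten diagrams that do not.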
2) Contract tests for integration points
- Even a small set of contract tests pays off more than people expect.
- It makes regressions visible without needing full end-to-end runs.
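A consumer-side contract test can be very small. The sketch below checks only the fields this consumer actually reads, so the producer can evolve everything else freely; the payload shape is hypothetical, and real tooling (Pact, schema validators) can replace it later.

```python
# Minimal consumer-side contract test sketch (no external test framework).
# It checks only the fields this consumer actually reads, so the producer
# can change everything else freely. The payload shape is hypothetical.

REQUIRED_FIELDS = {"claim_id": str, "status": str, "amount": (int, float)}

def check_contract(payload: dict) -> list:
    """Return a list of violations; an empty list means the contract holds."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

# Run against a recorded response from the legacy system AND the modernized
# one; both must pass before any traffic shifts.
legacy_response = {"claim_id": "C-1001", "status": "open", "amount": 125.50}
assert check_contract(legacy_response) == []
```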
A practical pattern that reduces risk is “strangler routing” at the edge:
- Put a stable API gateway pattern in front.
- Route a small slice of traffic to the modernized path.
- Increase gradually, with clear rollback.
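Strangler routing is usually configured in a gateway product, but the core idea fits in a few lines. The sketch below is gateway-agnostic and the backend URLs are hypothetical; the key property is a deterministic split, so each caller consistently sees one side.

```python
# Minimal sketch of strangler routing: send a configurable percentage of
# traffic to the modernized backend, keyed on a stable request attribute so
# each caller gets a consistent experience. Backend URLs are hypothetical.
import hashlib

LEGACY = "https://legacy.internal/claims"
MODERN = "https://modern.internal/claims"

def route(caller_id: str, modern_percent: int) -> str:
    """Deterministic split: the same caller always lands on the same side."""
    digest = hashlib.sha256(caller_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100          # stable 0..99 bucket per caller
    return MODERN if bucket < modern_percent else LEGACY

# Rollback is a config change: drop modern_percent back to 0.
print(route("tenant-42", modern_percent=10))
```

Because the bucket is derived from the caller, raising `modern_percent` from 10 to 25 moves new callers over without flapping the ones already migrated.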
You will also want to decide on observability expectations early. DORA’s research program highlights the value of operational performance metrics, like lead time and reliability measures. You do not need a perfect platform. You do need consistent telemetry across old and new components.
Building a phased roadmap for complex portfolios
Portfolios do not modernize in a straight line. They move in waves.
Here is a phased roadmap that avoids the “big bang” pressure while still showing progress.
| Phase | Goal | What “done” looks like |
| --- | --- | --- |
| Phase 0: Portfolio triage | Pick candidates with evidence | Ranked backlog, clear owners, initial matrix scores |
| Phase 1: Landing zone and guardrails | Make a safe target runtime | Identity, networking, logging, backup, baseline security controls |
| Phase 2: Quick wins | Build trust and reduce risk | 2–5 workloads moved, stable ops, cost and incident baselines |
| Phase 3: Platform improvements | Reduce repeat effort | Reusable pipelines, standard images, golden paths, runbooks |
| Phase 4: Deep modernization | Refactor where it pays | Domain boundaries improved, integration contracts, tests strengthened |
| Phase 5: Retire and simplify | Stop paying for dead things | Decommissioned servers, removed licenses, fewer integration points |
Two rules keep this roadmap honest:
- Do not start Phase 4 for a workload that has no owner.
- Do not modernize dependencies you do not understand. Isolate them first.
If you are engaging Cloud modernization services, ask the provider to show how they sequence these phases. If they only talk about Phase 2, you will pay later.
Tracking business value from modernization investments
Executives do not fund “modernization.” They fund outcomes.
Pick a small set of value signals, then track them from day one. Tie them to a baseline.
Suggested value signals
- Time to release: from approved change to production.
- Change risk: defects or rollbacks per release.
- Operational load: incidents per month and mean time to recover.
- Unit cost: cost per transaction, cost per active user, or cost per batch run.
- License reduction: count of retired commercial components.
- Resilience: documented recovery steps, tested restore outcomes.
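The unit-cost signal above is easiest to track when it is normalized against a baseline, which is how the "1.00 / 0.82" style of reporting works. A minimal sketch with illustrative figures:

```python
# Minimal sketch: normalize unit cost against a baseline so different
# workloads are comparable in one portfolio review. Figures are illustrative.

def unit_cost(monthly_cost: float, monthly_volume: int) -> float:
    """Cost per transaction (or per batch run, per active user, ...)."""
    if monthly_volume <= 0:
        raise ValueError("volume must be positive")
    return monthly_cost / monthly_volume

def indexed(current: float, baseline: float) -> float:
    """Express current unit cost relative to baseline = 1.00."""
    return round(current / baseline, 2)

baseline = unit_cost(monthly_cost=40_000, monthly_volume=2_000_000)
current = unit_cost(monthly_cost=32_800, monthly_volume=2_000_000)
print(indexed(current, baseline))  # 0.82 -> the indexed style used below
```

Indexing also avoids publishing raw cost numbers in reviews where that would start the wrong argument.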
A simple reporting table works well for portfolio reviews:
| Workload | Path | Baseline unit cost | Current unit cost | Release cycle | Incident trend | Next step |
| --- | --- | --- | --- | --- | --- | --- |
| Claims API | Replatform | 1.00 | 0.82 | Weekly to twice-weekly | Down | Add contract tests |
| Billing batch | Rehost | 1.00 | 0.97 | Monthly | Flat | Replatform DB next |
| Customer portal | Refactor | 1.00 | 0.88 | Weekly | Down | Split auth service |
This keeps the conversation grounded. It also makes it easier to defend deeper work when it is justified.
One more reality check: if technical debt is already consuming meaningful engineering time, then “business value” includes reclaimed time. Sonar’s estimate of $306,000 per year for a large codebase is one way to quantify the drag. Use your own numbers if you have them.
Closing thought
The right modernization path is not a religion. It is a decision you can explain.
Use a workload assessment matrix to keep choices consistent. Expect iteration. Red Hat’s data suggests most apps modernize in steps anyway. And keep the program accountable to outcomes, because cloud spend is only going up. If you do that, Cloud modernization services become a practical engine for delivery, not a set of slideware promises.
Please visit my site, Itbetterthisworld, for more details.

