Modernization Readiness Assessment
A self-administered 18-question assessment that surfaces the operational, contractual, and data conditions a benefits program must meet before a modernization engagement can succeed.
Most failed modernizations were doomed before the first sprint. The cause was rarely the technology. The cause was that the program could not produce a reproducible data extract, could not name a single accountable decision-maker, or could not get the rule book out of a 1998 mainframe.
This is the assessment we run with our own clients before signing a statement of work. We are publishing it so you can run it yourself — and either fix what you find, or use it to interrogate any vendor (us included) before contracting.
Score honestly. A "No" on any question is a finding, not a failure.
Section A — Authority and accountability
1. Is there a single named executive accountable for the modernized system's go-live? (Not a steering committee.)
2. Has that executive's appointment outlasted at least one administration change?
3. Does the accountable executive hold written sign-off authority for production deploys, distinct from the CIO's IT-change approval?
4. Is the program office, not the IT office, accountable for benefit-determination correctness?
Section B — Data condition
5. Can the program produce a reproducible, point-in-time extract of the last 36 months of decisioning data, with claimant identifiers consistently joinable across systems? (A reproducibility check is sketched after this list.)
6. Are policy versions traceable? That is, for a given decision date, can you retrieve the policy text that bound the decision?
7. Does the data dictionary distinguish system-state fields from fact-of-claim fields? (If the answer is "what?", score "No.")
8. Is there a documented appeals-and-overturn dataset that can be used to evaluate model decisions retrospectively?
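Question 5 can be answered mechanically rather than by assertion: run the extract twice against the same snapshot and compare fingerprints, then measure how many claimant identifiers actually join across systems. A minimal Python sketch; the table and column names (decisions, eligibility, claimant_id, decided_at) are placeholders for your own schema, not a recommendation:

```python
import hashlib
import sqlite3

# Placeholder schema: adjust table and column names to your own systems.
EXTRACT_SQL = """
    SELECT claimant_id, case_id, decided_at, outcome
    FROM decisions
    WHERE decided_at >= DATE('now', '-36 months')
    ORDER BY case_id, decided_at  -- a deterministic order is what makes the hash meaningful
"""

def extract_fingerprint(conn: sqlite3.Connection) -> str:
    """Hash every row of the extract. Two runs against the same
    point-in-time snapshot should produce identical fingerprints."""
    digest = hashlib.sha256()
    for row in conn.execute(EXTRACT_SQL):
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

def join_coverage(conn: sqlite3.Connection) -> float:
    """Share of distinct claimant identifiers in the decisioning data
    that appear at least once in the eligibility system."""
    matched = conn.execute("""
        SELECT COUNT(DISTINCT d.claimant_id)
        FROM decisions d
        JOIN eligibility e ON e.claimant_id = d.claimant_id
    """).fetchone()[0]
    total = conn.execute(
        "SELECT COUNT(DISTINCT claimant_id) FROM decisions"
    ).fetchone()[0]
    return matched / total if total else 0.0
```

If two runs over the same snapshot disagree, the extract is not reproducible and question 5 is a "No", regardless of what the data warehouse documentation says.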
Section C — Continuity of service
- Is "no flag-day cutover" written into the procurement requirements?
- Does the program have a feature-flagging capability — not just a vendor selling one, but a working one?
- Can the program run two decisioning systems in parallel for 90+ days while reconciling divergences?
- Is there a documented rollback procedure that has been exercised in the last 12 months?
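Questions 10 and 11 are easier to score against something concrete than in the abstract. A minimal sketch of the shape we mean, assuming a flag store and two callable decision engines that return JSON-serializable results (every name here is hypothetical):

```python
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("parallel_run")

# Hypothetical flag store; substitute whatever working flagging
# capability question 10 asks about.
FLAGS = {"new_engine_authoritative": False}

def decide(case: dict, legacy_engine, new_engine) -> dict:
    """Run both engines on every case, persist any divergence for
    reconciliation, and let a flag pick the authoritative result."""
    legacy = legacy_engine(case)
    candidate = new_engine(case)
    if legacy != candidate:
        # Divergences are the whole point of the parallel run: log them
        # somewhere durable enough to reconcile over 90+ days.
        log.warning(json.dumps({
            "event": "divergence",
            "case_id": case.get("case_id"),
            "legacy": legacy,
            "candidate": candidate,
            "at": datetime.now(timezone.utc).isoformat(),
        }))
    # Rollback is a flag flip, not a deployment: the legacy path never
    # left production.
    return candidate if FLAGS["new_engine_authoritative"] else legacy
```

The shape matters more than the code: the legacy system stays authoritative until someone with sign-off authority (question 3) flips the flag.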
Section D — Due process and audit
13. Are adverse-action notices for AI-influenced decisions reviewed by counsel before deployment?
14. Is decision reasoning persisted per record, in a form that can be produced under FOIA or on appellate review? (A persistence sketch follows this list.)
15. Has the Office of Inspector General (OIG) reviewed the planned modernization scope?
16. Is there a documented process for handling a model-driven incorrect determination, including a remediation owner and an SLA?
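For question 14, "persisted per record" means written at decision time, not reconstructed after a records request arrives. A sketch of the minimum fields we would expect; the names are illustrative, not a schema we are prescribing:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass(frozen=True)
class DecisionRecord:
    """One record per determination, written at decision time so it can
    be produced later under FOIA or on appeal."""
    case_id: str
    decided_at: str        # ISO 8601 timestamp
    outcome: str           # e.g. "approved" / "denied"
    policy_version: str    # the policy text that bound this decision (question 6)
    model_version: str     # the exact model or rules build that produced it
    inputs: dict           # the facts the determination rested on
    reasoning: list = field(default_factory=list)  # human-readable steps

def persist(record: DecisionRecord, path: str) -> None:
    # Append-only JSON Lines: each determination is one self-contained,
    # producible line.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```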
Section E — Procurement structure
17. Does the contract avoid vendor lock-in on the trained models? That is, does the agency receive trained weights, training-data lineage, and evaluation suites at no marginal cost? (A handoff-completeness check is sketched after this list.)
18. Does the contract include a re-evaluation cadence and a model-update protocol?
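Question 17 can also be made mechanical: enumerate the contracted deliverables and check every vendor handoff against them. A throwaway sketch; the paths are stand-ins for whatever the contract's schedule of deliverables actually names:

```python
from pathlib import Path

# Stand-in deliverables list; mirror the contract's actual schedule.
REQUIRED = [
    "weights/",                              # trained model weights
    "lineage/training_data_manifest.json",   # training-data lineage
    "evals/",                                # evaluation suites the agency can run
]

def missing_deliverables(delivery_root: str) -> list[str]:
    """Return the contracted deliverables absent from a handoff directory."""
    root = Path(delivery_root)
    return [item for item in REQUIRED if not (root / item).exists()]
```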
How to score
- 0–6 Yes. Modernization should not begin yet. Address the lowest-numbered "No" first.
- 7–12 Yes. A scoped pilot is feasible. A full-program modernization is not.
- 13–16 Yes. The program is ready for a phased modernization. Plan for 18–24 months of patient delivery.
- 17–18 Yes. Rare. Vardr would be honored to be considered.
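The rubric reduces to three thresholds. If you want it in an intake form rather than on paper, a direct transcription:

```python
def readiness(yes_count: int) -> str:
    """Map a count of 'Yes' answers (0-18) to the rubric above."""
    if not 0 <= yes_count <= 18:
        raise ValueError("expected between 0 and 18 'Yes' answers")
    if yes_count <= 6:
        return "not ready: address the lowest-numbered 'No' first"
    if yes_count <= 12:
        return "scoped pilot only"
    if yes_count <= 16:
        return "phased modernization, plan 18-24 months"
    return "ready"
```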
If you want our read on your scores, send them. We will not use the assessment as a sales hook — half of our briefings end with "you are not ready, here is what to do first."
Next step
Bring this to your next vendor meeting. If you'd like our help applying it to your program, we're 45 minutes away.