Federal AI Readiness (M-24-10)
Nine questions about inventory, impact assessments, minimum practices, notice and redress, and procurement. Scored against the engineering artifacts the OMB AI memo actually requires you to build.
- 01 · Inventory
Is your agency's AI use-case inventory maintained as a data product — versioned, owned, and queryable — rather than as a SharePoint spreadsheet?
The OMB inventory requirements describe a data product. A document is filed and forgotten; a data product survives staff turnover.
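A minimal sketch of the difference, with an illustrative record shape (the field names are assumptions, not the memo's): each entry is versioned, owned by a named official, and answerable with a one-line query instead of a manual read-through.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIUseCase:
    """One inventory entry, treated as a versioned, owned record."""
    use_case_id: str
    owner: str              # named accountable official
    system: str             # production system the entry describes
    rights_impacting: bool
    version: int            # bumped on every change, never edited in place

INVENTORY = [
    AIUseCase("uc-001", "ocio@example.gov", "benefits-triage", True, 3),
    AIUseCase("uc-002", "ocio@example.gov", "doc-ocr", False, 1),
]

def rights_impacting(inventory):
    """Queryable: the compliance question is a filter, not a spreadsheet hunt."""
    return [uc for uc in inventory if uc.rights_impacting]
```

The same filter that answers an auditor's question today answers it identically after the person who built the spreadsheet has left.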
- 02 · Inventory
For each inventory entry, can you point to the specific production system, model artifact, and policy boundary the entry describes?
An inventory that doesn't connect to systems is a list. An inventory that does is a control plane.
- 03 · Impact assessment
For rights- or safety-impacting AI, do your impact assessments include an evaluation harness — runnable tests — not just a document?
M-24-10's impact assessment requirements describe tests. A harness is the evidence those tests can be re-run.
- 04 · Impact assessment
Are the fairness criteria for each system named explicitly — e.g., equality of opportunity, calibration, equalized odds — and tested before deployment?
"We tested for fairness" is not a claim. The criterion is the claim. Without one, a downstream challenge cannot be answered.
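A named criterion is testable where "fairness" is not. For example, the equal-opportunity component of equalized odds, sketched with toy data (record fields and the group labels are assumptions):

```python
def true_positive_rate(records, group):
    """TPR within one group: P(pred = 1 | label = 1, group)."""
    pos = [r for r in records if r["group"] == group and r["label"] == 1]
    return sum(r["pred"] for r in pos) / len(pos)

def equal_opportunity_gap(records, group_a, group_b):
    """The criterion, stated: TPR difference between two groups."""
    return abs(true_positive_rate(records, group_a)
               - true_positive_rate(records, group_b))

RECORDS = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 0},
]
```

Calibration or equalized odds in full would be different functions; the point is that each criterion is a specific computation with a pre-deployment threshold, not an adjective.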
- 05 · Minimum practices
Does each minimum practice have a named on-call, a detector, a remediation SLA, and a notification path to affected individuals?
A minimum practice without these four is a wish. With them, it's a runbook.
- 06 · Minimum practices
Is there an ongoing monitoring cadence with paging thresholds — model drift, accuracy degradation, fairness regression — for each production system?
Pre-deployment evaluation is the easy part. The minimum practices require ongoing monitoring with humans on the hook.
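A paging threshold is a number someone wrote down before the incident. A sketch of what that looks like per system (the metric names and values are illustrative, not prescribed by the memo):

```python
THRESHOLDS = {
    "accuracy_floor": 0.90,     # page when accuracy degrades below this
    "max_fairness_gap": 0.05,   # page on fairness regression past this gap
    "max_drift_psi": 0.2,       # population stability index on input features
}

def page_checks(metrics, thresholds=THRESHOLDS):
    """Return which thresholds the latest monitoring run breached.
    Any non-empty result pages the named on-call."""
    breaches = []
    if metrics["accuracy"] < thresholds["accuracy_floor"]:
        breaches.append("accuracy")
    if metrics["fairness_gap"] > thresholds["max_fairness_gap"]:
        breaches.append("fairness")
    if metrics["drift_psi"] > thresholds["max_drift_psi"]:
        breaches.append("drift")
    return breaches
```

If no threshold exists, "ongoing monitoring" is a dashboard nobody is obligated to look at.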
- 07 · Notice & redress
For decisions that affect individual rights, do affected individuals receive a notice that the decision was AI-influenced, with a path to human review?
M-24-10's individual-notice and redress requirements have a specific shape. Boilerplate disclosures don't qualify.
- 08 · Procurement
Does your AI-procurement contract language deliver — at no marginal cost — the trained models, training-data lineage, evaluation suites, and the right to re-run them?
Vendor lock-in on the model layer is the structural risk M-24-10 doesn't explicitly fix. You have to write it in.
- 09 · Procurement
Does the contract include a re-evaluation cadence and a model-update protocol that the agency, not the vendor, triggers?
The agency that can't decide when to re-evaluate is the agency that doesn't operate the system.
Optional — get the score plus a tailored next-step note
You can see your score without sharing any details. If you give us a work email, we'll send you the score plus a written next-step note from Frank or Payton, and add you to the briefing list — never to a marketing list.
Scoring is done locally — your answers never leave the page until you press the button.