02 · Open Workbench
We publish our methodology before we propose work.
Procurement teams can read the framework we'd use on the engagement before a single email is exchanged. If a competitor's approach looks better on paper than ours, you'll be the first to know.
01 · Engagement model
Four phases. None of them is a “flag day.”
Phase 01
Mapping
Weeks 1–3
Read the program. Map statute, policy, rule, system, and decision in one artifact. Identify the smallest end-to-end slice that can ship behind a feature flag.
Phase 02
Parity
Weeks 3–10
Build the slice. Run it in traffic-shadow against the production system. Reconcile every divergence — calibrate, document, or fix. Nothing ships without parity evidence.
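The parity step can be sketched as a shadow comparison: run the legacy system and the new slice on the same case, record every field-level divergence, and refuse to leave shadow mode until each one is accounted for. This is an illustrative sketch, not the Vardr harness — the types and function names (`Decision`, `Divergence`, `diffDecisions`, `parityReport`) are invented for the example.

```typescript
// Illustrative shadow-parity harness. In practice the two decisions
// would come from live systems, not local values.
type Decision = { eligible: boolean; monthlyAmount: number };
type Divergence = { caseId: string; field: string; legacy: unknown; shadow: unknown };

function diffDecisions(caseId: string, legacy: Decision, shadow: Decision): Divergence[] {
  const out: Divergence[] = [];
  for (const field of Object.keys(legacy) as (keyof Decision)[]) {
    if (legacy[field] !== shadow[field]) {
      out.push({ caseId, field, legacy: legacy[field], shadow: shadow[field] });
    }
  }
  return out;
}

// Parity evidence: every divergence must be calibrated, documented,
// or fixed (here, marked resolved) before the slice can ship.
function parityReport(divergences: Divergence[], resolved: Set<string>): boolean {
  return divergences.every(d => resolved.has(`${d.caseId}:${d.field}`));
}
```

The point of the sketch is the invariant, not the diff: nothing advances while `parityReport` is false.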
Phase 03
Promotion
Weeks 10–16
Promote the slice from shadow to A/B to default-on. Caseworker training, due-process review, and OIG-grade audit trails are built in, not bolted on at the end.
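The promotion ladder above can be expressed as a simple gate: a slice only moves one rung (shadow to A/B, A/B to default-on) when its evidence checks all pass. The stage names and the three checks below are assumptions drawn from this page's description, not Vardr's actual gating logic.

```typescript
// Illustrative promotion ladder: shadow -> A/B -> default-on.
type Stage = "shadow" | "ab" | "default-on";

interface Evidence {
  parityClean: boolean;      // every shadow divergence calibrated, documented, or fixed
  trainingComplete: boolean; // caseworker training delivered for this slice
  auditTrailOn: boolean;     // audit trail verified in the target environment
}

const LADDER: Stage[] = ["shadow", "ab", "default-on"];

// Advance one rung at most, and only when every gate passes.
function nextStage(current: Stage, e: Evidence): Stage {
  const gatesPass = e.parityClean && e.trainingComplete && e.auditTrailOn;
  const i = LADDER.indexOf(current);
  if (!gatesPass || i === LADDER.length - 1) return current;
  return LADDER[i + 1];
}
```

Encoding the gate this way makes "built in, not bolted on" checkable: a missing audit trail blocks promotion instead of surfacing after go-live.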
Phase 04
Operating
Ongoing
Production agents must be observable, replayable, and policy-bounded. We run the system with you for as long as the contract requires, then hand off with full runbooks.
02 · Interactive tools
Three working tools built from the methodology.
We treat the workbench as a real engineering surface, not a brochure. Each tool below runs against the published Vardr methodology and produces a substantive artifact you can take to your next meeting.
Live AI critique
Architecture Critic
Paste an RFP, architecture, or vendor proposal — get a severity-tagged critique scored against our reference architecture in about thirty seconds.
Run the Critic →
Interactive assessment
Readiness Assessments
Two diagnostics — Modernization Readiness and Federal AI Readiness — scored locally in your browser. Tier, breakdown, and three concrete next moves.
Choose a variant →
Copy-paste library
Procurement Language
Twelve contract clauses for AI and modernization procurements. Each names the outcome it prevents and the artifact it produces. Use freely.
Open the library →
03 · Readiness, in depth
Run it on your program
Modernization Readiness — the working tool.
Nine questions. About two minutes. Local scoring — your answers never leave the page unless you decide to share them. The output is a tier, a section-by-section breakdown, and three concrete next moves for where you actually are.
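Local scoring of a nine-question diagnostic is a small amount of code, which is what makes the "your answers never leave the page" claim cheap to honor. The sketch below is a hypothetical scorer: the answer scale, weights, and tier cut-offs are invented for illustration and are not the published assessment's rubric.

```typescript
// Illustrative in-browser scorer for a nine-question diagnostic.
// Scale and thresholds are assumptions, not the real rubric.
type Answer = 0 | 1 | 2; // no / partial / yes
type Tier = "Not ready" | "Conditionally ready" | "Ready";

function scoreTier(answers: Answer[]): { score: number; tier: Tier } {
  if (answers.length !== 9) throw new Error("expected nine answers");
  const score = answers.reduce((sum, a) => sum + a, 0); // 0..18
  const tier: Tier =
    score >= 14 ? "Ready" :
    score >= 8  ? "Conditionally ready" :
                  "Not ready";
  return { score, tier };
}
```

Everything runs client-side; there is no request to make, so there is nothing to intercept.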
PREVIEW
Is there a single named executive accountable for the modernized system’s go-live?
A steering committee is not an executive.
04 · The Workbench
Three artifacts you can read right now.
Each artifact is a self-contained piece of intellectual property. Use them. Cite them. Bring them to your next vendor review and ask what their answer is.
Assessment · v1.0
Modernization Readiness Assessment
A self-administered 18-question assessment that surfaces the operational, contractual, and data conditions a benefits program must meet before a modernization engagement can succeed.
Read artifact
Reference Architecture · v1.0
Reference Architecture for Agentic Benefits-Decisioning Systems
A canonical architecture for state and federal benefits-decisioning systems that operate at L2 (assisted) and L3 (agentic) — the layers, the seams between them, and what each is contractually accountable for.
Read artifact
Maturity Model · v1.0
The Agentic Maturity Model for Government Benefits Systems
A five-level model for diagnosing where a benefits-decisioning system actually sits today — and what is required to move it one rung up without breaking due process.
Read artifact
Want our take on your current modernization plan?
Send us the SOR, RFI, or vendor proposal. We'll mark it up against the workbench and return our notes within five business days.