Legal-aid programs pilot AI intake systems to cut long queues and improve referrals

Access to Justice Technology Panel · February 4, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

At a panel on AI for access to justice, legal-aid and bar-referral programs described pilots that use voice and web-based AI to triage callers, integrate with case-management systems, and preserve human review. Program leaders said the tools aim to reduce average wait times (about two hours in one program) and improve attorney matching while preserving opt-outs and off-ramps to human staff.

Quentin, a consultant with Lehi Legal, convened a panel of legal-aid and referral programs to describe pilot projects using artificial intelligence to automate client intake and triage.

David Miller, former executive director of the Virginia Legal Aid Society, said the goal of his project is to "speed up our intake" and reduce an average phone queue that he estimated at about two hours for roughly 18,000 calls a year. Miller said the project uses a voice-first AI to conduct verbal intake, populate fields in Legal Server, and present a summarized recommendation for a paralegal to confirm. "A new hope arises," Miller said of AI's promise to shorten waits.
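To make that human-review step concrete, here is a minimal Python sketch of an intake draft that an AI agent populates and a paralegal confirms before anything is filed. The structure and field names are invented for illustration, not Legal Server's actual schema.

```python
# Hypothetical sketch of the human-in-the-loop intake flow Miller described:
# the voice agent drafts a record, and no case is created until a paralegal
# signs off. Field names are invented, not Legal Server's schema.
from dataclasses import dataclass

@dataclass
class IntakeDraft:
    caller_name: str
    problem_code: str                # AI-suggested problem code
    summary: str                     # AI-generated summary of the verbal intake
    confirmed_by: str | None = None  # set only when a paralegal signs off

def confirm(draft: IntakeDraft, paralegal: str) -> IntakeDraft:
    # The AI only recommends; a human confirms before the case proceeds.
    draft.confirmed_by = paralegal
    return draft
```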

Miller said philanthropic funding made the Virginia pilot possible: his program received an unsolicited $900,000 award from a MacKenzie Scott–funded program that came with no reporting constraints. He told the panel the AI-specific portion of the build has cost roughly $30,000 so far; other costs (for the move to Legal Server or telephony changes) were not separately itemized during the session.

Karen Farkas of the Oregon State Bar described a different approach for a statewide legal referral service that matches callers with private attorneys. Her project emphasizes a plain-language, web-based path that uses conversational AI to ask follow-up questions and then match attorneys against a custom panel taxonomy. She said the system identifies two issues per inquiry and picks a single attorney when one attorney's panel covers both topics; otherwise it returns multiple referrals, as in the sketch below.
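As an illustration of that matching rule, the following Python sketch prefers one attorney whose panel covers both identified issues and falls back to one referral per issue otherwise. The data structures and sample panel codes are invented for the example.

```python
# Illustrative sketch of the referral rule Farkas described: if one attorney's
# panel areas cover both identified issues, return that single attorney;
# otherwise return one referral per issue. All names here are hypothetical.

def match_referrals(issues, attorneys):
    """issues: list of panel-taxonomy codes identified by the AI.
    attorneys: list of dicts like {"name": ..., "panels": set_of_codes}."""
    # Prefer a single attorney whose panel areas cover every identified issue.
    for attorney in attorneys:
        if all(issue in attorney["panels"] for issue in issues):
            return [attorney]
    # Otherwise, return one matching attorney per issue.
    referrals = []
    for issue in issues:
        candidates = [a for a in attorneys if issue in a["panels"]]
        if candidates:
            referrals.append(candidates[0])
    return referrals

# Example: a housing issue plus a consumer-debt issue.
attorneys = [
    {"name": "Attorney A", "panels": {"housing", "consumer"}},
    {"name": "Attorney B", "panels": {"family"}},
]
print(match_referrals(["housing", "consumer"], attorneys))  # -> [Attorney A]
```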

Kirsten Dunham, executive director of Mid Missouri Legal Services, said her organization focused its redesign on accessibility and reliability after a callback system produced long callback lists and low contact rates. Mid Missouri's approach emphasizes a more robust online intake, partner-only soft launches, and automated scheduling to increase successful connections. Dunham said the project includes an opt-in consent box: "This use of AI … is not going to influence our decision to help you or not. This is completely anonymous," she said, describing the dropdown explanations that let users review and correct AI classifications.

Across the projects, speakers described a modular technical stack: a telephony provider (Virginia chose Dialpad), speech-to-text, an AI classification engine (Quentin described a Fetch classifier that combines several models in a voting ensemble), and text-to-speech. The panel cited tools and platforms including DocAssemble, Legal Server, Pipecat (an open-source voice-app framework), FastAPI, and text-to-speech systems such as Google Chirp and third-party demos like Safe Haven AI.
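To show how the classification stage might sit inside such a stack, here is a hedged FastAPI sketch that accepts a speech-to-text transcript and returns a problem-code suggestion flagged for human review. The endpoint path, confidence threshold, and stubbed ensemble call are assumptions for illustration, not any program's actual implementation.

```python
# Minimal FastAPI sketch of the classification stage the panel described:
# it accepts a transcript from the speech-to-text stage and returns a
# problem-code guess for human review. All specifics here are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Transcript(BaseModel):
    text: str  # output of the speech-to-text stage

class Classification(BaseModel):
    problem_code: str        # e.g., a case-management problem code
    confidence: float
    needs_human_review: bool

def run_ensemble(text: str) -> tuple[str, float]:
    # Stub standing in for the model ensemble; see the voting sketch below.
    return ("housing-eviction", 0.91)

@app.post("/classify", response_model=Classification)
def classify(transcript: Transcript) -> Classification:
    # A real system would call one or more language models here and
    # combine their outputs before suggesting a code.
    code, confidence = run_ensemble(transcript.text)
    # Low-confidence results are flagged so a paralegal confirms the match.
    return Classification(
        problem_code=code,
        confidence=confidence,
        needs_human_review=confidence < 0.8,
    )
```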

Panelists stressed design trade-offs. Some designs favor an automated handoff into case management to minimize user burden; others show AI results and preserve explicit off-ramps to a human if a user objects or the match looks wrong. On conflict checks, Miller said the systems use Legal Server’s API for an initial automated conflict screen and then a paralegal verifies any ambiguity. On data and testing, Dunham and Farkas said soft launches with community partners were essential to catch mapping bugs and refine generated follow-up questions.
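The two-stage conflict screen Miller described might look like the following sketch: an automated query against the case-management system, with ambiguous hits routed to a paralegal. The endpoint URL and response fields shown are placeholders, not Legal Server's real API.

```python
# Sketch of a two-stage conflict screen: an automated check against the
# case-management system, with ambiguous hits escalated to a human.
# The URL, parameters, and response shape are invented placeholders.
import requests

CONFLICTS_URL = "https://example.legalserver.example/api/conflicts"  # hypothetical

def conflict_screen(client_name: str, adverse_party: str) -> str:
    resp = requests.get(
        CONFLICTS_URL,
        params={"client": client_name, "adverse_party": adverse_party},
        timeout=10,
    )
    resp.raise_for_status()
    matches = resp.json().get("matches", [])
    if not matches:
        return "clear"                   # no prior-party overlap found
    if any(m.get("exact") for m in matches):
        return "conflict"                # definite hit: decline or refer out
    return "needs_paralegal_review"      # ambiguous: route to a human
```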

Panelists also discussed taxonomy challenges and how to measure accuracy. Quentin said there is not a single taxonomy that fits every program; Virginia and Mid Missouri map to Legal Server problem codes, Oregon uses a custom panel taxonomy, and the team uses a weighted voting ensemble to combine model outputs when needed. Audience members raised practical concerns: whether AI triage would increase case volume beyond capacity, how to provide multilingual support, and whether callers would find voice agents frustrating. Panelists pointed to off-ramps (requesting a live human or routing to scheduled callbacks) and routine post-call satisfaction surveys as mitigations.
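A weighted voting ensemble of the kind Quentin mentioned can be sketched in a few lines: each model votes for a problem code, votes are weighted, and the code with the highest total weight wins. The model names and weights below are invented for illustration.

```python
# Minimal sketch of a weighted-voting ensemble over classifier outputs.
from collections import defaultdict

def weighted_vote(predictions: dict[str, str], weights: dict[str, float]) -> str:
    """predictions: model name -> predicted problem code.
    weights: model name -> vote weight (e.g., from validation accuracy)."""
    totals: dict[str, float] = defaultdict(float)
    for model, code in predictions.items():
        totals[code] += weights.get(model, 1.0)
    # Return the code with the highest total weight.
    return max(totals, key=totals.get)

# Example: two of three models agree on a housing code.
preds = {"model_a": "housing", "model_b": "housing", "model_c": "consumer"}
wts = {"model_a": 1.0, "model_b": 0.8, "model_c": 1.2}
print(weighted_vote(preds, wts))  # -> "housing"
```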

Next steps for the projects include continued testing, partner soft launches, and phased public rollouts. Miller said Virginia is "not live yet" and is awaiting cooperation from a partner referenced in the session; Farkas and Dunham said they will expand partner testing and measure accuracy and user experience before broad public launches.