Lone Star Legal Aid pilots three chatbots to speed client help and staff work
Summary
Lone Star Legal Aid described a multi‑year project that produced three chatbots—Juris for internal legal research, LSAS for internal administrative support, and Navi for public information and referrals—funded in part by a TIG grant and designed with strict privacy controls, mandatory citations, and explicit failure modes to limit hallucinations.
Ashley Oborne, director of data analytics for Lone Star Legal Aid, took the stage in San Antonio to outline a three‑year effort to build three chatbots aimed at improving staff efficiency and public access to legal information.
The project produced Juris, an internal legal research assistant that uses retrieval‑augmented generation (RAG) with mandatory citations; LSAS, an internal administrative assistant for HR and IT procedures; and Navi, a public‑facing bot that provides plain‑language information and referrals but does not perform intake or give legal advice. Oborne said the tools target different audiences and “three different security postures.”
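The talk did not share implementation details, but the RAG‑with‑mandatory‑citations pattern described for Juris can be sketched in a few lines. Everything below is illustrative: the corpus entries, document ids, and keyword‑overlap retriever are placeholders standing in for a real index and embedding search, and the "refuse when nothing is retrieved" branch models the failure mode the team described for limiting hallucinations.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # source identifier used for the mandatory citation
    text: str

# Toy corpus standing in for an indexed library of legal reference material.
CORPUS = [
    Passage("SRC-001", "A landlord must give a tenant at least three days' "
                       "written notice to vacate before filing an eviction suit."),
    Passage("SRC-002", "A judge may appoint counsel for an indigent party "
                       "in a civil case."),
]

def retrieve(question: str, k: int = 2) -> list[Passage]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.text.lower().split())), p) for p in CORPUS]
    scored = [(s, p) for s, p in scored if s > 0]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in scored[:k]]

def build_prompt(question: str) -> str:
    """Assemble a RAG prompt; refuse outright when nothing is retrieved."""
    passages = retrieve(question)
    if not passages:
        # Explicit failure mode: never answer without retrievable sources.
        return "REFUSE: no supporting passages found; do not answer."
    context = "\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    return ("Answer using ONLY the passages below. "
            "Cite the bracketed source id after every claim.\n\n"
            f"{context}\n\nQuestion: {question}")
```

In a production system the retriever would be a vector search over the document library and the prompt would go to a model, but the two safeguards Oborne highlighted, citations attached to every retrieved source and a hard refusal path, are visible even in this toy version.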
Oborne said the pilot began after a proof‑of‑concept demo presented at ITC in 2024 and was scaled with a TIG grant. She described the grant budget's largest line item as personnel, estimating that roughly 75% of the grant supports staffing over two years, and listed staff time allocations: she anticipated about 20% of her own time on the project, with two lawyers at roughly 10–15% and communications support at about 15%.
The team emphasized privacy and replicability. Oborne said demos and testing use no client data and that the group intends to publish documentation and a GitHub repository this coming summer so other legal aid programs can adopt the same RAG structure. The group described plans for a public blog to track rollouts and invited collaboration.
On accessibility, Oborne said the project baked in plain‑language outputs and a multilingual roadmap starting with Spanish and Vietnamese, chosen because they are the next most common languages in the program’s Texas service area. She said session‑only memory is used for the external bot so that no information persists after the browser session ends.
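The session‑only memory Oborne described for the external bot amounts to keeping conversation state in RAM, keyed by a session token, and discarding it when the session ends. This is a minimal sketch of that idea, assuming a single‑process server; the class name and token scheme are hypothetical, not the program's actual design.

```python
import secrets

class SessionStore:
    """Session-only memory: history lives in RAM, keyed by a random
    token, and is dropped when the session ends. Nothing is written
    to disk, so no conversation persists past the browser session."""

    def __init__(self) -> None:
        self._sessions: dict[str, list[str]] = {}

    def start(self) -> str:
        token = secrets.token_urlsafe(16)
        self._sessions[token] = []
        return token

    def append(self, token: str, message: str) -> None:
        self._sessions[token].append(message)

    def history(self, token: str) -> list[str]:
        return list(self._sessions.get(token, []))

    def end(self, token: str) -> None:
        # Dropping the list is the whole cleanup: no database, no log file.
        self._sessions.pop(token, None)
```

The design choice is the absence of any persistence layer: once `end` runs (or the process restarts), the conversation is unrecoverable, which is exactly the privacy property described.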
Audience questions focused on cost and evaluation. Oborne said the team has compiled an internal cost matrix used in grant planning and estimated per‑question token costs in testing at roughly $0.06–$0.12, though exact totals depend on the retrieval method chosen. She described success measures as user adoption and usability metrics (thumbs up/down feedback, audit logs and traffic analytics) and said broader rollout will follow expanded testing.
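Per‑question token costs like the $0.06–$0.12 range Oborne cited fall out of a simple calculation: tokens consumed times per‑token price, with the retrieval method driving the input side (broader retrieval means more context tokens per question). The prices and token counts below are illustrative placeholders, not the program's actual rates or measurements.

```python
def question_cost(input_tokens: int, output_tokens: int,
                  in_price_per_1k: float = 0.01,
                  out_price_per_1k: float = 0.03) -> float:
    """Estimate the cost of one question from token counts.
    Prices are hypothetical per-1k-token rates for illustration."""
    return (input_tokens / 1000 * in_price_per_1k
            + output_tokens / 1000 * out_price_per_1k)

# A lean retrieval method sends less context than a broad one,
# which is why the per-question cost varies with retrieval choice:
lean = question_cost(input_tokens=4500, output_tokens=500)    # 0.06
broad = question_cost(input_tokens=10500, output_tokens=500)  # 0.12
```

A cost matrix like the one Oborne mentioned would extend this across models, retrieval configurations, and expected question volume to project a grant‑period total.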
The session closed with offers to share materials and an open invitation for other organizations to collaborate on the open‑source repository when it goes live.

