California community colleges pilot AI tools to fight enrollment fraud and speed credentialing
Summary
Board heard presentations on campus-developed AI tools — including a fraud-detection model (Light Leap AI) adopted by dozens of colleges — and pilots for apprenticeship tracking, document ingestion, and student support. Presenters emphasized human-in-the-loop safeguards and data governance amid public privacy concerns.
Visiting executive Jory Hadsell told the Board of Governors that a new wave of locally co‑designed artificial intelligence tools is already being used across the California Community Colleges system to address urgent operational problems, from fraudulent enrollments to multi‑year degree backlogs.
The chancellor's office framed the May report as part of a broader digital innovation push. "We partnered with End to End and with Kieran to co‑design an AI model that we could train on our institutional data," Hadsell said during the AI educational series presentation. He said the fraud‑detection product, marketed as Light Leap AI and developed with End to End Services, was quietly placed into production at several districts and, "as of today, 48 colleges in our system have locally adopted this solution." (Presenter statement.)
Santiago Canyon College President Jeannie Kim described implementing the tool after a 2024 surge in fraudulent applications. "Before the End to End solution was put into place ... we had around 10,000 enrollments that were fraudulent," she said. "When we did that, of course, our enrollment numbers in our FTES dropped tremendously, but we needed to do it because we needed to bring our real students in." Kim said her college has recorded an estimated 99 percent efficacy rate with a multi‑tiered verification approach and that the implementation allowed the college to replace roughly 10,000 suspect enrollments with about 8,000 verified students.
Vendor representatives said the product was offered on a "try‑before‑you‑buy" basis, with free reports run against historic terms, and was widely tested on prior‑term data. A vendor speaker told the Board that over the most recent eight‑month period the tool processed nearly 3 million applications and flagged more than 360,000 suspected fraud attempts systemwide; college leaders said the product's local configuration options and a shared, two‑way API let colleges log IP addresses, email addresses, and other indicators to track repeat offenders.
Panelists also described other AI prototypes. Santiago Canyon College described a degree‑audit and apprenticeship‑tracking model designed to reduce a three‑year manual backlog in awarding certificates and degrees; the college said pilots cut turnaround time dramatically by automating course substitutions, audits, and verification workflows. Presenters also showed pilots that read handwritten applications from incarcerated learners, routing them directly into Colleague records and assigning counselors faster than manual review could.
Board members pressed presenters on procurement, confidentiality, and oversight. "We follow a whole lot of protocol when it comes to procurement," Chancellor Christian said, adding that the digital center provides flexibility for scalable pilots while procurement and legal processes remain in place. Several governors and public commenters asked for clear privacy protections; a virtual commenter, Omar Zavala, said he opposed vendor privacy policies that require sharing sensitive student identifiers, and urged stronger retention limits and opt‑out rights.
Presenters and the chancellor's office repeatedly returned to a "human‑in‑the‑loop" principle. "One of the AI council's human principles is keeping the human in the loop," one presenter said, adding that successful deployment requires integrated security, configurable local thresholds, and active coordination between colleges to avoid single points of failure.
The Board did not take any formal action on the AI demonstrations during this session but directed staff to continue evaluation and to pursue data‑governance work across the system. Public commenters and several governors urged explicit privacy safeguards and clearer vendor transparency before broad, systemwide mandates.
What happens next: the chancellor's office said it will continue pilots through the digital center for innovation, expand evaluation measures for outcomes and equity, and develop data‑governance and privacy rules for any AI tools the system considers for scale.

