Citizen Portal

Witnesses warn AI surveillance and bias could deepen inequities in schools without federal oversight

April 1, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video. If you spot errors, please let us know so we can fix them.

Summary

Experts and lawmakers told a House subcommittee that unchecked classroom surveillance, biased algorithms and cuts to the Department of Education's enforcement resources risk reinforcing disparities for students of color and those with disabilities.

At a House Education and Labor subcommittee hearing on Oct. 12, 2025, lawmakers and witnesses raised civil‑rights alarms about the use of AI surveillance and algorithmic decision tools in schools and the consequences of reduced federal oversight.

Representative Mary Lee and Representative Lee of Pennsylvania cited studies and surveys showing disproportionate harms from automated surveillance and predictive tools. “AI often reinforces the same implicit and explicit biases that humans carry,” Representative Lee said, warning that surveillance tools such as facial recognition and so‑called aggression detection have produced racially disparate results.

Witnesses said mitigation requires data, testing and accountability. Erin Moe and Dr. Julia Rafalvaire recommended that tools be evaluated for disparate impacts, and that states and districts collect the data necessary to measure and reduce bias. Rafalvaire proposed state “AI assurance labs” to vet tools and monitor outputs over time.

Several witnesses and members warned that recent staffing cuts at the Department of Education—specifically the elimination of the Office of Educational Technology and reductions to the Institute of Education Sciences—reduce the federal government’s capacity to issue guidance, share research, and enforce civil‑rights protections through the Office for Civil Rights. “Without resources like this from the Department of Education, are schools equipped to keep students' civil rights intact when deploying AI surveillance?” Representative Lee asked.

Panelists noted industry limitations: companies can test tools only on the data they possess, limiting their ability to surface harms that appear in other populations. Erin Moe and other witnesses urged stronger public‑sector data infrastructure and federally funded research to enable independent evaluation. They also highlighted immediate district‑level precautions, including limiting student‑identifiable data, using local hosting where FERPA permits, involving parents and students in vetting, and avoiding automated disciplinary decisions without human review.

No formal regulatory action was proposed during the hearing; members on both sides called for more research, community engagement and clarified safeguards to ensure that AI tools do not exacerbate the school‑to‑prison pipeline or otherwise deepen existing disparities.