Lawmakers warn AI adoption could deepen inequities; civil‑rights enforcement and data infrastructure cited as critical

3340452 · May 12, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Members and witnesses at a House subcommittee hearing raised concerns that without federal leadership and data infrastructure, AI deployment in schools could worsen existing disparities and expand surveillance, particularly affecting students of color, students with disabilities, and low‑income communities.

Members of both parties discussed equity risks as AI tools enter K–12 classrooms. Representative Suzanne Bonamici and other Democrats emphasized that recent cuts to Department of Education capacity — including the Office of Educational Technology and reductions at the Institute of Education Sciences — jeopardize research, student privacy protections and civil‑rights enforcement.

"Without a robust and equitable funding system with a strong accountability framework, the digital divide will widen," Ranking Member Bonamici said, adding that the Office of Educational Technology had provided guidance on safe and ethical integration of new technologies.

Democrats, including Representative Summer Lee, raised civil‑rights concerns about school surveillance technologies and algorithmic bias. Testimony noted that certain facial‑recognition and behavioral surveillance tools have high error rates for Black students and that predictive algorithms can reflect historical bias.

Erin Moe, CEO of Innovate EDU, described the need for data infrastructure and resources so states and districts can train and vet tools to reduce bias: "There's no tool in education that is without bias…there's tools to mitigate this effect, something called reweighting," she said, adding that mitigation requires access to high‑quality datasets and transparency.
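The "reweighting" the witness named is commonly understood as the reweighing scheme of Kamiran and Calders: each training instance receives a weight that makes group membership statistically independent of the outcome label, so a model trained on the weighted data is less likely to reproduce historical skew. The sketch below is illustrative only, assuming simple categorical group and label columns; the function name and toy data are hypothetical, not drawn from the testimony.

```python
from collections import Counter

def reweighting(groups, labels):
    """Instance weights that decorrelate group membership from the label.

    Weight for an instance with group g and label y:
        w(g, y) = P(g) * P(y) / P(g, y)
    Over-represented (group, label) pairs get weights below 1,
    under-represented pairs get weights above 1.
    """
    n = len(labels)
    group_counts = Counter(groups)          # P(g) numerators
    label_counts = Counter(labels)          # P(y) numerators
    joint_counts = Counter(zip(groups, labels))  # P(g, y) numerators
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "A" is flagged (label 1) far more often in the
# historical data than group "B".
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighting(groups, labels)
```

In this toy data, ("A", 1) instances are over-represented relative to independence and receive weight 0.75, while the rarer ("A", 0) instances receive weight 1.5. As the witness noted, any such mitigation presumes access to high‑quality, representative datasets in the first place.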

Witnesses recommended concrete federal supports: funding for large‑scale research on educational effects of AI, cybersecurity guidance, and public data to assist in training and auditing models. Members and witnesses also urged that civil‑rights enforcement remain robust to handle discriminatory effects of surveillance and automated decision‑making in schools.

The subcommittee kept the hearing record open for post‑hearing submissions; members asked witnesses for follow‑up materials on vetting AI vendors and best practices for equity assessments.