Witnesses urge more federally funded research, state assurance labs and teacher training before large‑scale AI adoption
Summary
Witnesses recommended targeted federal funding for research on AI’s educational effects, creation of state "AI assurance labs" to vet tools, and broad investment in professional development so teachers can use AI as an augmenting tool rather than a substitute.
Multiple witnesses urged the subcommittee to fund rigorous, large‑scale research and to support teacher professional development before widespread AI adoption in K–12 classrooms.
Dr. Sid Dobrin said the field needs more long‑term studies on cognitive effects and on differences tied to access: "We need to figure out the differences of what happens with those access points," he said, citing disparities between students using free public models and those with paid or proprietary platforms.
Dr. Julia Rafalvaire proposed a three‑pronged approach for states: deep stakeholder engagement, principles and guardrails based on local values, and then optimization and scaling. She recommended that states create "AI assurance labs" to vet vendor tools and suggested convening a White House Summit on AI in education to coordinate federal research and share evidence‑based practices.
Erin Moe emphasized professional development: "We must support AI literacy and professional development for educators throughout this country," she said, noting that many districts lack the capacity to evaluate cybersecurity, privacy, and vendor claims. Witnesses also cited specific research priorities (evaluation of tutoring modalities, teacher efficacy, long‑term cognitive effects, and equity impacts) and called for federal support to aggregate and disseminate findings.
Members from both parties asked about the proper federal role. Witnesses generally recommended limited federal prescription on curriculum but stronger federal investment in research, cybersecurity guidance, and equitable infrastructure.
