Experts tell Senate that screen time, algorithmic feeds and AI chatbots are linked to youth mental‑health and learning harms

Senate Committee on Commerce, Science, and Transportation · January 15, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Researchers and clinicians testified that the smartphone era and engagement‑based design have contributed to rises in adolescent depression, learning declines, and addictive behaviors. Witnesses called for design guardrails, age verification, stronger school filters, and third‑party evaluation of EdTech.

Four witnesses presented converging evidence that high levels of screen time and engagement‑optimized product features can harm children's mental health, learning, and development.

Professor Jean Twenge summarized large‑scale generational data and intervention studies, saying major depressive episodes among adolescents "doubled" between 2011 and 2019 and noting declines in some standardized test results after widespread adoption of smartphones in 2012. Twenge recommended raising minimum ages for social media and AI companionship apps, instituting bell‑to‑bell school phone bans, and ensuring school devices are restricted to educational content.

Dr. Jared Cooney Horvath emphasized cognitive and learning mechanisms. He summarized cross‑national evidence showing that broad adoption of one‑to‑one digital technology in classrooms correlates with lower performance on measures such as NAEP and PISA and argued that screens can circumvent social, interactive learning processes that build attention and memory.

Emily Chirkin described four crises—mental health, learning, creativity and civic harms—tracing how early, pervasive technology use changes play and social development. She urged parents to delay device access, recommended collective parental norms and said schools should not use EdTech for behavior management.

Dr. Jenny Radesky reviewed product‑level mechanisms: frequent notifications, algorithmic feeds, in‑app monetization, and the use of AI chatbots embedded in social platforms. She said filters and accountable blocking on school devices are a priority, and she urged third‑party evaluation of educational apps and transparency about data flows and engagement metrics.

Witnesses also warned that AI chatbots can produce nonconsensual sexualized images and other outputs harmful to minors; several senators, including Cantwell and Lujan, pressed for urgent federal action. Senators and witnesses discussed state‑level experiments and international examples, with witnesses noting Scandinavian restrictions on EdTech for younger students. Several senators signaled support for a mix of federal legislation, school policy changes, and oversight tools, including subpoenas for platform executives if necessary.

The discussion made clear that the committee is now balancing educational and equity concerns (including the role of E‑Rate connectivity in rural areas) against arguments for tighter age and design restrictions. Witnesses stressed that connectivity and classroom technology can support workforce training, and that any federal action should preserve beneficial uses while removing engagement‑optimizing harms.