Connecticut hearing exposes split over social‑media age checks, algorithms and minors’ safety
Summary
Supporters including mental‑health and child‑safety groups pushed HB5037 to require age verification, default private settings and limits on algorithmic 'addictive feeds' for minors. Tech and privacy groups warned of First Amendment and privacy harms from mandatory verification and compelled warnings.
Lawmakers heard sharply divided testimony on HB5037 and related measures aimed at reducing harms to minors from social‑media platforms.
Clinicians, mental‑health advocates and researchers described growing evidence that algorithmic feeds and night‑time notifications harm adolescents’ sleep and mental health and expose them to exploitation. Michael Shelby, a clinician at the Technology Addiction Center, told the committee that "social media is addictive" and that platforms’ engagement systems operate on the same reward circuitry that drives other behavioral addictions.
Supporters urged baseline protections: parental consent for account settings, private default accounts for minors, no push notifications during vulnerable overnight hours, and warning language displayed to young users. Professor John Murphy of UConn argued these are modest, harm‑reduction steps and said the bill focuses on functionality rather than content.
Industry and civil‑liberties witnesses pushed back. Chamber of Progress and other tech trade groups warned that mandatory age verification can require sensitive personal data (IDs or biometrics) that could be misused; they also cautioned that forcing algorithmic feeds into chronological order could prevent platforms from surfacing helpful community content to vulnerable youth. CCIA (the Computer & Communications Industry Association) and other groups cited recent court decisions, arguing that prior rulings raise First Amendment questions for age‑verification mandates and compelled warnings.
Third‑party vendors described commercially reasonable, privacy‑minimizing age‑assurance tools ranging from government‑ID checks to anonymous age‑estimation techniques, and suggested interoperability with ongoing New York and California rulemaking. Witnesses also raised operational questions about VPNs, jurisdictional enforcement and cases where family dynamics complicate parental‑consent models.
The attorney general's staff argued the bill is content‑neutral because it targets functionality (feed algorithms and age-based defaults) rather than specific speech, and noted similar measures are being implemented or litigated in other states. No committee vote was taken; lawmakers signaled intent to continue technical consultations to narrow privacy and constitutional risks while addressing child safety concerns.
