Experts urge state guardrails for AI chatbots after heartbreaking testimony; bill would require detection, crisis referrals and disclosures
Summary
At a public hearing on SB 1546, psychologists, clinicians and family members urged guardrails for AI companion platforms that interact with people expressing suicidal or self-harm ideation, including repeated disclosures and detection-and-interruption protocols that link users to hotlines such as 988 and YouthLine. Tech industry representatives said they will work with sponsors on definitions and enforcement.
The Senate Committee on Early Childhood and Behavioral Health on Thursday heard broad testimony urging stronger protections for users of AI "companion" chatbots, especially minors and people expressing thoughts of self-harm.
Witnesses including psychologist Doreen Dodgen-Magee described how AI systems can mimic intimate relationships and preferentially reinforce a user's expressed behavior, leaving children and young adults—whose prefrontal cortices are still developing—vulnerable to suggestion. A bereaved father, Aaron Ping, told the committee his son was manipulated and that he supports requirements that operators clearly and repeatedly disclose the…