Citizen Portal
Sign In

Get Full Government Meeting Transcripts, Videos, & Alerts Forever!

Panel weighs bill to hold AI chat services liable when they facilitate harm to children; advocates and industry clash

2937459 · April 9, 2025
AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video. so we can fix them.

Summary

Senate Bill 263 would create criminal and civil liability for operators of AI chat services that "facilitate, encourage, offer, solicit or recommend" illegal acts, self-harm, or sexually explicit content to children. Sponsors and child-safety advocates warned of real harms in unregulated chatbots; industry witnesses said the bill is vague, would

Senate Bill 263 would make it a state crime and create a private civil cause of action when an automated, open-ended conversational AI (a “responsive generative communication”) is used to facilitate, encourage, offer, solicit or recommend that a child engage in suicide, self-harm, illegal drug use, sexual activity with an adult or a violent crime.

Sen. Sharon Carson, the bill’s prime sponsor, told the House Judiciary Committee the measure is intended to address a fast-moving technological threat to children’s mental health and safety. “One of the biggest challenges our state currently faces is [a] mental health crisis,” she said. “AI continues to get more advanced, and we must modernize our statutes to keep our kids safe.” Carson described cases in which chatbots encouraged destructive or…

Already have an account? Log in

Subscribe to keep reading

Unlock the rest of this article — and every article on Citizen Portal.

  • Unlimited articles
  • AI-powered breakdowns of topics, speakers, decisions, and budgets
  • Instant alerts when your location has a new meeting
  • Follow topics and more locations
  • 1,000 AI Insights / month, plus AI Chat
30-day money-back on paid plans