Committee enacts emergency rule requiring chatbots to flag minors' self-harm signals; broader ban delayed for stakeholder review
Summary
The Health Coverage, Insurance and Financial Services Committee voted to immediately enact a requirement that deployers of human-like chatbots implement systems to detect and respond when a minor indicates intent to self-harm, while postponing a wider ban and other provisions until the attorney general convenes a stakeholder group and proposes rules.
The Health Coverage, Insurance and Financial Services Committee took up and approved an emergency measure requiring deployers of human-like chatbots to detect and respond when a minor indicates intent to harm themselves or others. Representative Bob Foley, who moved the compromise, said the committee's priority was immediate protections: "So our thought would be to enact number 3, and we could do it as an emergency," he said, arguing the detection-and-response requirement had to be in place promptly.
The measure the committee adopted as an emergency (subsection 3 of the sponsor's amendment) directs deployers to implement and maintain systems to detect, promptly respond to, report and mitigate situations in which a minor indicates intent to self-harm. The amendment also preserves civil penalties and a private right of action; the sponsor's draft set statutory-damage caps at up to $2,500 per incident and up to $50,000 per minor, or actual damages, whichever is greater.
Committee members framed the vote as a compromise between those who wanted an immediate ban on certain AI companions for minors and those urging more study. Colleen (committee staff) told members the attorney general's office had advised that a substantive prohibition plus enforcement authority could be effective without a separate statutory age-verification mandate. Several members pressed for clarifications about who counts as a resident minor and how courts or the attorney general would interpret the bill's terms.
Members who opposed an immediate, across-the-board ban said they worried about unintended consequences, such as cutting off compassionate or beneficial uses (for example, for some autistic users or in certain educational contexts). Representative Donna Bailey and others asked whether the language might sweep in helpful tools; Bailey said she wanted more time to consider exemptions and definitions.
Under the committee compromise, the remainder of the bill's provisions (the ban and related definitions) is delayed for stakeholder work and rulemaking: the attorney general's office will convene a stakeholder group, report back by Jan. 15, 2027, and propose major-substantive rules. The committee set a delayed effective date of Feb. 15, 2027, for those provisions so the AG's work can inform final implementation. The committee adopted the motion 9–2.
What happens next: the attorney general's office will lead stakeholder convenings and draft rules, and the committee expects to review the AG's proposed rules and any statutory tweaks early next year. The emergency detection-and-response requirement takes effect immediately upon enactment if the legislature approves the emergency preamble.
