
Lawmakers debate limits on AI ‘therapy’ as researchers and clinicians clash

House Executive Departments and Administration Committee · April 15, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

A House committee heard testimony on SB640, which would bar AI chatbots from providing therapy without oversight by a licensed professional and create a study commission. Researchers and clinicians warned that the bill's drafting is too broad and could block validated AI tools; consumer‑protection advocates, citing reported harms, pushed for guardrails.

The House Executive Departments and Administration Committee on April 1 heard sharply divided testimony on Senate Bill 640, legislation that would prohibit AI systems from independently providing therapy or therapeutic communication and establish a commission to study implementation.

Senator Howard Pearl, sponsor of SB640, told the committee the bill is intended to "protect public safety while allowing responsible innovation." He said the measure would require licensed New Hampshire professionals to oversee care where licensure is required and would not prevent clinicians from using FDA‑authorized and HIPAA‑compliant AI tools.

Supporters emphasized consumer protection. Lynn Currier, executive director of the National Association of Social Workers–New Hampshire, said the bill responds to cases where chatbots behaved harmfully and argued that the legislature should not let platforms "set themselves up as an equivalency to therapy." Currier told the committee that AI lacks mandated‑reporter duties and a duty to warn, and recounted a reported instance in which a teenager obtained harmful instructions from a chatbot.

Researchers and some clinicians warned the committee that the bill's present language is overly broad and would have the opposite of its intended effect, driving evidence‑based tools out of the state while leaving general‑purpose chatbots unregulated. Nicholas Jacobson, an associate professor at Dartmouth who develops AI psychotherapy tools, said his team's randomized trial of a generative AI psychotherapy platform showed clinical benefit and that the draft would "effectively block deployment of this work here in New Hampshire" by requiring a licensed clinician to review routine AI interactions.

Jacobson told the committee he favors an alternative, claim‑based consumer‑protection framework: require safety features (crisis protocols, age verification, data privacy), mandate adverse‑event reporting, and exempt evidence‑based tools validated in peer‑reviewed trials from impractical clinician‑review mandates.

Executive Director Juris of the Office of Professional Licensure and Certification said the office assisted in drafting some enforcement language. She highlighted two changes in the unlicensed‑practice section (adding "individuals and entities" and language about goods or services requiring licensure) and explained OPLC's request for authority to recover investigation costs when pursuing unlicensed‑practice enforcement.

Committee members asked witnesses about enforceability, privacy (for locally run models or encrypted chats), the role of FDA regulation, and whether the bill would reach widely used chatbots that disclaim therapeutic intent. Witnesses agreed that regulation is difficult while noting consumer risk; Jacobson said the FDA has solicited input but has not issued a clear regulatory approach for AI‑driven mental‑health tools.

Chair Leon closed the hearing and scheduled a work session on the bill for April 29; the committee did not vote on the measure during the hearing. Both proponents and opponents asked for additional drafting before that session to clarify the bill's scope, enforcement provisions, and exemptions.