Citizen Portal

Senate committee weighs limits on AI in mental‑health care while experts urge narrower fixes

Senate Executive Departments and Administration Committee · January 14, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

The Senate Executive Departments and Administration Committee heard competing views on SB640: social‑work and therapist groups urged banning autonomous AI from providing therapy and requiring clinician oversight and informed consent, while researchers warned a broad ban would block clinically tested AI tools and harm access.

The Senate Executive Departments and Administration Committee spent more than two hours on SB640, a bill that would bar artificial‑intelligence systems from delivering therapy autonomously and make licensed clinicians responsible for any AI tools they use.

Supporters, including Lynn Courrier of the National Association of Social Workers–New Hampshire, told the committee that the bill is intended to protect patients from chatbots that lack clinical judgment. "AI is pattern matching, not deep understanding," Courrier said, arguing that unregulated platforms have been linked to harmful outcomes and suicides. She said licensed clinicians should remain responsible for any AI outputs and should obtain informed consent before using such tools with patients.

Clinicians and researchers who use or study AI urged a different approach. Dr. Christopher Campbell, a psychiatry resident and member of an American Psychoanalytic Association AI council, told senators that some mental‑health‑specific AI platforms are designed by experts, include guardrails, and have evidence of benefit, saying, "This bill, as currently written, would prohibit the use of mental health specific AI platforms, from being used autonomously for therapy." Dr. Nicholas Jacobson, a Dartmouth researcher who has led randomized trials of generative‑AI psychotherapy, warned that a broad ban risks pushing people toward unregulated chatbots and said the bill's current draft would exclude effective, scalable, clinically tested tools.

Representatives of professional organizations said the bill contains useful consumer‑protection principles — notably clinician responsibility and informed consent — but asked the committee to clarify exemptions for FDA‑approved digital therapeutics and HIPAA‑compliant platforms. Deanna Juris, executive director of the Office of Professional Licensure and Certification (OPLC), advised that if the committee's intent is to treat AI delivering licensed work as unlicensed practice, it would be cleaner to extend that rule across regulated professions and warned against piecemeal language that could create patchwork enforcement.

Committee members pressed witnesses on how enforcement would work and where to draw the line between administrative uses of AI (scheduling, notes) and clinical use. Supporters said complaints alleging errors or harm would trigger licensing investigations; researchers said the state cannot practically regulate every open‑source model available to consumers and urged narrowly tailored language that preserves clinician supervision while enabling vetted clinical AI tools.

The committee did not take final action on SB640 and asked stakeholders to return with proposed drafting changes aimed at preserving consumer protections without blocking clinically validated, HIPAA/FDA‑cleared digital therapeutics.