Citizen Portal

Committee considers H.816 to bar AI from making independent therapeutic decisions

Senate Health and Welfare · April 2, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

The sponsor and legislative counsel described H.816, which would prohibit AI systems from independently diagnosing or making therapeutic decisions, allow administrative uses of AI with professional oversight, require consent for recorded therapeutic communications, and subject violations to consumer-protection enforcement.

On April 1 the Senate Health and Welfare Committee reviewed H.816, a bill intended to set boundaries on the use of artificial intelligence in mental-health care so that clinical judgment remains the responsibility of licensed professionals.

"The purpose of this act [is] to safeguard individuals seeking mental health services in Vermont by ensuring that therapeutic judgment, clinical decision making, and therapeutic communication remain the responsibility of medical professionals and are not delegated to artificial intelligence systems," legislative counsel Jen Harvey said while walking the committee through the bill's purpose section.

Sponsor Wendy Critchlow told the committee that H.816 prohibits representing an AI system as providing therapeutic judgment, diagnosis, or treatment, while allowing AI for administrative, documentation, and quality-improvement tasks when a licensed professional retains clinical responsibility.

Critchlow said the bill seeks to protect patients in vulnerable moments and does not aim to stifle innovation: "AI systems, while powerful, do not possess clinical training, licensure, or the ability to understand the human context behind someone seeking help." She told members the measure was reported out of committee on an 11-0 vote.

Harvey reviewed the bill's definitions, including "artificial intelligence" and "generative artificial intelligence," and noted the text borrows language aligned with definitions used in other jurisdictions and in companion House bills. Committee members raised a concern that the definition of generative AI was circular, inserted mainly to preserve the reference in the bill, and counsel said staff would continue reconciling definitions across companion measures.

H.816 would add a new chapter to Title 26 regulating AI in the professions, add misuse of AI to the lists of unprofessional conduct under existing professional-regulation statutes, and create disclosure and consent obligations: when AI is used to record identifiable therapeutic communications, the patient's or client's consent is required. Counsel also said the enforcement language treats violations as violations of the Consumer Protection Act, which would allow the attorney general to pursue enforcement and may leave private remedies available.

The bill lists covered professionals (physicians, APRNs in psychiatric specialties, psychologists, licensed social workers, certified peer-support providers, and others) and permits AI uses such as scheduling, transcription, de-identified analytics, and documentation support, so long as the professional reviews and approves the outputs. H.816 explicitly prohibits AI from independently generating diagnoses or treatment plans.

Carve-outs in the bill exclude religious counseling, noncertified peer support, and general self-help or educational resources that are not represented as clinical services. Committee members questioned how the state could regulate advertising and online services that may appear to offer clinical care, and asked staff to consult with House Commerce and other committees on enforcement and marketing definitions.

The committee did not take a final vote during this session and requested additional technical briefings and documents comparing the companion bills.