
Committee reviews bill to bar AI from delivering mental‑health services without clinician review

Health & Welfare · April 22, 2026


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

A legislative committee reviewed a draft that would prohibit entities from offering mental‑health services through AI unless a licensed mental‑health professional reviews and approves the AI output. Members debated definitions (including whether to name generative AI), enforcement via the Consumer Protection Act, and HIPAA/FDA language before asking staff to redraft and solicit stakeholder input.

A legislative Health & Welfare markup Thursday focused on a draft bill that would bar companies or other entities from offering mental‑health services directly through artificial intelligence without a licensed clinician’s review and approval.

Katie Grant, special education finance director, told the committee the bill’s purpose is “preventing psychological harm, including death by suicide, by ensuring that mental‑health services are delivered by mental‑health professionals and not independently by AI systems.” She read the draft prohibition aloud, saying an entity “shall not offer or provide mental health services through AI without the review and approval of a mental health professional.”

The committee’s discussion centered on three technical questions: scope, definitions, and enforcement. Staff emphasized the draft removes an earlier Title 26 provision that regulated individual mental‑health professionals and instead targets entities that would provide services solely through AI. Members pressed whether basic functions such as automated transcription or note‑generation would be captured, and whether the prohibition was intended to stop AI from producing treatment plans or merely to require clinician oversight of AI‑generated work.

On definitions, staff explained a sentence in a prior house draft that said “AI includes generative AI” was removed after House Commerce counsel called that phrasing circular. Several members urged further legal and technical review of the definitions and asked staff to consult House Commerce’s AI expert before finalizing language.

Enforcement and penalties were another point of debate. Staff said violations would be treated as violations of the Consumer Protection Act, with the Attorney General able to investigate and private parties able to pursue remedies; the draft also included a civil penalty of $10,000 per violation. Members discussed whether companies running chatbots should be pursued under consumer‑protection rules while mistakes by licensed clinicians who fail to supervise tools should be handled through professional‑conduct channels.

The bill also contains a limited exception allowing licensed professionals to use AI tools that meet specified privacy or regulatory standards. Committee members questioned a draft phrasing that referenced either HIPAA compliance or FDA authorization, noting many AI tools used for clinical documentation are HIPAA‑compliant but are not reviewed by the FDA; staff agreed to flag that language for stakeholders and consider removing the FDA reference if HIPAA coverage is sufficient.

Several professional groups and social‑work stakeholders had previously proposed alternative approaches for distinguishing enforcement against companies versus licensed practitioners; staff said they will circulate wording that clarifies the intended interplay between Consumer Protection Act enforcement and licensing‑board processes.

The committee asked staff to incorporate stakeholder feedback, clean up numbering and formatting issues in the draft, and return with revised language for further markup. No formal vote was taken.