Supporters urge safeguards in AI‑behavioral health bill; industry warns of overbreadth
Summary
Delegate Lily Chi presented HB 883 to prohibit AI systems from misrepresenting themselves as behavioral‑health providers and to require disclosures and suicide/crisis protocols; social‑work and health groups supported the bill while tech and business groups called it overly broad and urged Utah‑style, risk‑based amendments.
Delegate Lily Chi told the Senate Finance Committee that House Bill 883 seeks to prevent consumer harms from AI systems that claim to be behavioral‑health professionals or otherwise give the impression they provide clinical care.
"AI is not licensed. They are not accountable to a regulatory body, and they cannot exercise professional judgment grounded in human relationships or lived experiences," Carissa Proctor of the National Association of Social Workers (Maryland chapter) testified in support. Proctor urged safeguards, including clear disclosures and crisis‑referral protocols when AI systems encounter suicide‑related content.
Kaiser Permanente’s Allison Taylor suggested a clarifying amendment to exclude administrative or supplementary uses of AI by licensed providers, saying the bill should not unintentionally restrict clinically supervised, provider‑led use of tools.
Industry witnesses — including Hannah Allen of the Maryland Chamber of Commerce, Tara Hoops of the Chamber of Progress and Margaret Durkin of TechNet — urged an unfavorable report in the bill’s current form. They argued the current language swept in broad categories of consumer‑facing AI (voice assistants, coding tools, photo editors) and said the requirement to display a disclosure at the beginning of each use or to implement suicide‑detection infrastructure would impose heavy compliance burdens and degrade user experience.
Opponents and some proponents pointed to Utah's framework as an example of a narrower, risk‑based approach and asked the committee to consider removing or narrowing a private right of action that was added in the House.
The committee heard favorable and unfavorable testimony and fielded questions about definitions, such as the scope of "developer" and whether ChatGPT‑style tools are captured. The hearing concluded without a recorded committee vote.