House adds AI protections for explicit images and mental-health advertising language
Summary
The House adopted language to add artificially generated explicit depictions of minors to child-protection statutes and accepted an amendment requiring truth-in-advertising for AI mental-health tools used in clinically oriented settings.
The House adopted a committee substitute to H.B. 20-35 that explicitly criminalizes AI-generated explicit visual depictions of minors, extending prohibitions that already apply to photographic images to artificially generated ones.
The bill sponsor, the gentleman from Nottoway, told members the change closes a statutory gap exposed by deepfake and AI-image technologies: "This bill does two essential things. First, it prohibits the use of artificial intelligence to create explicit material using someone's image or voice without their written consent. Secondly ... it updates our child-protection statutes to explicitly cover AI generated images of minors."
During floor debate, members also approved an amendment creating a truth-in-advertising requirement for mental-health contexts, ensuring that services represented as human-provided are not delivered solely by AI without clear disclosure. Supporters said the amendment addresses rising reports of minors and adults relying on AI platforms for therapy and the need to ensure that qualified humans provide certain mental-health services.
The amendment passed after unanimous committee support, and the committee substitute was adopted on the floor. Members noted no recorded opposition at the committee stage and emphasized the urgency of statutory updates to keep pace with quickly changing AI capabilities.
The bill now proceeds to perfection, with members noting open implementation questions about enforcement and how to define AI-generated depictions in digital-evidence and prosecution contexts.
