Witnesses urge carveouts to Section 230 immunity for generative AI and highlight risks from deepfakes
Summary
Panelists recommended clarifying that Section 230 immunity does not extend to generative AI product outputs or to platforms that knowingly host criminal material; they urged red‑teaming and civil remedies for AI‑generated nonconsensual imagery.
As generative artificial intelligence becomes a tool for creating nonconsensual intimate images and hyperreal deepfakes, witnesses urged Congress to limit Section 230 protections for AI products and to create paths for liability and prevention.
Why it matters: Speakers warned that generative tools now let platforms' users and AI developers produce realistic fake images of minors and adults, complicating identification, law enforcement triage, and victims' ability to obtain relief.
Claire Morrell said generative AI is a product rather than third‑party speech and therefore should not receive broad Section 230 immunity: "AI products are not hosting third‑party speech. Section 230 was not meant to protect product design." She recommended a targeted statutory clarification to open a path for litigation and accountability.
NCMEC and other witnesses described a rising volume of AI‑generated exploitative imagery and said red‑teaming and pre‑deployment safety testing should be part of legislation. Yota Suras urged preventive measures that limit the ability to create exploitative images upstream rather than relying solely on downstream removal or criminal prosecution.
Members asked about constitutional limits; witnesses proposed narrowly drafted exclusions (for generative AI when the company is creating or co‑creating content and for knowingly hosted criminal content) to reduce First Amendment risks. Several witnesses said clarifying that product‑design actions do not fall under Section 230 would be a viable, enforceable change.
The bottom line: Lawmakers signaled interest in narrow, bipartisan statutory language excluding generative AI output from Section 230 immunity and requiring safety testing and clearer civil and criminal remedies for deepfake exploitation; no formal vote occurred.
