
Experts tell House panel AI fuels deepfakes, sextortion and synthetic CSAM; legal gaps identified

July 17, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Witnesses told the House subcommittee that generative AI is powering deepfakes, sextortion and synthetic child sexual abuse material (CSAM), and recommended targeted criminal statutes, sentencing enhancements and preservation of constitutional protections. Dr. Andrew Bowne and others highlighted H.R.1283 as a possible statutory fix.

Testifying before a House Judiciary subcommittee on July 16, 2025, witnesses described how generative artificial intelligence is being exploited to create deepfakes, automated sextortion schemes and synthetic child sexual abuse material (CSAM), and urged targeted legal remedies.

Dr. Andrew Bowne, a professorial lecturer in law at the George Washington University Law School, told the committee that generative models, computer vision systems and large language models can be repurposed for crimes ranging from identity theft to the creation of synthetic CSAM. "Deepfakes, AI generated CSAM and automated fraud are not theoretical threats. They are real, growing, and causing harm now," Bowne said.

Bowne recommended several legislative approaches, including criminal law reform to create offenses for malicious use of AI and sentencing enhancements when AI increases the scale or impact of a crime. He specifically referenced H.R.1283 as proposed legislation intended to amend federal CSAM statutes to cover AI‑generated material.

Industry witnesses described how accessible the tools are to bad actors and how they enable emotionally manipulative scams. Zahra Bridal, cofounder and chief technology officer of Overwatch Data, described voice cloning and apps that "turn ordinary photos into fake explicit images, which are then used to bully, harass, and extort victims," and urged expanded education and faster sharing of detection tools between private companies and law enforcement.

Civil‑liberties testimony warned against overbroad restrictions that could infringe constitutional rights. Cody Vinski of the American Civil Liberties Union said that responses to criminal uses of AI must adhere to "the constitution, civil rights, and civil liberties," and cautioned that measures such as forced scanning of private communications or prohibitions on encryption could raise Fourth Amendment concerns.

Committee members discussed remedies available today, including wire fraud and defamation laws, but witnesses said gaps remain for AI‑specific harms, especially where expressive protections under the First Amendment apply. The panel also discussed state efforts such as Tennessee's ELVIS Act, cited as an example of a state statute addressing unauthorized voice simulation, and witnesses warned that a broad federal moratorium on state AI rules would hamper such state responses.

The committee did not vote on legislation. Witnesses urged narrowly tailored federal statutes focused on demonstrable harms such as synthetic CSAM and recommended concurrent investments in detection technology, cross‑sector sharing, and training for investigators.