House subcommittee holds first hearing on artificial intelligence and criminal exploitation
Summary
House Judiciary subcommittee members and outside experts met July 16, 2025, for a hearing titled “Artificial Intelligence and Criminal Exploitation” to examine how AI is being used to commit fraud, produce deepfakes and synthetic child sexual abuse material (CSAM), and to discuss law enforcement uses and legal gaps.
The hearing, convened by Chairman Biggs, drew testimony from academic, industry and civil‑liberties witnesses who said AI both magnifies criminal activity and can help law enforcement respond. "We welcome everyone to today's hearing on artificial intelligence and criminal exploitation," the chair said at the start of the session.
Why it matters: witnesses said AI lowers barriers for criminals, enabling scalable fraud and highly realistic impersonations, and that rapid adoption by bad actors risks widespread financial and personal harms. At the same time, witnesses said law enforcement needs modern investigative tools, training and clear legal authority to use AI responsibly.
Experts described several concrete harms already reported, including voice‑clone extortion and large increases in AI‑enabled scams. Dr. Andrew Bowne, a professorial lecturer in law at the George Washington University Law School, told the committee that AI can be a "threat multiplier," noting that technologies such as computer vision, generative adversarial networks and large language models are used in surveillance, deepfakes and automated phishing.
Industry witnesses urged stronger public‑private partnerships and funding for investigative technology. Ari Redbord, global head of policy at TRM Labs, said, "We are rapidly approaching a world in which the bottleneck for crime is no longer human coordination, but computational power," and called for investment so defenders can match attackers' capabilities.
Civil‑liberties testimony emphasized constitutional protections. Cody Vinski, senior policy counsel at the ACLU, warned that responses to AI crime must preserve First and Fourth Amendment protections and cautioned against broad federal preemption that would block state and local regulation.
Committee members raised specific examples during questioning, including a Detroit facial recognition case in which a woman was arrested and detained for 11 hours before being released and cleared; members used the example to discuss the technology's disparate impacts. Witnesses also discussed possible statutory fixes for AI‑generated CSAM and legislative options such as sentencing enhancements when AI increases the scale or impact of a crime.
The committee entered written statements and letters into the record, including letters from multiple governors and state attorneys general opposing a broad moratorium on state and local AI rules that had been proposed in prior legislation.
The hearing did not produce votes or formal legislative action. Members and witnesses repeatedly pressed for follow‑up work on funding, training and narrowly tailored statutes to address AI‑enabled harms while protecting civil liberties.
The hearing concluded with the chair noting that this would likely be the first of multiple hearings on the subject, and the committee adjourned.