Witnesses: AI accelerates fraud but also helps detect deepfakes and scale defenses
Summary
Testimony at the House subcommittee hearing emphasized that artificial intelligence has multiplied scammers' capabilities — producing convincing phishing, voice cloning and deepfakes — while firms are deploying AI tools to detect fraud in real time and protect consumers.
Witnesses told the House subcommittee that artificial intelligence (AI) has become a central factor in recent growth of fraud but can also be used to detect and stop scams.
Ian Bednowitz of LifeLock testified that data breaches and AI are the primary drivers of the recent rise in fraud. He said the number of Social Security numbers exposed on the dark web has risen sharply and that AI lets criminals produce convincing phishing messages, voice clones and deepfakes: “in the hands of criminal organizations, artificial intelligence is allowing them to industrialize fraud.” He added that seniors suffered particularly large average losses.
Paul Benda and other witnesses described AI as a “double‑edged sword.” Benda said banks and security firms use AI to scan millions of data points to spot fraud in real time, identify deepfakes and block fraudulent communications. Kate Griffin said law enforcement and governments must also have access to the tools and training needed to use AI to query databases and build cases at scale.
Witnesses offered several policy recommendations: invest in public‑private AI tools for detection and takedown, modernize breach‑notification rules so law enforcement and firms can act quickly, fund training for local law enforcement in using AI tools, and encourage financial institutions and platforms to deploy defensive AI products that scale to meet automated attacks.
The hearing produced no regulatory action; committee members asked witnesses for technical follow‑up and examples of AI‑driven defensive tools.
