Panel warns AI will raise new fraud and liability risks; urges stronger coordination and clear disclosure when models decide
Summary
Witnesses and members flagged AI‑enabled fraud, deepfakes, and novel agentic threats as emerging hazards for consumers and financial institutions, and called for improved interagency coordination, public disclosure when AI is used, and liability frameworks for agentic systems.
Members and witnesses at the House subcommittee hearing highlighted growing threats from AI‑enabled fraud, deepfakes, and agentic systems that can autonomously carry out multi‑step tasks.
Why it matters: Lawmakers warned that fraudsters already use generative AI and deepfakes in scams, and that agentic AI introduces new avenues for data exfiltration, prompt injection, and autonomous malicious actions. Those risks directly threaten consumers and could create new supervisory challenges for regulators.
Nicole Turner Lee (Brookings Institution) said AI could "widen the racial wealth gap" if discriminatory training data or opaque decision‑making reaches mortgage, credit, or other financial decisions. Members cited recent data and industry surveys; one member noted industry reporting that "92% of financial institutions surveyed indicate that fraudsters are using generative AI."
Panelists urged immediate steps: more interagency coordination, clear consumer disclosure when AI influences credit or other decisions, enhanced sharing of fraud indicators among firms and regulators, and investment in privacy‑enhancing technologies (PETs) and incident response. Matthew Reisman said industry and regulators should create forums for rapid information exchange so defenders can keep pace with evolving attacker techniques.
Members also asked about liability for agentic systems that act autonomously. Witnesses recommended phased supervision, second‑look reviews of declined applicants, and transparency requirements for AI decisioning so that disparate impacts can be audited.
No formal enforcement actions were proposed at the hearing; members asked for additional written record evidence and technical follow‑ups.
