House subcommittee hearing stresses balancing AI innovation with consumer protection in financial services

5785141 · September 18, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

The House Financial Services subcommittee heard from industry and academic experts who urged Congress to preserve U.S. competitiveness while strengthening consumer protections, data safeguards and regulator capacity as generative and agentic AI are adopted across banking, markets and compliance functions.

The Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence convened a hearing titled “Unlocking the next generation of AI in the U.S. financial system for consumers, businesses, and competitiveness,” where members and witnesses discussed both opportunities and risks as generative AI and agentic systems are adopted across the financial sector.

Why it matters: Lawmakers and witnesses said the United States must strike a balance that allows firms — from large banks to community institutions — to adopt AI while ensuring consumer protection, market integrity and data privacy.

Several witnesses said AI offers high-return use cases in the financial sector, including fraud detection, model-driven compliance and developer productivity, while repeatedly flagging risks such as bias, data leakage, hallucination, and new fraud techniques.

"Responsible governance is not a brake on innovation. It is a mechanism that ensures that innovation can be deployed securely and sustainably," said David Cox, vice president for AI models at IBM Research.

Christian Lau, cofounder and president of Dynamo AI, said many AI proofs of concept fail to reach production not because the technology cannot deliver but because "financial institutions struggle to answer open questions about managing AI risk in heavily regulated, high impact, and consequential environments."

Nicole Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, warned that poorly governed AI could worsen discriminatory outcomes: "In other words, when algorithms make poor decisions, the quality of life for black and brown communities, seniors, and even some of us are placed in reverse."

Panelists described a range of policy responses. Several witnesses and members urged stronger federal data privacy standards, expanded use of privacy-enhancing technologies (PETs) such as synthetic data and differential privacy for model training, and expanded regulatory sandboxes or similar testbeds to let firms experiment under supervision. A number of witnesses also recommended strengthening regulators' in-house technical expertise and creating clearer, risk- and outcome-based supervisory expectations.

Lawmakers on both sides of the aisle raised concerns about national competitiveness and regulatory fragmentation. Several members warned that a patchwork of state laws could create compliance burdens for nationally active firms and undercut U.S. competitiveness relative to other countries. "There is a value in having interoperable federal standards," testified Matthew Reisman, director of privacy and data policy at the Center for Information Policy Leadership.

Discussion also surfaced open questions about liability and oversight for agentic AI, including who would be responsible if an autonomous agent acted improperly or persuaded a consumer to take harmful financial steps. Witnesses recommended phased testing (for example, second-look reviews of declined applications) and greater transparency when AI is used to make consumer-facing decisions.

The hearing did not result in formal committee votes. Members said they will continue bipartisan work and follow up through written questions and further hearings.

The subcommittee accepted written testimony and invited follow-up submissions for the record.