Chairman Mike Rounds convened the Securities, Insurance and Investment Subcommittee of the Senate Banking, Housing, and Urban Affairs Committee to examine the growing role of artificial intelligence in capital and insurance markets and possible congressional responses.
The hearing focused on balancing innovation and oversight as firms deploy AI for underwriting, fraud detection and market surveillance. Chairman Rounds said the hearing’s title, “guardrails and growth,” captured the goal: “examine how AI is transforming financial services and explore how Congress can foster innovation while promoting transparency, accountability, and smart oversight.”
Why it matters: Senators and industry witnesses told the panel that AI can increase efficiency and market integrity but also introduces new systemic and consumer risks — from large-scale fraud and “surveillance pricing” to model opacity that complicates auditing and liability. Witnesses urged clearer liability rules, common standards and controlled public-private testing environments so regulators and firms can evaluate AI tools before wide deployment.
Kevin Kalanich, intangible assets global collaboration leader at Aon, described insurance as both a risk-transfer tool and a mechanism that sets behavioral standards for the industry. “Insurance underwriting combined with statutes, litigation precedent, contractual allocation of liability, and evolving standards such as the NIST AI risk management framework set thresholds for what is considered an acceptable risk,” Kalanich said. He said some carriers are adding AI exclusions or sublimits while others are creating AI-specific endorsements, and that coordinated regulation is needed given state-based insurance oversight under the McCarran-Ferguson Act.
Tal Cohen, president of Nasdaq, outlined how exchanges use AI for surveillance and investigator support. He said AI helps “reduce false positives and focus on the higher risk items” and can speed investigators’ work by converting hours of manual review into minutes. Cohen emphasized that exchanges already comply with regulations such as Regulation SCI and collaborate with peers, FINRA and the SEC on security and market-integrity controls.
David Cox, vice president for AI models at IBM Research, urged open, auditable models and use-case based rules: “Regulate the application of AI, not just the technology.” Cox said enterprises need transparency about training data and model governance so regulated firms can audit systems over time. He added that open-source models and academic–industry collaboration can broaden access and security.
Senators pressed witnesses on multiple risks: Senator Mark Warner warned that regulatory language matters and raised concerns about foreign adversaries injecting malicious data into models; he asked whether exchanges have a multi-exchange public-private body to set cyber and AI guardrails. Cohen said exchanges currently collaborate with one another and with the SEC and FINRA, but that a broader discussion about a formal multi-exchange entity was still needed.
On consumer and market harms, senators and witnesses discussed “surveillance pricing,” fraud losses and the potential for models to cause correlated behavior across market participants. Warner referenced press reports that one major credit-card network’s AI improvements increased fraud detection by up to 300% and prevented more than $50 billion in fraud in three years; witnesses also cited a McKinsey estimate that AI systems can reduce fraud detection costs by about 30% while improving true positive rates.
Insurance and climate: Senators asked how insurers use AI for climate-driven risks such as wildfires. Kalanich said carriers increasingly apply climate and monitoring data to predict and mitigate events — for example, using drones for claims assessment and using models to encourage loss-mitigation practices that can lower a homeowner’s cost of risk.
Workforce and equity: Senators raised concerns about entry-level job displacement. Cox said research suggests AI will alter tasks within occupations and can augment productivity, but acknowledged labor-market change and urged education and training adjustments.
Policy options discussed included:
- A federal “sandbox” or testing laboratory to let regulators and firms evaluate AI in a controlled setting. Chairman Rounds said he is reintroducing the Unleashing AI Innovation in Financial Services Act to create such a venue; witnesses expressed conditional support if the sandbox is well designed, temporary and tightly controlled.
- Harmonized rules to avoid a patchwork of state regulations that could hinder innovation; witnesses warned against “balkanization” but also noted states such as New York and Texas are already moving on AI-related rules.
- Use-case based frameworks and transparency requirements so regulated firms can trace model inputs, outputs and governance for auditing.
No formal votes or regulatory actions were taken at the hearing. The chairman closed by setting administrative deadlines for the record: senators may submit written questions within one week of the hearing, and witnesses were asked to file responses approximately 45 days later.
The hearing combined technical testimony with policy debate and left several open questions for follow-up, including how to assign liability across model developers, deployers and market intermediaries and how to structure multi-exchange public-private collaboration on cyber and AI risks.