Industry witnesses tell House committee AI can fight fraud and strengthen markets — but require governance

House Committee on Financial Services · December 10, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Witnesses from Google Cloud, Nasdaq, Zillow, and Palo Alto Networks told the House Financial Services Committee that AI already improves anti-money-laundering (AML) compliance, fraud detection, market surveillance, and cybersecurity, while urging explainability, human oversight, and information sharing to manage new risks.

Industry witnesses told the House Financial Services Committee that AI is delivering measurable improvements in fraud detection, market surveillance and cybersecurity — but they said the benefits depend on governance, transparency and public‑private information sharing.

Jeanette Manfra of Google Cloud said Google’s AI tools have improved detection and reduced false positives in financial‑sector fraud investigations, and she highlighted the company’s Agent Payments Protocol (AP2) and Secure AI Framework (SAIF) as industry efforts to standardize secure agent transactions and AI risk mapping.

Tyler Cohen of Nasdaq described Verafin, a cloud‑native AI anti‑financial‑crime platform used by hundreds of institutions, and said Nasdaq’s Dynamic M‑ELO order type evaluates about 130 market signals every 30 seconds to improve execution quality for large institutional trades. He stressed internal governance tied to NIST frameworks and Regulation SCI (Reg SCI) oversight.

Nicholas Stevens of Zillow described Zillow’s use of AI in housing search and lead prioritization, as well as a publicly shared fair‑housing classifier designed to detect and prevent digital steering in housing searches. Wendy Whitmore of Palo Alto Networks warned that attackers now use AI to compress attack timelines and urged “secure by design” approaches that make security part of the AI lifecycle.

Witnesses repeatedly recommended harmonized standards, human‑in‑the‑loop safeguards, testing and monitoring across model life cycles, and information‑sharing mechanisms so government and industry can detect and stop scams faster. They told members they were willing to provide more technical detail in written follow‑ups.

The witnesses’ examples were operational descriptions and product claims presented as part of testimony; committee members and consumer advocates pressed for independent oversight, bias testing, and enforceable consumer protections.