Experts at House hearing urge robust sandboxes and technical standards to test AI in finance
Summary
Multiple witnesses told the House subcommittee that regulatory sandboxes, red‑teaming, and consensus industry standards could accelerate safe AI adoption in banking and fintech while preserving competition and protecting consumers.
Witnesses at the House Financial Services subcommittee hearing recommended expanding supervised testing environments and technical standards so companies can experiment with high‑impact AI use cases while regulators learn how to assess risk.
Why it matters: Several witnesses said supervised testbeds can help firms and regulators identify real‑world risks before broad deployment — especially for agentic AI and generative models that can take multi‑step actions.
Christian Lau, cofounder and president of Dynamo AI, told the subcommittee that regulators should "expand the use of AI sandboxes, as called for in the administration's AI Action Plan," and bring "leading evaluation and red teaming technology to rigorously test these experimental AI against real world risks." Lau described Dynamo's work helping banks move proofs of concept into production by establishing governance controls and red‑teaming processes.
Nicol Turner Lee of the Brookings Institution supported sandboxes that include ethics and bias mitigation requirements. She said regulators should prefer testing models in transparent, collaborative environments: "We should be creating more greenhouses, which allow for more transparency and sunlight into these processes, promoting collaborative partnerships between business, government, and consumers to see where issues arise and how the three of them can foster trust with one another."
Members and witnesses discussed international examples. Lau and other witnesses pointed to Singapore's AI sandbox and verification programs as a working model for combining testing, audits, and stakeholder engagement; Lau said Singapore's programs "bring the latest technologies to actually evaluate the latest risks that are emerging with AI agents and new AI technologies."
Panelists recommended several design elements for effective sandboxes: clear objectives and public disclosure about goals, bias mitigation and explainability requirements, continuous monitoring and audits, affirmative consumer protections for participants, and publicly available blueprints and post‑testing reports. Witnesses also cautioned that some existing sandboxes admit only a small number of participants, and suggested federal guidance and shared tooling to help scale red‑teaming and continuous evaluation.
No formal action was taken; members said they will consider these design elements in drafting future legislation and supervisory guidance.

