House Homeland Security hearing: witnesses urge federal baseline and enforceable AI guardrails to protect cybersecurity
Summary
Witnesses at the House Homeland Security Subcommittee hearing urged Congress to establish enforceable federal baseline standards for AI security, warning that adversaries already weaponize AI and that fragmented state rules risk leaving gaps in national cyber defenses.
Witnesses testifying before the House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection on Oct. 11 urged Congress to establish a federal baseline for securing artificial intelligence systems and called for enforceable guardrails to keep pace with rapidly evolving threats.
The hearing examined how AI systems can be secured, the risks posed by adversaries using AI tools, and how AI can strengthen cyber defense. Chairman Garbarino opened the hearing and said members would examine “the nexus between artificial intelligence or AI and cybersecurity.”
The core argument from several witnesses was that AI security cannot be left to a patchwork of state rules. Kiran Chinnagongon Nagari, cofounder and chief product and technology officer at Securin, told the subcommittee: “Securing AI models is not just about protecting algorithms.” He argued that model vulnerabilities become broader societal vulnerabilities as AI is embedded in health care, critical infrastructure and the economy. Nagari recommended a federal baseline “much like PCI or HIPAA, developed in partnership with states, setting minimum standards while allowing for regional adaptation.”
Steve Fale, Microsoft’s U.S. government security leader, said the speed of adversary adoption requires urgent policy action. “There is no way to tackle this urgent national security issue without organizations, including the federal government, immediately embracing AI,” Fale said, while also urging secure-by-design approaches and workforce readiness to accompany adoption.
Jonathan Danbrodt, CEO and cofounder of Cranium, described the need for continuous, lifecycle security and governance: “Security should be treated as a first class concern in model design and training just as performance or accuracy is.” He warned that agentic systems—AI agents that take actions autonomously—introduce a new class of security risk and must be covered by lifecycle protections.
Several witnesses contrasted international policy approaches and warned of a competitive dimension. Nagari said Chinese models often prioritize speed and scale while Western models “tend to prioritize security and transparency,” a trade-off policymakers must weigh. Multiple witnesses recommended enforceable guardrails that can be updated as models evolve.
The committee also heard concerns about talent and capacity to secure AI. Chairman Garbarino said he had recently heard of “some of our best cybersecurity experts being directly approached by foreign leaders and being offered visas and funding for their research,” and warned of a potential “brain drain” from agencies including CISA, NIST and NSF. Witnesses echoed the urgency of workforce development and retention as part of a federal strategy.
Members noted existing federal tools and programs as models for a baseline but pressed witnesses on design and implementation. Several witnesses recommended transparency about model training data and a “bill of materials” to help purchasers and agencies evaluate AI products’ security posture. Danbrodt and others endorsed lifecycle governance, continuous monitoring and enforceable testing requirements.
The subcommittee left the record open for additional written testimony and asked witnesses to respond to follow-up questions. The hearing did not result in votes or specific legislation; members urged further work on a federal framework that can be coordinated with states and updated as AI evolves.
