Citizen Portal

Industry witnesses tell House panel AI can scale cyber defense; vendors cite steep efficiency gains

June 13, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Microsoft, Trellix and other vendors told the House Homeland Security Subcommittee that AI tools can markedly improve detection and response, offering example metrics and use cases while cautioning that AI requires secure design and human oversight.

Industry witnesses told the House Homeland Security Subcommittee that integrating artificial intelligence into cybersecurity operations can scale defenses and reduce human workload, but they cautioned that models and agentic systems require secure design and human review.

Steve Fale, Microsoft’s U.S. government security leader, said Microsoft protects customers from “more than 600,000,000 cyber attacks per day” and that the company has seen large gains from AI tools in its operations. “As a result, we've seen a 34% decrease in mistakes, a 17% decrease in breaches, and a 30% faster time to incident resolution using this technology,” Fale said, citing results from Microsoft’s Security Copilot deployments.

Fale described an investigative example in which a prior human-led inquiry cost “$640,000,” while Microsoft’s generative-AI–led approach achieved “the same result in an AI led investigation for only $80,” which he characterized as an “8,000 time increase in throughput.” Fale emphasized that human analysts still review AI outputs and that adoption must be paired with secure hosting and controls.

Gareth McLaughlin, chief product officer at Trellix, described agentic AI in security operations. He said Trellix uses commercial models and builds “our own frameworks to make sure we use those securely and safely.” McLaughlin said agentic systems can prepare investigative evidence before analysts arrive and that, in customer deployments, “the application of agentic AI effectively is a tenfold increase in the security operations capability that they have.” He added that his company “always distrust[s] it until proven otherwise” and keeps humans in the loop to verify outputs.

Kiran Chinnagongon Nagari of Securin emphasized that model capability increases attack surface and that “models with high reasoning capability are paradoxically more exploitable,” arguing for robust model security and adversarial testing before deployment.

Witnesses described both tactical uses—such as automated domain-spoof detection that Fale said can reach “greater than 99% accuracy detecting these domains immediately”—and broader operational benefits for smaller security teams. At the same time, witnesses warned of new classes of threats, including AI-assisted phishing, deepfakes and polymorphic malware.

Committee members recorded the vendor claims and asked witnesses to provide additional detail in the written record; the hearing included no votes on procurement or adoption.