Citizen Portal

Senate intelligence panel hears experts on AI risks to national security and governance

2822773 · March 25, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

Senators heard industry and academic experts warn that widely available generative AI models pose national-security, economic, and democratic risks, while offering major benefits if governance, workforce training, and data infrastructure improve.

Senator Mark Warner, chairman of the Senate Select Committee on Intelligence, convened an open hearing with AI experts and researchers to examine national security implications of generative models and possible policy responses. Witnesses included Dr. Yann LeCun, Meta’s chief AI scientist; Dr. Benjamin Jensen, senior fellow at the Center for Strategic and International Studies and professor at the Marine Corps University School of Advanced Warfighting; and Dr. Jeffrey Ding, professor of political science at George Washington University.

Warner said the intelligence community has long used machine learning for signal processing and translation but that generative models now change the scale and social impact of the technology, lowering barriers for foreign governments and malicious actors to adopt advanced tools. He told the committee he wanted the hearing to identify organizational, contracting and technical barriers the U.S. intelligence community faces in keeping its edge.

The witnesses emphasized three recurring themes: (1) proliferation and dual-use risks — accessible models can be repurposed for foreign intelligence, cyberattacks, and disinformation; (2) human and institutional constraints — agencies need trained analysts, adaptable bureaucracy, and data infrastructure to use models safely; and (3) governance and international coordination — safety standards, transparency and red teaming are important to reduce harms while preserving innovation.

Dr. Benjamin Jensen warned that without a workforce fluent in data science and statistical reasoning, analysts will be unable to interrogate model outputs or balance algorithmic inference against human judgment. Jensen said, “If you don't actually make sure that people understand how to use the technology, it's just a magic box.” He urged training, experimentation, and routine tabletop exercises so decision makers know when to slow down in crises.

Dr. Yann LeCun described large models as foundational infrastructure that enables downstream applications and said open sharing of academic work and model components has historically accelerated American leadership. He also described safety practices, including red teaming, data curation, and bug bounties, used before releasing large models in controlled ways.

Dr. Jeffrey Ding urged Congress to focus on diffusion capacity — a country’s ability to adopt innovations widely across its economy — not only on who achieves initial breakthroughs. He argued China has strong innovation metrics but a diffusion deficit and recommended policies to broaden the engineering workforce and support applied adoption across sectors.

Members asked detailed follow-ups on immediate threats (disinformation, market manipulation), military uses (loitering munitions, autonomous weapons), and possible regulatory responses (watermarking, algorithmic accountability). Several senators proposed studying an AI-assurance approach analogous to the FDA's role in approving drugs, while preserving American innovation.

The hearing produced no formal votes but generated consensus among witnesses that the United States must invest in people, interoperable data systems, and governance mechanisms that combine red teaming, transparency and international cooperation to lower the risks of rapid diffusion of generative AI.