
House Homeland Security subcommittee examines online radicalization, generative AI and platform moderation after New Orleans attack

2490032 · February 28, 2025

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

The House Homeland Security Subcommittee on Counterterrorism and Intelligence opened its first hearing of the 119th Congress to examine how foreign terrorist organizations and other violent extremists use the internet, encrypted messaging apps, cryptocurrency and generative artificial intelligence to recruit, radicalize and mobilize individuals to violence.

Chairman Pfluger, presiding at the hearing, said the panel’s purpose was “to identify how foreign terrorist organizations like ISIS and other nefarious actors use the Internet, online networks, and generative AI to recruit and radicalize individuals to commit violence and terrorist acts.” He opened the session by citing the New Orleans New Year’s Day attack, noting investigators believe the perpetrator “self radicalized online through various propaganda channels affiliated with ISIS.”

The hearing heard from four outside experts: Daveed Gartenstein-Ross, senior advisor on asymmetric warfare at the Foundation for Defense of Democracies and founder/CEO of Valens Global; Aaron Zelin, senior research fellow at The Washington Institute and director of the Islamic State Worldwide Activity Map project; Daniel Flesch, senior policy analyst for the Middle East and North Africa at The Heritage Foundation; and Kurt Braddock, assistant professor of public communication at American University. Each summarized research and offered policy recommendations.

Why it matters: witnesses and members said online radicalization is a persistent, evolving national-security threat that now includes new tools such as generative AI. The hearing focused on operational examples, current gaps in platform enforcement and potential congressional steps to ensure law enforcement and platforms can keep pace.

Key testimony and evidence

- Gartenstein-Ross argued that extremists are moving through a technology adoption curve and warned that generative AI lowers barriers for bad actors. “Terrorists don't need cutting edge AI research labs to weaponize artificial intelligence. They only need access to the same widely available AI tools that businesses and individuals are already using,” he said, adding that such tools can be used to generate “hyper personalized extremist content, create deep fake recruitment videos, and use generative AI in recruitment agents that engage recruits dynamically.”

- Aaron Zelin described the New Orleans attacker, identified in public reporting as Shamsud Din Jabbar, as an example of someone who “recorded a video of the French Quarter using his Meta smart glasses,” posted ISIS-supporting material to Facebook and consulted older Islamic State instructional materials. He said jihadist groups also are experimenting with cryptocurrency, live streaming and foreign AI apps such as DeepSeek and that some arrestees have used TikTok and other mainstream platforms. Zelin reported that, per his dataset, there have been at least 36 arrest cases globally tied to cryptocurrency use since 2015 (13 in 2024) and 15 arrest cases since 2023 linked to ISIS activity on TikTok.

- Daniel Flesch urged attention to domestic fringes and campus activity, linking recent anti‑Israel demonstrations and some campus incidents to broader radicalizing networks. He recommended both proactive law-enforcement measures and civic responses to limit extremist organizing on campuses and in communities.

- Kurt Braddock emphasized that online and offline radicalization are intertwined, that algorithmic recommendation and revenue models can amplify polarizing content, and that content moderation plus “prebunking” and digital literacy programs can build resilience. He also urged more human review for nuanced cases and cautioned against seeing moderation as purely technical.

Examples and data cited

- The committee discussed the New Orleans vehicle attack that killed 14 people and injured dozens; witnesses said the attacker had posted pro‑ISIS material and may have drawn on preexisting IS guidance for vehicular attacks.

- Ranking Member Magaziner cited a Department of Homeland Security Office of Intelligence and Analysis finding: since August 2023, law enforcement disrupted five plots in which juveniles were radicalized online and mobilized to plan attacks.

- Witnesses described industry tools used to limit extremist content, such as shared hashing databases and digital fingerprints, and noted limits: adversaries use evasive spellings, new languages and encrypted/private messaging to evade detection.

Policy options discussed

- Platform measures: witnesses recommended shared “hashing” or fingerprint databases for audio, image and video content; coordinated takedown practices across companies; user and IP banning to limit repeat offenders; improved moderation in non‑English languages and nascent regional platforms; and more human analysts to review context‑sensitive material.

- Generative AI: witnesses warned that AI can generate tailored propaganda, operational instructions and deepfakes. Gartenstein-Ross and others recommended regular threat assessments (the GenAI Terrorism Threat Assessment Act was cited as an example of proposed legislation) and closer monitoring of AI misuse rather than reliance on reactive safety filters alone.

- Interagency and industry coordination: speakers urged stronger DHS/FBI strategies for sharing information with tech companies and among federal open‑source monitoring offices. Representative Magaziner noted a Government Accountability Office review finding the FBI and DHS had not developed comprehensive strategies for sharing information with social media and gaming companies.

Points of contention and additional concerns

- Members pressed witnesses about the proper legal line between protected speech and incitement, with experts noting the governing First Amendment standard centers on incitement to imminent lawless action and that prosecuting propaganda alone is legally fraught.

- Several members raised concerns about reduced moderation on some major platforms, the role of algorithms in pushing users toward more extreme content, and the challenges posed by platforms based outside U.S. jurisdiction or by apps with weaker content restrictions.

- Lawmakers also tied related threats to other issues raised in questioning, including exploitation of private messaging and gaming platforms for recruitment, the use of cryptocurrencies to move funds, and the need for more experienced personnel in federal counterterrorism units.

What the committee will do next

The subcommittee kept the record open for follow-up questions and asked witnesses to respond in writing under committee rules. Members indicated interest in continued bipartisan hearings that could include platform representatives, gaming industry witnesses and additional law‑enforcement officials. The chair closed the hearing with a request that the written record remain open for 10 days.

The bottom line

The hearing highlighted a consensus among members and experts that online radicalization is a current and evolving threat that intersects technology, law enforcement capacity and platform policy. Witnesses and lawmakers urged combined technical, legislative and community approaches — from better moderation and inter‑company coordination to expanded open‑source monitoring and prevention programs — to reduce the risk of future attacks.