
UN high-level event urges global action on artificial intelligence's role in amplifying hate speech

June 16, 2025


AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

At a June 18 United Nations high-level event marking the International Day for Countering Hate Speech, UN officials, member states, tech companies and civil-society groups discussed how artificial intelligence can amplify hate and outlined steps—legal, technical and social—to detect, respond to and prevent harm.

United Nations officials, diplomats, technology company representatives and civil-society groups on June 18 urged coordinated global action to address how artificial intelligence can amplify hate speech and fuel real-world harm.

Virginia Gamba, the acting special adviser on the prevention of genocide, opened the high-level commemoration of the International Day for Countering Hate Speech and framed the session around the nexus of hate speech and AI. She warned that "hate speech fuels discrimination, undermines social cohesion, and in some cases constitutes incitement to violence," and stressed the need for partnerships across governments, tech firms, civil society and local communities.

The event combined two parallel signals: a push by member states to strengthen national and international rules and a simultaneous call from researchers and practitioners for tools that can detect, contextualize and respond to dangerous content before it escalates. Ambassador Omar Hilale of Morocco, the resolution’s penholder at the General Assembly, said Morocco has proposed focusing this year’s draft resolution on the intersection of AI misuse and hate speech and outlined national steps—legal, institutional and educational—aimed at ethical AI deployment. Hilale described Morocco’s draft AI legislation (introduced in April 2024 and reported to contain 17 articles) and cited existing national instruments, including its 2009 data protection framework and 2020 cybersecurity law, as part of that effort.

Speakers highlighted recent institutional moves inside the UN system. The secretary-general’s message, read at the event, said, "Hate speech is poison in the well of society," and noted UN actions including the appointment of Miguel Ángel Moratinos as the secretary-general’s special envoy to combat Islamophobia and the launch of a UN action plan to enhance monitoring and response to antisemitism.

Experts and civil-society representatives gave practical and sometimes divergent views on how to respond. Gregory Stanton, founder of Genocide Watch, described efforts to develop counter-speech tools and proposed a "words institute" to build AI systems that can answer hateful narratives at scale rather than rely solely on removal. Stanton argued: "It is not to take down the hate speech. It's not censorship. It is to answer it."

Technology companies said they are testing large language models and other AI to improve moderation. A Meta representative said that, in company testing, "LLMs often perform better than existing machine learning models, or can enhance the existing ones," and described systems-level safeguards, auditing and tools intended to reduce both harmful output and wrongful refusals to serve legitimate speech. Meta also reported small changes in enforcement prevalence for some categories after recent policy adjustments.

Civil-rights and digital-rights speakers cautioned about automated moderation’s limits and risks. A senior Electronic Frontier Foundation attorney emphasized that AI tools can "have serious freedom of expression implications," noting uneven performance across languages and contexts and urging transparency, notice-and-appeal mechanisms and independent audits. Access Now and other NGOs highlighted evidence that automated systems can be inconsistent—sometimes over-removing legitimate expression and sometimes failing to catch harmful content—especially for non‑English languages and marginalized communities; they urged protections for refugees, migrants and minority-language users.

Prevention and early-intervention approaches drew attention. Moonshot’s director for online violence prevention described a "secondary prevention" model that uses targeted outreach (including paid ads) to connect people consuming hate content to confidential counseling, reporting that such efforts can reach people earlier in a pathway to radicalization. Amandeep Singh Gill, the UN special envoy for digital and emerging technologies, recommended practical measures for platforms—"authenticity certification, labeling, and watermarking"—and supported the Global Digital Compact’s calls for governance and scientific assessment.

Member states across regions reiterated support for international cooperation and a human-rights framework. Several delegations invoked the UN Strategy and Plan of Action on Hate Speech and the Global Digital Compact. The European Union delegation cited the EU’s AI Act and Digital Services Act as examples of regulatory approaches that balance innovation and rights. Rwanda and other states drew a direct line from unchecked hate speech to past atrocities and urged multilingual, culturally aware monitoring and early-warning systems. The Philippines and other countries described national AI roadmaps and education and transparency initiatives.

Participants identified several near-term objectives: strengthen multilingual monitoring and early warning; expand human-in-the-loop moderation and notice-and-appeal processes; promote transparency, independent auditing and standards for AI governance; invest in education, counter-speech and community-based prevention programs; and deepen partnerships among states, the private sector and civil society. No formal vote or negotiated text was concluded at the meeting; speakers instead framed priorities for follow-on negotiations, capacity-building and interagency coordination at the UN.

The event closed with announcements of follow-up side events and working sessions to advance the technical and policy work, along with repeated calls from participants for sustained, multistakeholder effort to ensure AI serves prevention and protection rather than amplifying harm.