Carnegie Council’s Joel Rosenthal frames a "post‑liberal" world and warns AI raises urgent questions of authority and responsibility
Summary
Joel Rosenthal told a Hinckley Institute audience that the norms underpinning the liberal world order are under strain, and he used artificial intelligence as a case study to ask who should hold authority and responsibility for AI systems, citing the limits of corporate self-restraint and contested government demands for access.
Joel Rosenthal, president of the Carnegie Council for Ethics and International Affairs, told a Hinckley Institute Forum audience that the international rules and moral commitments that defined the liberal world are fraying and that artificial intelligence illustrates the ethical and institutional challenges that follow.
Rosenthal argued that the liberal order—grounded in international cooperation, defense of democratic institutions, fidelity to truth and a humanitarian imperative—rested on both substantive values and procedural restraints. "Ethics is plural, not singular," he said, urging a pluralistic approach that manages differences rather than demanding purity of principle.
Rosenthal said contemporary politics shows a "rupture" from those norms, naming recent public leaders and speeches as evidence of a shift toward consolidation of power. He described the present moment as historically consequential and cautioned that unrestrained power historically yields dangerous outcomes.
Using AI as a case study, Rosenthal pressed the audience to consider whether systems such as Anthropic’s Claude are mere tools or entities warranting moral status. He noted Anthropic’s public framing of Claude’s development and read a summary of reporting about the company’s concerns: "Anthropic has two concerns over two issues that it isn't willing to drop ... AI‑controlled weapons and mass domestic surveillance of American citizens," he said, quoting a Wall Street Journal summary of the controversy.
That dispute, Rosenthal said, crystallizes competing pressures: some companies want to restrict certain government uses, while government actors insist on access during perceived security races. He described the bargaining as a standoff in which companies may refuse contracts that would allow automated weapons or blanket surveillance, and governments may threaten punitive measures for noncompliance.
Rosenthal also invoked broader intellectual questions: "Does AI have a soul?" and whether reliance on algorithmic authority risks supplanting human reason—an outcome he compared to a new kind of deference to cloud‑hosted systems. He raised practical consequences, including liability and insurance: if an AI system fails, who bears responsibility?
During a student Q&A, Rosenthal answered questions on moral status, international order, corporate liability and the feasibility of universal rules. To a student who asked whether a global ethical standard for AI is possible, Rosenthal recommended starting with prohibitions—an agreed list of uses that should be off limits—rather than attempting a single prescriptive code.
Students also pressed Rosenthal on the Anthropic‑Pentagon dispute, and he encouraged continued public conversation and institutional innovation.
The forum concluded with a call for creative, pluralistic institutional answers that pair ethical reasoning with practical, technology‑by‑technology arrangements to harness AI's benefits and prevent its harms.