
The World This Week

February 27, 2026

By Michael Froman
President, Council on Foreign Relations

This is a tale of two cities—Washington and New Delhi—where the issues of sovereignty and artificial intelligence (AI) have recently come to a head.


First, Washington. Under the second Trump administration, the United States has sought, through a laissez-faire regulatory approach, to ensure that privately owned U.S. firms build the most powerful AI systems in the world. In many respects, this approach is working as designed. Private capital and innovators are doing what they do best in building ever more ingenious U.S.-made mousetraps.


The caveat to this light-touch regulatory environment was always that the government, to enhance its sovereign powers, would demand to become the ultimate power user of AI—co-opting the tools produced by U.S. firms for national security, at scale and on its own terms. In practice, this is proving rather complicated, not least because many in the AI community would like to build products, including those they provide to the military, with built-in safeguards.


Is a sovereign power truly sovereign if a private firm can constrain its ability to use what could be a decisive military technology?


The Pentagon says no. In an ultimatum issued earlier this week to Anthropic, the firm behind the AI assistant Claude, the Pentagon demanded guardrail-free access to the company’s AI models no later than 5:01 p.m. today. As Chief Pentagon Spokesman Sean Parnell stated on X: “Allow the Pentagon to use Anthropic’s model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.” Were Anthropic to refuse, the Pentagon vowed to invoke the Defense Production Act, which would enable the government to effectively commandeer Claude, or alternatively, to deem Anthropic a supply chain risk—a designation reserved for U.S. adversaries and never before applied to an American company—that could prevent the Pentagon and major defense contractors from using Anthropic products altogether.


Yesterday, Anthropic refused. In a statement, CEO Dario Amodei wrote: “These threats do not change our position: we cannot in good conscience accede to their request.” Claude, Amodei noted, is already extensively deployed across the military and the U.S. intelligence community—and was the first major large language model to be deployed on classified government networks. But the company drew a hard line on two use cases requested by the Pentagon: mass domestic surveillance and fully autonomous weapons. On the former, Anthropic argues that “the law has not yet caught up with the rapidly growing capabilities of AI,” and therefore Claude cannot be provisioned responsibly for use cases such as creating a “comprehensive picture of any person’s life—automatically and at massive scale.” On the latter, the company contends that its systems are not reliable enough to take humans entirely out of the loop—and that Anthropic executives, not the government, are best positioned to make that judgment.


This is an extraordinary moment. We live in a free market economy, and firms generally enjoy broad latitude to determine the terms on which they provide their products to the government or anyone else. But we also live in a constitutional republic, one in which the people have vested in the state great power—an effective monopoly on violence—and the democratic process is the sole arbiter of its exercise.


Even if the Anthropic/Pentagon fracas blows over, basic dilemmas still loom with respect to the division of power, labor, and responsibility for AI safety. Will the role of private firms in ensuring responsible AI use be confined narrowly to producing the most reliable and accurate AI tools, which the government can then deploy as it sees fit, or will these firms play a more foundational role in determining acceptable use cases for their products? Whom do we trust more, and less—the government or a private firm—to make these decisions?


Therein lies the AI sovereignty paradox. We must ask not only whether the United States can be sovereign if the government lacks unfettered access to the most powerful AI models—but also whether we, the people, can remain sovereign if the government deploys those models, unfettered, in its mandate to keep us safe. 


Today, these decisions are being made on the fly, absent a domestic regulatory framework, clear government policies—legislative or otherwise—and any semblance of public consensus. There is no comparison between the speed of innovation and the speed of governance.


The rest of the world only wishes it had the same problems. Let’s go east.


Rising, middle, and small powers are not wrestling with domestic AI champions over unfettered access to the latest models. They are wrestling with U.S. and Chinese firms simply to ensure that commercial-grade AI products are available within their borders, and scrambling to put in place their own versions of AI governance.


These challenges were laid bare last week in New Delhi, where Indian Prime Minister Narendra Modi hosted the India AI Impact Summit 2026. Watching from afar, I was struck by the proceedings.


Rather than dwelling on frontier model development or the existential risks that have dominated Western AI discourse, Modi anchored the summit around “impact”: equitable access, climate resilience, and inclusive growth. For countries like India, the imminent AI risk is not that the technology will become too powerful but that its near-term benefits will be captured by a narrow band of wealthy nations. To avoid that outcome, these countries need assured access to AI while managing the sovereignty risks posed by foreign dependency.


Modi and other leaders in New Delhi were focused on a different set of questions: Will the rest of the world run on the U.S. or Chinese “AI stack”—the soup-to-nuts chain of technologies, from chips to models and applications—or will it develop competitive alternatives? And will it coalesce around common regulatory principles to govern AI use? There is no doubt that these countries would like to establish some degree of AI sovereignty. But their capacity to do so is very much in doubt.


The problem is not simply that these countries are behind in developing and deploying AI. They are also contending with the path dependency wrought by several decades of American technological preeminence.


With the partial exception of China, which built a parallel digital ecosystem behind the Great Firewall, the United States and U.S. firms developed and set the standards for the modern internet, built and operate the cloud infrastructure that forms its backbone, and control most of the software that companies, governments, and individuals rely upon daily. Today, U.S. and Chinese firms also lead in most key layers of the AI stack: foundation models, cloud infrastructure, and chip design.


In the compute layer, the United States hosts roughly 75 percent of global AI supercomputer performance, with China accounting for approximately 15 percent and the rest of the world just 10 percent. These figures likely understate China’s true capacity, but the core point stands: everyone else is starting from a position of profound dependence.


Europe made a big deal about investing some $47 billion in AI infrastructure, but this year alone, U.S. firms plan to make at least $650 billion in capital expenditures related to AI. The Gulf powers, India, Japan, and South Korea also lag well behind—despite substantial investments relative to historical levels.


The same goes for attracting AI talent. Consider a recent post from French President Emmanuel Macron announcing that France was doubling down on AI research by establishing a €30 million fund to attract top academics. Anduril CEO Palmer Luckey cheekily remarked that “my house cost more than this. And I am just some guy, not a whole country.” Leading U.S. AI labs have paid hundreds of millions of dollars simply to poach top researchers from their rivals.


In fairness, non-U.S. and non-Chinese AI models are gaining traction abroad. Mistral AI’s Le Chat assistant has solid market share in France; India has a burgeoning set of models optimized for Hindi, Tamil, and other languages; and Saudi Arabia’s HUMAIN is charging ahead with its own products. Indigenous AI firms will also enjoy certain advantages in local language optimization, integration with national data systems—such as India’s Aadhaar platform—and the trust of local governments for sensitive use cases. But these advantages are no substitute for the scale that advanced U.S. and Chinese labs possess.


Hence the growing trend whereby U.S. AI firms build quasi-sovereign solutions for foreign countries, such as Amazon’s new European Sovereign Cloud, which is governed by a European board, staffed by European employees, and operated exclusively on European soil. That may be the closest our allies come to true digital sovereignty.


Then there is the thorny question of governance. The Trump administration’s strategy—as made clear in Vice President JD Vance’s speech at the 2025 Paris AI Summit—is laissez-faire regulation designed to speed innovation and minimize red tape. Indeed, the U.S. is no longer engaging seriously in most multilateral efforts to define standards for AI safety, security, or behavioral norms. Insofar as we still care, it is with respect to specific national security restrictions, such as export controls and usage monitoring.


Much of the world is not comfortable with this “move fast and break things” approach. That is why so many leaders convened in New Delhi to sign new pledges regarding responsible AI deployment. But what has yet to emerge is a coherent, enforceable governance framework. Instead, a patchwork quilt is taking shape, with divergent paradigms in ASEAN, Brazil, the EU, India, Japan, South Korea, and elsewhere.


It is easy to dismiss these discrete regulatory efforts. After all, middle powers aren’t likely to be leaders of the AI revolution. But they can still introduce a lot of friction into the system, which could stymie the efforts of the AI accelerationists.


I know from my time as U.S. Trade Representative that deeply fragmented regulatory environments abroad are rarely optimal for U.S. companies. It will be difficult to export U.S. AI if the world is operating under a hundred different regulatory frameworks, requiring major U.S. firms to maintain country-specific compliance teams and develop highly specialized product offerings to boot. While the tech bros are skeptical of global regulatory efforts, they might find the alternative—a patchwork of different and potentially conflicting regulations—even less appealing. Maybe that’s why a number of the leading figures, including OpenAI CEO Sam Altman and Anthropic’s Amodei, made the trek to New Delhi and were willing to join hands with Modi—even if they weren’t willing to hold each other’s hands.


And in other news, unless you’ve been unplugged for the past several weeks, you know that a military operation in Iran could be imminent. The United States hasn’t amassed this much airpower since invading Iraq in 2003. We’re closely watching these developments. If you’re looking to get caught up, I’d encourage you to read some of the work of our scholars on the subject, including those linked below. 


Let me know what you think about AI sovereignty and what this column should cover next by replying to president@cfr.org.


Find this edition insightful and want to share it? You can find it at CFR.org.


What I’m tuning into this week: 

  • My interviews with Soumaya Keynes for The Economics Show and with Jessica Yellin for News Not Noise 

  • Ray Takeyh and Reuel Marc Gerecht’s op-ed for the Wall Street Journal, “Trump Is Banking on Iranian Weakness. That’s a Mistake”

  • Heidi Crebo-Rediker, Liana Fix, Tom Graham, Ben Harris, Paul Stares, and Sam Vigersky’s Ukraine Policy Briefs for CFR.org

  • Zoe Liu’s CFR.org article, “The Supreme Court’s Tariff Decision Could Affect Trump’s China Negotiations”

  • Jennifer Hillman’s analysis for CFR.org, “The Supreme Court Clipped Trump’s Tariff Powers—and Opened New Trade Battlefronts”

  • Suzanne Maloney’s CFR Contingency Planning Memorandum, “Leadership Transition in Iran” 

  • Max Boot’s article for CFR.org, “Trump Should Take the U.S. Military’s Warning on Iran Seriously”

  • The latest iteration of the CFR.org series “How I Got My Career in Foreign Policy,” featuring Alan Cullison

  • Steven Cook’s latest Foreign Policy column, “For the Gulf States, Investment in AI Is Partly About U.S. Protection”

Council on Foreign Relations

58 East 68th Street, New York, NY 10065

1777 F Street, NW, Washington, DC 20006
