AI Sovereignty in Europe
Why control of algorithms and the resources that power them is the new geopolitical frontier
Who ought to own, govern and profit from artificial intelligence? The term ‘digital sovereignty’ has been around for years, describing the ambition to keep citizen data and critical infrastructure within national or regional borders. AI sovereignty is a different animal: it requires autonomy over the entire stack, from hardware and cloud infrastructure to training data, regulatory oversight and, ultimately, the values embedded in machine intelligence.
To be fair, Europe has emerged as the world’s regulatory superpower in AI governance. Its flaws aside, the EU AI Act is the most comprehensive legislative framework for AI anywhere. Yet despite this regulatory strength, the continent remains largely dependent on AI systems developed elsewhere, particularly in the United States. Europe leads in setting the rules but trails in the underlying capability.
This tension underscores why AI sovereignty has become the organising narrative of European industrial policy. To understand what is truly at stake, though, we need to look beyond headlines about chip bans to three pillars: computational independence, data dignity and algorithmic accountability.
Computational independence
Perhaps the most visible aspect of AI sovereignty is computational infrastructure. Training large language models requires enormous resources: hyperscale data centres, specialised GPUs, and reliable, affordable (and ideally, clean) energy. Only a handful of nations can marshal these independently. Even fine-tuning and deploying sophisticated models demand substantial compute.
Ireland’s position offers an illuminating case study. Over the past decade, the country has transformed into a European data centre hub, hosting significant computational infrastructure that powers cloud services for millions. But much of this capacity serves the operational needs of multinational corporations rather than building indigenous AI capability. The question is not whether Ireland, or any small nation, should attempt to create its own version of ChatGPT (it probably shouldn’t); it is whether it has access to sufficient computational resources to adapt and deploy AI systems that serve its national needs and protect the public interest.
A hospital in Cork should not need to route patient data through servers in California to benefit from AI-assisted diagnosis, but that is often precisely what happens. Without computational independence, or at least trusted access to allied compute capacity, sovereignty remains rhetorical.
Data dignity
The second pillar concerns data: not only its protection under frameworks such as the GDPR, but what might be called ‘data dignity’. Data dignity goes beyond the legal requirements of consent and security to ask a harder question: whose interests does our data serve?
Consider a language model trained predominantly on American English. Such a system may struggle to interpret local placenames and idioms, or the distinctive ways our health services and educational systems operate. Even the most advanced generative model embeds cultural assumptions and biases drawn from the data on which it was trained.
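There is a crude but concrete way to glimpse this. The sketch below uses the open-source tiktoken library as a proxy measurement, an illustrative assumption rather than a claim about any particular product: phrases common in a tokenizer’s (predominantly English) training data tend to compress into a few tokens, while local names tend to fragment into many.

```python
# A rough proxy for how familiar a model's training data is with a phrase:
# well-represented strings compress into few BPE tokens, under-represented
# ones fragment into many. Assumes `pip install tiktoken`; exact counts
# vary by vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a widely used BPE vocabulary

for place in ["San Francisco", "New York", "Dún Laoghaire", "Graiguenamanagh"]:
    print(f"{place!r}: {len(enc.encode(place))} tokens")
```

Token counts are not a measure of model quality, but the systematic fragmentation of one region’s vocabulary is a visible trace of whose data built the system.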
Data sovereignty, therefore, is not merely about keeping data within borders; it is about ensuring that when a nation’s citizens’ data is used to train AI, it contributes to systems that understand and respect national contexts, languages and values. Without this, the promise of AI to augment public services and strengthen civic institutions risks ringing hollow, with models that neither comprehend nor care for the communities they serve.
Algorithmic accountability
The third and perhaps most crucial pillar is algorithmic accountability. When an AI system denies someone a mortgage, flags them as a security risk, or influences their children’s educational pathways, there must be meaningful oversight. This is not anti-innovation paranoia but the basic requirement that any system wielding significant social power be subject to democratic accountability.
The EU’s AI Act represents a bold attempt to establish such accountability. It bans certain ‘unacceptable risk’ systems outright and imposes stringent transparency and risk-management obligations on high-impact models. From August 2025, developers of general-purpose AI will be required to disclose how their systems were trained, evaluated, and monitored. But implementation remains the key challenge. How does one meaningfully audit a black box? How does one ensure that an AI system developed in Silicon Valley respects European values around privacy, fairness, and human dignity?
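One starting point is black-box probing: if regulators can query a deployed system, they can measure its behaviour without ever seeing its weights. The sketch below is a minimal, hypothetical illustration of one such technique, a disparate-impact check on a decision system we can only call. The query_model function is an invented stand-in for the opaque system under audit, and the 0.8 threshold is the informal ‘four-fifths rule’ from US employment law, used here purely for illustration.

```python
# Minimal black-box audit sketch: probe an opaque decision system with
# matched synthetic applicants and compare approval rates across groups.
import random

random.seed(42)

def query_model(applicant: dict) -> bool:
    """Stand-in for the deployed black box; a real audit would call the
    system's API. This stub hides a small penalty for group "B"."""
    score = applicant["income"] / 100_000 + random.gauss(0, 0.05)
    if applicant["group"] == "B":
        score -= 0.04
    return score > 0.45

def approval_rate(applicants) -> float:
    return sum(query_model(a) for a in applicants) / len(applicants)

def cohort(group: str, n: int = 10_000) -> list:
    # Matched cohorts: identical income distribution, different group label.
    return [{"income": random.gauss(45_000, 10_000), "group": group}
            for _ in range(n)]

ratio = approval_rate(cohort("B")) / approval_rate(cohort("A"))
print(f"disparate-impact ratio: {ratio:.2f}")  # below ~0.8 warrants scrutiny
```

Real audits are far messier, involving access negotiations, intersectional groups, and systems that adapt to probing, but the principle scales.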
The Act’s success will depend on whether regulatory bodies can develop the technical capacity to interrogate, test, and enforce compliance at scale. Without this, the most sophisticated rules risk becoming little more than paper shields.
Alliances and contests
Beyond Europe, AI sovereignty is increasingly entangled with geopolitical rivalry. In the United States, the focus is less on regulating domestic firms and more on limiting China’s access to advanced chips. The AI Diffusion Rule, issued in January 2025, tightened export controls on high-end accelerators, layering licence requirements on a broad ‘tier-2’ group of nations on top of the near-total restrictions already facing China, and sparking retaliatory investment and regulatory moves in Beijing.
China, for its part, has launched a massive state-led push to secure its own AI supply chains, including a ¥60 billion National AI Industry Fund and regulations mandating tamper-proof provenance labels on all synthetic content. The real surprise has been Taiwan, whose government recently blacklisted Huawei and SMIC, forcing local firms to seek licences before supplying these Chinese giants. Given Taiwan produces over 90% of the world’s leading-edge chips, its decision signals how sovereignty sometimes belongs to whoever controls the chokepoints.
Not every country can fabricate 3-nanometre processors, but many can fine-tune open-weight models on more modest hardware. European technologists have argued that a robust open-source ecosystem offers sovereignty through transparency: the ability to inspect, adapt, and redeploy models without total dependence on proprietary vendors. Still, openness does not abolish the underlying requirement for compute capital; running a 70-billion-parameter model remains orders of magnitude more expensive than hosting a conventional application.
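A rough back-of-envelope shows why. Holding a model’s weights in accelerator memory costs a fixed number of bytes per parameter, depending on numeric precision; the sketch below ignores activations, key-value caches and batching overhead, so real serving requirements are higher still.

```python
# Memory needed just to hold a 70B-parameter model's weights, by precision.
# Ignores activations and serving overhead, so these are lower bounds.
PARAMS = 70e9

for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gigabytes = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{gigabytes:,.0f} GB of accelerator memory")
```

Even aggressively quantised to 4 bits, the weights alone need roughly 35 GB, beyond a single consumer GPU; at fp16 they need around 140 GB, spread across several data-centre accelerators. Open weights lower the licensing barrier, not the hardware one.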
Sovereignty, in this sense, becomes a balancing act: regulating commercial giants stringently enough to protect citizens while nurturing open alternatives that keep governments, universities, and civic institutions from technological dependency.
It is reasonable to expect a surge in so-called compute alliances, where mid-sized economies lacking fabrication capacity but rich in valuable domain-specific data strike reciprocal deals with larger allies. Regulatory interoperability frameworks will likely emerge as companies press for ‘equivalence’ recognition across jurisdictions.
And then, of course, there’s the energy problem. Training a frontier-scale model can consume as much electricity as a small town, positioning renewable-rich countries to parlay green power into strategic AI capacity.
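The ‘small town’ comparison survives a back-of-envelope check. Every figure in the sketch below is an explicit assumption, chosen to be plausible for a frontier-scale run rather than drawn from any disclosed training budget.

```python
# Assumption-laden estimate of one frontier training run's electricity use.
gpus = 10_000        # accelerators in the cluster (assumed)
watts_per_gpu = 700  # board power of a high-end accelerator (assumed)
pue = 1.3            # data-centre overhead: cooling, power delivery (assumed)
days = 90            # duration of the training run (assumed)

energy_mwh = gpus * watts_per_gpu * pue * days * 24 / 1e6
households = energy_mwh / 4.2  # ~4.2 MWh: rough annual household use (assumed)

print(f"~{energy_mwh:,.0f} MWh, about a year's electricity for {households:,.0f} homes")
```

Under these assumptions, a single run draws roughly 20 GWh, about a year’s electricity for several thousand households.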
The most intriguing development may be the rise of civic compute: publicly funded GPU clusters and data infrastructure designed to ensure researchers and public-interest organisations have access to advanced AI tooling. Sovereignty, in this model, is not just a state prerogative but a shared public good.
AI sovereignty is not autarky. No nation controls every line of code or every watt of electricity powering these systems. But the experience of the pandemic revealed how dangerous it can be to depend entirely on foreign supply chains for vital technologies, and the resurgence of Trumpism is a reminder that close allies can quickly become unpredictable partners. The extraordinary concentration of power in a handful of big tech firms, meanwhile, means that sovereignty is not just a matter of geopolitics but of escaping dependence on companies whose commercial interests often outrun any public mandate.
By pursuing sovereignty through computational infrastructure, data dignity, and algorithmic accountability, societies stand a better chance of steering AI systems towards public rather than purely private ends.