AI in the age of abundance
The ‘abundance movement’, popularised in part by Peter Diamandis and Steven Kotler’s 2012 book, Abundance: The Future Is Better Than You Think, presents a techno-optimist worldview grounded in the belief that scarcity is no longer the defining constraint of human society.
Exponential technologies (artificial intelligence, robotics, nanotechnology, synthetic biology) will not merely assist us; they will save us, ushering in an age of unprecedented human flourishing. With sufficient innovation and ingenuity, we can transcend the material limits of history and engineer a world of plenitude, a new Renaissance shaped not by political will but by technological advancement.
The abundance movement has become a cornerstone of Silicon Valley brogrammer ideology. It permeates the discourse of billionaires, tech bros, and start-ups, promising, in the language of disruption and scale, a future in which every person on the planet will have access to clean water, education, healthcare, and opportunity, achieved not through redistribution or democratic deliberation but through innovation. And in brogrammer speak, innovation is always represented as neutral.
But if we have abundance, or if we are, as the narrative insists, entering its golden age, why has the world never appeared so unequal? If scarcity is in its last throes, then where are the affordable houses? Where is the free, universal, and accessible healthcare? And indeed, where is the ethical AI? For all the talk of plenitude, the reality is one of deepening crises: crises of access, affordability, and trust.
One would think that in an era of such technological maturity, where computational power is inconceivably vast, and where AI systems are capable of everything from protein folding to real-time translation, we would have solved the comparatively modest problem of aligning these systems with the most basic of human values. And yet, systems are biased, surveillance intensifies, labour is exploited, data is appropriated, and control is opaque. And the communities most vulnerable to harm—the marginalised and the workers—remain systematically excluded from the design and deployment of these tools.
This disjunction exposes that abundance is not a moral condition but a material one, a function of scale and surplus; it tells us what is possible, not what is desirable. Indeed, the abundance promised by AI is predominantly measured in terms of quantity: more data, more compute, more speed, more functions, more automation. But ethical AI is a matter of quality: of care, inclusion, deliberation, accountability, and transparency. These are not the natural by-products of technological growth; they are the outcomes of politics, pedagogy, and a commitment to fairness.
The idea of abundance is little more than a rhetorical device used to obscure the structural and political conditions under which AI is developed and deployed. Behind the mythos of abundance lies the reality of concentration: of power, resources, and infrastructure. A tiny number of firms control the means of AI production. They train their models on public data but offer little in the way of public accountability. The environmental costs of model training are borne by the planet. The labelling work behind the scenes is performed by poorly paid and invisible labour. The systems, meanwhile, are deployed into contexts such as education, policing, and healthcare, where risk and harm fall disproportionately on those with the least institutional power.
Ethical AI is not infeasible; it is inconvenient. It challenges business models and demands slowness in a culture obsessed with speed. It invites scrutiny in spaces designed for secrecy and requires forms of governance that are deliberative rather than extractive.
Ethical AI is structurally incompatible with the abundance manifesto.
Ironically, this movement claims to transcend politics, but in doing so it enacts its own political project, one in which technocratic solutionism supplants collective action and the messy work of justice is replaced by the sleek logic of optimisation. What is offered is not abundance in any meaningful ethical sense, but a form of hyper-productivity dressed in the rhetoric of liberation.
The humanities have long taught us to be suspicious of utopias that require the silencing of dissent; they remind us that abundance for some often rests on the scarcity of others. In the case of AI, it is clear that abundance has not delivered equity, and that those who dominate the sector have no desire for anything different. The fantasy of the post-scarcity world has never contended with the very real human cost of building it.

