Is the Singularity actually near?
The Technological Singularity, that hypothetical moment when artificial intelligence surpasses human cognitive capacity and triggers runaway, recursive self-improvement, seems to have migrated from science fiction into mainstream discourse with remarkable speed. Ray Kurzweil, its most prominent evangelist, has spent two decades insisting we are on an exponential curve toward this event horizon, and forecasts its arrival by 2045. The recent explosion in large language model capabilities has lent his prophecy fresh credibility, and of course, venture capitalists now speak of ‘superintelligence’ as a near-term planning consideration (though their motivations are more political than scientific). Sam Altman, for example, reckons that superintelligence may arrive within ‘a few thousand days’. The question of the Singularity has shifted, in the popular imagination at least, from whether to when.
I want to make a different case: that the Singularity is not near, that current evidence does not support the timeline optimism pervading Silicon Valley, and that the conceptual foundations of Singularity thinking contain flaws that even genuine AI progress cannot remedy.
Kurzweil’s Singularity thesis rests on extrapolating exponential trends (Moore’s Law and its analogues) into a future of unbounded growth. This reasoning suffers from a fundamental error: exponential curves in technology describe specific domains under specific conditions. Transistor density increased exponentially for decades because of particular physical and economic dynamics that held within that domain, but transistor density is not intelligence, just as compute is not cognition. The history of technology forecasting is littered with confident extrapolations that broke against unforeseen ceilings. The problem, essentially, is that exponential curves in one domain collide with constraints from adjacent systems that operate on entirely different logics. There is no guarantee that progress in machine learning translates smoothly into artificial general intelligence. The scaling hypothesis, the notion that sufficient compute and data will inevitably yield human-level reasoning, remains just that: a hypothesis. Recent work on large language models has begun to suggest diminishing returns at the frontier. GPT-4 represented enormous gains over GPT-3.5, but the trajectory from GPT-4 to subsequent models has shown less dramatic capability jumps despite substantially increased investment. This pattern is consistent with an S-curve rather than a true exponential, with rapid initial gains flattening as fundamental constraints become binding.
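To make the S-curve point concrete, here is a minimal illustrative sketch in Python. The parameters are arbitrary and hypothetical, not fitted to any real benchmark or compute data; the point is only that the steep early portion of a logistic curve is nearly indistinguishable from a true exponential, and the two diverge only once the ceiling begins to bind.

```python
import math

# Illustrative only: arbitrary, hypothetical parameters chosen to show the
# shape of the two curves, not a model of any real capability trend.

def exponential(t, a=1.0, r=0.5):
    """Unbounded exponential growth starting at a."""
    return a * math.exp(r * t)

def logistic(t, k=100.0, a=1.0, r=0.5):
    """Logistic (S-curve) growth: exponential at first, flattening toward ceiling k."""
    return k / (1 + ((k - a) / a) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.1f}")
```

Early in the run the two trajectories are almost identical, which is precisely the region of the curve from which Singularity timelines are extrapolated.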
Another problem with the Singularity discourse is the persistent redefinition of what counts as intelligence. When computers first beat humans at arithmetic, enthusiasts predicted imminent machine supremacy. When chess fell to Deep Blue, the same predictions resurfaced, and when Go fell to AlphaGo, the rhetoric intensified further. Now that language models can pass bar exams and medical licensing tests, the Singularity seems, to believers, almost tangible. But each milestone has merely revealed the narrowness of our prior conceptions. Deep Blue couldn’t play tic-tac-toe, and AlphaGo couldn’t make a restaurant reservation. GPT-4 struggles with tasks a five-year-old handles effortlessly: maintaining consistent beliefs across time, learning from single examples, understanding when it is being deceived, or recognising that a problem has no solution. The gap between task-specific excellence and general intelligence is not a quantitative matter of scale but a qualitative difference in kind. A chess engine and a generally intelligent mind are not points on the same continuum, related by magnitude; they are categorically different kinds of system.
Singularity narratives tend to treat intelligence as a purely computational phenomenon, substrate-independent and transferable. This assumption has deep roots in the functionalist philosophy of mind that dominated late twentieth-century cognitive science. But biological intelligence is constitutively entangled with physical instantiation in ways that resist abstraction. Human cognition did not evolve as a general-purpose reasoning engine, but through millions of years of embodied interaction with physical and social environments. Our concepts are grounded in sensorimotor experience, and our reasoning is scaffolded by cultural practices and institutional structures that took centuries to develop. The notion that we can extract ‘intelligence’ from this matrix and instantiate it in silicon involves massive, unexamined assumptions about the nature of mind. This is not an appeal to some ineffable human essence, but a very practical observation: we do not know how to build general intelligence from computational primitives, because we do not understand what general intelligence is at a sufficient level of specificity. Current AI systems are extraordinarily powerful pattern-matching engines, but pattern-matching is not understanding, and prediction is not comprehension. We have built systems that can mimic the outputs of intelligent behaviour without instantiating the processes that generate it.
Even if we grant that recursive self-improvement is theoretically possible for an artificial system, the Singularity thesis requires an additional assumption: that such improvement can proceed without limit, bootstrapping itself to superintelligence through sheer computational iteration. This assumption ignores the social nature of knowledge production; human intelligence has never operated in isolation. Scientific progress depends on institutions: universities, peer review, funding bodies, experimental traditions, instrumentation supply chains, and the slow accumulation of tacit knowledge that cannot be fully formalised. It involves dead ends, replication crises, paradigm shifts, and the gradual construction of shared conceptual frameworks across generations. An AI system attempting to improve itself would face the same constraints: it would need to conduct experiments, gather data from the physical world, develop instrumentation, and navigate material limitations. Faster computation does not accelerate the speed of chemistry or shorten the time physical experiments take to yield results; a superintelligent AI attempting to develop new physics would still need to build particle accelerators and wait for the data to come in. Recursive self-improvement cannot simply ignore the barriers presented by the material world.
If the evidence for an imminent Singularity is so weak, why does the narrative retain such cultural power? Singularity thinking satisfies a deep human appetite for eschatology. Every civilisation has produced narratives of rupture and transformation, moments when history ends and something radically new begins. The Singularity is a secular apocalypse, a technological rapture that promises transcendence to believers and destruction to sceptics. Its appeal is religious in structure even when atheist in content.
Singularity discourse also serves specific economic interests. The AI industry benefits from a sense of inevitability and urgency: if superintelligence is coming, then investment will follow, and if the timeline is short, then regulatory caution becomes an unaffordable luxury. The companies best positioned to shape superintelligence become, in their own framing, civilisationally essential (as noted earlier, it’s all just politics).
There is also a peculiarly American character to Singularity thinking, a techno-utopianism rooted in frontier mythology and the conviction that technical ingenuity can overcome any obstacle. This strain of technological determinism, with roots stretching back through the Whole Earth Catalog to the nineteenth-century railroad boosters, treats progress as both inevitable and beneficial. European and Asian AI discourse tends toward greater scepticism, perhaps because historical experience has taught those civilisations that technical capability does not automatically translate into human flourishing.
None of this should be read as dismissal of genuine progress in artificial intelligence or the real challenges that current systems pose. Large language models represent a significant technical achievement with profound implications for information ecosystems and the organisation of knowledge work. The automation of cognitive tasks that once required human expertise raises genuine policy questions about education and employment. But these challenges require clear thinking, not eschatology. When we treat AI through the lens of Singularity mythology, we obscure rather than illuminate. Artificial intelligence will continue to advance in specific domains, automating particular cognitive tasks while leaving others untouched.
The path to general intelligence, if such a thing is even coherent as a goal, remains obscure. The Singularity is a story we tell about the future, not a forecast derived from evidence; treating it as one distorts our planning, inflates our fears, and misallocates our attention. We would do better to address the AI systems we actually have, with all their genuine capabilities and limitations, than to orient policy and culture around a technological eschaton that exists primarily in the imagination of its prophets.

