
There is a tendency, especially in discussions shaped by technological determinism, to blame complex problems on a single, recent cause. Take, as a current and particularly pronounced example, generative AI, which is often portrayed as a principal culprit in the apparent collapse of critical thinking.
But this narrative ignores a longer and deeper decline. Generative AI has not caused the erosion of our critical capacities; rather, it has accelerated a process that was already well underway. If we are in the last days of critical thinking, it would be fairer to characterise gen AI as the final nail in the coffin.
But what do we even mean by critical thinking, a term that is invoked way too widely? For me (and you’re welcome to disagree), ‘critical thinking’ refers to a set of intellectual dispositions and practices oriented toward the careful evaluation of information, arguments, and assumptions. It’s not just about doubting claims or calling out flawed opinions, but about engaging in reasoned, reflective judgement. Critical thinking involves the capacity to distinguish between assertion and evidence, to evaluate competing perspectives, to draw warranted conclusions, and to remain open to revision in light of new arguments or data.
Importantly, critical thinking is not reducible to logic or problem-solving in the abstract—it is historically and culturally situated. It has roots in classical traditions of dialectic and rhetoric, but it also takes institutional form in the Enlightenment ideal of rational public discourse and, more recently, in liberal educational philosophies that emphasise independent thought over rote memorisation.
To think critically is also to recognise that knowledge is rarely neutral or complete; it entails an awareness of context, an attentiveness to ambiguity, and a willingness to inhabit intellectual uncertainty. As such, critical thinking is inseparable from epistemic humility, the understanding that our perspectives are always partial.
In contemporary educational and cultural discourse, it is thrown around as a vague good, or used as a buzzword to decorate curricula and strategic plans. True critical thinking requires time, institutional support, and a tolerance for dissent and complexity. In many of the environments where it is most loudly championed, it is in fact quite inhibited. What was once a set of intellectual virtues rooted in Enlightenment scepticism and liberal pedagogy is now often reduced to generic problem-solving strategies, and this depoliticised version of critical thinking no longer threatens dominant ideologies or power structures.
This isn’t anecdotal; it is observable in empirical indicators, institutional priorities, public discourse, and the broader culture’s relationship with complexity.
The OECD’s 2022 Programme for International Student Assessment (PISA) recorded the most significant decline in reading and mathematics scores since the assessment’s inception. In reading, many national systems experienced losses equivalent to three-quarters of an academic year. The 2024 Survey of Adult Skills, part of the OECD’s broader PIAAC initiative, confirms a similar trend among adults: stagnating or declining literacy and numeracy levels.
Of course, such a stark downturn is more readily attributed to widening social inequalities, and literacy and numeracy, while foundational, aren’t synonymous with critical thinking.
Looking beyond the numbers, the decline is evident. Genuine comfort with ambiguity and dissonance is, as already noted, essential to critical thinking, but public discourse has grown visibly hostile to both, reinforced by growing democratic polarisation. In political, academic, and cultural debates, there is increasing pressure to adopt clearly demarcated positions, to signal allegiance rather than engage in authentic and measured argument.
Social media platforms, which host much of our cultural conversation, are structurally aligned against critical thinking, rewarding speed, emotional charge, and brevity over careful analysis. The viral success of content is largely determined by its capacity to confirm biases and provoke instant reactions, conditions fundamentally at odds with the practices of sustained re-evaluation and contextualisation that critical thinking requires.
And it was into this polarised and hyperactive socio-cultural landscape that generative AI arrived.
The most disquieting characteristic of popular large language models like ChatGPT is not their capacity to misinform, but their capacity to persuade. Language models don’t think, at least not in the ‘special’ way that humans do (please note the sarcasm), but they do predict plausible continuations of language sequences based on statistical patterns in training data. This is, of course, extremely impressive from a technical and linguistic perspective, but for many consumers, the surface coherence of their outputs (grammatical, syntactic, and rhetorical) creates a strong impression of meaningful reasoning. This is not a new problem in the history of media, but it is a uniquely intensified one.
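To appreciate why that surface coherence is so persuasive, it helps to see how bare the underlying mechanism is. Below is a minimal sketch of next-token prediction using the small open-source GPT-2 model via the Hugging Face transformers library; the model choice, prompt, and parameters are illustrative assumptions on my part, not a description of how any particular commercial chatbot is built.

```python
# A minimal sketch of next-token prediction (assumes: pip install torch transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Critical thinking is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every token in the vocabulary

# The model's entire output at this step is a probability distribution over
# which word-piece is statistically likely to come next; nothing here weighs
# evidence or checks a claim.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")
```

Fluent paragraphs emerge simply by appending a chosen token and repeating this step. At no point does anything in the loop evaluate an argument; the coherence we read into the output is supplied by us.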
Research reveals a troubling relationship between reliance on LLMs and diminished critical engagement. Studies by Carnegie Mellon University and Microsoft indicate that greater trust in AI systems correlates with lower levels of measured critical thinking, while individuals with higher self-confidence in their own analytical abilities tend to retain stronger reasoning skills. As the authors put it: ‘We find that knowledge workers often refrain from critical thinking when they lack the skills to inspect, improve, and guide AI-generated responses.’ In other words, users who defer to AI are not simply outsourcing labour; they are also abdicating judgement.
This convergence of degraded reasoning and inflated confidence in AI outputs is incredibly dangerous. When AI-generated text appears plausible, users are less likely to interrogate it, even when its reasoning is shallow or flawed. This suggests a diminished awareness of the very need to think critically at all.
The automation of knowledge work is no longer speculative; in many sectors, routine intellectual tasks are already being delegated to generative systems. LLMs offer efficiencies, sure, but they also carry epistemological costs: the language model generates the text, but the human operator cannot fully account for what it says. This is not merely a matter of efficiency but a deeper dislocation of epistemic responsibility, a shift from being the author of an argument to being its facilitator.
These dynamics are increasingly evident in educational contexts. A study in Smart Learning Environments observed that students who used dialogue-based AI tools to assist in essay writing reported increased confidence in the quality of their work. However, objective assessment revealed no significant improvement in argumentative depth or critical insight.
This discrepancy between confidence and competence is already familiar to educators. The essay that is well-written but critically shallow is hardly new, but what gen AI enables is the effortless production of such texts, reducing the pedagogical process to surface artefacts. The traditional pedagogical aim, to enable students to articulate, defend, and refine their reasoning, is being displaced by a focus on textual presentation.
Assessment regimes often reinforce these trends. Rubrics reward clarity and coherence, while the slow and often disfluent labour of original thought remains unmeasured, and thus devalued.
None of this will be surprising to educators. Across much of the Anglophone world, education has shifted toward managerial logics: accountability metrics, standardised testing, and curriculum narrowing have reduced space for open-ended exploration. Subjects that traditionally fostered critical engagement have been marginalised in favour of STEM disciplines framed in narrowly vocational terms. Even within the humanities, there is increasing pressure to justify intellectual work in terms of ‘impact’ or ‘skills’, leaving less room for speculative, dialectical inquiry.
There is a tendency to dismiss critiques of new media as reactionary: the printing press, radio, and television all provoked fears about the loss of intellectual virtue. And while these historical analogues are useful, they are also quite limited. What distinguishes gen AI from previous technologies is its capacity not merely to store or transmit information, but to produce linguistic outputs that simulate reasoning. A calculator does not pretend to understand arithmetic; it just executes it transparently. An LLM chatbot, by contrast, produces discursive performances that mimic the surface features of argument, and this mimetic quality is epistemically destabilising because it obscures the boundary between generation and justification.
What can be done about the decline of critical thinking and its apparent acceleration in the age of generative AI?
Faced with the erosion of analytical habits and the increasing normalisation of machine-assisted cognition, the appropriate response is not primarily technical. What is required is a deliberate cultivation of hugely unpopular attitudes and practices that slow cognition, foreground ambiguity, and demand active engagement.
The recovery of sustained reading would be a start. Long-form, linear texts (dare I say, even in printed form) offer a mode of engagement that resists the logic of digital distraction, requiring attentiveness, interpretative patience, and, perhaps most importantly, a tolerance for complexity. Reading in this way is not simply about absorbing information, but about inhabiting an argument and overcoming difficulty. It stands in contrast to the superficial scanning that typifies online consumption.
We also need to restore epistemic agency through estimation and provisional reasoning. Before consulting an LLM, one might at least attempt to formulate an initial hypothesis or outline a tentative explanation, to treat uncertainty as an intellectual opportunity rather than a problem to be eliminated. In doing so, the thinker reasserts their own role as a participant in inquiry, rather than a passive recipient of answers. Before delegating writing or summarisation tasks to generative systems, individuals might take time to sketch the structure of their own argument: its premises, evidence, potential counterpoints, and underlying assumptions. This not only clarifies the individual’s position but also repositions the AI as a tool to be interrogated, rather than an authority to be trusted. The model becomes a resource within a broader process of thinking, not a substitute for it.
Such practices are not ‘solutions’: they won’t counteract the epistemic consequences of generative AI at a societal level. But they may serve as acts of resistance, and that’s something.
Generative AI did not cause the decline of critical thinking, but it may bring us to a point where the appearance of thinking becomes an acceptable substitute for its practice. And in that future, the very idea of reasoning, as a discipline and social good, may become quaint. If we are to resist this trajectory, we must begin by acknowledging that the habits we cultivate today will shape the thinking we are still capable of tomorrow.