
Eight thousand three hundred likes. That’s what a fabricated Plato quote posted by Wooden garnered on Substack before Henderson pointed out the obvious: Plato never said it. Wooden’s publication is, of course, described as being for those who think deeply (and who doesn’t like to think of themselves as a deep thinker?). The original post has since been deleted, but it was public long enough for Henderson to make his point: the appetite for slop is bottomless.

The embarrassing incident, while minor in comparison to what some writers have been up to, is perfectly emblematic of our current moment. Here we are, armed with unprecedented access to primary sources, scholarly databases, peer-reviewed publications, and fact-checking tools, yet we gorge ourselves on bullshit. That Wooden’s post—and similar examples of AI slop—manages to garner so much engagement reveals something deeply troubling about our contemporary relationship with truth, authenticity, and the simulacra of wisdom we so eagerly consume.
In his seminal essay and book on bullshit—aptly titled On Bullshit—Harry G. Frankfurt draws a crucial distinction between lying and bullshitting. The liar, he argues, cares about the truth; they know it and deliberately subvert it. The bullshitter, by contrast, is utterly indifferent to whether their statements correspond to reality. They’re not trying to deceive you about facts; they’re attempting to deceive you about their enterprise itself, presenting the appearance of conveying information while being unconcerned with whether that information is accurate. Performance without substance.
And that’s what generative AI does—it bullshits. You often hear people complain that ‘ChatGPT lied to me’, but lying requires some relationship with truth, even if adversarial. Bullshitting is something altogether more insidious: content produced with sublime indifference to whether it corresponds to reality at all.
As Frankfurt puts it, the bullshitter ‘does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.’
Michael Townsen Hicks, James Humphries, and Joe Slater argue that large language models like ChatGPT are essentially mechanised bullshit generators. They reject the common metaphor of AI ‘hallucinations’, arguing that it misrepresents what’s happening: ‘hallucination’ implies misperception—seeing something that isn’t there—but LLMs don’t perceive at all. They generate text from statistical patterns, with no capacity for caring about truth. These systems are ‘in an important way indifferent to the truth of their outputs’.
This distinction matters because it shapes how we understand and respond to the problem. If we think ChatGPT is hallucinating, we might try to fix its inputs or correct its perceptions. But if we recognise it as a bullshit machine, we understand that accuracy was never the goal; the goal is convincing, human-like text. ChatGPT works exactly as designed; it’s just that the design prioritises plausibility over truth.
We might try to comfort ourselves by imagining this as a technological problem: rogue algorithms spewing nonsense that will eventually be filtered out by proper gatekeepers. But the Wooden incident suggests something far more uncomfortable: we actively prefer the bullshit. That fake Plato quote wasn’t shared by bots—it was liked, saved, and reshared by thousands of human beings who found in it something they wanted to be true. The quote felt right, seemed profound to the self-identified deep thinkers, and aligned with their existing beliefs about wisdom and antiquity. Who cares if Plato actually said it?
This preference for truthiness over truth predates our current technological moment, of course. The internet has always been awash with misattributed quotes, usually following a predictable pattern: the more ancient and revered the supposed source, the more likely the quote is fabricated. Einstein, Gandhi, Twain, etc. have all been conscripted as unwitting validators of whatever vapid insight someone wishes to promote, but generative AI has industrialised this process, creating what we might call an infinite bullshit machine.
The economic motivations for those who deal in bullshit are obvious. Why spend hours researching, thinking, and crafting original insights when you can prompt an LLM to produce something that looks indistinguishable from thoughtful content? Why verify sources when your audience demonstrably doesn’t care? The fake Plato quote succeeded not despite being bullshit, but precisely because it was bullshit—carefully calibrated to sound profound while requiring no actual philosophical work.
This creates a peculiar epistemological crisis. In previous eras, we could at least theoretically trace bad information back to its source, but when the source is a statistical model trained on the entire internet’s worth of text, including all its errors, fabrications, and previous generations of bullshit, we enter a strange new realm of recursively generated truthiness.
The academic response has been predictably focused on detection and prevention. Scholars propose various technical solutions: watermarking AI-generated text, building better detection algorithms, training models to be more ‘truthful’. These efforts, while admirable, miss the more fundamental point. The problem isn’t that we can’t detect bullshit; it’s that we don’t want to.
Social media platforms reward engagement, not accuracy. A provocative fake quote or AI-video generates more clicks, shares, and comments than a carefully sourced scholarly article or real footage. Academic publishers, meanwhile, face pressure to produce content at industrial scale, creating conditions where AI-generated papers (and books, apparently) slip through peer review. Students discover they can submit AI-written essays that receive higher marks than their own work. At every level, the system rewards the production and consumption of bullshit.
Does this matter?
Does it really harm anyone if people share inspirational quotes falsely attributed to ancient philosophers? The answer depends on whether you believe there’s value in maintaining some connection between our discourse and reality. I, for one, think there is. Each fake quote shared, each AI-generated article published, each bullshit claim that goes unchallenged further erodes our collective capacity to distinguish truth from truthiness. The philosopher Alasdair MacIntyre once argued that modern moral discourse had become so fragmented that we retained only the vocabulary of ethics while losing its substance; people could deploy moral language without any shared understanding of what it meant. We may be witnessing something similar with information itself: we retain the forms of knowledge-sharing while abandoning any commitment to whether that knowledge is true.
Some might defend fake Plato and argue that ‘Plato would have said this’ is just as good as ‘Plato said this’. But this represents a complete collapse of epistemic standards wherein truth becomes merely one option among many, and not a particularly valued one. This is the environment in which ChatGPT and similar bullshit systems thrive. They provide an endless stream of content that looks like knowledge, feels like insight, and satisfies our desire for intellectual consumption without requiring the difficult work of actual thought. LLMs are the perfect dealers for our bullshit addiction.
We live in an era of unprecedented access to genuine knowledge, yet we increasingly choose the simulacrum over the real, the profound-sounding over the actually profound, the attributed quote over the verified source. The internet, the greatest library in human history, is being filled with forgeries.
Perhaps the most unsettling aspect of our bullshit economy is how it transforms us from consumers into producers. Every time we share unverified content, we become complicit in its spread—we’re bullshit farmers. We don’t need to build systems that can detect and filter bullshit—we are the bullshitters.
The solution, if there is one (and honestly, I doubt there is), requires a radical cultural shift in how we value truth versus the appearance of truth. This means slowing down, checking sources, admitting ignorance, and, perhaps most challenging, accepting that genuine insight is rare and difficult to achieve. In practice, it means that editors, respected journalists, publications of note, and other authoritative sources have never been more important.
Until we develop such habits, we remain willing participants in the bullshit industrial complex, endlessly clicking and sharing while the distinction between truth and truthiness grows ever thinner.