I am so f**king sick of AI slop

It has become impossible to navigate the web without wading waist-high through synthetic effluent. While one might theoretically concede a place for large language models as assistive tools (and I stress, might), the reality right now feels like we are witnessing a wholesale abdication of human expression to chatbots.
As someone who works at the intersection of language and technology, I am genuinely fascinated by large language models, which, viewed purely as a feat of engineering, are a staggering achievement. The mathematics required to map high-dimensional vector space into coherent English feels somewhat miraculous, and I genuinely marvel at what these systems can accomplish.
What is alarming, however, is the sheer willingness of people to put their own names to any old slop the machine produces.
Take the rise of ‘vibe coding’, for example, which most tech bros represent as some kind of futuristic workflow. Am I the only one who thinks that there is a peculiar arrogance in a ‘developer’ who takes pride in their inability to write or understand the syntax that underpins their own creation? We are seeing software deployed into the wild that is little more than a patchwork of pasted suggestions, a digital Frankenstein’s monster stitched together by someone who has no concept of the structural integrity of the code base. They might feel as though they know what they have built, but they cannot comprehend the gaping security vulnerabilities they have introduced, because they never actually engaged with the logic of the system; they simply prompted a black box until the error messages stopped appearing. The result is a technological infrastructure that is terrifyingly insecure.
Of course, the rot extends beyond software development, and far more concerning is the extent to which it has thoroughly colonised our social spaces. The internet was, in its most idealistic (and yes, maybe naive) conception, a sprawling parlour for human conversation and the exchange of genuine thought. That vision is effectively dead. Open LinkedIn or Reddit (or X, if you really want to wind yourself up) and you will see streams of the same beige, hallucinatory text bearing the chirpy, predictive cadence of ChatGPT, generated by users who could not be bothered to read the content they are putting their name to. They enter a prompt and paste the result, engaging in a pantomime of interaction that benefits no one but the platform’s engagement metrics. It is a hall of mirrors where machines talk to machines while humans look on, increasingly alienated from the very networks built to connect them.
There is an implicit social contract in the written word: the writer expends effort to organise their thoughts, and the reader expends effort to consume them. When a PhD applicant submits a personal statement generated by a chatbot, or a colleague sends a memo churned out by an LLM, or someone posts AI slop to social media, they break that contract. They are demanding attention for something that required no intention to produce. And if a person didn’t care enough to actually compose a piece of writing themselves, why on earth should anyone care enough to read it?
We are constructing an empty world without meaning. It’s an exhausting, entropic direction for our culture, replacing the friction of creativity with a frictionless slide into mediocrity. I can appreciate the technology for what it is, but I am finding it increasingly difficult to forgive the laziness of the people using it.


As a college English instructor, I could so relate to this as I wade through "streams of the same beige, hallucinatory text bearing the chirpy, predictive cadence of ChatGPT, generated by users who could not be bothered to read the content they are putting their name to. They enter a prompt and paste the result, engaging in a pantomime of interaction that benefits no one but the platform’s engagement metrics." I wish I could email this to my students. At the very least, and because I love them, I want them to stop embarrassing themselves.