Large language models are making me a suspicious reader

Something has gone very wrong with how I read. I have begun to read for tells.
Large language models like ChatGPT have introduced into written culture something that resembles the problem of forgery in fine art—once you know that a convincing fake is possible, the status of every authentic work becomes precarious. The problem with forgery isn’t just the actual production of counterfeits; it’s that the possibility of their existence degrades the experience of an entire form. LLMs have done something similar to prose.
I noticed it first in student writing, which is perhaps the most surveilled prose in the Anglophone world right now. Well-written, well-structured work triggers interrogation—did this student write this? The question poisons the pedagogical relationship at its source, replacing the assumption of good faith with a forensic disposition that benefits no one. This creeping, unwelcome suspicion has made me a worse reader, less generous and less willing to be surprised by a student’s unexpected eloquence. The irony isn’t lost on me: we spent years trying to teach students to write with the qualities we now find suspicious.
And yet, while suspicion is corrosive, the alternative is worse. To read student work today without any awareness that generative AI might have quickly spat out the majority of the content is closer to negligence than generosity. The fact is that a significant number of students are submitting work produced in whole or in part by ChatGPT and its competitors, often with little or no intellectual engagement with the output. They prompt, then copy and paste. Educators who refuse to entertain the possibility of AI-generated submissions and treat suspicion itself as a failure of pedagogical care are making, to my mind, a grave error in how they measure care. They are extending the assumption of good faith to a situation in which bad faith has become structurally incentivised and incredibly easy. The discomfort of suspicion is real, but it is the appropriate discomfort of a profession adjusting to a genuinely new problem. Wilful innocence protects no one; worse, it rewards the students who cheat, penalises those who don’t, and degrades the value of the credentials we grant.
The question is how to be suspicious without letting that suspicion metastasise into something that destroys the classroom itself.
But the problem goes deeper than professional anxieties about plagiarism. I catch myself rereading my own sentences and asking, will a reader think this was AI generated? I have, for example, become far less reliant on my beloved em dash. The recursive absurdity of the situation is hard to overstate: I am a writer interrogating my own prose for signs of machine authorship, even when I know full well that no machine was involved. The tell-hunting has become involuntary, a kind of pattern-matching parasite that reduces my own writing to linguistic features and then feeds on them.
What even are the markers of LLM prose? Apart from a few obvious ones (we all know what they are), they resist stable codification, which is part of what makes them so insidious. And every supposed marker of AI prose is also, in some context, a feature of competent human writing—the models did, after all, learn how to write from us.
The art historian who has seen too many fakes begins to distrust works that are genuine. Thomas Hoving, former director of the Metropolitan Museum of Art, famously claimed that as many as forty per cent of the works he examined over his career were fake or misattributed. LLMs are producing a similar recalibration of readerly trust across an entire culture, and we have barely begun to reckon with what that means.
One consequence is the emergence of what we might call a hermeneutics of suspicion directed at the ontological status of texts. The question is no longer only ‘what does this mean?’ but ‘was this even written by a person?’ That question bleeds into every act of reading, making us worse interpreters because the energy that should go towards understanding what a text is doing gets diverted into determining what a text is—hermeneutics degrades into authentication (yes, I’m using my em dashes).
There is a genuine loss here that goes beyond the practical headaches of detection. The experience of reading well has always involved a particular kind of trust, the willingness to grant that the patterns of syntax and diction on the page express the movements of a thinking mind. That trust was never absolute or naïve, as we have always known that writing is constructed, shaped by convention and genre. But there was a baseline assumption that someone meant it. Someone chose this word over that one, or struggled with this sentence and decided it was, for now, good enough. The pleasure of reading a well-made sentence was partly the pleasure of sensing that labour and presence. LLMs don’t struggle and they don’t choose; they just produce statistically probable sequences of tokens, and the result can look uncannily like the outcome of deliberation. The resemblance hollows out the experience of reading even when the text is, in fact, human.
I don’t think the solution is better detection tools (I have very little faith in them generally). We need to find ways of reading that can accommodate the existence of machine-generated text without collapsing into permanent suspicion. What that looks like, I’m not sure. Perhaps it involves learning to care less about whether a given text is ‘authentic’ and more about whether it is valuable, though I suspect that distinction is harder to maintain in practice than in theory.
What I know is that the suspicion has changed me as a reader in ways I don’t wholly welcome. I read with a divided attention now, scanning for the faint watermark of the machine. It is an exhausting and joyless way to read. It is also, I fear, the way many of us will read from now on, at least until the culture develops new conventions, new contracts between writer and reader, that can bear the weight of this strange new uncertainty.
In the meantime, writing itself may turn defensive, with over-citation and verification aimed not only at strengthening arguments but at signalling legitimacy in an atmosphere of distrust.