Writing with AI is the same as writing by AI
If you need a large language model to write, you are not a writer

The training corpora of large language models comprise billions of words of copyrighted text, often scraped without the consent of, or compensation to, the authors who produced it. Every output these models produce is derived from that appropriation, and so it follows that when a writer uses a language model in any phase of their compositional process, they are incorporating the products of misappropriated labour into work they will subsequently present as their own. The now-fashionable distinction between ‘writing by AI’, which is understood to be disreputable, and ‘writing with AI’, which is understood to be merely pragmatic, exists to prevent people from following this logic to its conclusion. The distinction asks us to believe that the degree of a writer’s involvement in selecting and revising the model’s output somehow cleanses that output of its origins, as though a curator’s tasteful arrangement of stolen paintings could legitimise the theft.
It is at this juncture that defenders of AI-assisted composition typically invoke the analogy of literary influence, arguing that all writers are shaped by what they have read, that originality is always a matter of recombination, and that the distinction between a human mind metabolising its reading and a statistical model processing its training data is one of degree rather than kind. The argument has a superficial plausibility that makes it rhetorically effective, particularly in contexts where audiences have limited familiarity with the technical architecture of these systems, but it collapses under even modest scrutiny. The process by which a human reader absorbs, over years or decades, the stylistic and intellectual influence of other writers is phenomenologically, cognitively (or so psychologists tell me), and certainly ethically incommensurable with the process by which a transformer model encodes statistical regularities across a training corpus. The human reader is a conscious agent whose encounter with a text is mediated by memory, embodiment, emotional response, and the entire accumulated weight of their experiential history. The language model is a function that maps input sequences to probability distributions over output tokens. To describe both of these processes as learning from the work of others is, well, just wrong.
What the ‘assistant’ framing accomplishes, and what accounts for its rapid adoption among otherwise thoughtful people, is the preservation of an authorial self-image that the technology ought, by rights, to have rendered untenable. The writer who pastes a draft into a language model’s context window and asks it to ‘improve the flow’ is engaged in a form of collaborative production in which one of the collaborators is a statistical engine built from misappropriated textual labour, but the conventions of contemporary publishing allow this writer to present the finished product as solely their own work, with no disclosure of the model’s contribution and no acknowledgment of the vast body of uncredited writing that made that contribution possible (which, in Europe at least, can be a direct contravention of AI regulations, but that’s another matter). The ethical failure here is compounding. There is the original appropriation of the training data, which the individual writer did not commit and for which they bear no direct responsibility, but then there is the subsequent concealment of the model’s role in the compositional process, which the writer performs each time they publish AI-assisted work without acknowledgment, and for which they bear complete responsibility.
There is also a simpler and less comfortable point to be made, one that the professional-managerial class of knowledge workers who have most enthusiastically adopted these tools would prefer not to confront: if you find that you cannot produce serviceable prose without the assistance of a large language model, if the act of composing a paragraph from your own cognitive resources strikes you as so onerous that you require a machine to do it for you or to repair what you have done, then you are, to put it frankly, not a writer. And this is fine of course, because the majority of human beings are not writers (at least, not good ones, writers with a capital W), in the same way that the majority of human beings are not concert pianists or structural engineers, and there is no shame in recognising that a particular form of skilled labour falls outside one’s competence. What is shameful, and what deserves to be called by its proper name, is the pretence that the machine’s intervention leaves your authorial status intact, that you can pass off the product of a statistical engine trained on stolen text as your own intellectual labour and expect to be taken seriously as a practitioner of the craft you are, in fact, unable to practise. The word for this is charlatanism. We have, in every other domain of professional life, a perfectly clear understanding that presenting someone else’s work as your own constitutes fraud, and the fact that the ‘someone else’ in this case is a machine assembled from the non-consensual contributions of millions of unacknowledged writers does not soften the offence as much as some like to think. The person who cannot write and who admits as much forfeits nothing of their dignity, while the person who cannot write and who uses a language model to disguise that fact, while claiming the result as the product of their own mind, has forfeited something considerably more important.
The publishing and media industries have absorbed these tools into their workflows in ways that render the question of individual responsibility almost moot. The same institutions that depend upon writers’ labour for their existence have adopted technologies trained on the non-consensual appropriation of that labour, and they have done so without any apparent sense of contradiction. This is perhaps unsurprising, as the history of cultural production under capitalism is, among other things, a history of the progressive externalisation of the costs of creative work, and the large language model represents something like the logical terminus of that process, a machine that converts the unpaid labour of millions of writers into a tool for reducing the need for writers altogether. That individual writers have been persuaded to participate enthusiastically in this project, and to provide sophisticated rationalisations for doing so, is a testament less to the quality of those rationalisations than to the coercive power of an economic environment in which refusing to adopt the prevailing tools carries immediate professional penalties.
I am not romanticising the writing process. It is true that every technology of writing has reshaped the compositional process in ways that were initially experienced as alien and that were subsequently normalised through habitual use. It is also true that certain technologies that now appear wholly benign, such as the spellchecker or the thesaurus, were greeted on their introduction with anxieties that look, in retrospect, disproportionate. These historical parallels are real, and they should introduce a note of epistemic humility into any argument about the present moment. They do not, however, establish what their proponents need them to establish, because the large language model differs from all previous writing technologies in a respect that is categorically significant. A word processor, a spellchecker, and a thesaurus are tools that facilitate the writer’s own selection and arrangement of language, while a language model generates language. The difference between facilitating composition and performing composition is the difference on which the concept of authorship depends, and no amount of historical analogy can dissolve it without simultaneously dissolving the concept of authorship itself, which is, of course, precisely what some of the technology’s more philosophically adventurous defenders are willing to do, though rarely with any awareness of what would be lost.
What would be lost is the possibility of holding anyone accountable for what a text says, means, or does in the world. Authorship is an ethical and legal category as much as it is an aesthetic one. The attribution of a text to a named author carries with it an implicit claim of responsibility, a declaration that a particular human being has exercised judgement and stands behind the result. When the compositional process is distributed across a human writer and a language model trained on appropriated text, the question of who is responsible for the resulting work becomes genuinely difficult to answer in a way that should trouble anyone who takes seriously the social function of written communication. The writer cannot be fully responsible for language they did not fully produce, while the model cannot be responsible for anything, because responsibility is a property of moral agents. The result is a text for which no one is entirely answerable, circulating in a public sphere that still operates, however imperfectly, on the assumption that published writing represents the considered judgement of an identifiable human author.
I am aware that this argument will strike many readers as excessively stringent, as an attempt to hold individual writers to a standard of purity that is incompatible with the practical realities of contemporary knowledge work. And that accusation is not without force, because the economic pressures that drive writers toward these tools are real, and it would be dishonest to pretend that the choice to refuse them carries no cost. But the fact that a practice is economically rational does not render it ethically defensible, and the fact that nearly everyone is doing it does not transform it into something that need not be examined. What is needed, at a minimum, is disclosure. If a text has been shaped, at any stage of its composition, by the output of a language model, that fact should be stated openly, so that readers may assess for themselves what kind of object they are encountering. The resistance to this minimal standard of transparency, which is fierce and widespread among writers who use these tools, tells us everything we need to know about the degree of confidence these writers have in the distinction they claim to be drawing. If the difference between writing with AI and writing by AI were as clear and as significant as its proponents insist, there would be no reason to conceal it. And this is precisely my point: the concealment is the confession.


Thanks once again, James, for your "oft was thought, but ne'er so well expressed" commentary. I have long thought that, in terms of any creative art or personally conceived and generated text (what we ask our students to produce in most comp assignments), "ethical AI use" is an oxymoron. I really appreciate your explaining here the logical, philosophical and truly ethical implications: if you copy-paste AI output or even rely on it to "give you ideas," you are guilty of plagiarism, without even being sure against whom or what...