There is an emerging consensus in higher education, particularly among those of us concerned with digital transformation, that staff and students must become ‘AI literate’. The assumption is so widespread as to appear commonsensical: that in order to teach, learn, or simply survive in the university of the near future, one must know how to work with generative artificial intelligence. The rush to develop AI literacy frameworks, training sessions, toolkits, and policies has been met with eager participation and, at times, uncritical enthusiasm. But I wish to pose a provocation: what if not knowing how AI works—AI illiteracy—might actually be a useful, even necessary, position for the academic or their students?
This is not a case for ignorance in the general sense, nor is it an argument against engaging critically with generative AI systems. Rather, I want to suggest that what we have come to consider ‘AI literacy’ may in fact obscure the very kinds of knowledge that academic practice is meant to safeguard. In its current institutional form, AI literacy tends to operate as a form of compliance: one learns the tools so as not to fall behind. But falling behind what, exactly?
Higher education has long prided itself on resisting the demands of immediacy. We do not produce scholarship at the speed of trend; we deliberate, we critique, we contextualise. If we accept this ethos, then surely one of the most valuable positions an academic can occupy is one of deliberate disengagement: an epistemic refusal to naturalise technological determinism. To not immediately adopt the language of tokens and transformers is to preserve a space in which meaning, metaphor, and humanistic critique might still matter.
Indeed, one might argue that there is an epistemological value in maintaining a certain estrangement from the inner workings of generative models. This distance can allow us to ask questions that those embedded in technical literacies often miss. What assumptions about language underlie these systems? Whose knowledge is encoded in their corpora? What epistemic violence is done when cultural, historical, or poetic nuance is flattened into vectorised representations? These are not questions one finds in a model’s user manual. In some cases, knowing less allows us to see more.
There is also a disciplinary politics at play here. The increasing encroachment of AI across the curriculum risks subsuming all forms of knowing into computational logics. One sees this, for instance, in the language of ‘optimisation’ now creeping into writing instruction, where the value of prose is assessed in terms of clarity and coherence as judged by a machine. The richer, more ambiguous registers of literary or philosophical expression, those forms of thinking that thrive in contradiction, are increasingly seen as liabilities. To resist this flattening, we may need to hold fast to ways of knowing that cannot be modelled. AI illiteracy, in this sense, is a defence of disciplinary identity.
Moreover, there is a pedagogical risk in modelling our teaching too closely on tools like ChatGPT. In showing students how to use AI to ‘improve’ their writing, we may be inadvertently teaching them that their own voices are insufficient. This becomes especially problematic for students from marginalised or underrepresented backgrounds, whose modes of expression are already under pressure to conform to dominant standards. The friction between a student’s voice and the expected academic tone is not something to be smoothed over by predictive language; it is often the site of the most profound learning.
I am not suggesting we abandon all attempts to understand these systems. There is, of course, a place for AI literacy—particularly where it enables informed critique, ethical resistance, or meaningful co-creation. But we must be cautious of treating it as a universal good. There is a difference between understanding enough to critique and absorbing so much technical detail that one begins to mistake function for truth. The illusion of understanding—the belief that being able to prompt a chatbot effectively constitutes deep knowledge—may be more dangerous than ignorance.
So let me propose a counter-literacy: AI illiteracy, not as a failing, but as a stance, a cultivated unknowing that allows us to preserve the values of slowness, criticality, and complexity. Let us hold space for the unknown, the unmodelled, and the ineffable. In a university increasingly asked to adapt to technological change, sometimes the most radical act is to refuse fluency.