I love a gadget, so I usually enjoy reading up on CES highlights.1 But January’s CES 2025 was grim. Technology companies unveiled a parade of AI-powered wearables, and the more I dug, the more I felt like I was witnessing the final phase of a decades-long project to eliminate human privacy, autonomy, and independent thought.
Don’t get me wrong, I like a wearable. I’m prone to overindulgence, so tracking my steps keeps me (relatively) in shape. I have a heart condition, and while I know cardiac monitoring tools are pretty much useless for detecting anything beyond the basics, my Apple Watch often acts as a placebo. I like having a whole bunch of utilities on my wrist, and if they’d make an Oura ring that wasn’t ridiculously chunky, I’d wear one on my finger as well.
I understand that wearables are sophisticated data extraction machines designed to harvest our biology for profit, but if Google and Apple want to commodify my heart rate, I can live with that. I’m a technophile who grew up in surveillance capitalism; I’ve never really known a time when Big Tech wasn’t interfering with our privacy.
But to someone working in higher education, AI-enabled wearables look like the apocalypse. Such devices (which are already available but still quite limited in functionality) represent a new frontier of academic dishonesty wherein the tools of deception are no longer hidden in pockets or scrawled on palms, but seamlessly integrated into the very fabric of our daily wear.
It will be the absolute end of academic integrity (as we know it, anyway).
We’ve barely begun to grapple with ChatGPT and its ilk, but soon we’ll have to consider the emergence of AI wearables—smart glasses, earpieces, even contact lenses—which promise to make gen AI and large language models ubiquitous and invisible. Meta’s Ray-Ban smart glasses can already identify objects and answer questions about what you’re looking at, and these are just the crude prototypes for what’s coming next.
Within five years, perhaps less, students will arrive at university equipped with AI assistants that are functionally invisible to external observation. The traditional boundaries between human cognition and machine augmentation will blur beyond recognition, and our entire system of academic assessment—built on the assumption that we can meaningfully evaluate individual human intelligence—will collapse. Exam halls and oral assessment will no longer be the answer to the problem of ChatGPT.
The academic community’s response to LLMs is already reactive and largely ineffective. We’ve deployed AI detection software that produces more false positives than genuine catches. We’ve returned to handwritten exams, as if regression to 19th-century methods could solve 21st-century problems. And we’ve created elaborate honour codes and authenticity statements, placing faith in systems that were already failing before gen AI emerged.
But wearables represent a different order of challenge entirely. How do you detect a student using smart contact lenses that display information directly onto their retina? How do you prevent someone from receiving real-time coaching through a nearly invisible earpiece during an oral examination? The surveillance required to police such technologies would transform educational institutions into panopticons, and even then, we’d likely fail.
The uncomfortable truth we must confront is that our entire model of academic assessment assumes we’re measuring something that exists independently within each student’s mind. But what happens when the boundaries of that mind become permeable and every student has potential access to the sum of human knowledge and the analytical power of advanced AI at every moment?
Some will argue we should embrace this future, that resisting AI augmentation is like insisting students solve maths problems without calculators. But this analogy fails to capture the magnitude of the shift. A calculator extends our ability to perform specific operations, while AI wearables promise to augment—or replace—the very faculties we’ve traditionally associated with learning itself: comprehension, analysis, synthesis, and critical thinking.
AI wearables will amplify existing educational inequalities on a scale never seen before. While we fret about the digital divide created by laptop access, those who can afford cutting-edge AI wearables will possess near-superhuman academic capabilities.
Universities will face an impossible choice: ban the technology and risk being labelled ‘irrelevant’, or permit it and watch academic credentials become meaningless. Some institutions will claim they are ‘AI-free’, marketing themselves as bastions of ‘pure’ human intelligence, but such bans will be empty gestures, impossible to police. Others will fully embrace augmentation, producing graduates whose capabilities are inseparable from their technological prostheses.
Throughout history, technologies have shaped how we think—writing reorganised memory, printing democratised knowledge, and computers externalised calculation. AI wearables may be the next step in this cognitive evolution, but there’s something qualitatively different about a technology that can think for us rather than with us. When a student’s essay is indistinguishable from one produced by their AI assistant, when their exam answers seamlessly blend human insight with machine analysis, what exactly are we assessing? And more fundamentally, what are we educating them to become?
I don’t pretend to have solutions, and perhaps there are none, at least not ones that preserve academic integrity as we’ve long understood it.
But I can offer some provocations:
First, we must abandon the fantasy that we can maintain clear boundaries between human and machine intelligence in our assessments. That ship has sailed, and it’s already disappearing over the horizon.
Second, we need to fundamentally reconceptualise what we’re trying to achieve through higher education. If knowledge and analytical capabilities can be downloaded and accessed at will, what uniquely human capacities should we be developing? Creativity? Ethical reasoning? Emotional intelligence? And how do we assess these in ways that can’t be gamed by AI?
Third, we must engage seriously with the question of what academic integrity means in an age of augmented intelligence. Maybe integrity will lie not in refusing augmentation but in transparently acknowledging it. Maybe we’ll need new forms of assessment that evaluate how well students can collaborate with AI rather than how well they can pretend not to.
The transformation I’m describing isn’t distant speculation: the technologies already exist; they’re just not yet miniaturised and affordable. But they will be, and sooner than most academics realise. We have perhaps a few years in which to fundamentally reimagine education before these technologies render our current practices completely obsolete.
The end of academic integrity as we know it isn’t necessarily catastrophic. It could be a moment of overdue reckoning, a chance to re-evaluate what, precisely, we mean by ‘integrity’ in an era when information is abundant and easily recombined. For generations, academic integrity has been anchored to the ideal of the autonomous scholar, the individual who produces original work, properly attributes sources, and demonstrates mastery through carefully controlled assessments. But these norms arose in a very different epistemic context, one in which scarcity of knowledge and limited access to expertise meant that originality was both harder to counterfeit and easier to define.
As the boundary between genuine intellectual labour and synthetic production becomes increasingly porous, it may be constructive to ask what values we are really trying to protect when we invoke academic integrity. Intellectual honesty? Effort? Creativity? The capacity to discern and evaluate knowledge?
If we can disentangle these aims from the rituals that have traditionally signalled them, we might arrive at fairer, more meaningful expectations.
The question isn’t whether AI wearables will transform education. They will. The students arriving at university in 2030 will inhabit a cognitive landscape we can barely imagine. It’s time we started imagining harder.
If you’re wondering, CES stands for Consumer Electronics Show. It’s a massive annual trade show held in Las Vegas, where tech companies unveil their latest gadgets and software.