Discussion about this post

Vicki J. Sapp

So much to say...so little time (and maybe even less will at this point). I fully agree that AI detectors are flawed; however, there's a missing piece here. "The reason is straightforward: non-native speakers tend to write with simpler vocabulary, more predictable grammar, and more uniform sentence structures, the very same qualities the detectors associate with machine output." These days I primarily teach non-native speakers, and the clash between the prose of submitted assignments and students' other written communication is big and glaring. The same is true for my native speakers; it's not that "no human writes like that," it's more like "this human doesn't write like that." The statement quoted above seems quite reductive and hard to apply across large, complex populations.

Relying on AI detectors alone can indeed lead to damaging error; however, without even this marginal support for our suspicions (which we cannot officially use, it's true), we are left at a he-said, she-said impasse. Students will swear up and down that they didn't use AI, or that they only used it "to fix grammar issues." With every experiential and logical perception we have, we know this isn't true. Yes, it's a learning curve, trying to reconcile these challenges with our deeply rooted desire that our students actually learn something in our classes. But try challenging such cases without empirical support and find yourself in exhausting, useless defeat.

And of course, yes, toss it back to the instructors for not offering AI-proof assignments. Try that, too, and you'll be surprised how easy it is, outside of pen and paper in class (and even that is compromised by the device in the lap, consulted throughout the exercise), for students to bring in AI for most any assignment.

If I sound exhausted with it all, I am...if I sound defeated, maybe not yet. It's still, after all, a compelling challenge, which has its own exhilaration and rewards...
