Universities nationwide are forcing students to write exams in blue books — those flimsy paper booklets your parents used — in a panicked response to AI-generated essays. The solution creates new problems: students with disabilities lose accommodations, multilingual writers face additional barriers, and everyone graduates less prepared for workplaces where AI fluency is expected.
The shift accelerated after ChatGPT's 2022 launch made sophisticated text generation available to anyone with internet access. Now, according to reporting from Axios, institutions from community colleges to research universities are mandating in-person, handwritten exams as their primary defense against what they perceive as an epidemic of cheating.
But educators on the ground paint a different picture. Steven Krause, who teaches at Eastern Michigan University and reads about 1,500 pages of student writing each semester, told Axios that the panic over AI cheating is "overstated." In his experience, students who cheat are already failing and "desperate" — not the masses of otherwise honest students that blue book policies assume need policing. "AI writing just sounds off," he notes, suggesting that experienced professors can detect it without forcing everyone to write longhand.
The return to handwritten exams reveals how higher education's response to technological change consistently punishes the most vulnerable students first. More than half of all college students now take at least one online course. For these learners — often working adults, parents, or students in rural areas — traveling to campus for a timed, handwritten exam isn't just inconvenient. It's impossible.
Students with disabilities face even steeper barriers. Those who rely on assistive technology, need extra time, or have conditions affecting fine motor control lose crucial accommodations when forced to write by hand under time pressure. Multilingual students, who might use digital tools to check grammar or spelling in their second or third language, suddenly find themselves judged on handwriting speed rather than intellectual capacity.
Dan Melzer, a professor at UC Davis, points out what should be obvious: writing is a revision process. When universities evaluate rushed, single-draft responses scrawled in blue books, they're not measuring writing ability or subject knowledge — they're testing handwriting speed and performance under artificial constraints. "Why don't we just have them write with chisels?" Krause asked Axios, only half-joking.
The practical failures compound the pedagogical ones. In lecture classes of 200-plus students, handwritten exams become a logistical nightmare, and professors complain about having to decipher poor handwriting. Meanwhile, students wearing Meta's smart glasses or similar AI wearables could theoretically cheat anyway, making the entire exercise pointless for its stated purpose while preserving all of its discriminatory effects.
Most revealing is what this backward march says about higher education's relationship to the future its students will inhabit. Employers increasingly expect graduates who can work with AI tools, not in spite of them. Major tech companies are already integrating AI into core business functions. Yet universities respond by making their classrooms deliberately anachronistic, as if preparing students for a workplace that ceased to exist a decade ago.
Melzer calls ChatGPT the "most powerful disruption" of his teaching career and argues educators need to learn to work with it, not against it. He tries to help students see value in assignments beyond just "getting it off my plate" — addressing the real challenge of making learning meaningful in an age of instant text generation.
The blue book revival fits a historical pattern. When typewriters arrived, educators worried about the death of penmanship. Word processors brought fears about spell-check creating lazy writers. Each wave of technology triggered apocalyptic predictions about education's demise. Each time, the panic subsided as teachers learned to integrate new tools rather than ban them.
What makes this iteration particularly harmful is how the burden falls. Working students juggling multiple jobs, parents managing childcare, students with disabilities, and multilingual learners all face disproportionate harm from policies that assume everyone learns the same way, at the same pace, with the same physical abilities.
Universities could instead focus on designing assignments that make AI-generated responses irrelevant — projects requiring original research, personal reflection, or creative application. They could teach students to use AI tools ethically and effectively, preparing them for workplaces where such skills matter. They could recognize that education's value lies not in surveillance and punishment but in fostering genuine curiosity and critical thinking.
Instead, they're bringing back blue books, pretending technology will disappear if they ignore it hard enough. The students who suffer most are those who always do when institutions choose control over accessibility — a choice that reveals exactly whose education universities consider worth protecting.