Cornell Professor Switches Class to Typewriters to Thwart AI‑Generated Essays

Pulse · Apr 1, 2026

Why It Matters

The Cornell experiment highlights a tangible response to the AI‑cheating crisis that is sweeping higher education. By forcing students to write without digital crutches, the exercise tests whether low‑tech interventions can restore authentic learning and assessment. If successful, such methods could inform policy decisions at universities grappling with how to preserve academic integrity while still leveraging AI’s educational benefits. Beyond the immediate classroom, the move signals a cultural shift: educators are increasingly willing to sacrifice convenience for credibility. As AI tools become more sophisticated, the pressure on institutions to develop robust, enforceable standards will intensify, and low‑tech safeguards may become one piece of a larger, multi‑layered strategy that includes AI‑detection software, honor codes, and redesigned assessment formats.

Key Takeaways

  • Cornell German professor Grit Matthias Phelps requires an entire class to write on manual typewriters to block AI‑generated essays.
  • The “analog” assignment began in spring 2023 and now uses a few dozen vintage machines sourced from thrift shops and online marketplaces.
  • Students report slower writing pace, fewer digital distractions, and a need to think more deliberately about each sentence.
  • Quotes from Phelps, student Catherine Mong, and sophomore Ratchaphon Lertdamrongwong illustrate the experiment's impact on how students approach their writing.
  • The initiative reflects a national trend toward low‑tech assessments as a countermeasure to AI‑assisted cheating.

Pulse Analysis

Phelps’s analog experiment arrives at a moment when universities are scrambling to adapt to generative AI’s disruptive potential. Traditional plagiarism detectors struggle against AI‑crafted prose that is original in phrasing yet derivative in idea. By removing the digital layer entirely, Cornell sidesteps the detection problem and instead re‑engineers the writing process itself. It is a classic supply‑side intervention: if the tool (the computer) is unavailable, the output (an AI‑generated essay) cannot be produced.

However, scalability remains a critical hurdle. Large lecture courses with hundreds of students cannot realistically equip every learner with a typewriter, nor can they afford the time required for manual grading of handwritten work. The experiment’s true value may lie in its symbolic power, prompting broader conversations about assessment redesign. Hybrid models—combining in‑class analog tasks with AI‑aware digital assignments—could strike a balance, preserving the benefits of technology while safeguarding core learning outcomes.

Long‑term, the Cornell case may inspire a tiered integrity framework: low‑tech checkpoints for high‑stakes assessments, AI‑detection tools for take‑home work, and clear policy guidelines that delineate acceptable AI assistance. As AI becomes more embedded in everyday tools, educators will need to shift from outright bans to nuanced pedagogy that teaches students how to harness AI responsibly rather than evade it. The typewriter experiment is a vivid reminder that sometimes, looking backward can illuminate a path forward.
