Why universities need to radically rethink exams in the age of AI
Article metadata

Article Date: 02 December 2025
Article URL: https://www.nature.com/articles/d41586-025-03915-7
Article Image: Illustration by Matt Chinworth

Summary

Since ChatGPT’s debut, student use of AI has surged: a 2025 UK survey reports that 92% of full-time undergraduates use AI and that 88% rely on generative AI for coursework. Traditional written assessments, such as essays and take-home tests, are now vulnerable because AI can produce high-quality text that obscures how much a student actually understands.

The authors argue that piecemeal fixes (AI-detection tools, closed-book or handwritten tests, oral exams) have limited effect. Instead, universities should redesign assessment around three interlinked ideas: adopt new assessment formats (conversation-based and AI-mediated dialogue), shift from high-stakes end-of-term exams to continuous, low-stakes assessment, and develop learning-oriented AI platforms that track student progress longitudinally. Challenges include ensuring standardisation and fairness, avoiding AI miscommunication, and managing educator workload.

Key Points

  • AI use among students has jumped rapidly (92% of UK undergraduates use AI in 2025).
  • Generative AI undermines essays as reliable evidence of individual understanding.
  • Detection tools for AI-authored work are unreliable, and other short-term fixes offer only limited protection.
  • Conversation-based assessment — bolstered by AI that can sustain context-aware dialogue and probe reasoning — offers a promising alternative.
  • Continuous, low-stakes assessment can reduce the pressure and temptation to cheat inherent in high-stakes exams.
  • There is an urgent need for learning-oriented AI systems that capture longitudinal data and map learning trajectories, not just answer questions.
  • Barriers include AI miscommunication, difficulty standardising personalised assessment, and added workload for educators.

Why should I read this

Short and blunt: if you work in higher education, policy or edtech and you think exams can stay as they are, read this. The article explains why old-school essays and final exams are brittle in the face of generative AI, and gives practical directions (conversational AI assessments, ongoing low-stakes checks and smarter platforms) that actually move the needle. It’s a useful wake-up call and a quick roadmap for what to change next.

Context and relevance

This comment piece sits at the intersection of education, technology and assessment policy. It matters because AI adoption by students is already mainstream and improving rapidly; without changes, assessment validity and fairness are at risk. The proposals align with broader trends: personalised learning, formative assessment, and AI-enabled analytics. For universities, implications include redesigning curricula, investing in new AI-capable learning platforms, and rethinking recruitment or admissions measures where consistency across large cohorts is required.

For edtech developers, the article signals demand for platforms that can record, analyse and present longitudinal learning data rather than just provide conversational interfaces. For policymakers and exam boards, it highlights the need to balance innovation with transparency, equity and standards.

Source

Source: https://www.nature.com/articles/d41586-025-03915-7