Universities are embracing AI: will students get smarter or stop thinking?
Article metadata
Article Date: 2025-06-26
Article URL: https://www.nature.com/articles/d41586-025-03340-w
Article Title: Universities are embracing AI: will students get smarter or stop thinking?
Article Image: https://media.nature.com/w400/magazine-assets/d41586-025-03340-w/d41586-025-03340-w_51470768.jpg
Summary
Universities worldwide are rapidly integrating generative AI into campus life and teaching, from Tsinghua's AI admissions bot and Ohio State's compulsory AI classes to the University of Sydney's use of supervised, in-person exams as a safeguard. Students are already heavy users (surveys reported around 86% regular usage in 2024), often relying on AI for writing, summarising and exam help. Early evidence is mixed: some trials find that AI tutors boost short-term learning efficiency, while small studies and preliminary data point to reduced long-term retention and lower cognitive engagement when students over-rely on AI tools. Institutions are scrambling to produce coherent policies, and partnerships with big tech raise concerns about commercial influence, ethics and de-skilling.
Key Points
- Universities are embedding AI across campus functions — admissions, teaching, tutoring and feedback systems.
- Student uptake is high: many use AI for drafting, editing and explaining text; a significant share use it in assessments.
- Controlled trials show AI tutors can improve learning speed, but other studies suggest poorer retention and reduced cognitive engagement in AI-assisted tasks.
- Institutional policy is lagging: patchwork approaches leave students exposed to inconsistent rules across courses and campuses.
- Commercial partnerships (with OpenAI, Google and Anthropic) scale access but raise questions about data use, vendor lock-in and ethical impacts.
- Some universities build in-house or curated platforms (for example, Sydney's Cogniti and Tsinghua's three-layer architecture) to control model choice and factual accuracy.
- Scholars warn of potential de-skilling, weakened critical thinking and unknown long-term cognitive effects if AI is uncritically adopted.
Content summary
Generative AI has spread rapidly among undergraduates and faculty, particularly in STEM fields. While some institutions are taking a proactive approach (teaching AI fluency, embedding AI tutors and building architectures that combine multiple models with curated knowledge), many are struggling to keep policies and pedagogy in step with how students actually use the tools. The research is equivocal: AI can be a powerful learning aid, but it may also give learners a false sense of understanding, reduce connectivity between brain regions during creative or analytical tasks, and harm longer-term retention if overused. The debate spans practical issues (assessment integrity, workload automation) and broader concerns about ethics, environmental cost and the corporatisation of campus services.
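The article doesn't specify how these curated platforms work internally, but the general pattern, routing each query to an institution-chosen model and grounding answers in a vetted knowledge base, can be sketched in a few lines. Everything below (the model identifiers, topics and keyword retrieval) is a hypothetical illustration of that pattern, not Tsinghua's or Cogniti's actual design:

```python
# Hypothetical sketch of a "curated knowledge + model routing" layer, loosely in
# the spirit of the multi-model architectures the article mentions. All names and
# the routing/retrieval logic are illustrative assumptions.

from dataclasses import dataclass

# Layer 1: a curated knowledge base the institution controls (toy keyword index).
CURATED_FACTS = {
    "enrolment": "Enrolment deadlines and procedures are set by the registrar.",
    "plagiarism": "Unattributed AI-generated text is treated as plagiarism.",
}

# Layer 2: a registry routing task types to different underlying models.
MODEL_REGISTRY = {
    "admissions_query": "model-a",  # hypothetical model identifiers
    "tutoring": "model-b",
    "default": "model-c",
}

@dataclass
class GroundedPrompt:
    model: str
    prompt: str

def retrieve(query: str) -> list[str]:
    """Return curated facts whose topic keyword appears in the query."""
    q = query.lower()
    return [fact for topic, fact in CURATED_FACTS.items() if topic in q]

def build_request(query: str, task_type: str = "default") -> GroundedPrompt:
    """Layer 3: combine the routed model choice with retrieved facts, so the
    model answers from vetted institutional content rather than its weights."""
    model = MODEL_REGISTRY.get(task_type, MODEL_REGISTRY["default"])
    context = "\n".join(retrieve(query)) or "No curated context found."
    prompt = f"Answer using ONLY this vetted context:\n{context}\n\nQuestion: {query}"
    return GroundedPrompt(model=model, prompt=prompt)

if __name__ == "__main__":
    req = build_request("What counts as plagiarism with AI tools?", "tutoring")
    print(req.model)
    print(req.prompt)
```

In a real deployment the retrieval step would use a proper search index and the composed prompt would be sent to the routed model's API; the point of the pattern is that the institution, not the vendor, decides which model answers and which sources it may cite.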
Context and relevance
This article matters because higher education sits at the crossroads of skill formation, assessment standards and workforce preparation, all of which are being reshaped by AI. For educators, policymakers and students, the trends described flag urgent decisions: whether to ban, regulate or embed AI, and whether to partner with vendors; how to redesign assessments and curricula for authentic learning; and how to measure long-term cognitive impacts. The piece summarises emerging evidence and institutional responses, making it useful for anyone tracking how AI will change teaching, accreditation and student competencies.
Why should I read this?
Short version: if you care about how degrees, exams and student skills will look in the next five years, this is worth your time. It lays out where universities are already using AI, what early studies say about learning and memory, and the messy policy choices institutions must make — plus the commercial pressures behind the rush. We’ve done the legwork and pulled the key facts together so you can get up to speed fast.
