AI scientists are changing research — institutions, funders and publishers must respond

Article Date: 25 March 2026
Article URL: https://www.nature.com/articles/d41586-026-00934-w
Article Image: https://media.nature.com/lw767/magazine-assets/d41586-026-00934-w/d41586-026-00934-w_52203666.jpg

Summary

Nature’s editorial flags a shift: AI systems — exemplified by “The AI Scientist” from Sakana AI and other LLM-assisted research — can now carry out many steps of the scientific process, from literature review to drafting papers. While some AI-generated work has passed initial peer review and produced plausible outputs, these systems still hallucinate data, fabricate citations and struggle with multi-step reasoning and confidence estimation. That creates risks: mass-produced, low-value papers, distorted research incentives, difficulties in credit and assessment, and potential narrowing of research topics. Nature stresses transparency, reproducibility (including sharing prompts and transcripts), and the need for institutions, funders and publishers to set guard rails.

Key Points

  • The AI Scientist can automate large parts of the research workflow and has produced workshop submissions that passed peer review.
  • State-of-the-art LLMs are aiding real research (e.g. GPT-5 in theoretical physics), but outputs remain error-prone and limited mainly to theoretical or code-heavy domains.
  • Major risks include fabricated or hallucinated data and citations, automated p-hacking and overloading peer-review systems with low-quality outputs.
  • AI use may skew researchers toward fewer topics and data-rich fields, potentially reducing scientific diversity.
  • Nature requires transparency about LLM use, will not accept LLMs as authors and encourages submission of prompts/transcripts to aid reproducibility.
  • Institutions, funders and publishers must adapt policies on authorship, assessment, reproducibility and ethics to manage AI-driven research.

Context and relevance

AI-assisted research is moving from helper tools (coding, analysis) to agents that can generate hypotheses, run experiments and write up results. That increases productivity but also strains the research ecosystem: hiring and promotion metrics, peer-review capacity and scientific integrity. The editorial situates Nature’s position in a wider conversation — other major labs and companies are building research-facing AI, and early examples are already influencing publication norms and research directions. For anyone involved in research management, funding decisions or publishing, the article signals that policy and process changes are urgent.

Why should I read this?

Short answer: because this could completely change how research output is measured, produced and judged — and that affects jobs, funding and career paths. If you work in a lab, run peer review, or shape research policy, you’ll want to know what mess to expect and what guard rails to build. If you don’t work in research, read it anyway — it’s the clearest heads-up yet on how AI might flood the literature with plausible but shaky science.

Source

Source: https://www.nature.com/articles/d41586-026-00934-w