AI forces a rethink of the PhD
As AI moves deeper into research, the PhD, long a symbol of human originality, faces an identity crisis. Machines can now write, analyze, and even propose ideas, forcing universities to rethink what it means to train independent scholars, Kazinform News Agency correspondent reports, citing Nature.
Universities are largely unprepared for this transformation. While many academics still focus on the early flaws of generative models (factual “hallucinations,” inconsistencies, and shallow reasoning), the technology has evolved rapidly. Modern AI systems can draft literature reviews, generate hypotheses, write code, and even interpret complex datasets.
Some emerging “agentic” models can autonomously set sub-goals, coordinate tasks, and learn from feedback, edging closer to partial independence. If AI systems can produce analytical frameworks, process data, and write comprehensive drafts, what then remains uniquely human in doctoral work?
Educators may need to shift their focus from training students to perform technical tasks toward teaching them how to frame questions, evaluate AI outputs critically, and safeguard academic integrity in increasingly automated environments.
Traditional skills such as coding, statistical analysis, and literature synthesis may become secondary to new abilities: critically assessing machine-generated work, verifying data integrity, and maintaining oversight of complex automated workflows. Students will also need to learn how to detect plausible but incorrect outputs, a task that paradoxically requires the very expertise AI tools are beginning to replace.
This shift will inevitably change how universities assess PhD candidates. A polished thesis, easily produced with AI assistance, might no longer serve as proof of intellectual mastery. Oral defenses, live problem-solving tasks, and reflective commentaries could become more central to evaluating genuine understanding. Supervisors, too, will have to adapt, guiding students not just in research design but in the ethical and critical use of AI technologies.
If AI can compress months of research into days, institutions must decide whether to shorten PhD durations or to broaden their scope, allowing students to tackle more interdisciplinary or ambitious topics. Yet these efficiencies also come with risks: excessive reliance on AI could lead to “intellectual atrophy,” as critical reading, reasoning, and writing skills weaken from disuse. Universities will need to build deliberate safeguards to preserve the human core of research: curiosity, judgment, and creativity.
Some universities are already taking early steps to address the challenge. Institutions such as the University of Oxford, Nanyang Technological University in Singapore, and Sweden’s Karolinska Institute have begun implementing AI-use policies that emphasize transparency and responsible use. Oxford now offers AI and machine learning courses for academic staff, while Nanyang Technological University mandates AI literacy for postgraduate students.
Others, like the University of New South Wales in Sydney, have introduced campus-wide access to specialized systems such as ChatGPT Edu. However, these initiatives mostly treat AI as an auxiliary tool rather than a transformative force, and many universities still lack the resources or strategic frameworks to keep pace.
Earlier, Kazinform News Agency reported that while the number of PhD graduates is rising worldwide, academic career opportunities are shrinking.