AI challenges the future of traditional exams

With most students now using chatbots in their studies, universities are being pushed to rethink how learning should be tested in the age of AI, Qazinform News Agency correspondent reports, citing Nature.


Artificial intelligence has quickly become part of everyday student life. A recent UK survey found that 92% of undergraduates now use AI tools, up from 66% last year. Nearly 90% rely on generative AI to help with coursework, a sharp rise from just over half in 2024.

This rapid growth is raising serious questions about traditional exams and essays. AI can already write reports and solve tasks that meet or exceed the quality of typical student work. As a result, teachers can no longer be sure whether assignments truly show a student’s own understanding. There are also concerns that heavy chatbot use could weaken deep learning, self-reflection, and independent thinking.

Many universities have tried to fight misuse by using AI detection software or switching to handwritten tests, oral exams, and reflection journals. However, detection tools have proven unreliable, and these quick fixes deliver only limited results. Researchers say a fundamental change in assessment is needed.

Solutions

One proposed solution is conversation-based testing. Instead of writing essays, students would discuss topics in structured talks that show how well they understand ideas and solve problems. Earlier systems using this method were limited, but modern AI can hold longer, more natural conversations. These tools can ask follow-up questions, tailor feedback, and adjust difficulty to fit each student’s level.

Still, this approach has drawbacks. AI can misunderstand students or provide wrong information. Personalized conversations also make it harder to compare students fairly. Because of this, traditional exams would still be needed for situations where strict standardization matters, such as university admissions.

The authors also argue for reducing pressure from big final exams. Instead, they support continuous assessment, where progress is measured through many smaller tasks over time. This method is widely used in medical training but is rare in other fields because it demands constant evaluation by instructors. AI tools could help manage this workload by tracking student progress across repeated interactions.

Current chatbots like ChatGPT cannot do this effectively because they do not track long-term learning development. Education-focused platforms would be needed to analyze growth, detect learning gaps, and support course design. Studies indicate that frequent low-stakes assessments reduce student stress and lower the chances of cheating.

Universities are also being urged to focus more on skills that machines struggle to replace, including teamwork, creativity, and empathy. Students could work on real-world projects, using AI openly for research and planning. While AI can support group creativity, clear methods for grading such skills fairly have not yet been established.

To make any changes work, educators and students must strengthen their understanding of AI. Teachers need training and time to adapt lessons, while students need guidance on responsible and ethical AI use.

Universities must also rewrite academic rules to clarify what originality and creativity mean in an AI-supported classroom.

Earlier, Qazinform News Agency reported that AI forces a rethink of the PhD.