
Students’ use of AI sounds death knell for critical thinking

March 02, 2025

Regarding your report (UK universities warned to ‘stress-test’ assessments as 92% of students use AI, 26 February): universities have long regarded themselves as guardians of knowledge and truth. However, this foundation has been eroding as expertise is increasingly devalued, critical thinking weakens, and public discourse grows more polarized.

Traditional sources of knowledge are being challenged like never before. Books, journal articles, and established media now compete with evolving methods of information access, particularly through apps and social media. This shift has led to what can be described as the “Tinderfication” of knowledge.

For instance, curated reading lists—meticulously compiled by academics to highlight key thinkers and essential texts—are often bypassed by students who instead turn to quick Google searches. If they dislike what they find, they simply dismiss it, much like swiping left on a dating app. Algorithms then steer them toward unexpected and sometimes unreliable sources, diverting them from rigorous academic materials.

While round-the-clock access to learning resources is undeniably valuable, it raises the question: has knowledge become just another form of convenience? With information readily available at the click of a button, delivered instantly from countless sources, quantity often prevails over quality—AI, in this sense, becomes the ultimate fast food of knowledge.

This shift forces us to reconsider the very definition of knowledge and the evolving role of education and academics. AI undoubtedly offers advantages in fields such as science, economics, and mathematics, where many facts are objective. However, in disciplines like the humanities and social sciences—where interpretation, debate, and nuance are essential—its implications are far more complex.

If universities fail to address these rapid and profound societal changes, they risk facing consequences far beyond what we can currently foresee.

Prof Andrew Moran

London Metropolitan University

As a university lecturer in the humanities, where essays remain a primary form of assessment, I’m not surprised by the surge in AI usage. Tech companies heavily promote it as a time-saving tool, and this framing is reinforced in broader political discourse, with little attention paid to AI’s limitations or ethical concerns.

While AI can be beneficial in certain academic contexts—such as drafting reports or conducting preliminary research—its use by students for writing essays reflects a growing undervaluation of the humanities and a fundamental misunderstanding of what original writing in fields like history, literature, and philosophy fosters: critical thinking.

The novelist E.M. Forster once said, “How can I tell what I think till I see what I say?” He suggested that writing is a sophisticated form of thinking and that learning to write well—working through ideas and developing arguments—is central to the process. When students turn to AI to write their essays, they are not just outsourcing labor; they are outsourcing their thinking and its evolution. Over time, this will only leave them more confused and less intellectually capable.

In a neoliberal, technology-driven age where the focus is often on the product rather than the process, it’s no surprise that the true value of writing is being overlooked. Students are merely reflecting a world increasingly disconnected from the irreplaceable value of human creativity and critical thought.