Generative Models as Catalysts for Critical Thinking in Higher Education
Stop Treating AI as an “Answer Machine”
Leah Belsky, Vice-President of Education at OpenAI, told students that using ChatGPT solely to harvest ready-made answers is “missing the point” of the technology. She compared generative AI to a calculator: a device that expands, rather than replaces, human reasoning when handled with intention. OpenAI’s new “Study Mode” now nudges learners through a Socratic sequence of follow-up questions, forcing them to justify each response before moving on.
Her challenge lands at an opportune moment. Adoption figures are soaring. The HEPI/Kortext Student Generative AI Survey 2025, based on 1,041 UK undergraduates, finds that 92 per cent now use AI in some form and 88 per cent deploy it for assessed work, a jump of more than twenty-five percentage points in a single year. Faculty usage is not far behind.
Yet educators remain uneasy. Fresh research from Microsoft and Carnegie Mellon tells us why. Higher confidence in an AI system correlates with lower self-reported effort to think critically. In other words, the moment a tool feels “smart”, the user’s cognitive guard drops. This paradox, AI as both amplifier and anaesthetic, defines the pedagogical problem of 2025 and beyond.
What the evidence says
AI is reshaping learning behaviours. The NSTA’s March review highlights mounting classroom cases where pupils default to AI summaries instead of constructing their own.
Curricula are racing to keep up. IIT Delhi now mandates AI literacy and ethics in every degree, while Florida State University has launched a dedicated computational linguistics track to equip graduates for language-tech research.
The scholarly landscape is evolving. A 2025 scientometric review of 81,740 publications charts a surge of work on intelligent tutoring, automated feedback, and NLP-based analytics, evidence that computational linguistics is no longer a niche field but a central pillar of educational innovation.
Four design principles for the AI-ready classroom
Make critical thinking visible.
Require students to annotate AI-generated drafts with colour-coded comments that explain what they accepted, rejected, or reworked. This simple protocol externalises judgement and thwarts passive copy-paste habits.

Shift from task completion to dialogue.
Instead of setting an essay that ChatGPT can finish in one prompt, pose a series of linked micro-tasks. Each part relies on the student’s prior reflections, compelling continuous iteration.

Audit confidence, not just accuracy.
After each AI-assisted activity, ask learners to rate their trust in the response and list the verification steps they took. Over time, students learn to calibrate their reliance on the model, mitigating the “over-confidence dip” reported in the Microsoft study.

Broaden the toolset.
A 2025 round-up of educational AI platforms notes that institutions now have options ranging from AI help-desks (Capacity) to automatic graders (Gradescope) and research chatbots (Perplexity). Selecting complementary tools prevents single-platform dependency and encourages transferable literacies.
A pragmatic workflow for lecturers
Pre-class curation
Use Perplexity or SciSpace to generate a literature snapshot. Cross-reference key claims manually before bringing them to the seminar.

Interactive exposition
During the session, demonstrate ChatGPT’s “Study Mode” live. Show how probing prompts reveal gaps in the model’s explanation; invite students to refine prompts collectively.

Collaborative synthesis
Divide the cohort into groups. One group validates AI references, another edits tone and argumentation, a third visualises data with an LLM-integrated charting tool. Rotation ensures every learner tackles verification, composition, and presentation in one sitting.

Reflective close
End with a metacognitive debrief. Which steps felt laborious? Where did AI save time, and where did it introduce uncertainty? Capturing these impressions informs the next iteration of the module.
Looking ahead
We are witnessing a pedagogical pendulum. Early enthusiasm for GenAI led some universities to ban it outright; the backlash now favours structured integration. The sweet spot lies in instructional design that positions AI as a collaborator rather than an all-knowing oracle.
Belsky’s provocation is therefore more than a sound-bite. It invites every lecturer to rethink assessment, scaffold reflection, and champion a culture where curiosity outlasts convenience.
If we succeed, graduates will greet the next wave of models not with complacent acceptance but with a toolkit of habits that keeps thinking alive.
That, surely, is the outcome higher education was always meant to secure.

