Commentary: ChatGPT is a dangerous study aid for STEM students
Students who use AI to solve STEM problem sets can evade detection, but they skip the mental work that builds understanding, says chemistry tutor Kelvin Ang.
SINGAPORE: During last year’s exam season, a student showed me her answer to a question from the 2024 GCE A-Level Chemistry Paper 3. The task was straightforward: Draw three curly arrows to complete a reaction mechanism.
ChatGPT provided a diagram with clean lines, proper notation, and technical precision. But the arrows were completely wrong, misplaced in ways that would cost full marks.
Here’s another example from the same year. Students were asked why calcium fluoride does not dissolve in water, even though the thermodynamic conditions suggest that it should.
ChatGPT explained that the particles are held together very tightly, which sounds reasonable at first glance.
However, this missed the main point the examiners were looking for. The correct answer was that the dissolving process requires too much energy to get started.
THE THREAT TO STEM EDUCATION
Artificial intelligence in STEM (science, technology, engineering and mathematics) education poses a unique challenge. Unlike essay plagiarism, AI-generated solutions to STEM problem sets can evade detection. But the students who submit them skip the mental work that turns procedures into understanding.
I’ve been teaching A-Level Chemistry for over a decade, and I now see students across STEM subjects arriving with AI-generated answers they cannot evaluate. When students ask ChatGPT to explain concepts, whether chemical reactions, calculus problems or circuit diagrams, the answers often include too much detail and cite material outside their syllabus. Students struggle to distinguish what’s relevant from what isn’t.
AI-generated STEM answers are dangerous because they look correct at first glance. They use proper terminology, follow conventional formatting and sound authoritative. What they lack is the specific insight the question demands.
In chemistry, when asked about thermodynamics, ChatGPT produces comprehensive explanations covering entropy, enthalpy and Gibbs free energy. In mathematics, ask about differentiation, and you’ll get the chain rule, product rule and quotient rule all explained in detail. In physics, a question about forces might trigger a lecture on Newton’s laws.
For a student seeking general understanding, this might seem helpful. But STEM exams test whether students can identify the single relevant principle and apply it precisely. All that extra detail buries the actual answer.
Some students bring healthy scepticism to these interactions. They sense that something is off when a response runs half a page. They ask me whether they really need to write all this, which at least shows they’re thinking. But many more never seek clarification. They accept the AI answer as definitive and move on.
Ultimately, failing to build this evaluative skill is the most serious consequence of overreliance on AI. If students cannot discern when AI-generated answers are wrong, they cannot progress as independent thinkers in STEM.
THE OVERRELIANCE PROBLEM
Some may argue that the inevitability of AI renders problem sets and assessments obsolete. This misses the heart of the argument: STEM education’s purpose is to develop deep thinking and intuition, not just produce correct answers.
In my classroom, the strongest students aren’t the ones who memorise formulas fastest, but the ones who can pause and ask: “How does this answer actually make sense?”
Take my chemistry example: Understanding why calcium fluoride doesn’t dissolve despite favourable thermodynamics requires thinking about the difference between thermodynamic favourability and kinetic accessibility.
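For readers who want the distinction in symbols, here is a minimal sketch using standard textbook relations (not the exam’s actual data). Dissolution is thermodynamically favourable when the Gibbs free energy change is negative,

\[ \Delta G = \Delta H - T\,\Delta S < 0, \]

yet whether it proceeds at an observable rate is governed by the activation energy \(E_a\) in the Arrhenius equation,

\[ k = A\,e^{-E_a/RT}. \]

If \(E_a\) is large, the rate constant \(k\) is effectively zero: the process is allowed on paper but never gets started.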
In physics, understanding why heavier objects don’t fall faster requires wrestling with the difference between mass and weight. In mathematics, grasping why you can’t divide by zero demands thinking about limits and undefined operations.
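Both distinctions fit in a line of algebra (again, standard results rather than anything drawn from the exam papers themselves). For the falling object, the weight \(mg\) and Newton’s second law give

\[ a = \frac{F}{m} = \frac{mg}{m} = g, \]

so the mass cancels and every object accelerates identically, ignoring air resistance. For division by zero, the two one-sided limits disagree,

\[ \lim_{x \to 0^+} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^-} \frac{1}{x} = -\infty, \]

so no single value of \(1/0\) could ever be consistent.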
I’ve watched students grasp these distinctions after struggling with multiple problems. That “aha” moment can’t come from a shortcut. When AI delivers that insight pre-packaged, students miss what makes understanding stick.
WHAT NEEDS TO CHANGE
The solution isn’t to ban AI from STEM education. That ship has sailed. Instead, we must reshape how STEM subjects are taught to acknowledge AI’s existence whilst preserving the cognitive development students need.
STEM educators could consider doing three things. First, design assessments that require students to demonstrate their thinking process, not just provide answers. Second, teach students to critically evaluate AI outputs to spot overly general or inaccurate explanations. Third, create classroom environments where productive struggle is encouraged, guiding students but not providing immediate access to AI.
Students must develop “AI literacy”: an understanding of what AI tools can and cannot do, the judgment to recognise when outputs are flawed, and the underlying STEM skills their professions will demand.
What’s at stake is more than individual careers. If we produce graduates who can provide answers but cannot evaluate them, we undermine STEM’s capacity to serve society. Do we want engineers designing bridges who can’t explain why an answer is wrong?
My student’s mechanism diagram looked perfect, until you checked whether the arrows actually made sense. That’s the skill we’re losing. And once it’s gone, we’re not preparing students for an AI-augmented future. We’re just teaching them to be dependent.
Kelvin Ang is Founder and Principal Tutor of The Chemistry Practice.