Best News Website or Mobile Service
WAN-IFRA Digital Media Awards Worldwide 2022




Commentary: ‘How do I prove my innocence?’ Casting students as would-be cheaters eager to exploit AI tools is disheartening

ChatGPT’s arrival has created the perfect storm to foster distrust between educators and students, says NUS undergraduate student Lim Le Ming.

File Photo. ChatGPT’s arrival should be seen as an opportunity to redesign outdated learning and assessment styles. (Photo: iStock/BongkarnThanyakij)

SINGAPORE: Imagine this - you’re a student, frantically typing away in the middle of the night, trying to wrap up that final essay assignment. Suddenly, your group chat erupts with news of a miraculous artificial intelligence (AI) tool capable of producing persuasive academic essays.

Naturally, you’d be sceptical. But fast forward to today, and generative AI tools like ChatGPT have not only met but surpassed our wildest expectations, taking the world by storm with their possible applications and uses for productivity.

AI tools like ChatGPT seem capable of answering almost any question. ChatGPT is reportedly smart enough to pass graduate-level examinations in the United States. In Singapore, the National Institute of Education has started offering AI literacy courses to teachers, to help them understand the potential and limitations of its uses in education.

There has also been much concern around it. Some of the largest school districts in the US have banned ChatGPT (although New York City public schools rescinded their ban last week). Public discourse is also filled with educators either demanding tools to detect AI cheating or scrambling to devise “AI-resistant” assignments.

Amid these robust discussions among stakeholders, where is the voice of the students? Shouldn't our perspectives be considered, seeing as we will be most affected by decisions that are made surrounding this technology?


ChatGPT’s arrival has created the perfect storm to foster distrust between educators and students. It is easy to use without much technical knowledge - all one needs is a device with an Internet connection. And there are no reliable tools to verify and authenticate whether the work was done by a human or AI. 

In New Zealand, two students at different high schools claimed earlier this month that they were wrongly accused of using AI to cheat. In another incident, a professor in Texas failed an entire class after ChatGPT claimed it did their assignments. 

As a student, I find myself disheartened by the pervasive narrative of suspicion that dominates the news and social media, casting students as would-be cheaters eager to exploit AI tools. Yet there is little nuance on how we, students, might possibly use AI tools as a learning aid or as an assistant, without any intention of cheating.

I can’t help but wonder: Do my own educators share this mistrust?

While some educators have spoken up in support of AI tools, others have remained silent. This silence makes us uneasy as we do not know where they stand on the matter. 

Moreover, the stress of knowing that anyone could effortlessly use ChatGPT to complete their assignments raises the stakes for students who, like me, want to uphold academic integrity.

The rise of AI detection tools, like GPTZero, compounds our anxieties. I tested my own original work and, while it was not flagged as AI-written, the conclusion that it was “likely written by a human” offers little reassurance, especially after hearing stories of other students’ original work being wrongly flagged as containing “parts written by AI”.

This leaves me worried about being flagged as a false positive in the future - it feels less like a question of “if it happens”, and more of “when it happens”. When that day comes, how do I prove my innocence?

Without definitive ways to prove our innocence, we students are left vulnerable, unable to clear our names beyond a reasonable doubt.

All these contribute to the erosion of trust between students and educators. The consequences are far-reaching, especially in the context of universities, where collaboration between students and educators is invaluable in advancing innovative ideas in the spirit of problem-solving.

If students are worried about being falsely flagged by an “almost-good-enough” AI detector, and if educators suspect AI involvement in their students’ work, what then becomes of this collaborative spirit?


To ensure the continuity of trust in the partnership between students and educators, a good first step is for educators to acknowledge the limitations of current AI detection tools.

Students could be flagged as false positives by these tools, and educators need to handle such cases delicately in discerning whether the student is innocent, to avoid rupturing the trust between them.

This, however, is but a stopgap measure. 

As a student, I propose a more proactive approach that nurtures responsibility and curbs the temptation to cheat with AI. The solution? Moving away from traditional, static assignments designed to evaluate student performance to self-directed research projects guided by educators.

This approach empowers students to delve into their interests, igniting their curiosity and transforming their learning into an authentic intellectual journey. Self-directed projects require students to first deepen their understanding in order to develop their own question. 

This means that there is no question that can be readily fed into ChatGPT for answers - one has to create one’s own research question. And when coupled with a topic of interest, students will more readily use AI tools to enhance their knowledge of the topic rather than as a shortcut to answers.

Educators also stand to benefit from this shift. It fosters a spirit of collaboration, sparking a dynamic exchange of ideas: Students receive guidance and affirmation, while educators gain fresh perspectives that expand their horizons, with the possibility of sparking new discoveries.

It is my firm belief that by emphasising a formative rather than evaluative approach, we can safeguard academic integrity and alleviate the unhealthy competition for good grades. By doing so, students will be able to rediscover the joys of learning and nurture trust with our educators. 


In the face of AI’s undeniable influence, society must embrace the reality that this technology is here to stay. 

Rather than shunning its presence in the classroom, would it not be better to view the challenges AI presents as an opportunity - a chance to redesign outdated learning and assessment styles, whilst preparing us students with the necessary AI competencies to have a competitive advantage in the future?

By harnessing AI’s potential as a positive disruptor, we can forge a new path, focusing on the values that will shape future generations. 

It is my hope to see thriving collaborations between students and educators to nurture trust and that spirit of intellectual curiosity. 

It’s time to prioritise these collaborations in learning.

Lim Le Ming is an undergraduate student at the Faculty of Arts and Social Sciences, National University of Singapore.


Source: CNA/fl
