Ground Up

‘Almost every instructor is doing it’: How university professors are using AI, and why students are concerned

Students who spoke to CNA TODAY said that they accept the use of artificial intelligence tools for course preparation if professors are transparent, but oppose using it for grading.

In conversations with 10 university professors at several institutions of higher learning in Singapore, CNA TODAY found that most of them used artificial intelligence for various purposes. (Illustration: CNA/Samuel Woo)

There was one problem with the reading material assigned to Ms Pearl (not her real name) for her English course assignment at the National University of Singapore (NUS): It did not exist.

The 21-year-old was one of 14 literature students at NUS enrolled in EN3254 Worldly Words: Written Image and Visual Text this year.

Over the Chinese New Year break in February this year, the professor had given the class a reading assignment, but when Ms Pearl looked up her assigned scholarly book chapter, she could not find the chapter online or at the university library. 

When she asked around, her course mates said they could not track down their assigned texts either.

She declined to be identified because she is still a student in the same department as the professor.

CNA TODAY has spoken to three students from the course and seen the assigned reading list of six texts. Based on checks by CNA TODAY, none of the texts on the list exist.

When the students told the professor that they could not find the books and articles he had assigned, he posted an announcement on Canvas, the university's learning management platform.

It was also forwarded to students' emails and it read: "I had drawn that list from existing bibliographies and I suspect one of them was partly machine-generated."

In response to CNA TODAY's queries about the incident, NUS said: "This was an optional exercise to provide additional opportunities for practice and not for assessment, so there was no impact on grades or learning outcomes. When this issue surfaced, students were directed to consult other source materials."

Nonetheless, the incident laid bare an uncomfortable truth for students: Even as universities are erecting rules around students' use of artificial intelligence (AI) and sometimes penalising them for it, their own professors are taking advantage of the technology as well.

In conversations with 10 university professors at several institutions of higher learning in Singapore, CNA TODAY found that most used AI for various purposes, including grading student assignments, preparing course materials and conducting research.

And just as there is rising concern about how students' use of AI might affect their capacity to learn and develop critical thinking skills, students, for their part, are concerned about whether they are being shortchanged when their instructors use AI.

When asked about her professor's response that he might have drawn at least part of his reading list from a machine-generated bibliography, Ms Pearl said: "It is concerning. As students, we trust our professors to check the material before presenting it to us.

"On a larger scale, if academia has been infiltrated by AI such that there are unreliable machine-generated bibliographies, it calls into question the honesty of scholarship and validity of research."

Students on the grounds of the National University of Singapore on Jun 25, 2025 (Photo: CNA/Raj Nadarajan).

HOW PROFESSORS ARE USING AI

Dr Rebecca Tan, a lecturer in the Faculty of Arts and Social Sciences at NUS, uses AI herself and fully understands why her colleagues may be doing the same.

"You may use generative AI (GenAI) because a task is better automated. It gives us more bandwidth and frees us up to do more meaningful things," she said. "Or we are all just overwhelmed with the work we have to do, which would be a more structural issue."

She gave an example of her own AI use: "I use citation generators all the time. I just also check them as well."

Fellow academics told CNA TODAY of the various tasks they have outsourced to AI.

Mr Jonathan Sim, a lecturer in philosophy and AI at NUS, said he uses AI to brainstorm ideas to teach better, as well as to generate images that illustrate course content. The latter task, he said, is otherwise challenging because philosophical ideas are often too abstract for existing visuals to illustrate well.

Similarly, Dr Jason Lim, a lecturer in architecture at the Singapore University of Technology and Design (SUTD), said: "AI can link me to references and sources I never thought about, though I would read them and see if they're really relevant."

Separately, Dr Liu Fang, a senior lecturer from the School of Science and Technology at the Singapore University of Social Sciences (SUSS), uses AI to generate questions on topics she wants to test students on, though she always edits them before using them in class.

"The key to using AI tools is not to rely on them too much. You cannot use it directly. Take it as a supporter," Dr Liu said.

Dr Sovan Petra, a Singapore Management University (SMU) senior lecturer who teaches philosophy, uses AI to outwit AI itself. 

He "AI-proofed" his assessments by feeding exam questions into ChatGPT. He then edited his exam questions until ChatGPT could not answer them correctly.

This way, he said, it would be futile for students to try to use ChatGPT to answer his questions.

Other professors have experimented even more boldly with the technology, with mixed results.

Assistant Professor Leonard Ng, from the School of Materials Science and Engineering at Nanyang Technological University (NTU), said that in 2024, he and two colleagues launched "Prof Leodar", a chatbot to answer student questions.

This pseudo-instructor was imbued with a personality and even spoke Singlish.

Soon after deploying the chatbot in his classes, he conducted a study, which found that students felt they had learnt more. They also appreciated its round-the-clock availability and produced higher-quality work, Asst Prof Ng said.

Crucially, students reported that without tools such as Prof Leodar, they would likely have turned to inaccurate open-source ones, he added.

However, through observations covering a few cohorts of students, he found that when quizzed orally and in pen-and-paper exercises, students were unable to demonstrate the same level of understanding as past cohorts.

"We would be doing the students a disservice if we do not effectively show them how to use GenAI technologies to augment their output," Asst Prof Ng said. "However, we need to be careful lest students start to completely outsource their critical thinking to these GenAI technologies, which are, in effect, statistical algorithms."

He still uses Prof Leodar today, along with similar chatbots built on his lessons and his experience with the original bot. "Captain Thermo", for example, was a bot he created for his class on the thermodynamics of materials.

Associate Professor Ben Leong, the director of the AI Centre for Educational Technologies at NUS, has found a use case with ScholAIstic, a role-playing large language model (LLM) platform developed by the centre that simulates court scenarios so law students can practise courtroom roles.

"It brings down the cost of role-playing," he said, noting the cost of hiring actors. 

He also pointed out another advantage of role-playing with LLMs over on-the-job training: "When you screw up, you can try again."

Students from the Singapore University of Technology and Design attending a lecture on Jun 25, 2025.

CONTROVERSIAL USE OF AI FOR GRADING WORK

One question drew particularly divisive views: Should teachers use AI to grade assignments?

Some professors said AI can speed up the grading process, especially for classes with many students, since some introductory modules can have thousands of students enrolled. Even then, they all noted that there has to be human involvement in the process.

Dr Liu from SUSS, for example, uses AI as a starting point to grade essays, especially for large classes of more than 50 students. She first inputs students' writing and asks AI to summarise the key points of each essay.

Afterwards, she reads every essay in full, using the AI summaries to quickly locate each student's key points, and grades them herself.

However, some professors disagreed with any use of AI in the grading process. 

Associate Professor Seshan Ramaswami, who teaches marketing at SMU, said that AI lacks both the specific domain knowledge needed to grade papers properly and the human connection with students.

"Even while an instructor's judgments of students' performance may be subjective, I think they would prefer that to the judgments of an impersonal piece of software," he added. 

Assoc Prof Leong from NUS believes that AI might not be sufficiently reliable for grading high-stakes exams for now, but human graders are also not completely consistent.

"Based on our preliminary studies, we have not reached the level of confidence and accuracy where we will deploy it at scale without human oversight yet. 

"But what is also clear from our study is that it is very likely, within the next five years, we will have AI grading that is more accurate and reliable than human grading," he said.

“It will not be 100 per cent accurate, but it will definitely be no worse than human graders."

Even when that time comes, some professors said they see it as their responsibility to students to apply their own energy to the task, instead of outsourcing it to AI.

"(If) a student puts in a certain amount of effort to submit something, I think we need to also reciprocate with an equivalent level of effort to evaluate whatever they produce instead of leaving it to some algorithm," Dr Lim from SUTD said.

Asst Prof Ng from NTU put it another way: "You can't take humans out of the loop because it's a sacred duty." 

WHAT THE UNIVERSITIES SAY

In response to CNA TODAY's queries, all six autonomous universities in Singapore said that they viewed AI as a useful and transformative pedagogical tool that requires responsible use.

In general, the universities encourage instructors to explore AI use in their teaching and for streamlining administrative tasks, though they also stressed adherence to academic integrity and human oversight.

Improper use of AI will be handled under existing academic misconduct policies, they added.

Associate Professor Tamas Makany, the associate provost in teaching and learning innovation at the Singapore Management University (SMU), said: "The first step (to academic integrity) is transparency with students through open discussions of mutual expectations about acceptable practices."

He also said that instructors must manually check all generated materials for accuracy, fairness and alignment with learning goals, and consider legal and ethical issues such as copyright.

Associate Professor Karin Avnit, deputy director of the Teaching and Learning Academy at the Singapore Institute of Technology (SIT), said that irresponsible usage includes using AI to replace essential expert human judgment (such as for final grading), relying on unvalidated AI outputs, or failing to inform students of expectations around AI usage.

Some universities are exploring using AI-assisted grading tools within defined boundaries.

Ms Tammy Tan, chief communications officer at the Singapore University of Technology and Design, said that students use AI graders developed in-house to receive structured, rubric-based feedback on their poster submissions for the university's flagship first-year course, Design Thinking and Innovation. Final grading is still undertaken by the faculty.

The National University of Singapore said that "approval is required when AI is used to provide instructional responses, feedback or marks, whether as virtual tutors or markers". 

Its publicly available 2024 Policy for Use of AI in Teaching and Learning states that for AI marking, human oversight is mandatory except for objective or closed-ended assessments; non-compliance may be deemed academic misconduct.

Closed-ended assessments refer to those with pre-determined response options.

To address improper AI usage, the Nanyang Technological University said that it has well-established academic integrity procedures in place.

Similarly, SIT said that should any cases of improper use arise in future, such incidents would be addressed through existing academic and employee disciplinary procedures. 

Assoc Prof Makany from SMU said that instructors are told about the expectations for using AI in teaching and that should an incident arise, the university's first response will be to counsel rather than penalise them.

The Ministry of Education said that "all autonomous universities have institutional policies governing the use of AI, which are aligned to (the ministry's) position".


LESS AI, MORE TEACHING

Most students who spoke to CNA TODAY agreed strongly with Asst Prof Ng's view that it is unacceptable to remove human involvement and judgment from the grading process.

When Ms Sophia (not her real name), a 20-year-old NUS student, found out that the feedback she had received on a graded assignment had been generated with the help of AI, she said it felt like "a slap in the face".

She also felt that the many hours of work she had poured into the essay were not respected, and she was appalled by the flat and "ChatGPT-sounding" feedback provided by the teaching assistant and approved by her professors.

She did not want to be identified by her real name because she is still studying in the department under her professors.

The stringent AI regulations that apply to students at her school only exacerbated her frustration.

"Why are the same standards not being upheld for professors?" she asked.

SMU social science student Mildred Ng, 21, disapproved of professors using AI for any form of grading, a task she feels is too important to be entrusted to a machine.

"Grades are very sensitive and have high stakes for students," she said, citing examples such as financial aid, access to student housing and overseas exchanges.

She is especially sceptical that AI can properly grade essays where there is more than one right answer. In such cases, she does not trust AI's capability to assess her work fairly.

On the other hand, SMU computer science student Dang Truong, 24, said he is willing to accept AI being used for essay grading, as long as professors are upfront about it and can lay out how accuracy in the marking would be maintained. 

By and large, students who were interviewed said they were comfortable with teachers using AI to prepare course materials, though they would prefer their professors to be transparent about how they are using it.

The students also expressed a desire for their professors not to over-rely on AI in their instruction.

"I want to learn from my professor, not AI," Mr Truong said.

Ms Ng, the social science student, said that instructors' unique perspectives and knowledge are key to enhancing her learning experience.

"If I can get that knowledge through a machine that's widely available, then there is no point in going to school."

Additional reporting by Alice Le.

Source: CNA/yy