ChatGPT can write essays and answer questions, but can AI take over humans?
A chatbot like ChatGPT is the “perfect tool” for people who want to spread misinformation, said an expert.
SINGAPORE: Chatbot ChatGPT can write essays and answer the toughest of questions, but such artificial intelligence (AI) tools may not be able to take over humans just yet, said an expert in the field on Tuesday (Dec 20).
“It certainly can write business letters. It's even written a film script, and answers exam questions. There's lots of things it can do but … it also doesn't really understand completely what it's talking about,” leading researcher Toby Walsh told CNA’s Asia First.
While the AI may get it right most of the time, it can also be completely wrong.
“At the end of the day, it's not really understanding like you and I understand what it's saying. It’s just saying things that are probable,” he said.
“I'm not too worried about the machines taking over. They don’t have any sentience,” he said. He added that machines do not have consciousness or desire to do what humans do.
What does worry him is that people may become “a bit too lazy” and let the tools do their work for them, said the Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney. “Scientia Professor” is a title given in recognition of outstanding research performance.
“It's not that the robots are going to be malevolent and decide to take over the planet. It tends to be much more subtly insidious things,” he said.
“It's that we give responsibility to machines that aren't capable enough.”
San Francisco-based research and development firm OpenAI made its latest creation, the ChatGPT chatbot, available for free public testing on Nov 30. Within a week of its unveiling, more than a million users were said to have tried chatting with the tool.
CHATBOT’S ROLE IN EDUCATION
Among the consequences, Prof Walsh said, may be a need to transform the way students are taught in schools.
“Are we going to have to stop people setting exam questions where we ask people to write essays because they can just ask ChatGPT to write them? So, how do we actually then teach people to write properly if we can't actually ask them exam questions anymore?” he asked.
Not having to learn skills like writing essays could mean people will be less intelligent in future, he said.
However, Mr Jonathan Sim, an instructor with the Department of Philosophy at the National University of Singapore, told CNA938’s Asia First on Wednesday that educators should not treat AI tools, including ChatGPT, as taboo.
“This is a place of learning, so we should actually teach them how to use it well, how to really take their learning further with it,” he said.
For his part, he has prepared an exercise for his students that involves the use of ChatGPT. They will have to use the chatbot to generate an essay and then critique it, he said.
Mr Sim said he has been testing ChatGPT, and that he would give the essays it writes a B grade at best.
“It’s actually a very good learning opportunity to get students to sit down, learn how to generate it and then ask why is this not an ‘A’ essay,” he said, adding that they would learn how to write better through this exercise.
Another issue with such a chatbot is that it is the “perfect tool” for people who want to spread misinformation, said Prof Walsh, a Laureate Fellow. He noted that social media is already rife with fake news and ChatGPT will not help the situation.
“Here, we've got a tool that at speed and scale and at very limited cost can produce very plausible text that we are much more likely to click on than the emails that we're used to getting from … scammers,” he said.
“Now we can actually personalise those emails to any information that we can glean (about) you from the web.”
In the wrong hands, it is a “potentially quite dangerous tool”, he cautioned.
Prof Walsh, who wrote a book called “Machines Behaving Badly: The Morality of AI”, added that there is not enough knowledge about how ChatGPT’s technology works.
“For example, we believe that the million people using it now are actually helping to improve it. But we're not exactly sure how people's queries are being used to improve the output, and get rid of some of those troublesome ways it makes stuff up,” he said.
NEW WAY OF ACCESSING INFO
ChatGPT gives a glimpse into how the future could look, Prof Walsh said.
He said, for instance, he thought AltaVista, one of the earliest search engines, would be all that he ever needed, but Google came along and gave him a better way to access information.
Hence, a tool like ChatGPT could become the next phase of web search, he said.
“Ultimately, rather than having to follow links and look things up yourself, if the search engine can actually answer the questions for you and we can deal with this fundamental issue of it making stuff up, then I suspect that's going to be my favourite destination, not Google anymore,” he said.
With technology being developed at breakneck speed, regulation does tend to lag, Prof Walsh said.
“We're just starting to see social media being adequately regulated today,” he said.
He noted that new stories of lawsuits being brought against tech companies emerge constantly. There is a need to worry about whether these new developments will also lead to harm, on social media or elsewhere, said Prof Walsh.
“We do have to move forward quicker and faster with the regulation because we are discovering that these are just like any other business, and they become data monopolies,” he said.
“We do need to regulate those markets to ensure that all of us profit from the benefits that these technologies are going to bring.”