As ChatGPT takes the world by storm, professionals call for regulations and defences against cybercrime
Sophisticated phishing emails, malicious code, and hacking programmes are among the concerns of cybersecurity experts as the popularity of chatbots grows.
SINGAPORE: The viral artificial intelligence chatbot ChatGPT is finding its way into more homes and offices across the world.
With the ability to answer questions and pump out content within seconds, this generation of chatbots can help users search, explain, write, and create just about anything.
However, experts warn that the increasing use of such AI-powered technology comes with risks, and could facilitate the work of scammers and cybercrime syndicates.
Cybersecurity professionals are calling for regulatory frameworks and user vigilance to prevent individuals from falling victim.
THE RISE OF AI CHATBOTS
ChatGPT’s advantage is the “convenient, direct, and quick solutions” that it generates, said Mr Lester Ho, a user of the chatbot.
The seemingly curated content for each individual is one reason why some users now prefer to use ChatGPT as a search tool over traditional search engines like Google or Bing.
“Google’s downside is that users have to click on different links to find out what is suitable for them. Compare that to ChatGPT, where users are given very quick responses, with one answer given at a time,” he said.
Another draw is the chatbot’s ability to consolidate research into layman’s terms, making it easier for users to digest information, said Mr Tony Jarvis, director of enterprise security at cyber defence technology firm Darktrace.
Complicated topics such as legal matters, for instance, can be summarised and paraphrased into simple terms.
Its knack for writing complete essays has proven a hit with students seeking help with homework, sending education institutions scrambling to combat misuse, with some outright banning the bot.
Businesses have also flocked to the chatbot, drawn by its content creation and language processing capabilities that can allow firms to save manpower, time and money.
“This is definitely revolutionary technology. I believe sooner or later everybody will use it,” said Dr Alfred Ang, managing director of training provider Tertiary Infotech.
“Powerful chatbots will continue to emerge this year and the next few years,” added Dr Ang, whose firm uses AI to generate content for its website, write social media posts, and script marketing videos.
PHISHING, HACKING FEARS
Developed by San Francisco-based OpenAI, ChatGPT set a record for the fastest-growing user base, with an estimated 100 million monthly active users in January this year.
The accessibility, popularity and extensive use of such chatbots are raising concerns among cybersecurity experts over ethics and security.
“Some people might use ChatGPT to create phishing emails. Others might use it to generate malicious codes or for hacking,” said Dr Ang.
Phishing emails are not new – the current technique duplicates a single template millions of times and mass-sends it to users. Because the templates are fixed, such emails are much easier for email systems to detect and flag to users, or to automatically block or divert to the spam folder.
“But with AI-generated messages, it would be a lot more difficult to detect because the content looks like human written messages, and every email can look different,” said Professor Lam Kwok Yan from the Nanyang Technological University’s school of computer science and engineering.
These scam emails can now be generated faster than ever with such chatbots, written in convincing styles and free from the spelling or grammatical errors so often seen in today’s phishing emails.
“With this, the chances of phishing emails penetrating into individuals’ mailboxes and being mistakenly clicked on and led to malicious websites (or downloads) will be higher,” Prof Lam said.
Apart from phishing, experts warn that the technology can also be used to write malware that attacks computer systems.
GOVERNANCE AND REGULATIONS
Tech giants including Google, Microsoft and Baidu are all jumping on the bandwagon, with similar products and plans to advance them amid a chat engine race.
With cybercrimes expected to rise along with the use of AI chatbots, experts are calling on authorities to look into initiatives to defend against threats and safeguard users.
“To mitigate all these problems, (regulatory bodies) should set up some kind of ethical or governance framework, and also improve our Personal Data Protection Act (PDPA) or strengthen cybersecurity,” Dr Ang said.
“Governance and digital trust for the use of AI will have to be investigated so that we know how to prevent abuse or malicious use,” added Prof Lam, who is also a GovWare Programme advisory board member.
Authorities said the number of phishing scams last year jumped by more than 41 per cent from the year before.
Apart from governments and regulators racing to implement security measures, users also have a responsibility to keep up with technology news and skills to keep themselves safe, experts said.
“As more people use ChatGPT and provide data for it, we definitely should expect (the bot) to further improve. As end-users, we need to be more cautious. Cyber hygiene will be even more important than ever,” said Prof Lam.
“In the coming years, chatbots are almost certainly going to become more human-like, and it's going to be less obvious that we're talking to one.”