AI fuelling more sophisticated phishing attempts, cyberattacks
Malicious actors are likely to benefit as AI continues to improve and be adopted, said the Cyber Security Agency.
SINGAPORE: The rise of artificial intelligence (AI) is a trend to watch, with malicious actors likely to benefit as the technology continues to improve and be adopted, said the Cyber Security Agency (CSA) on Tuesday (Jul 30).
AI is used to enhance various aspects of cyberattacks, including social engineering or reconnaissance, the agency noted.
“This is likely to increase, driven by the ever-growing stores of data, which can be used to train AI models for higher quality results.”
The agency's Singapore Cyber Landscape 2023 report published on Tuesday found that malicious actors are using generative AI for deepfake scams, bypassing biometric authentication and detecting vulnerabilities in software.
Deepfakes are created using AI techniques to alter or manipulate visual and audio content.
Malicious actors have used deepfake calls, videos and photos for commercial or political reasons.
This year, several Members of Parliament received extortion letters with manipulated images in which their faces were superimposed onto obscene photos of a man and a woman.
And just last month, Senior Minister Lee Hsien Loong warned people about deepfake videos of him commenting on international relations and foreign leaders.
What is the difference between traditional and generative AI?
Traditional AI usually performs specific tasks intelligently, based on a particular set of data.
While these systems are able to learn and make decisions on fixed inputs, they cannot create anything new. They analyse and predict outcomes based on existing data.
Voice assistants such as Siri and recommendation engines on Netflix and other streaming services are examples of traditional AI.
Generative AI, however, creates something new.
With the introduction of generative adversarial networks - or GANs - a type of machine learning algorithm, generative AI can create new images, videos, and audio.
ChatGPT, OpenAI's viral chatbot, is an example of generative AI.
According to Forbes, the main difference between traditional and generative AI lies in their capabilities and application.
"Traditional AI can analyse data and tell you what it sees, but generative AI can use that same data to create something entirely new."
NEW DIMENSION TO CYBER THREATS
AI has also allowed malicious actors to scale up their operations, reported CSA.
The agency and its partners analysed a sample of phishing emails observed in 2023, with about 13 per cent found to contain AI-generated content.
These emails “were grammatically better and had better sentence structure”, said CSA.
AI-generated or AI-assisted phishing emails also had “better flow and reasoning, intended to reduce logic gaps and enhance legitimacy”.
It added that AI’s ability to adapt to any tone allowed malicious actors to exploit a wide range of emotions in their victims.
The technology has also been used to scrape social media profiles and websites for personal identification information that can be used by malicious actors. This allows them to increase the speed and scale of their attacks.
CSA warned that malicious actors could also become unintended beneficiaries of legitimate research into how generative AI is used negatively.
These actors could recreate and operationalise research findings, incorporating them into their cyberattacks, said the agency.
“The use of generative AI has brought a new dimension to cyber threats,” said Mr David Koh, commissioner of cybersecurity and chief executive of CSA.
“As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it.”
Individuals and organisations need to learn how to detect and respond to malicious uses of generative AI, said CSA.
Users can discern if the content they are viewing is a deepfake by assessing its message, analysing its audio-visual elements and using tools to authenticate its content, it said.
DECREASE IN PHISHING SCAMS IN 2023
According to CSA’s report, Singapore saw a 52 per cent decline in phishing attempts in 2023 compared with the year before. The drop bucked a global trend of sharp increases.
However, the total number of phishing attempts in 2023 was around 30 per cent higher than in 2021.
CSA warned that phishing attacks continue to be a major threat to organisations and individuals, especially as threat actors improve the sophistication of their cyberattacks.
The agency observed that cybercriminals were making their attempts appear more legitimate and authentic.
For example, more than a third of reported phishing attempts in 2023 used the more credible-looking domain “.com” instead of “.xyz”, an increase of about 20 per cent from 2022.
More than half of the phishing URLs reported also used the more secure "HTTPS protocol", a significant increase from the 9 per cent that did so in 2022, said CSA.
The most spoofed industries in 2023 were banking and financial services, government, and technology.
Sixty-three per cent of the organisations imitated in phishing attempts were from the banking and financial services sector.
“This industry is often masqueraded, as banking and financial institutions are trusted organisations which hold significant amounts of sensitive and valuable information, such as personal details and login credentials,” said CSA.
The agency’s report also found that cybersecurity vendors reported a record number of ransomware victims globally in 2023.
In Singapore, there were 132 cases, the same as in 2022.
The agency said ransomware groups shifted towards exfiltration-only data extortion attacks, which are faster and stealthier. This method refers to the unauthorised transfer of data without any encryption of files or systems.
Ransomware groups also shifted towards applying additional pressure tactics, such as harassing clients of victim organisations to compel the latter to pay the ransom, said CSA.
The manufacturing and construction industry remained the hardest hit by ransomware incidents in 2023.