More SMEs falling prey to scams even as many of them turn to tech like AI for solutions

One firm was recently hit by a phishing attack in which its email system was breached and an attacker, posing as an employee, asked a customer to transfer money to another bank account.

CNA's Marcus Tan (on left of table) sitting through a simulation of a deepfake Zoom call with CEO of Straits Interactive Kevin Shepherdson.

SINGAPORE: A majority of small and medium enterprises (SMEs) have turned to artificial intelligence (AI) to improve their operations, but may not be doing enough to ensure their security.

A recent survey of business owners by the Association of SMEs (ASME) found that 90 per cent of respondents had tried implementing some form of AI solution. Some areas they are using such technology for include logistics and routing, said ASME president Ang Yuit.

At the same time, the association has seen a year-on-year increase of about 20 per cent in the number of companies reporting falling prey to scams.

“Many businesses are concerned about whether they can be hacked, and whether they would lose money financially, especially when a breach happens,” he said.

Their concerns are not linked only to AI, but to other breaches such as phishing and fraud, he added. Despite their fears, many are willing to take a “wait and see” approach towards improving their security, he said.

A member firm was recently hit by a phishing attack in which its email system was breached and an attacker, posing as an employee, asked a customer to transfer money to another bank account, he shared.

“More and more businesses are attacked with variations of that. So you can imagine with the advent of even better phishing fraud, using deepfakes and using AI, that will get even worse,” he said.

“The challenge is that many businesses may not have the bandwidth, especially when you're talking about SMEs, to really properly deal with it,” he added. Most businesses rely on financial institutions like banks to be their vanguard against scams, he said.

THE PROBLEM OF DEEPFAKES

Among the challenges is identifying deepfakes.

In February this year, a finance worker at a multinational firm in the Asia Pacific region was tricked into paying US$25 million (S$34 million) to scammers, the Cyber Security Agency of Singapore (CSA) said in an advisory on Friday (Mar 22).

In the incident, the scammers had used deepfake technology to pose as the company’s chief financial officer in a video conference call, to convince the worker to make the fraudulent wire transfer.

CSA's main advice to companies is to take three steps to avoid such traps: assess the message, analyse the audio and visuals, and authenticate the content.

Founder and CEO of Straits Interactive, Mr Kevin Shepherdson, whose firm specialises in data protection, showed CNA how easily deepfakes can be created.

Using a set-up that can be run on a personal desktop computer, he replicated the scam incident, with “him” instructing an employee to transfer funds, complete with a slight lag and bad signal for authenticity.

LACK OF DATA PROTECTION

Mr Shepherdson said that the biggest concern with the use of AI in a corporate context is that firms are using the free version of the AI tool ChatGPT.

“That is something that you shouldn't do, given the fact that a lot of these companies are using your personal data as part of monetising your personal data,” he said.

“So what's happening is information that belongs to your company is being used to train the AI and on top of that … you want to summarise the document, you could be actually uploading a very confidential document.”

There is also the danger posed by the hundreds of available AI apps that claim uploaded data will be protected when in fact it is not, he added.

Member of Parliament Patrick Tay, who sits on the Government Parliamentary Committee for Home Affairs, said that he is concerned by the state of affairs, given that no organisation or country will be spared.

“The knowledge acquisition of the limitations, the ethics and the use, and the policies of generative AI is important. And it should percolate down, from not just the employers and businesses but also to the worker, because we are only as strong as the weakest link,” he said.

Source: CNA/ja(fk)
