SINGAPORE: From the popular boy band The Jonas Brothers to celebrity chef Gordon Ramsay, users of FaceApp around the world have been having a blast sharing their AI-altered photos on the Internet.
The photo-filtering app, which has been around since 2017, only recently became the latest social media craze as photos run through its filters, which transform one’s look to appear younger or older, went viral.
It was all good fun until users and experts began to notice the red flags.
IT’S NOT JUST FACEAPP THAT YOU SHOULD BE WARY OF
Last month, Prime Minister Lee Hsien Loong issued a note of caution urging Singaporeans to “be careful with what apps [they] download and use”. One point he alluded to is the potential criminal use of users’ personal data.
In this incident, that data refers to images of users’ faces.
Photo sharing has become second nature to many people with the rise of social media. But as more solutions leverage biometric data – which includes our facial features – for authorisation and authentication, careless sharing can put people at risk of identity-based attacks.
The reality is that some digital platforms expose users to more risks than others do. When users agree to an app’s terms of service, they may unwittingly give the app permission to use their photos for other purposes.
For instance, clauses in FaceApp’s terms and conditions note that the app reserves the right to “use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display” users’ photos and personal information “in all media formats and channels now known or later developed” without having to compensate or notify users.
This means that users’ identities could be used in various works without their knowledge, and even altered, exploited and reproduced to depict something that never actually existed or occurred.
It may seem far-fetched, but with access to high-resolution pictures of users’ faces, cybercriminals can easily create deepfakes – convincing synthetic images that are hard to distinguish from genuine ones – from just a few photos, or even a single one.
If these images and data fall into the wrong hands, bad actors could impersonate just about anyone.
A THREAT TO BIOMETRIC AUTHENTICATION
This poses an obvious threat to Singapore, which has announced plans to adopt a centralised biometric identification system as part of its National Digital Identity programme.
A key security issue associated with biometric authentication is susceptibility to spoofing – the act of posing as valid users to gain access to critical networks. Scenarios in which fraudsters successfully spoof verification measures using a doctored photo or video are now entirely possible with the rise of deepfake technology.
This is a pressing cause for concern as Singapore looks to roll out its centralised biometric platform to sectors that will include banking and finance.
As such, Internet users need to be more discerning with what they share, and whom they share it with.
Terms and Conditions are a good place to start when using social media and photo-sharing apps.
While more established platforms such as Instagram and Snapchat have more upfront and transparent terms of service, this may not necessarily be the case for lesser-known apps.
Going through the Terms and Conditions can be a hassle, but it is a good first step towards knowing whether one’s data is in safe hands.
Users can also protect their digital identities by opting for online services with more stringent authentication and verification processes, especially as biometric login quickly gains traction across both the public and private sectors.
You may want to set up two-factor authentication, for instance by adding fingerprint verification to the apps you use. Additional checks like requiring a passcode might seem cumbersome, but they provide one more layer of assurance.
Companies, on the other hand, can help prevent sham logins by implementing better security components such as “liveness detection” – technology that allows service providers to verify that a user is physically present.
Liveness detection may require users to take a selfie of themselves holding a handwritten note, often containing a custom phrase and the current date, to ensure that a real person is performing the transaction.
Users may also be prompted to perform other actions that cannot easily be replicated with a spoof: the technology captures the user’s patterns of movement, tracks their eye movements or flashes lights on their face to confirm that the user is a live, three-dimensional person.
Already, Singapore’s Info-communications Media Development Authority (IMDA) earlier this year listed liveness detection as one of the necessary measures in its new cybersecurity guide for telecom companies.
This shows that regulators in Singapore are taking a more stringent approach towards identity verification and authentication, and we can expect other critical sectors to follow suit.
Proving liveness, particularly during enrolment, establishes the chain of trust. It anchors the digital identity of a real person and strengthens the entire trust chain, especially if the biometric data is stored centrally.
As Singapore continues to extend the national biometric ID system to more sectors, the “trust but verify” approach will serve as the foundation for future efforts to protect citizens’ online identities.
IMDA’s recognition and inclusion of liveness detection in its guidelines is a great first step in that direction, but it must be complemented by better cyber hygiene from Internet users around how image data is captured, stored, shared, and used.
Frederic Ho is APAC Vice President at Jumio Corporation.