Winner of AI contest aimed at fighting fake media wants to make deepfake detection available on ByteDance platform
SINGAPORE: The winner of a competition designed to find artificial intelligence (AI) solutions to combat fake media aims to make deepfake detection available on Chinese technology company ByteDance’s platform, said AI Singapore (AISG) in a media release on Friday (Apr 29).
The competition, the Trusted Media Challenge, which began on Jul 15 last year, was a five-month initiative run by AISG, a national AI programme launched by the National Research Foundation.
“The competitors took part in the challenge in order to raise public awareness about how media can be manipulated, and how to detect media that has been manipulated,” said AISG.
“This will allow people to identify and stop the spread of unethical and malicious AI applications and instead view media that has been authenticated,” it added.
The winning project was led by Wang Weimin, a Singaporean working as a research scientist at ByteDance, the parent company of video app TikTok.
Mr Wang, a National University of Singapore graduate, said that he was motivated to participate in the competition as the “prevailing challenge in the media landscape” matched his own research interests.
“Good or evil, deepfake is an emerging tech you simply can’t ignore,” he said.
AISG said that Mr Wang was working towards incorporating his AI model into ByteDance’s BytePlus platform to make deepfake detection available to users.
The second prize went to a team comprising Swiss software engineer Peter Grönquist and Chinese PhD student Ren Yufan. The duo is looking to collaborate with Singaporean companies to develop authentication and certification interfaces, said AISG.
In third place was a team led by Li Tianlin, a PhD student at Nanyang Technological University's (NTU) Cyber Security Lab, together with four others from NTU, Singapore Management University and Japan's Kyushu University. The team is looking to develop its AI model further through its start-up, VAISION.
The top three winners will receive prize money and start-up grants amounting to S$700,000.
BATTLE AGAINST MISINFORMATION
The prevalence of deepfakes illustrates how technology is being used not only to spread misinformation but also to produce it, a point made by Minister of State for Communications and Information Tan Kiat How.
Speaking at the competition's prize-giving ceremony on Friday, Mr Tan cited the example of a doctored video that featured US House Speaker Nancy Pelosi slurring her speech and appearing intoxicated. Millions viewed the video before platforms took it down.
"In the wrong hands, such technology can be a potent tool to produce misinformation," said Mr Tan.
"This is why the Trusted Media Challenge is a timely one. Technology is not just part of the problem, but it can also be part of the solution."
WINNERS CHOSEN FROM 470 TEAMS
About 470 teams from across the Asia Pacific region, North America, Europe, Africa and Oceania took part in the competition, which challenged AI enthusiasts and researchers to build AI models that can reliably detect manipulated audiovisual media.
The teams were required to design and test their solutions against short video clips in which the video, the audio, or both may have been modified.
At the end of the first phase of the competition, 20 teams qualified for Phase 2, which ended in December last year.
Submissions from the qualifying teams were evaluated and scored on a leaderboard, said AISG. The three models that achieved the highest scores were then reviewed by technical experts in the field.
The winners’ AI models were trained and tested using a dataset that included video clips from Mediacorp’s CNA and Singapore Press Holdings’ The Straits Times, said AISG, adding that custom videos were also collected from actors.
“AI Singapore is optimistic that the winning solutions will spur the translation and commercialisation of media-focused AI innovations to industry and make Singapore the home of pioneering AI products and services,” it said.