CNA Explains: What is frontier AI and how Anthropic's Mythos is reshaping cybersecurity
Anthropic’s Mythos AI model has demonstrated advanced capabilities in coding and system analysis, raising fears over how such tools could be misused.
Illustration of a person using a laptop. (Illustration: iStock)
The debut of advanced artificial intelligence models such as Anthropic’s Claude Mythos is sharpening global concerns over a new class of technology known as frontier AI - and the security risks it could pose.
These systems, the most advanced AI models available today, can reason through complex problems and detect vulnerabilities at a speed and scale far beyond human capability.
While that could strengthen cybersecurity, it also introduces new threats.
Mythos in particular has drawn attention for its capabilities in coding and system analysis, fuelling fears that such tools could be misused by cybercriminals to exploit weaknesses in critical systems.
As a result, regulators and policymakers are reassessing whether existing safeguards are sufficient.
Here’s what to know about frontier AI - and how it is changing the risk landscape.
What is frontier AI?
Frontier AI refers to the most capable and cutting-edge AI systems available today.
Think of it as J.A.R.V.I.S. from Iron Man - an assistant that can hold a conversation, reason through problems and act on a user's behalf, said Zscaler's Santanu Dutt, the cybersecurity firm's vice president of solutions consulting and head of technology for Asia Pacific and Japan.
What was once fictional is now widely available to anyone with an internet connection, he noted.
These models are particularly powerful in areas such as cybersecurity and software analysis. They can scan vast amounts of code, identify weaknesses and even suggest ways to exploit or fix them - in hours rather than months.
But this capability is a double-edged sword.
While organisations can use frontier AI to fix bugs quickly, cybercriminals can also misuse the same system to exploit vulnerabilities at an unprecedented speed.
Besides Mythos, other frontier AI models include OpenAI’s GPT-5, Google’s Gemini and xAI's Grok.
What are the risks surrounding frontier AI?
The concern is not just what frontier AI can do, but how quickly and widely it can do it.
Experts told CNA that these models dramatically lower the barrier to entry for sophisticated cyberattacks. Tasks that once required specialised skills - such as writing malware or crafting phishing campaigns - can now be done with simple prompts.
In terms of speed, vulnerabilities that once took weeks or months to uncover can now be identified and exploited in minutes, said Muthukumar Natarajan, field chief technology officer at cybersecurity firm TrendAI.
In terms of scale, one person with a frontier AI model can now match what previously required a team of attackers, noted Mr Dutt.
“The barrier to launching sophisticated attacks has effectively collapsed,” he added.
This creates a growing imbalance between attackers and defenders, said Ian Lim, chief technology officer of Asia Pacific, Japan and Greater China at Cisco Customer Experience.
“Attackers only have to find one gap while defenders have to remediate every vulnerability,” he explained.
“With frontier models significantly shrinking the time-to-exploit, organisations can no longer assume patches will arrive before exploitation occurs.”
Mr Dutt warned that frontier AI has changed the rules of the game. “Systems and instincts built for the old pace have not caught up,” he said.
Why are these risks a growing concern?
The implications are particularly serious at a national level, experts said.
Critical infrastructure, such as banks, telecommunications networks, healthcare systems and energy grids, relies on complex interconnected digital systems.
Many of these depend on layers of third-party and open-source software, which may contain hidden vulnerabilities.
Sophisticated state actors could leverage frontier models to launch automated and frequent attacks on critical infrastructure, Mr Lim said.
Frontier AI also allows threat actors to scan for weaknesses at scale, said Dmitry Volkov, chief executive and co-founder of cybersecurity firm Group-IB.
"Every country has many organisations that are part of its critical infrastructure," he said. “These organisations also rely on long lists of software products, many of which may contain vulnerabilities.”
An attack on a single sector could therefore ripple across others.
In Singapore, for instance, banks, hospitals, power grids, transport networks and telcos are deeply interconnected, Mr Dutt noted, which means a potential AI-accelerated attack on any one of them would not just be an IT problem.
"For businesses, the equivalent is stark - one careless moment with an AI tool can expose years of trade secrets or an entire customer database in seconds," he said.
Which sectors are most at risk?
Since cyberattacks are typically driven by profit, data exfiltration and disruption, Mr Lim said industries that fit these target profiles will be most at risk.
These include companies that process financial transactions, handle sensitive data and manage critical infrastructure, he added.
“Supply chain attacks should also be considered, as downstream organisations may not have the budget or expertise to secure their environments as rigorously as regulated industries,” Mr Lim said.
Globally, the finance sector has been thrust into the spotlight as authorities scrutinise its ability to combat risks posed by powerful AI models.
“Financial services and banking are highly exposed because they are digital, interconnected, and attractive to both criminals and nation-state actors,” said Mr Volkov.
“AI could accelerate fraud, phishing and attacks on banking applications.”
Are organisations keeping up?
Many organisations are not keeping pace with the rapid evolution of frontier AI, said Mr Natarajan.
He pointed out that cybersecurity today remains largely reactive, with companies responding to incidents rather than anticipating them. Policies governing AI use are also often underdeveloped or absent.
“We’re actually fixing the wheels while we’re driving,” Mr Natarajan said. “There’s a huge gap … I won’t say anybody is prepared for (attacks)."
While some banks in Asia are tightening checks on their AI tools, not everyone is moving fast enough.
Australia’s financial system regulator has warned that the country’s banks and their information security practices are struggling to match the rate of change in AI.
Meanwhile, global financial institutions are adopting AI at more than twice the rate of regulators, according to research published on Apr 28 by the Cambridge Centre for Alternative Finance.
In the United States, the government announced on Tuesday that it would have access to tech giants’ new AI models to evaluate them before they are released - a sign of growing concern over their potential impact.
What can be done?
The risks posed by frontier AI cannot be eliminated - but they can be managed.
At an organisational level, experts point to several priorities: understanding risk exposure, monitoring how AI tools are used, setting clear guidelines on what data can and cannot be shared with AI systems, and investing in AI-driven security.
At the national level, stronger coordination is needed.
Mr Natarajan suggested an international governance framework, with governments working together to develop policies and procedures around the use of frontier AI.
“This is not just one country’s problem,” he said. “If they end up in the wrong hands, it’s going to affect everybody.”
Regular checks and governance policies should be established before even bringing AI into critical infrastructure, Mr Natarajan added.
Mr Dutt warned that cybersecurity in the age of frontier AI is no longer a back-office IT concern.
“It is a leadership issue,” he said. “Because the consequences land on customers, citizens, and the country.”
Want an issue or topic explained? Email us at digitalnews [at] mediacorp.com.sg. Your question might become a story on our site.