

Microsoft, Google and xAI to give US government early access to AI models for security checks

A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, Jan 9, 2025. (Photo: REUTERS/Gonzalo Fuentes)

05 May 2026 08:00PM (Updated: 06 May 2026 04:40AM)

WASHINGTON: Microsoft, Google and Elon Musk’s xAI agreed to give the US government early access to new artificial intelligence models for national security testing, as US officials grow alarmed by the hacking capabilities of Anthropic’s newly unveiled Mythos.

The Centre for AI Standards and Innovation at the Department of Commerce said on Tuesday (May 5) that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks. The agreement fulfils a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for “national security risks.”

Microsoft will work with US government scientists to test AI systems “in ways that probe unexpected behaviours,” the company said in a statement, adding that the two sides will develop shared datasets and workflows for testing its models. Microsoft signed a similar agreement with the UK’s AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.

Advanced AI systems, including Anthropic's Mythos, have in recent weeks created a stir globally, among US officials and corporate America alike, over their ability to supercharge hackers.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said in a statement.

The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute. Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.

CAISI, which serves as the government's main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public. 

Developers frequently hand over versions of their models with safety guardrails stripped back so the centre can probe for national security risks, the agency said.

xAI did not immediately respond to a request for comment. Google declined to comment.

Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks as it seeks to broaden the range of AI providers working across the military.

The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military's use of its AI tools. 

Source: Reuters/fs