Real AI race is about solving humanity’s biggest challenges, says Google DeepMind’s COO
While some investors have questioned whether Google’s research-driven approach has caused it to lag behind competitors like OpenAI, DeepMind’s Lila Ibrahim told CNA’s Sarah Al-Khaldi the company remains at the forefront of AI innovation.
Google DeepMind chief operating officer Lila Ibrahim speaking to CNA at the Bloomberg New Economy Forum.
SINGAPORE: The global artificial intelligence race may be accelerating, but for Google DeepMind’s chief operating officer Lila Ibrahim, the real competition should not be about technological supremacy.
“When people throw around the term ‘AI race’, I think we need to take a step back and say, ‘What race is this really about?’” she told CNA at the Bloomberg New Economy Forum on Wednesday (Nov 19).
Her comments came amid intensifying talk of global AI dominance, with American chip giant Nvidia’s chief executive Jensen Huang recently saying China is well-positioned to lead the field due to lower energy costs and looser regulations.
But Ibrahim said the more important question is how societies can balance AI’s risks and opportunities.
“We have to be thoughtful stewards of the technology,” she added.
“Just throwing the technology out there, that's not a race worth having, right? The race is about … solving some of humanity's biggest challenges.”
She added that the goal of AI as a tool for human progress underpins DeepMind’s work.
The company – Google’s AI research arm – was founded in the United Kingdom in 2010 and acquired by Google in 2014.
The lab’s notable breakthroughs include developing AlphaGo, the first computer programme to defeat a Go world champion.
DeepMind also announced on Wednesday the opening of a new research lab in Singapore, its first in Southeast Asia, which will focus on using AI to address issues such as education and healthcare.
BALANCING BOLDNESS WITH RESPONSIBILITY
For Ibrahim, AI’s potential lies in creating opportunities for society, but she cautioned that it must be developed responsibly.
“If we can think of AI as like a telescope or a microscope to help us understand the world around us, and if we can do it with communities versus to communities, that’s absolutely critical,” she said.
She described the next phase of AI development as one where models become more “general and capable”.
This could be in the form of a “universal AI assistant” that can reason and contextualise, while still maintaining human oversight. However, “research breakthroughs” must happen to achieve such capabilities, she added.
While some investors and analysts have questioned whether Google’s slower, research-driven approach has caused it to lag behind competitors such as OpenAI, Ibrahim rejected the notion that Alphabet, Google’s parent company, is falling behind.
She said Google’s latest projects – including WeatherNext 2, a weather prediction model for emergency preparedness, and its newest AI model Gemini 3 – reflect the company’s rapid pace of innovation.
“At DeepMind, we were founded on this premise: if we can build AI responsibly to benefit humanity, that’s really the mission worth pursuing,” she added.
“We can’t just be bold without being responsible. It has to be both.”
Asked who should take the lead in ensuring AI’s safe and ethical development, Ibrahim said responsibility must be shared across sectors.
“It has to be collaborative,” she said. “AI is a technology that, when you release it, is available to everybody instantaneously, so that requires companies to be thoughtful in how they do research.
“Governments need to think about how … to right-size regulation and provide the right environment.”
She noted that DeepMind employs teams of bioethicists, anthropologists and philosophers alongside technology experts to think about such questions and work with communities to address them.
“We need to think about, how do we prepare the next generation … how are they learning how to use it?”
MANAGING RISKS BEFORE IT’S TOO LATE
When asked about growing concerns that AI could become too complex for human oversight, Ibrahim said DeepMind treats risk as a “continuum”.
This includes near-term risks like bias, misinformation or misuse, and long-term ones such as who controls the technology and the values it is grounded in.
For Ibrahim, addressing these challenges now – before systems become too powerful – is essential.
“As long as we’re having the conversations now as we’re building it, instead of waiting until we’re too far down the capability pipeline,” she said.
To that end, DeepMind has set up governance structures, evaluation systems, and external partnerships to help ensure its models are tested from different perspectives, she added.
It has also published a “frontier safety framework” to encourage public dialogue around long-term risks.
“I personally spend so much time on responsibility and safety and collaborating externally, because I think if we don’t do it now, it may be too late,” Ibrahim said.
SINGAPORE’S ROLE IN GLOBAL AI RESEARCH
The company’s focus on a more holistic, global approach to AI is also behind its decision to open a research lab in Singapore.
Factors such as Singapore’s multicultural and multilingual society, as well as the “extraordinary talent that is here”, played a part, Ibrahim said.
Many of the lab’s initial hires are Singaporeans returning from roles abroad, along with transfers from within Google.
She noted Singapore’s close attention to areas like upskilling, healthcare and robotics offered its team here “an opportunity to collaborate with a local ecosystem, with the government, in a pro-innovation environment”.
“The team has the agency to define what happens here in the region; it will also help improve what we're trying to do on a global level,” she added.