Commentary: Google CEO - building AI responsibly is the only race that really matters

Fulfilling the technology's potential is not something that one company can do alone, says Sundar Pichai, CEO of Google and Alphabet.

File Photo. AI is the most profound technology humanity is working on today; it will touch every industry and aspect of life. (Photo: Reuters/Brandon Wade)

MOUNTAIN VIEW, California: This year, generative artificial intelligence (AI) has captured the world’s imagination. Already, millions of people are using it to boost creativity and improve productivity. Meanwhile, more and more start-ups and organisations are bringing AI-powered products and technologies to market faster than ever. 

AI is the most profound technology humanity is working on today; it will touch every industry and aspect of life. Given these high stakes, the more people there are working to advance the science of AI, the greater the opportunities for communities everywhere.

While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that. At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. 

We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.


We’re approaching this in three ways. First, by boldly pursuing innovations to make AI more helpful to everyone. We’re continuing to use AI to significantly improve our products - from Google Search and Gmail to Android and Maps. 

Artificial intelligence is being used to improve products from search and email to maps. (Photo: iStock/Sompong Lekhawattana)

These advances mean that drivers across Europe can now find more fuel-efficient routes; AI-powered translation is helping tens of thousands of Ukrainian refugees communicate in their new homes; and flood forecasting tools can predict floods further in advance.

Google DeepMind’s work on AlphaFold, in collaboration with the European Molecular Biology Laboratory, produced structure predictions for over 200 million catalogued proteins known to science, opening up new healthcare possibilities.

Our focus is also on enabling others outside of our company to innovate with AI, whether through our cloud offerings and APIs, or with new initiatives like the Google for Startups Growth program, which supports European entrepreneurs using AI to benefit people’s health and wellbeing. We’re launching a social innovation fund on AI to help social enterprises solve some of Europe’s most pressing challenges. 


Second, we are making sure we develop and deploy the technology responsibly, reflecting our deep commitment to earning the trust of our users. That’s why we published AI principles in 2018, rooted in a belief that AI should be developed to benefit society while avoiding harmful applications.

We have many examples of putting those principles into practice, such as building in guardrails to limit misuse of our Universal Translator. This experimental AI video dubbing service helps experts translate a speaker’s voice and match their lip movements. 

It holds enormous potential for increasing learning comprehension but we know the risks it could pose in the hands of bad actors and so have made it accessible to authorised partners only. As AI evolves, so does our approach: this month we announced we’ll provide ways to identify when we’ve used it to generate content in our services. 


Finally, fulfilling the potential of AI is not something one company can do alone. In 2020, I shared my view that AI needs to be regulated in a way that balances innovation and potential harms. With the technology now at an inflection point, and as I return to Europe this week, I still believe AI is too important not to regulate, and too important not to regulate well.

Developing policy frameworks that anticipate potential harms and unlock benefits will require deep discussions among governments, industry experts, publishers, academia and civil society. 

Legislators may not need to start from scratch: existing regulations provide useful frameworks to manage the potential risks of new technologies. But continued investment in research and development for responsible AI will be important - as well as ensuring AI is applied safely, especially where regulations are still evolving.

Increased international cooperation will be key. The United States and Europe are strategic allies and partners. It’s important that the two work together to create robust, pro-innovation frameworks for the emerging technology, based on shared values and goals. 

We’ll continue to work with experts, social scientists and entrepreneurs who are creating standards for responsible AI development on both sides of the Atlantic.

AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more. Yet we are still in the early days, and there’s a lot of work ahead. We look forward to doing that work with others, and together building AI safely and responsibly so that everyone can benefit.

Sundar Pichai is the CEO of Google and Alphabet.


Source: Financial Times/fl
