Policymakers need to ensure innovation will not be stifled as the AI scene gets more regulated: WEF
At the end of the day, AI is going to touch on every aspect of society, said Ms Cathy Li of the World Economic Forum in an interview with CNA’s Yasmin Jonkers.

File photo of a data centre. (Photo: iStock/gorodenkoff)
DALIAN, China: Even as the artificial intelligence scene gets further regulated, policymakers have to ensure that innovation will not be stifled, said Ms Cathy Li, head of AI, data and metaverse at the World Economic Forum (WEF).
At a WEF conference in Dalian, China earlier this week, global leaders discussed the rapid growth of AI and the risks associated with generative models.
“When you build trust among the community, that's when you see that innovation really is given room to thrive and not be killed at the onset,” said Ms Li, member of the executive committee at WEF.
But ethical standards have to be in place, she told CNA’s Asia Now on Thursday (Jun 27).
“Because without the capability to even measure the risks and also the opportunities that will come with AI adoption and deployment, it's not going to be even possible to think about how we regulate the technology.”
WORKING TOWARDS RESPONSIBLE USAGE
With AI being the centre of attention across society, the WEF is placing more importance on the technology and has been working hard to bring together the different stakeholders, she noted.
“At the end of the day, AI - as such a ubiquitous technology - is going to touch on every aspect of society.”
For instance, the WEF launched the AI Governance Alliance last year. The initiative leverages the expertise of leaders from business, government, academia and civil society to address the ethical challenges of AI and chart a path towards a responsible future.
She noted, however, that given AI is “nuanced”, different regions, countries and jurisdictions take different approaches.
But within the industry, organisations deploying the technology are taking a responsible approach, “because no one wants to see their employees, suppliers or customers being endangered”.
“At the end of the day, the responsibility is not only with the AI model producers and owners, but increasingly we see that responsibility is throughout the entire AI value chain. That goes through all the way to even the end users and the application adopters as well.”
BALANCING DEMAND AND SUSTAINABILITY
During the interview with CNA, Ms Li also addressed the energy demands of powering AI systems, even as climate activists call for greater scrutiny of their environmental impact.
“To be very frank, we don't have good answers for it,” she said. “Because on the one hand, you see the uptake in AI adoption across different industries.
“But on the other hand, we all know that the global electricity usage is going to have at least 30 per cent more upward uptake based on the current prediction of the AI usage by 2035.”
The current power grids are unlikely to handle such a surge well “if we don't come up with new solutions relatively soon”, said Ms Li.
Meanwhile, data centres, which are effectively warehouses filled with rows of servers that help to power AI and the internet in general, also have to make their operations more sustainable by drawing electricity from renewable energy sources.
“Otherwise, we're not going to be able to meet the goals of reducing carbon footprint and (this would) compromise the net zero goals,” she noted.
To balance the AI boom with sustainability concerns, there is a need to forecast the actual electricity and other clean energy usage more accurately.
“It's not going to be easy,” said Ms Li. “We are trying to figure out what that equation might look like and it's not going to be perfect.
“But at the same time, I think having some preliminary responses is better than having no common ground at all.”