Commentary: In times of rapid AI development, it’s better to ride the wave than to be left behind
Pragmatism suggests that generative AI regulation can favour innovation and growth without being naive about the technology’s potential pitfalls, say lawyers Alex Toh and Rajesh Sreenivasan.
SINGAPORE: With ChatGPT and generative artificial intelligence (AI) hitting the public consciousness this year, the fear in some quarters is that we may have finally outsmarted ourselves by creating something that poses an existential and irreversible threat to humankind.
The impact of AI on every aspect of our lives cannot be overstated. AI can now reportedly perform better on the US bar exam than the average law school graduate. OpenAI’s Jukebox can generate music samples, while Stable Diffusion and Midjourney can churn out illustrations - probably more cheaply and quickly than human musicians and artists.
Office workers, whose tasks tend to involve reasoning, communication and coordination, are more likely to find aspects of their jobs automated by AI going forward.
The speed at which ChatGPT has acquired users is unprecedented: it reached 100 million monthly active users in January, just two months after launch. This trend shows no sign of slowing, with venture capital investment in AI having grown significantly over the years.
RISKS OF AI WARRANT CONTEMPLATION
But the possible negative consequences of unleashing generative AI upon the world without adequate restraint warrant contemplation. The rapid automation of tasks and processes traditionally performed by humans raises concerns about job displacement on an unparalleled scale - a prospect that, if some forecasts are to be believed, could destabilise economies and exacerbate inequality.
The growth of AI systems reliant on harnessing vast amounts of data - data being the de facto currency of such systems - would also likely increase users’ risk of data exposure. Malicious actors could weaponise the technology with impunity.
It is therefore no surprise that players in the field have called for caution. On May 30, a group of more than 350 AI executives and experts from companies including OpenAI and Google published a statement warning of the “risk of extinction from AI”, and that mitigating it “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
There was also the open letter by the Future of Life Institute in March, calling for a six-month moratorium on training AI systems more powerful than GPT-4. It garnered more than 1,000 signatories, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.
PURSUING GROWTH WITHOUT BEING NAIVE ABOUT AI PITFALLS
Governments around the world have been busy contending with the potential effects of AI. Italy implemented a temporary ban on ChatGPT, leading other European countries to consider a similar move. Italy later lifted the ban after OpenAI introduced new data protection features, such as warnings for users and the option to exclude one’s chats from being used to train its algorithms.
The proponents of an AI “pause” typically cite the need to refine models and implement mechanisms to address biases, misinformation, harmful content and other ethical concerns, at least before widespread dissemination of the technology to the public.
Others disagree, pointing out that a temporary ban could delay AI development and innovation, and increase the resources needed to re-introduce AI at a later juncture.
Pragmatism suggests that the balance could be tilted in favour of innovation and growth without being naive about AI’s potential pitfalls.
The European Union is currently working on binding legislation, the EU AI Act, which takes a risk-based approach to classifying and regulating AI systems. For example, AI systems posing unacceptable risk, such as a social credit scoring system based on real-time surveillance, would be prohibited outright; high-risk systems would be permitted subject to strict requirements; and low-risk AI systems such as spam filters would face, at most, transparency requirements.
Similarly, China recently published draft administrative measures which seek to regulate the research, development and use of generative AI products, and their provision to the public. These measures are impressively comprehensive, covering a wide range of issues, from security assessments and content regulation to intellectual property and transparency and bias in training data.
Regulation through law, backed with the means of enforcement, could mitigate harm and foster trust in AI. However, critics have also argued that prescriptive regulation could unduly stifle innovation by imposing greater compliance costs on developers and, eventually, users.
Jurisdictions such as the UK have opted not to legislate broadly. Instead, a central set of common principles will be applied, leaving individual sector regulators to address specific use cases of AI rather than the technology per se. This approach would enable experts with domain knowledge to target sector-specific issues (such as the regulation of autonomous vehicles) with surgical precision.
At the same time, guidelines and principles without the force of law could be said to lack sufficient “bite” for the purposes of achieving compliance.
On one view, Singapore has sought to achieve the proverbial sweet spot by opting for a more collaborative governance model: it provides industry partners with a regulatory sandbox to experiment and innovate within a controlled but non-prescriptive environment.
In 2019, for example, the Personal Data Protection Commission published the first edition of the Model AI Governance Framework, which sets out implementable guidelines, frameworks and toolkits to help companies deploy AI responsibly.
The Ministry of Communications and Information also announced that later this year, it will issue advisory guidelines on the use of personal data in AI systems, under the Personal Data Protection Act 2012.
TOWARDS RESPONSIBLE AI DEPLOYMENT
Whilst policymakers and lawyers may have to work out the fine print, building a measured regulatory framework is possible.
Standards, licensing, accreditation and technology approvals are anticipated to play a prominent role in shaping AI regulation, paving the way for ethical AI design. International cooperation and consensus are also likely to grow.
But the broader point is this - by engaging in thoughtful discussion and drawing from existing legal frameworks, lawmakers, industry representatives and the public can contribute to ethical and responsible AI development and deployment.
For the rest of us, some tried and true principles in dealing with technological change remain constant.
Be curious and try out AI products to learn how they work. Cast a critical eye on the potential, limits and risks of AI, but be receptive to how it might change the way tasks are carried out. Be adaptable and upskill to make the most of the opportunities that technology may bring.
It wasn’t that long ago that smartphones were just a curiosity - now you can’t leave home without one.
Alex Toh is a Singapore Academy of Law (SAL) Accredited Specialist in Data and Digital Economy Law, and a local principal in the Intellectual Property & Technology Practice Group in Baker McKenzie Wong & Leow.
Rajesh Sreenivasan is an SAL Senior Accredited Specialist in Data and Digital Economy Law, and the head of the Technology, Media & Telecommunications team at Rajah & Tann Singapore.