Commentary: Enough gushing about AI, we need to pause and think about its dangers
Just as the potential for ChatGPT and other AI applications to do good is great, so is their capacity for harm in the wrong hands, says Carl Skadian, senior associate director at the Middle East Institute, NUS.
SINGAPORE: Unless you have been living under a rock, you would not have failed to register the collective swooning over artificial intelligence (AI) platforms such as ChatGPT recently. These so-called chatbots have been treated like shiny new baubles around the world, but, in my view, more so in Singapore.
Of course, the wonders of AI have been making the rounds for some time now, mostly in tech-nerd circles. But ChatGPT and its ilk have propelled the discussion into the open. Hardly a day goes by without someone expressing amazement over what these systems, based on natural language processing techniques, can do.
There has been little discussion, however, of the dangers involved. Sure, there have been some words of caution, including in pieces published by local media, but these have mostly been of the type that seek to counter outlandish Terminator-type theories about how machines will take over the world.
By and large, the pros have outnumbered the cons, and the din encouraging all and sundry to adopt some form of AI is getting louder.
GREAT FOR THE MUNDANE AND ROUTINE
To be sure, ChatGPT is a great tool that comes in handy for all sorts of things, mundane and otherwise. I have been using it for a while, for everything from getting recipes for family dinners (a recent request for a paella recipe suggested I use Arborio rice, which, frankly, would have left much of Spain in revolt) to summarising long documents and getting quick backgrounders on areas of interest in my day job.
Its potential for minimising the mundane and routine parts of many jobs is sky-high, freeing humans to focus on the creative and more important aspects of work.
Senior Minister of State for Communications and Information Janil Puthucheary said as much recently in Parliament. Touching on the work of a team tasked with integrating ChatGPT into Microsoft Word, he said: “… we see potential in helping civil servants with parts of the writing process, such as summarising long reference material, exploring related ideas or improving the clarity of writing.”
That’s a worthy goal, no doubt - apart from “improving the clarity of writing”, but that’s a story for another day.
Singaporeans, who pride themselves on being tech-savvy, are no doubt rushing to get on the bandwagon in their own ways, either by advocating the technology, or trying it out themselves. I’ve noticed many social media users and others posting what they have asked GPT to do. Which is all well and good - until someone does the equivalent of scanning a rogue QR code… remember those?
Just as the potential for ChatGPT and other AI applications, such as Gato or Midjourney, to do good is great, so is their capacity for harm in the wrong hands. Remember the recent brouhaha over AIA’s offer of a free “blood pressure device”?
Many were convinced it was a scam because of what we have learnt over the past few years: One clue that something is too good to be true is when an email, SMS, or flyer contains bad grammar. I’m sure bad hats are already using AI to spew grammatically perfect scam messages.
That is just the tip of the iceberg. Disinformation, misinformation, cyberattacks - all these are within the reach of those who have enough expertise to tap AI.
While some of the alarms that are being sounded border on the hysterical - I don’t believe, for instance, that we are on the precipice of what has been termed “The Singularity”, the point at which computing technology escapes human control - it is time to sound a note of caution about the dangers of AI and take a cold-eyed look at the possible consequences of its use.
These are not limited to criminal enterprises and such, but extend to other realms, including the security and socio-economic ones: For instance, are we prepared for bad hats who will unleash disinformation on an unsuspecting populace, or for the jobs that will be lost as a result? It is no secret that many Singaporeans are vulnerable to - or, indeed, are targets of - the former, and at risk of the latter.
WARNINGS FROM TECH LEADERS
At any rate, we should take a page from what has happened in the United States and elsewhere in the months since ChatGPT made the headlines. In March, tech leaders and researchers, including Elon Musk and Apple co-founder Steve Wozniak, were among more than 1,000 who added their signatures to an open letter urging a pause in the development of advanced AI tools, saying these pose “profound risks to society and humanity”.
Earlier this month, US President Joe Biden met the chiefs of Google, Microsoft, and other firms, and warned that while AI had the potential for great things, it could also cause great harm, citing risks to society, the economy, and national security.
In Italy, ChatGPT was temporarily banned over data security concerns, while the European Union is also looking at rules to limit the use of high-risk AI products. Meanwhile, the “godfather of AI”, Geoffrey Hinton, announced that he has quit Google to speak freely about the risks of AI, and made the remarkable admission that he now regrets parts of his life’s work.
On May 17, in perhaps the most stunning development to date, Sam Altman, chief executive of OpenAI - the firm behind ChatGPT - “implored lawmakers” (to use the phrase employed by The New York Times) to regulate artificial intelligence.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
These developments ought to give lawmakers here similar cause for pause. Singapore is rightly at the forefront of tapping technology for everything from the better delivery of government services to, famously, the fight against COVID-19.
The rah-rah mood about ChatGPT is worrying, however. It reminds me of how gung-ho everyone was about Facebook and social media in their early days, and of the 2018 Parliamentary Select Committee hearings on false and harmful content, which revealed how little these firms were able, or willing, to do to put the genie back into the bottle.
ChatGPT and its ilk are infinitely more powerful. In its report on Mr Biden’s meeting with tech leaders, the Associated Press quoted Russell Wald, the managing director of policy and society at the Stanford Institute for Human-Centered Artificial Intelligence, as saying that the president wanted to “set the stage for a national conversation on the topic by elevating attention to AI, which is desperately needed”.
This weekend, during the Group of Seven summit in Hiroshima, Japan, Mr Biden and other leaders are also expected to hold the first conversations among the world’s largest democratic economies about a common approach to regulating the use of generative AI programmes like GPT-4.
If major world leaders and creators of this wondrous new technology themselves are urging a time-out to talk about guardrails, perhaps we should too.
Carl Skadian, a former journalist and editor for 30 years, is Senior Associate Director at the Middle East Institute, NUS.