
Commentary: Viral AI chatbot ChatGPT is less wowed by itself than we are

Amid swirling predictions of how an artificial intelligence chatbot could be revolutionary, we must scrutinise who runs these new digital portals to human knowledge, says John Thornhill for the Financial Times.

ChatGPT, the latest iteration of “generative artificial intelligence”, is electrifying the tech world. (Photo: iStock/metamorworks)

LONDON: What are we to make of ChatGPT, the latest iteration of “generative artificial intelligence” that is electrifying the tech world? Although it was only released by the San Francisco-based research company OpenAI last week, more than 1 million users have already experimented with the chatbot that can be prompted to churn out reams of plausible text at startling speed.

Enthusiastic users have been predicting that ChatGPT will revolutionise content creation, software production, digital assistants and search engines, and may even transform the digital substructure of human knowledge. One tweeted that ChatGPT’s significance was comparable to the splitting of the atom.

Such wild rhetoric makes some computer scientists tear their hair out, as they argue that ChatGPT is nothing more than a sophisticated machine-learning system that may be incredibly good at pattern recognition and replication but exhibits no glimmerings of intelligence.

Besides, ChatGPT is not even new technology. It is a tweaked version of GPT-3, a so-called large language model (or foundation model) released by OpenAI in 2020, which has been optimised for dialogue with human guidance and opened up to more users.

CHATGPT “HALLUCINATES FACTS”

Intriguingly, ChatGPT itself appears a lot less wowed by its own capabilities. When I prompted it to identify its flaws, ChatGPT listed limited understanding of context, lack of common sense, biased training data and potential for misuse, by spreading misinformation to manipulate financial markets, for example.

ChatGPT has the potential for misuse, such as by spreading misinformation. (Photo: iStock)

“While ChatGPT is a powerful and impressive technology, it is not without its limitations and potential flaws,” it replied. One researcher is more blunt. “It hallucinates facts,” he says.

While most of the public interest in ChatGPT focuses on its ability to generate text, its biggest business and societal impact may come from its corresponding ability to interpret it. These models are now powering semantic search engines, promising far more personalised and relevant content to users.

Whereas traditional search engines serve up results according to the number and quality of links to a page — assumed to be a proxy for importance — semantic search engines scour the relevance of the page’s content. Google is developing its own powerful LaMDA foundation model, while Microsoft, which runs the Bing search engine, has invested heavily in OpenAI.
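
To make the distinction concrete, here is a minimal sketch of embedding-based ranking in Python: documents are scored by how close their meaning is to a query, rather than by how many pages link to them. The sentence-transformers library, the "all-MiniLM-L6-v2" model and the sample texts are illustrative assumptions, not the systems Google or Microsoft actually run.

from sentence_transformers import SentenceTransformer, util

# Illustrative choice of a small, publicly available embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "ChatGPT is a chatbot built on top of a large language model.",
    "The central bank raised interest rates for the third time this year.",
    "Foundation models can be fine-tuned for dialogue using human feedback.",
]
query = "How do chatbots based on language models work?"

# Encode the query and the documents into dense vectors, then score each
# document by cosine similarity to the query (higher means more relevant).
doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]

for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.3f}  {doc}")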

The ways in which foundation models can be used to interpret and generate content at scale have been demonstrated by a research team at McGill University. They used an earlier GPT-2 model to read 5,000 articles on the pandemic produced by the Canadian Broadcasting Corporation and prompted it to generate “counterfactual journalism” about the crisis.

The GPT-2 model’s coverage was more alarming, emphasising the biological and medical aspects of the pandemic, whereas the original, human journalism focused more on personalities and geopolitics and injected a “rally-round-the-flag” element of forced positivity. “We are using AI to highlight human biases,” says Andrew Piper, who led the McGill research project. “We are using simulation to understand reality.”
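
For readers curious what such an exercise involves, the following is a rough sketch of the kind of pipeline the McGill team describes: a generative model is prompted with the opening of a real article and asked to continue it, and the machine's coverage is then compared with the human original. The Hugging Face transformers library, the publicly released GPT-2 checkpoint and the invented article openings are assumptions for illustration, not the researchers' actual code or data.

from transformers import pipeline

# Load the publicly available GPT-2 model through the text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Invented article openings, standing in for the real CBC leads the team used.
article_leads = [
    "Health officials reported a sharp rise in hospital admissions as",
    "The government announced new travel restrictions on Tuesday after",
]

for lead in article_leads:
    # Sample a continuation; the machine's version can then be set against
    # the human-written article that began with the same lead.
    result = generator(lead, max_new_tokens=60, do_sample=True, top_p=0.9)
    print(result[0]["generated_text"])
    print("---")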

NEW DIMENSION OF CONTROVERSY

But by injecting unspecified human inputs into ChatGPT, OpenAI is adding a new dimension of controversy. Sensibly enough, OpenAI has added guardrails to try to prevent ChatGPT from generating toxic or dangerous content. So, for example, ChatGPT refused to tell me how to build a nuclear bomb.

Such guardrails are essential to prevent foundation models being abused, but who decides how these safeguards are designed and operate, asks Percy Liang, director of Stanford University’s Center for Research on Foundation Models.

These models can be incredibly useful for scientific and medical research and automating tedious tasks. “But they can also wreck the information ecosystem,” he says.

The contours of the debate between paternalistic and libertarian AI are emerging fast. Should such models be controlled or “democratised”?

“The level of censorship pressure that’s coming for AI and the resulting backlash will define the next century of civilisation,” tweeted Marc Andreessen, the tech entrepreneur and investor.

Andreessen may be overstating his case, but it is undoubtedly true that foundation models will increasingly shape the ways we see, access, generate and interpret online content. We must scrutinise who runs these new digital portals to human knowledge and how they are used.

But it is doubtful whether anyone outside the companies that are pioneering this technology can truly understand, still less shape, its spookily fast evolution.

Source: Financial Times/ch
