Commentary: Unlike parents, AI will never tire of entertaining our children. Here’s the catch
We may live to see a future where artificial intelligence can mimic human behaviours, including the delicate task of raising children. NUS lecturer Jonathan Sim weighs in on the potential pitfalls.

We may live to see a future where AI can mimic human behaviours, enough to assist us in so many aspects of our lives - this includes the delicate task of raising children. (Photo: iStock/kynny)
SINGAPORE: Science fiction is fast becoming a reality with recent breakthroughs in artificial intelligence (AI). In March, Microsoft researchers reported that GPT-4 began showing the “sparks” of near-human intelligence.
We may live to see a future where AI can mimic human behaviours well enough to assist us in many aspects of our lives - including the delicate task of raising children. What can AI offer to children that no human, TV nor smartphone can provide?
AI will be able to determine the child’s needs and tailor interactions to provide the child with the best care and educational experience. If the child struggles to learn a certain concept, the AI can adjust its instruction, catering to the child’s learning needs in an engaging manner.
This sounds very much like what a human can do, with one exception: The AI will not grow tired. It will continue giving undivided attention and care to the child in ways no human can.
Whether the AI acts as a nanny, tutor or playmate, the main selling point will be that unprecedented degree of personalisation to fulfil the child’s every need.
This sounds too good to be true. What’s the catch?
Tech leaders, including the heads of OpenAI and Google DeepMind, have issued dire warnings on the potential risks of AI to society, urging caution. In June, former Google X executive Mo Gawdat even suggested prospective parents hold off having children until the technology is controlled.
On Jun 14, European Union lawmakers signed off on draft legislation for the use of AI, including a ban on using the technology in biometric surveillance and certain clauses to protect children. Once adopted, it would be the first law on AI by a major regulator and could act as a model for other jurisdictions planning similar regulations.
THE PITFALLS OF PERSONALISED CARE
To understand the potential pitfalls that AI-driven personalisation might have on future generations, we need to focus our attention on the effects of technology-driven personalisation on our culture and society today.
Why do we keep returning to our favourite social media and online shopping platforms? It’s because of personalisation.
Algorithms operate behind the scenes, learning our interests to curate and show us more of what we might like to see, hear and experience on these platforms. They are designed to keep us captivated so that we keep coming back for more.
If I spend a day watching videos about how otters can be really nasty, personalisation means the platform will push out more content about nasty otters while excluding content about nice otters. If I did not know any better, I might come away with the false impression that otters are really nasty creatures.
The same can be said about beliefs, political views, values and so on. If I consume content with a particular leaning, online platforms will continually feed me with more content of a similar leaning. Personalisation wraps me in a bubble of ignorance, reinforcing an inaccurate perception that everyone holds the same beliefs or values as I do.
Not only does personalisation widen the gap between me and those who do not share my beliefs, values or interests; it can also make people within the same bubble increasingly polarised in their views and beliefs.
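The feedback loop described above can be sketched as a toy recommender. This is a deliberate simplification for illustration only - no real platform's ranking system is this simple, and the content tags and function names here are invented for the example:

```python
from collections import Counter

# A small pool of content items, each tagged with a (topic, slant) pair.
CONTENT_POOL = [
    ("otters", "nasty"), ("otters", "nice"),
    ("politics", "left"), ("politics", "right"),
]

def recommend(click_history, pool, k=2):
    """Rank items by how often the user has clicked that exact (topic, slant) tag."""
    counts = Counter(click_history)  # unseen tags count as zero
    return sorted(pool, key=lambda item: counts[item], reverse=True)[:k]

# A user who spends a day clicking only "nasty otter" videos...
clicks = [("otters", "nasty")] * 5

# ...gets a feed led by more of the same, crowding out other perspectives.
feed = recommend(clicks, CONTENT_POOL)
print(feed[0])  # ('otters', 'nasty')
```

Because the ranking optimises purely for past engagement, each click makes the next feed narrower - the "bubble of ignorance" is not a bug in such a system but the direct result of what it is built to maximise.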
PARALLEL WORLDS
We may all be living on the same planet, but the personalised experience of online platforms creates numerous parallel worlds that fragment our communicative cultures. This problem quietly escapes our attention, yet it is dividing us to such an extent that it is becoming increasingly difficult for us to converse and collaborate respectfully with others.
Each bubble will have its own communicative culture and practices. The more time we spend online, the more we immerse ourselves among like-minded individuals within the bubble, learning and internalising their language and communicative norms.
It is hard to tell if anything is amiss if most of one’s social circle communicates in a similar way. If we encounter someone behaving rudely, the last thought to cross our minds is the possibility that the person comes from a different communicative culture.
The same words and gestures that appear polite in one bubble’s communicative culture can appear rude to people of another bubble, making it all too easy to misunderstand and misjudge each other.
As an educator, I regularly get seemingly abrasive e-mails from students. But when I meet them in person, I discover that they have no ill intent. They are merely echoing the communicative practices they have learnt online, which differ from my own communicative culture.

This fragmentation of culture occurs both across and within age groups.
I have observed students struggling with group projects because they could not understand how anyone could work so differently from them - a perception reinforced by their experience of social media, where they have been interacting mostly with similar people within their bubble.
Thankfully, bubbles can pop. Misunderstandings can be ironed out by encouraging open dialogue and fostering empathy.
I encourage my students to have frank conversations about their values, attitudes and work styles. Many were surprised to discover that such differences exist, and this made it easier for them to work with one another thereafter.
Seeing how these personalised bubbles are making our youths struggle to understand and work with others, my worry is that this problem will worsen with children raised by AI.
If we are not careful, we risk raising a generation of children with severely skewed perceptions of the world due to the personalised bubble AI creates around them. If such skewed perceptions take hold at this critical stage of development, we may never be able to break these children out of their bubbles when they are older.
Furthermore, the degree of personalisation in interactions with AI will create bubbles so tiny that children may find it much harder to relate to and understand others, leading to more conflicts, misunderstandings and mistrust. We may well see a future where the next generation of children experiences heightened loneliness and isolation.
We cannot have a functional society if people cannot civilly communicate or collaborate with each other.
Ironically, we are making machines more human-like by training them based on our human culture, yet at the heart of this issue is the fragmentation of our shared culture - the foundation of what it means to be human and humane.
To unify these fragmented cultures, we must create opportunities for children to interact with people of diverse backgrounds.
As they learn other communicative cultures and practices, they will exit their bubbles to develop understanding, empathy and trust in others. They will learn how to be human.
Jonathan Sim is Lecturer, Department of Philosophy, Faculty of Arts and Social Sciences at the National University of Singapore.