Not 'safe anymore': Singaporean ex-Twitter adviser warns of faltering fight against child sex exploitation

Only one full-time employee remains in a Twitter Singapore team dedicated to removing child sexual abuse material.

SINGAPORE: A growing reliance on automation in tackling child sexual exploitation on Twitter is putting the most vulnerable victims in harm's way, according to a company executive and an independent adviser, both of whom dealt with trust and safety issues on the platform but have since left.

Only one full-time employee remains in a team in Singapore dedicated to removing child sexual abuse material from Twitter, after a slew of layoffs and walkouts since billionaire Elon Musk acquired the social media firm.

Singapore - Twitter's Asia-Pacific headquarters - is one of three regional offices, alongside Dublin and San Francisco, where trust and safety teams work together to look globally at the issue of child sexual exploitation on the platform, the ex-employee told CNA.

This international effort has been whittled down to around 10 staffers from about 20 to 30 previously, said the ex-employee, who requested that their identity, including their name and gender, not be revealed in this article.

These teams were supported by larger contract outfits that formed the first line of defence in reviewing violative content, including in other domains such as abuse and hateful conduct. These contractors have also faced job cuts.

In place of human interventions, Twitter is now leaning heavily on automation to moderate harmful content posted by users, its new head of trust and safety Ella Irwin said earlier in December.

This, however, is not making Twitter a safer place to be, said the ex-employee. "If I were a bad actor, I would be testing the system now."

These recent developments are a "huge concern", said Eirliani Abdul Rahman, a Singaporean child safety activist and founding member of Twitter's Trust and Safety Council.

"I don't think it's safe anymore for children online," she told CNA.

"You cannot do content moderation just shooting from the hip. It is done through evidence-based work."

In protest at Twitter's declining online safety standards, she and two fellow advisers on the council resigned last week.

On Monday (Dec 12), Twitter dissolved the council, which it formed in 2016 with 100 members.

The company did not respond to CNA's request for comment on its enforcement efforts against child sexual exploitation.

TWEETING IN CODE

The ex-employee outlined to CNA how automated machine-learning models often struggle to catch up with the evolving modus operandi of perpetrators of child sexual abuse material.

Trading such content on Twitter involves treading a fine line between being obvious enough to prospective "buyers" yet subtle enough to avoid detection.

In practice, this means speaking in ever-changing codewords to try to evade enforcement.

For example, "MAP" is a common acronym to identify oneself as a "minor-attracted person", but over time this has evolved to the use of the map emoji 🗺️ and even the term "cartographer".

With fewer content moderators and domain specialists at Twitter to keep track of such changes, there is a danger that abusers will seize the opportunity to coordinate yet another new set of codewords that automated systems and a smaller team cannot quickly pick up, said the ex-employee.
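
The cat-and-mouse dynamic the ex-employee describes can be illustrated with a toy keyword matcher. The sketch below is purely illustrative and is not Twitter's system: the term list is a hypothetical snapshot that goes stale the moment abusers coordinate new terms, and a common word like "map" also shows why naive matching produces false positives.

```python
# Illustrative sketch only: a naive keyword/emoji matcher for flagging tweets
# for human review. The term list is hypothetical and deliberately incomplete;
# real moderation systems combine many more signals than keywords alone.
import re

# Code terms mentioned in the article ("MAP", the map emoji, "cartographer").
# Note that "map" as a plain word will also match benign tweets, which is
# exactly the false-positive problem moderators must weigh.
CODE_TERMS = {"map", "cartographer"}
CODE_EMOJI = {"\U0001F5FA"}  # world map emoji

def flag_for_review(tweet_text: str) -> bool:
    """Return True if the tweet contains any known code term or emoji."""
    words = set(re.findall(r"[a-z]+", tweet_text.lower()))
    has_term = bool(words & CODE_TERMS)
    has_emoji = any(e in tweet_text for e in CODE_EMOJI)
    return has_term or has_emoji

# A newly coordinated codeword passes through until an analyst adds it.
print(flag_for_review("looking for a cartographer"))    # True
print(flag_for_review("new slang not yet on the list"))  # False
```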

Aside from keywords, image hashes (digital fingerprints of known child sexual abuse material) and account behaviour, such as interactions with sexually explicit accounts, are also used to identify violative accounts.
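
The hash matching described above can be sketched in a few lines. This is a simplified illustration, not Twitter's implementation: industry systems rely on perceptual hashes so that resized or re-encoded copies of known material still match, whereas the cryptographic hash used here for brevity only catches byte-identical files. The known-hash set is a placeholder.

```python
# Illustrative sketch only: matching a file's hash against a set of hashes of
# previously identified material. Production systems use perceptual hashing;
# a cryptographic hash like SHA-256 only detects exact copies.
import hashlib

# Hypothetical known-hash database (placeholder values only).
KNOWN_HASHES = {
    "placeholder_hash_value_1",
    "placeholder_hash_value_2",
}

def image_hash(path: str) -> str:
    """Compute a SHA-256 digest of the raw file bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_known_material(path: str) -> bool:
    """True if the file's hash matches an entry in the known-hash set."""
    return image_hash(path) in KNOWN_HASHES
```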

Manual reviews play a key role. For instance, moderators use criteria like the Tanner scale, a sexual maturity rating, to determine whether an individual appearing in sexually explicit material is underage, based on physical characteristics.

Rarer but highly harmful activity, such as the creation of new child sexual abuse material or the solicitation of sexual services from a minor, is especially hard to track down with automation.

This is because the real human beings behind those accounts talk in organic ways that are not scripted, and no image hash exists for new visual material, said the ex-employee.

The ex-employee added that any automated process to identify violative tweets also needs to be nuanced enough to not take down "benign uses" of terms associated with child sexual exploitation.

"A victim drawing attention to their plight, having no easy way to do so and in a compromised situation or state of mind, might easily use problematic hashtags and keywords," they said.

Failure to distinguish such uses of language could conversely end up silencing and re-victimising those suffering from child sexual abuse, they said.

USER REPORTS AND AUTOMATIC TAKEDOWNS

As part of its shift to automated child online safety, Twitter now automatically takes down tweets flagged by "trusted reporters", or accounts with a track record of accurately flagging harmful posts.

A threat intelligence researcher specialising in child sexual abuse material told Reuters she noticed Twitter taking down some content as fast as 30 seconds after she reported it.

But there is no such thing as a "100 per cent accurate" reporter, said the ex-employee, who added that the accuracy rate of reports is about 10 per cent across all types of policy violations.

The former staffer also expressed concern about safeguards to prevent abuses once users realise they are "trusted" and that their reports lead to automatic enforcement.
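
One way such a "trusted reporter" pipeline could guard against the abuse the former staffer worries about is to tie automatic takedowns to a reporter's verified track record. The sketch below is hypothetical; the thresholds and data structures are assumptions for illustration, not Twitter's actual logic.

```python
# Illustrative sketch only: gating automatic takedowns on a reporter's
# historical precision. All values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Reporter:
    reports_upheld: int = 0  # reports later confirmed as violations by humans
    reports_total: int = 0

    @property
    def precision(self) -> float:
        return self.reports_upheld / self.reports_total if self.reports_total else 0.0

AUTO_TAKEDOWN_THRESHOLD = 0.95  # hypothetical "trusted reporter" bar
MIN_REPORT_HISTORY = 50         # avoid trusting accounts with little history

def handle_report(reporter: Reporter) -> str:
    """Decide whether a report triggers automatic removal or human review."""
    if (reporter.reports_total >= MIN_REPORT_HISTORY
            and reporter.precision >= AUTO_TAKEDOWN_THRESHOLD):
        return "auto_takedown"
    return "queue_for_human_review"

# A reporter whose accuracy drops, or who starts gaming the system, falls back
# to manual review rather than retaining automatic enforcement power.
print(handle_report(Reporter(reports_upheld=60, reports_total=60)))  # auto_takedown
print(handle_report(Reporter(reports_upheld=5, reports_total=50)))   # queue_for_human_review
```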

Another issue is the "self-selection bias" of a system based on user reports.

Users tend to report the most obvious violations, such as those by "spammy" accounts using third-party websites to sell child sexual abuse material, phish unsuspecting buyers or install malware, the ex-employee said.

Meanwhile, other accounts feeding demand for child sexual exploitation continue to lurk in the shadows.

"This is why a portion of our work has always been focused on automated logic to catch this quietly nefarious evil at high accuracy," the ex-employee acknowledged.

REAL-WORLD IMPACT

Twitter's Ms Irwin said earlier this month the platform had taken down about 44,000 accounts involved in child safety violations. She did not give a timeframe for this figure.

Before Mr Musk's acquisition, Twitter suspended more than 596,000 accounts for child sexual exploitation between July and December 2021.

The ex-employee cautioned that such numbers do not paint a picture of effective enforcement, as bad actors will simply create new accounts and return to the platform.

"Significant" resources were devoted to prevent such recidivism, they said.

A higher number of suspended accounts also intensifies the burden on the now-smaller pool of employees, who must process follow-up work such as appeals.

One of the most important workflows is reporting information about severe child sexual exploitation incidents to the National Center for Missing & Exploited Children (NCMEC), as required under United States federal law, said the ex-employee.

They described NCMEC as an "international clearinghouse" for information on child sexual exploitation, with "direct hotlines" to some police forces around the world.

"This is crucial to provide law enforcement with the data necessary to bring criminals to justice," said the ex-employee.

But such reporting is not easy to automate, as each violation and subsequent report is unique, they added.

This leaves the responsibility of manually reviewing and reporting a growing number of suspended accounts to whatever few employees are left in the team. 

For Ms Eirliani, who is also a doctoral student in public health at Harvard University, all of this gives reason to worry about the real-world harm caused to victims of child sexual exploitation.

She said Twitter's announcement that it would rely more heavily on automated content moderation showed it was no longer committed to online safety.

"Algorithmic systems can only go so far in protecting users from ever evolving abuse and hate speech before detectable patterns have developed," she said in a joint statement with fellow founding member Anne Collier on their resignations from the Trust and Safety Council.

Ms Eirliani herself has since become the target of online abuse on Twitter, after tweeting her resignation statement.

The ex-employee told CNA they have also advised their contacts to stop using Twitter, believing it is no longer a safe place due to personal concerns over violative and illegal content, misinformation and inadequate protection of user data.

They also expressed deep concern at the dissolution of the Trust and Safety Council, as well as recent attacks on the platform targeting Twitter's former head of trust and safety Yoel Roth, which reportedly caused him to flee his home.

"I think there is already a lot of harm happening on the platform, just that users don't see it unless they go out of their way to search for it, or are themselves targets of abuse.

"By the time we get to know the extent of the damage, it will be too late."

Source: CNA/dv(jo)
