The Internet trolls have won — sorry, there’s not much you can do

When it comes to online comments and what you can do to limit their toxicity, you have only so much power. The real leverage lies with the tech companies.

In an increasingly toxic digital world, tech companies – not users – hold the power to tame online discourse.
(Illustration: Minh Uong © 2018 The New York Times)

This column is going to be a bit unusual. Typically, I write about a broad tech problem and offer some solutions. But this week, I have stumbled into a topic that many agree has no easy fix: online comments.

Over the last decade, commenting has expanded beyond a box under web articles and videos and into social networking sites like Facebook and Twitter. That has opened the door to more aggressive bullying, harassment and the spread of misinformation — often with serious real-life consequences.

Case in point: The right-wing conspiracy site Infowars. For years, the site distributed false information that inspired internet trolls to harass people who were close to victims of the Sandy Hook school shooting. This week, after much hemming and hawing about whether to get involved, some giant tech firms banned content from Infowars. (Twitter did not, after determining Infowars had not violated its policies.)

What does that show us? That you as an internet user have little power over content you find offensive or harmful online. It is the tech companies that hold the cards.

Given the way things are going, our faith in the internet may erode until we distrust it as much as we do TV news, said Zizi Papacharissi, a professor of communication at the University of Illinois at Chicago who teaches social media courses.

Why are internet comments so hopelessly bad, and how do we protect ourselves? Even though there is no simple fix, there are some steps you can take. Here is what you need to know about how we got here and what you can try.

WHY ARE PEOPLE SO TOXIC ONLINE?

There are many theories about why the internet seems to bring out the worst in people. I gathered a sampling of some noteworthy findings.

Papacharissi said that in her 20 years of researching and interviewing people about online behaviour, one conclusion has remained consistent: People use the internet to get more of what they do not get enough of in everyday life. So while people have been socialised to resist being impulsive in the real world, on the internet they give in to the temptation to lash out.

“The internet becomes an easy outlet for us to shout something and feel for a moment fulfilled even though we’re really just shouting out into the air,” she said.

This is nothing new, of course. Before the internet, people took their frustrations to TV and radio talk shows. The internet was simply a more accessible, less moderated space.

Daniel Ha, a founder of Disqus, a popular internet comment tool used by many websites, said the quality of comments varies widely depending on the content being discussed and the audience it attracts. For example, there are videos about niche topics, like home improvement, that invite constructive commentary from enthusiasts. But there are others, such as a music video from a popular artist or a general news article, that draw comments from people all over the world. That is when things can get especially unruly.

“You have an airport of people from all walks of life coming together, and they’re speaking different languages with different attitudes and they’re just saying stuff,” Ha said.

Comments can be terrible simply because many people are flawed. It is up to the content providers and tech platforms to vet their communities and set rules and standards for civilised discussion. That is an area where many resource-strained news publications fall short: They often leave their comments sections unmoderated, so they become cesspools of toxic behaviour. It is also an area where tech companies like Facebook and Twitter struggle, because they have long portrayed themselves as neutral platforms that do not want to be arbiters of truth.

WHAT ABOUT FAKE COMMENTS?

Tech companies have long employed various methods to detect fake comments from bots and spammers. So-called Captcha tests, short for Completely Automated Public Turing test to tell Computers and Humans Apart, ask you to type a word or select photos of a specific item to verify you are human and not a bot. Other signals, like a commenter's device type or location, can be used to pin down bots.
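As a concrete illustration of how such signals might be combined, here is a minimal, rule-based sketch in Python. Every field name and threshold here is an invented assumption for illustration, not any real platform's system:

```python
# A deliberately simplified, rule-based sketch of the signal checks
# described above. Field names and thresholds are invented; no real
# platform works exactly this way.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    captcha_passed: bool     # did the poster solve a Captcha?
    posts_last_hour: int     # posting rate from this account or IP
    user_agent: str          # the client's self-reported device type

def looks_automated(comment: Comment) -> bool:
    """Return True if the comment trips any simple bot heuristic."""
    if not comment.captcha_passed:
        return True
    # Humans rarely post dozens of comments in a single hour.
    if comment.posts_last_hour > 30:
        return True
    # Scripted clients often ship empty or telltale user agents.
    agent = comment.user_agent.lower()
    if agent == "" or "headless" in agent or "python" in agent:
        return True
    return False

print(looks_automated(Comment("Great article!", True, 2, "Mozilla/5.0")))        # False
print(looks_automated(Comment("Repeal now", True, 200, "python-requests/2.0")))  # True
```

As the next paragraph notes, none of these checks is hard to defeat, which is why platforms layer them rather than rely on any one individually.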

Yet security researchers have shown there are workarounds to all these methods.

Some hackers are now getting extremely clever about their methods. When the Federal Communications Commission was preparing to repeal net neutrality last year, 22 million comments were posted on its site between April 2017 and October 2017, many of which expressed support for the move.

Jeff Kao, a data scientist, used a machine-learning algorithm to discover that 1.3 million of those comments were likely fakes posted by bots. Many appeared convincing, with coherent and natural-sounding sentences, but large numbers turned out to be duplicates of the same comment with a few words swapped out for synonyms.

“It was like Mad Libs,” he said. “If you read through different comments one by one, it’s hard to tell that some are from the same template. But if you use these machine learning algorithms, you can pick out some of these clusters.”
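The underlying idea can be sketched in a few lines of Python. This is only a toy illustration of template clustering, assuming scikit-learn is installed; it is not Kao's actual pipeline, and the sample comments and parameters are invented:

```python
# A toy illustration of spotting "Mad Libs"-style duplicate comments
# by clustering; the comments and parameters below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

comments = [
    "I strongly urge the FCC to repeal the net neutrality rules.",
    "I strongly urge the FCC to rescind the net neutrality rules.",
    "I strongly urge the FCC to roll back the net neutrality rules.",
    "Net neutrality protects consumers and should be kept in place.",
]

# Represent each comment as a TF-IDF vector, so comments that share
# most of their wording sit close together in vector space.
vectors = TfidfVectorizer().fit_transform(comments)

# DBSCAN with cosine distance groups comments whose wording overlaps
# heavily; templated variants fall into one cluster, while genuinely
# distinct comments are labelled -1 (noise).
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(vectors)

for label, comment in zip(labels, comments):
    print(label, comment)
```

On this toy corpus, the first three comments land in one cluster because they share nearly all of their wording, while the genuinely distinct final comment is labelled -1, meaning noise.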

The FCC said in a letter that it planned to re-engineer its comment system in light of the fakes.

WHAT CAN I DO?

For the issue of spoofed comments, there is a fairly simple solution: You can report them to the site’s owner, which will likely analyse and remove the fakes.

Other than that, do not take web comments at face value. Kao said the lesson he learned was to always try to view comments in a wider context. Look at a commenter’s history of past posts, or fact-check any dubious claims or endorsements elsewhere on the web, he said.

But for truly offensive comments, the reality is that consumers have very little power to fight them. Tech companies like YouTube, Facebook and Twitter have published guidelines for what types of comments and material are allowed on their sites, and they provide tools for people to flag and report inappropriate content.

Yet once you report an offensive comment, it is typically up to tech companies to decide whether it threatens your safety or violates a law — and often harassers know exactly how offensive they can be without clearly breaking rules. Historically, tech companies have been conservative and fickle about removing inappropriate comments, largely to maintain their positions as neutral platforms where people can freely express themselves.

By Brian X Chen © 2018 The New York Times

Source: NYT
