Commentary: A new digital divide, between those who opt out of algorithms and those who don't
The growing divide is no longer just about access but about how people deal with the algorithms that permeate every aspect of their lives, says Michigan State University's Anjana Susarla.
EAST LANSING, Michigan: Every aspect of life can be guided by artificial intelligence (AI) algorithms – from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.
Big Tech companies like Google and Facebook use AI to obtain insights on their gargantuan trove of detailed customer data. This allows them to monetise users’ collective preferences through practices such as micro-targeting, a strategy used by advertisers to narrowly target specific sets of users.
In parallel, many people now trust platforms and algorithms more than their own governments and civic society.
An October 2018 study suggested that people demonstrate “algorithm appreciation,” to the extent that they would rely on advice more when they think it is from an algorithm than from a human.
In the past, technology experts have worried about a “digital divide” between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills.
But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives?
The savvier users are navigating away from devices and becoming aware of how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.
THE SECRET SAUCE BEHIND ARTIFICIAL INTELLIGENCE
The main reason for the new digital divide is that so few people understand how algorithms work. For a majority of users, algorithms are seen as a black box.
AI algorithms take in data, fit them to a mathematical model and put out a prediction, ranging from what songs you might enjoy to how many years someone should spend in jail. These models are developed and tweaked based on past data and the success of previous models.
Most people – sometimes even the algorithm designers themselves – do not really know what goes on inside the model.
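The basic loop described above – take in past data, fit a mathematical model, put out a prediction – can be sketched in a few lines. This is a deliberately toy illustration, assuming a made-up "hours listened vs. rating" data set and a simple straight-line model fitted by least squares; real recommendation or risk-scoring systems fit far larger models to far more data, but the pattern is the same.

```python
# A minimal sketch of the data -> model -> prediction loop.
# The "song rating" framing and the numbers are illustrative assumptions.

def fit_linear(xs, ys):
    """Fit y = a*x + b by ordinary least squares on past data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Past data: hours a user spent on a genre vs. the rating they later gave.
hours = [1, 2, 3, 4, 5]
ratings = [2.1, 2.9, 4.2, 4.8, 6.1]

a, b = fit_linear(hours, ratings)

def predict(x):
    """The fitted model 'puts out a prediction' for unseen input."""
    return a * x + b

print(round(predict(6), 2))  # prints 6.99
```

The opacity the article describes comes from scale: here the two fitted numbers `a` and `b` are easy to inspect, but a modern model may have millions of such parameters, tweaked continuously against new data, which is why even designers struggle to say what goes on inside.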
Researchers have long been concerned about algorithmic fairness. For instance, Amazon’s AI-based recruiting tool was found to penalise female candidates. The system had learned to favour implicitly gendered words – words that men are more likely to use in everyday speech, such as “executed” and “captured.”
Other studies have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.
As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.
Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable and transparent, as well as interpretable, meaning that they should arrive at their decisions through processes that humans can understand and trust.
WHAT TRANSPARENCY OFFERS
What effect will transparency have? In one study, students were graded by an algorithm and offered different levels of explanation about how their peers’ scores were adjusted to arrive at a final grade.
The students with more transparent explanations actually trusted the algorithm less. This, again, suggests a digital divide: Algorithmic awareness does not lead to more confidence in the system.
But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.
For example, in 2014, Ben Bernanke, the former chair of the Federal Reserve, was initially denied a mortgage refinance by an automated system. Most individuals who are applying for such a mortgage refinance would not understand how algorithms might determine their creditworthiness.
FEW ENGAGE EVEN WHEN THEY KNOW HOW ALGORITHMS AFFECT THEM
While algorithms influence so much of people’s lives, only a tiny fraction of people are sophisticated enough to engage fully with how algorithms affect them.
There are not many statistics about the number of people who are algorithm aware. Studies have found evidence of algorithmic anxiety, leading to a deep imbalance of power between platforms that deploy algorithms and the users who depend on them.
A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83 per cent of participants modified their behaviour to try to take advantage of the algorithm, while around 10 per cent decreased their usage of Facebook.
A November 2018 report from the Pew Research Center found that a broad majority of the public had significant concerns about particular uses of algorithms. It found that 66 per cent thought it would not be fair for algorithms to calculate personal finance scores, while 57 per cent said the same about automated resume screening.
A small fraction of individuals exercise some control over how algorithms use their personal data. For example, the Hu-Manity platform allows users an option to control how much of their data is collected.
Online encyclopedia Everipedia offers users the ability to be a stakeholder in the process of curation, which means that users can also control how information is aggregated and presented to them.
However, the vast majority of platforms provide their end users neither such flexibility nor the right to choose how the algorithm uses their preferences in curating their news feed or recommending content. Where such options do exist, users may not know about them.
About 74 per cent of Facebook’s users said in a survey that they were not aware of how the platform characterises their personal interests.
In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle.
This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.
Increasing algorithmic anxiety may also be mirrored by parallel shifts in the economy. A small group of individuals are capturing the gains from automation, while many workers are in a precarious position.
Opting out from algorithmic curation is a luxury – and could one day be a symbol of affluence available to only a select few. The question is then what the measurable harms will be for those on the wrong side of the digital divide.
Anjana Susarla is Associate Professor of Information Systems at Michigan State University. This commentary first appeared on The Conversation.