CALIFORNIA: It is not news that, for all its promised benefits, artificial intelligence (AI) has a bias problem.
Concerns regarding racial or gender bias in AI have arisen in applications as varied as hiring, policing, judicial sentencing, and financial services.
If this extraordinary technology is going to reach its full potential, addressing bias will need to be a top priority. With that in mind, here are four key challenges that AI developers, users, and policymakers can keep in mind as we work to create a healthy AI ecosystem.
1. BIAS BUILT INTO DATA
We live in a world awash in data. In theory, that should be a good thing for AI: After all, data give AI sustenance, including its ability to learn at rates far faster than humans. However, the data that AI systems use as input can have built-in biases, despite the best efforts of AI programmers.
Consider an algorithm used by judges in making sentencing decisions. It would obviously be improper to use race as one of the inputs to the algorithm. But what about a seemingly race-neutral input such as the number of prior arrests?
Unfortunately, arrests are not race neutral: There is plenty of evidence indicating that African-Americans are disproportionately targeted in policing. As a result, arrest record statistics are heavily shaped by race.
That correlation could propagate in sentencing recommendations made by an AI system that uses prior arrests as an input.
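To make the mechanism concrete, here is a toy simulation (every number in it is illustrative, not empirical): two groups behave identically, but one is policed more heavily, so a "race-neutral" score built on arrest counts still ends up penalising that group.

```python
import random

random.seed(0)

# Two groups with an identical distribution of underlying behaviour,
# but group B is policed more heavily, so the same behaviour produces
# more recorded arrests. All rates here are made up for illustration.
def simulate_arrests(group):
    offences = random.randint(0, 3)              # same distribution for both groups
    arrest_rate = 0.3 if group == "A" else 0.6   # group B is arrested twice as often
    return sum(1 for _ in range(offences) if random.random() < arrest_rate)

def risk_score(prior_arrests):
    # A seemingly neutral score that uses only arrest counts as input.
    return 1 + prior_arrests

scores = {g: [risk_score(simulate_arrests(g)) for _ in range(10_000)]
          for g in ("A", "B")}
for g, s in scores.items():
    print(g, round(sum(s) / len(s), 2))
# Group B ends up with a higher average score despite identical behaviour.
```

The score never sees the group label, yet it reproduces the disparity baked into the arrest data.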
The indirect influence of bias is present in plenty of other types of data as well. For instance, evaluations of creditworthiness are determined by factors including employment history and prior access to credit — two areas in which race has a major impact.
To take another example, imagine how AI might be used to help a large company set starting salaries for new hires. One of the inputs would certainly be salary history, but given the well-documented concerns regarding the role of sexism in corporate compensation structures, that could import gender bias into the calculations.
2. AI-INDUCED BIAS
An additional challenge is that biases can be created within AI systems and then become amplified as the algorithms evolve.
By definition, AI algorithms are not static. Rather, they learn and change over time. Initially, an algorithm might make decisions using only a relatively simple set of calculations based on a small number of data sources.
As the system gains experience, it can broaden the amount and variety of data it uses as input, and subject those data to increasingly sophisticated processing. This means that an algorithm can end up being much more complex than when it was initially deployed.
Notably, these changes are not due to human intervention to modify the code, but rather to automatic modifications made by the machine to its own behaviour. In some cases, this evolution can introduce bias.
Take as an example software for making mortgage approval decisions that uses input data from two nearby neighbourhoods — one middle-income, and the other lower-income.
All else being equal, a randomly selected person from the middle-income neighbourhood will likely have a higher income and therefore a higher borrowing capacity than a randomly selected person from the lower-income neighbourhood.
Now consider what happens when this algorithm, which will grow in complexity with the passage of time, makes thousands of mortgage decisions over a period of years during which the real estate market is rising.
Loan approvals will favour the residents of the middle-income neighbourhood over those in the lower-income neighbourhood. Those approvals, in turn, will widen the wealth disparity between the neighbourhoods, since loan recipients will disproportionately benefit from rising home values, and therefore see their future borrowing power rise even more.
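A tiny simulation shows how such a feedback loop compounds. The threshold, starting figures, and appreciation rate below are all hypothetical; the point is only the shape of the dynamic.

```python
# Toy feedback loop: approvals depend on current wealth, and approved
# loans grow wealth through a rising market, so an initial gap between
# neighbourhoods compounds year after year. All figures are illustrative.
def run_years(wealth, years=10, threshold=60, appreciation=0.05):
    gap_history = []
    for _ in range(years):
        wealth = {
            n: w * (1 + appreciation) if w >= threshold else w  # only approved borrowers ride rising home values
            for n, w in wealth.items()
        }
        gap_history.append(wealth["mid"] - wealth["low"])
    return gap_history

gaps = run_years({"mid": 80.0, "low": 50.0})
print([round(g, 1) for g in gaps])
# The wealth gap widens every year, even though the approval rule
# never mentions the neighbourhood itself, only a wealth threshold.
```

Note that the rule is facially neutral; the disparity emerges entirely from the loop between past approvals and future eligibility.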
Analogous phenomena have long occurred in non-AI contexts. But with AI, things are far more opaque, as the algorithm can quickly evolve to the point where even an expert can have trouble understanding what it is actually doing. This would make it hard to know if it is engaging in an unlawful practice such as redlining.
The power of AI to produce algorithms far more complex than anything humans could design by hand is one of its greatest assets. It is also, when it comes to identifying and addressing the sources and consequences of algorithmically generated bias, one of its greatest challenges.
3. TEACHING AI HUMAN RULES
From the standpoint of machines, humans have some complex rules about when it is okay to consider attributes that are often associated with bias.
Take gender: We would rightly deem it offensive (and unlawful) for a company to adopt an AI-generated compensation plan with one pay scale for men and a different, lower pay scale for women.
But what about auto insurance? We consider it perfectly normal (and lawful) for insurance companies to treat men and women differently, with one set of rates for male drivers and a different set of rates for female drivers — a disparate treatment that is justified based on statistical differences in accident rates.
So does that mean it would be acceptable for an algorithm to compute auto insurance rates based in part on statistical inferences tied to an attribute such as a driver’s religion? Obviously not.
But to an AI algorithm designed to slice enormous amounts of data in every way possible, that prohibition might not be so obvious.
Another example is age. An algorithm might be forgiven for not being able to figure out on its own that it is perfectly acceptable to consider age in some contexts (e.g., life insurance, auto insurance) yet unlawful to do so in others (e.g., hiring, mortgage lending).
The foregoing examples could be at least partially mitigated by imposing upfront, application-specific constraints on the algorithm. But AI algorithms trained in part using data in one context can later be migrated to a different context with different rules about the types of attributes that can be considered.
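One simple form such an upfront constraint could take is a per-context list of prohibited attributes, enforced before any data reaches the algorithm. The context names and attribute lists below are hypothetical, sketched only to show the idea.

```python
# Sketch of an upfront, application-specific constraint: each deployment
# context declares which attributes the model may never see, and inputs
# are filtered before they reach the algorithm. Contexts and attribute
# lists here are illustrative, not a statement of any jurisdiction's law.
PROHIBITED = {
    "hiring":         {"age", "gender", "religion"},
    "auto_insurance": {"religion"},   # age and gender may lawfully be used here
    "mortgage":       {"age", "race", "religion"},
}

def constrain(features, context):
    banned = PROHIBITED[context]
    return {k: v for k, v in features.items() if k not in banned}

applicant = {"age": 42, "gender": "F", "income": 85_000, "religion": "x"}
print(constrain(applicant, "hiring"))          # only income survives
print(constrain(applicant, "auto_insurance"))  # age and gender retained
```

The weakness the article goes on to describe is visible here too: if a model trained under the "auto_insurance" filter is later reused for "hiring", nothing in the model itself records that its inputs were gathered under a different set of rules.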
In the highly complex AI systems of the future, we may not even know when these migrations occur, making it difficult to know when algorithms may have crossed legal or ethical lines.
4. EVALUATING CASES OF SUSPECTED AI BIAS
There is no question that bias is a significant problem in AI. However, just because algorithmic bias is suspected does not mean it will actually prove to be present in every case.
There will often be more information on AI-driven outcomes — e.g., whether a loan application was approved or denied; whether a person applying for a job was hired or not — than on the underlying data and algorithmic processes that led to those outcomes. This can make it more difficult to distinguish apparent from actual bias, at least initially.
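When only outcomes are visible, a first-pass statistical check can at least flag whether an observed gap is plausibly due to chance. The sketch below uses a standard two-proportion z-test with made-up counts; a significant result justifies deeper investigation of the data and algorithm, but does not by itself prove the algorithm is biased.

```python
import math

# Two-proportion z-test on outcome data alone: compare approval rates
# between two groups. The counts below are invented for illustration.
def two_proportion_z(approved_a, total_a, approved_b, total_b):
    p1, p2 = approved_a / total_a, approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p1 - p2) / se

z = two_proportion_z(approved_a=620, total_a=1000,
                     approved_b=540, total_b=1000)
print(round(z, 2))  # |z| above roughly 1.96 suggests the gap is unlikely to be chance
```

Even a large z-value only establishes that the outcomes differ; whether the cause is a biased algorithm, biased input data, or a legitimate differentiating factor still has to be determined by examining the system itself.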
While an accusation of AI bias should always be taken seriously, the accusation itself should not be the end of the story. Investigations of AI bias will need to be structured in a way that maximises the ability to perform an objective analysis, free from pressures to arrive at any preordained conclusions.
While AI has the potential to bring enormous benefits, the challenges discussed above — including understanding when and in what form bias can impact the data and algorithms used in AI systems — will need attention.
These challenges are not a reason to stop investing in AI or to burden AI creators with hastily drafted, innovation-stifling regulations. But they do mean that it will be important to put real effort into approaches that can minimise the probability that bias will be introduced into AI algorithms, either through externally supplied data or from within.
And we’ll need to articulate frameworks for assessing whether AI bias is actually present in cases where it is suspected.
John Villasenor is a nonresident senior fellow in Governance Studies and the Centre for Technology Innovation at Brookings. He is also a professor of electrical engineering, public policy, and management, and a visiting professor of law, at the University of California, Los Angeles, a member of the World Economic Forum’s Global Agenda Council on Cybersecurity, and a member of the Council on Foreign Relations.
This commentary first appeared in the Brookings Institution’s blog, TechTank.