Opinion | The incredible power – and potential danger – of algorithms
How much would you pay for a book about flies? In 2011, two booksellers’ competing pricing algorithms on Amazon famously drove the price of a developmental biology text, “The Making of a Fly,” past $23 million.
When you hear about Amazon’s algorithm failing so spectacularly, you might chuckle and move on. But what do you do when an algorithm discriminates against you based on the color of your skin, your gender, or where you live?
Algorithmic bias is unfortunately not dystopian science fiction. Although countless algorithms improve our quality of life, discriminatory algorithms have been around for decades. The first step in fighting such algorithmic bias is to recognize that algorithms are human products and not inherently unbiased.
There are concrete steps we can take to address the issue of algorithmic bias, such as increasing diversity among the creators and maintainers of algorithms, auditing algorithms, and passing legislation. But first, let us take a step back to understand what algorithms are and how they discriminate.
Algorithms lose their mystery when we realize that they are nothing more than cake recipes: They take inputs (ingredients), perform a series of steps (instructions), and produce outputs (cake). So, how can a simple cake recipe wreak havoc?
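To make the recipe analogy concrete, here is a toy sketch (all names and numbers are invented for illustration) of an algorithm in the recipe sense: ingredients in, a fixed series of steps, cake out.

```python
# A minimal "algorithm as recipe": inputs (ingredients),
# a series of steps (instructions), an output (cake).

def bake_cake(flour_g, sugar_g, eggs):
    """Take ingredients (inputs), follow instructions (steps),
    and produce a cake (output)."""
    batter = flour_g + sugar_g + eggs * 50  # step 1: mix (eggs ~50 g each)
    cake_weight = batter * 0.9              # step 2: bake; ~10% moisture lost
    return cake_weight

print(bake_cake(300, 150, 2))  # -> 495.0 grams of cake
```

Nothing here is mysterious: the computer simply follows the steps it was given, which is exactly why a flawed step produces flawed results every single time.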
Take, for example, the 1980 algorithm designed to select jurors in Allen County, Indiana. For 22 years, it flawlessly followed its instructions: It ran through all townships alphabetically and stopped after 10,000 juror selections had been made. The catch: The algorithm never got to “W” even though 75 percent of Allen County’s Black population lived in Wayne Township, leading to a drastic underrepresentation of Black jurors.
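The flaw is easy to reproduce. The sketch below is a hedged reconstruction of that kind of quota-and-alphabet logic, not the county’s actual code; the township names and counts are invented.

```python
# Reconstruction of the flaw: walk townships alphabetically and stop
# once a fixed quota is filled, so late-alphabet townships are skipped.

def select_jurors(townships, quota=10_000):
    """townships: list of (name, resident_count), sorted alphabetically."""
    selected = {}
    remaining = quota
    for name, residents in townships:
        if remaining == 0:
            break  # the bug: every township after this point is never reached
        take = min(residents, remaining)
        selected[name] = take
        remaining -= take
    return selected

pool = [("Aboite", 6_000), ("Adams", 5_000), ("Wayne", 40_000)]
print(select_jurors(pool))  # Wayne, at the end of the alphabet, gets no jurors
```

Every step runs exactly as written; the discrimination is baked into the design, not into any malfunction.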
As computing power has evolved, so have algorithms and their potential for discrimination. No longer just a mere series of human-programmed steps, modern computer algorithms are correlation finders that “learn” what patterns and rules there might be by studying – or, being “trained” on – lots of data.
Instead of a cake recipe penned by an experienced baker, a modern computer algorithm is the equivalent of looking at lots of cake recipes, determining what ingredients probably belong together and in which order, and coming up with a recipe based on these findings.
But what if an algorithm were only given inedible cakes to learn from? The answer, unsurprisingly, is that the algorithm wouldn’t magically learn how to make edible cakes. Rather, it would get excellent at producing inedible cakes.
That’s the Achilles heel of modern algorithms: No matter how well they are programmed, they are only as good as the data they are trained on.
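A toy example makes the point. This is not a real recognition system; it is a deliberately naive “learner” that does nothing but memorize label frequencies, which is enough to show that a model mirrors whatever skew its training data has.

```python
# Toy illustration of "only as good as the training data": a naive
# classifier that predicts the most common label it saw in training
# will faithfully reproduce any skew in that training set.

from collections import Counter

def train(labels):
    """'Learn' by counting: always predict the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

# A training set skewed toward inedible cakes...
skewed_data = ["inedible"] * 95 + ["edible"] * 5
model_prediction = train(skewed_data)
print(model_prediction)  # -> 'inedible': the skew, not the goal, is learned
```

Real systems are vastly more sophisticated, but the principle scales up: skewed inputs produce skewed outputs, no matter how elegant the code in between.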
An appalling example of this was the wrongful arrest last year of a Black man from Farmington Hills, Michigan, who was incorrectly identified by a facial recognition algorithm.
Facial recognition algorithms have traditionally been trained on data sets that are predominantly white and male. M.I.T. Media Lab researcher Joy Buolamwini demonstrated the enormous impact of such skewed data sets in 2018, when she found that three leading facial recognition algorithms misidentified lighter-skinned males only 1 percent of the time but darker-skinned females up to 35 percent of the time.
Other shocking examples of the effects of biased data on algorithms include Google’s search algorithm, Amazon’s abandoned hiring algorithm, and predictive policing algorithms such as the one used in Portage, Michigan.
So, what can we do?
First, we need to realize that algorithms are not objective – hiding human biases behind a veneer of numbers and probabilities doesn’t erase their existence. We have to accept that the history, culture, and biases of programmers and the public affect algorithms.
Thus, one way of mitigating algorithmic bias is to recruit programmers from a wide variety of backgrounds who can check one another’s biases while programming and training algorithms. Tech remains far from that goal: only 3.7% of Google’s workforce identify as Black and 5.9% as Latinx. It would also make sense for programmers to learn about algorithmic bias and to collaborate with specialists in ethics and social justice.
Another way is to rely less on the goodwill of technology companies to tackle algorithmic bias. Algorithms can be audited to test how biased they are, and by passing legislation, such as the Algorithmic Accountability Act of 2019, Congress can create guidelines for companies.
States could also pass legislation, perhaps in the form of granting residents greater ownership over their data. For example, the Biometric Information Privacy Act introduced in Michigan in 2017 has yet to be passed. The act would help protect Michiganders from having their biometric identifiers (such as fingerprints and face scans) unknowingly collected, shared, or sold by private entities.
In general, being aware of and taking ownership of your online data is critical to understanding how you might be discriminated against. You could start by visiting adssettings.google.com to see what Google has inferred about you. In the past, women, for example, have been less likely to be targeted with advertisements for high-paying jobs.
Lastly, we need to be critical when using and interpreting the outputs of algorithms. Algorithm outputs – even our daily Google search results – should not be accepted unquestioningly but, rather, viewed in light of algorithms’ tendency to reinforce existing societal biases. Blindly trusting algorithms to be fair is a recipe for disaster.