Section Two - Bias in technology
A story about hungry judges and Tay the Biased Bot (21 minutes)
A piece of the history of technological bias
From the AI Now Institute:
In the 1930s, Dr. Gertrude Blanch led the important Mathematical Tables Project, a nearly 450-person effort to compute logarithmic, exponential, and other calculation results essential to American government, military, finance, and science. After earning her doctorate in mathematics at Cornell, she led new approaches to computation and published volumes of tables and calculations in scientific journals.
Despite her contributions, Blanch did not appear as the author of the papers she wrote.
For the majority of her time on the project, her male supervisor Arnold Lowan instead received credit. This is a lasting injustice: women, Black, Brown, and Indigenous people, people with disabilities, and other historically excluded individuals have long made crucial contributions to science, technology, and engineering, yet their labor has been continuously erased.
These erasures deny contributors of their expertise, labor, community standing, and ultimately, their power. They have very real material consequences; contributions unrecognized are contributions that are not compensated. These erasures also rob us of our histories, building false narratives that justify White supremacist patriarchy as “natural” and “evidenced.”
The above quote illustrates that bias in technology is a widespread problem. Since this is a crash course, we cannot cover everything, yet we thought the example above was too important not to mention. The full article is listed under the added materials (section 6).
In the remainder of this section, we'll look at human bias and at bias in code and artificial intelligence.
Biased humans
Humans are biased.
Humans do not make rational decisions. Humans are the heroes of their own fabricated stories. Their memory is distorted. Humans will bend facts until they fit the way they view the world. Humans, in short, cannot be trusted. Not even really smart, try-to-be-objective humans like, for example, judges. There have been studies that seem to show that judges are more lenient just after lunch.
Hungry judges give harsher sentences.
So, you might say, please give us some software instead. Objective computers seem to have a lot of potential. Objective computers make the world fairer. After all, the world would be a better place if we let objective computers make decisions. If we had objective artificial intelligence sentencing people. Or hiring people. Or deciding whether someone gets a loan. Or an insurance policy. Then things would become fairer.
This is probably true. There is only one problem, which you already know if you did crash course four and/or five: there is no such thing as objective AI.
Watch this talk, in which documentary maker Robin Hauser asks these questions and uses some great examples such as Tay the Biased Bot, a swearing Watson, and Google as a cliché machine (12 minutes).
After this video you can play a fun family game: enter different terms into Google Images and immediately see which clichés we as a society attach to them. Just try: black man, white woman, small child, fit elderly, criminal, and so on ...
In her talk, Robin Hauser shows that the problem with artificial intelligence often lies in the data it is trained on. The machines are not programmed with prejudice; rather, a lack of diversity in the dataset, or bias within it, produces unwanted results. If you train a system on biased data, you get biased output.
Garbage in. Garbage out.
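To make this concrete, here is a minimal, hypothetical sketch (not from Hauser's talk): we invent a small, deliberately skewed "historical hiring" dataset and train an off-the-shelf classifier on it. All feature names and numbers are made up; the point is only that the model faithfully reproduces the bias it was fed.

```python
# A hypothetical "garbage in, garbage out" sketch: the dataset is invented
# and deliberately skewed against group 1, even though qualifications are
# distributed identically across both groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, size=n)            # 0 or 1: a protected attribute
qualification = rng.normal(0.0, 1.0, size=n)  # same distribution for both groups

# Historical hiring decisions: qualification matters, but group 1 was
# systematically hired less often. This is the "garbage in".
logits = 1.5 * qualification - 2.0 * group
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train a model on the biased history.
X = np.column_stack([group, qualification])
model = LogisticRegression().fit(X, hired)

# Ask the model about two equally qualified candidates.
candidates = np.array([[0, 1.0],    # group 0, qualification 1.0
                       [1, 1.0]])   # group 1, same qualification
print(model.predict_proba(candidates)[:, 1])
# The group 1 candidate gets a clearly lower predicted hiring probability:
# the model has learned the prejudice baked into its training data.
```

Nothing in this code "hates" group 1; the bias comes entirely from the historical decisions the model was asked to imitate.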
Of course, there are also a lot of systems that are biased by design. For example, the algorithms that curate newsfeeds or recommend YouTube videos are designed to keep you hooked (and show inflammatory content as a result). This is covered in detail in crash course two.
There is another, more technical, issue at play that also causes artificial intelligence built on neural networks to be biased. As we saw in crash course five, a neural network assigns certain weights to certain properties, and the weighted sum of those properties determines whether a particular neuron fires. That all sounds very technical, and it is, but the important thing to remember is that both the weights and the way the sum is calculated are worked out by the AI somewhere in the black box, and both can carry prejudices.
So even if you think the training data is not biased, the AI itself can still make all kinds of biased connections.
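As a rough illustration of what such a weighted sum looks like, here is a hypothetical single "neuron" deciding on a loan application. The property names, weights and threshold are all invented; in a real network they would be learned somewhere in the black box.

```python
# A hypothetical single neuron: every input property gets a weight, and the
# weighted sum (plus a bias term) determines whether the neuron "fires".

def neuron_fires(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return weighted_sum > 0

# An invented loan applicant, described by three properties:
# income (normalised), years at current address, postcode group.
applicant = [0.6, 0.3, 1.0]

# Invented weights standing in for what the network learned in the black box.
# If the training data happened to link this postcode group to defaults, the
# postcode weight can quietly dominate the decision, whatever the income.
weights = [0.4, 0.2, -1.5]
bias = 0.1

print(neuron_fires(applicant, weights, bias))  # False: rejected, mostly because of the postcode
```

Multiply this by thousands of neurons across many layers, and it becomes very hard to see which connections the AI has actually made.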
More bias
Above we talked about bias in AI, because this will only become more important as AI develops further and faster and makes increasingly important decisions for us and about us.
However, there is also bias in other technologies. For example, a large pickup truck can be biased if smaller people (often women) cannot drive it because they cannot reach the pedals.
In addition, we understand that terms such as 'diversity', 'inclusiveness' and 'unbiased' may sound normative, but what they mean is shaped by culture, politics and economics. At the current juncture, it therefore seemed useful to us to zoom in, in the next two sections, on two prominent present-day prejudices: race (section three) and gender (section four).
Takeaways from section two:
- Humans are often biased;
- AI is often biased;
- Advanced AI (like machine learning) is no guarantee of inclusivity; often the opposite;
- AI can be biased because it is trained with biased data;
- Or because there is bias somewhere in the black box;
- Thinking about bias in AI is very important in a world in which AI becomes more and more important.