Is AI Biased Or Is It Just Us?
--
We’ve come a long way. Women have the right to vote, can do everyday things (such as get a credit card or a job) without the permission of a male figure, and aren’t fired for becoming pregnant. Yet it seems there are still new ways for women to be impacted by gender bias. The culprit? AI.
This should sound odd, considering that AI is commonly depicted as a detached, dispassionate machine driven by facts. Yet, as we rely on it more and more, we are beginning to realize that something is a little off: when we use machine learning, our own biases get incorporated into the algorithm.
First, we begin with data: anything relevant to your program. Bank transactions, pictures of people’s faces, the number of spots on your aunt’s cats. This becomes the training data.
From there, you choose a machine learning model, supply the training data you picked out, and let the computer find patterns and make predictions.
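To make that workflow concrete, here is a minimal sketch using scikit-learn. Everything in it is invented for illustration: the feature values, the labels, and the choice of a decision tree as the model.

```python
# A minimal sketch of the data -> model -> predictions workflow described
# above, using scikit-learn. All values are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Step 1: gather data relevant to the task (features and labels).
X_train = [[0.2, 14], [0.9, 3], [0.5, 8], [0.8, 4]]  # e.g. [color, firmness]
y_train = ["unripe", "ripe", "unripe", "ripe"]

# Step 2: choose a machine learning model and supply the training data.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Step 3: the model has found patterns; use them to make predictions.
print(model.predict([[0.7, 5]]))  # likely "ripe"
```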
An incomplete or skewed dataset can hurt the algorithm’s performance, since there isn’t enough information to make reliable predictions. A common cause is a lack of diversity in the dataset, which leaves the algorithm unable to handle every input it will face.
For example, let’s say you’re building an app that is supposed to recognize the age of a berry. Most of the data you input consists of blackberries and raspberries; only 10% is blueberries. Now, some innocent user is trying to figure out how old their precious blueberry is. But because the user is applying the model to something that wasn’t well represented in the dataset, the model is much more likely to make mistakes. It was trained on far fewer blueberries, so it never really learned how to identify them.
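You can reproduce this effect with entirely synthetic data. The sketch below, a toy version of the blueberry problem, trains a classifier on a 90/10 split of two made-up groups and measures accuracy separately for each; the underrepresented group typically fares noticeably worse.

```python
# A toy illustration of the blueberry problem. All numbers are synthetic;
# the point is the imbalance, not the berries.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 900 "blackberry/raspberry" samples vs. only 100 "blueberry" samples,
# with overlapping feature distributions so the task isn't trivial.
X_major = rng.normal(loc=0.0, scale=1.0, size=(900, 2))
X_minor = rng.normal(loc=1.0, scale=1.0, size=(100, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 900 + [1] * 100)  # 1 = blueberry

model = LogisticRegression().fit(X, y)

# Accuracy measured separately per class: the model leans heavily toward
# the majority class it saw most often during training.
pred = model.predict(X)
print("majority accuracy:", (pred[y == 0] == 0).mean())
print("minority accuracy:", (pred[y == 1] == 1).mean())
```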
This often happens in real programs. A real-life example is cardiovascular disease. For decades, it was mostly thought of as a man’s illness. Even today, symptom-checker apps built on male-dominated data may suggest to a woman that the pain in her left arm and back is due to depression. She might decide she’s fine when she should be seeking medical attention. A male user with the same symptoms, on the other hand, is more likely to be told to contact a doctor about a possible heart attack.
Another reason for AI to be biased is its modelling techniques. For a long time, speech-to-text technology performed much worse for female voices than for male ones. This was because the way speech was analyzed and modelled was more accurate for traits most prominent in men: taller speakers with longer vocal cords and lower-pitched voices.
This is a good time to say (or I guess write) that this bias affects not only women but truly any other demographic: immigrants, racialized groups, ethnic minorities. As soon as a characteristic is used as a defining factor and the data doesn’t give the full picture, people are affected. There are cases of AI failing to recognize skin cancer on darker skin tones, and of AI rating someone as more dangerous the darker their skin tone.
It’s quite apparent that our biases are being transferred to the AI, not that the AI is developing ideas of its own. Where do we go from here?
We can begin by adding more data to improve accuracy for underrepresented groups. We also need more diversity in the field of computer programming: the more diverse the programmers, the more mistakes are brought to light before they ship and impact people.
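When more real data isn’t available, a related stopgap is to make the underrepresented examples count more during training. Here is one hedged sketch of that idea, reusing the same kind of synthetic imbalanced data as above; scikit-learn’s class_weight="balanced" option reweights classes inversely to their frequency.

```python
# A sketch of rebalancing: class_weight="balanced" makes underrepresented
# examples weigh more during training, which often improves accuracy on
# the minority group. The data is synthetic, as before.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 900 + [1] * 100)

plain = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

for name, model in [("plain", plain), ("balanced", balanced)]:
    recall = (model.predict(X)[y == 1] == 1).mean()
    print(f"{name}: minority-group accuracy = {recall:.2f}")
```

Note the trade-off: reweighting usually raises minority-group accuracy at some cost to majority-group accuracy, which is itself a fairness decision someone has to make.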
Another thing we can do is use post-processing techniques, which transform some of the model’s predictions after they are made in order to satisfy a fairness constraint. Sometimes a model reaches an unintended conclusion at random, or because of an intrinsic limitation. By checking where the predictions go wrong, we can correct for the bias after the fact.
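Here is a minimal sketch of one common post-processing idea: after training, pick a separate decision threshold per demographic group so that each group receives positive decisions at the same rate (a simple demographic-parity-style constraint). The scores, group labels, and the 0.7 “bias” factor are all invented for illustration.

```python
# A sketch of threshold-based post-processing. Everything here is synthetic:
# "scores" stand in for a trained model's probability outputs.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 1000)   # model's raw scores for 1000 people
group = rng.integers(0, 2, 1000)   # two demographic groups, 0 and 1
scores[group == 1] *= 0.7          # simulate a model biased against group 1

def group_thresholds(scores, group, target_rate=0.3):
    """Pick a per-group threshold so each group gets the same positive rate."""
    thresholds = {}
    for g in np.unique(group):
        # the (1 - target_rate) quantile flags the top target_rate fraction
        thresholds[g] = np.quantile(scores[group == g], 1 - target_rate)
    return thresholds

thresholds = group_thresholds(scores, group)
decisions = np.array([scores[i] >= thresholds[group[i]]
                      for i in range(len(scores))])

# Both groups now receive positive decisions at roughly the same rate,
# even though the raw scores were skewed against group 1.
for g in (0, 1):
    print(f"group {g} positive rate: {decisions[group == g].mean():.2f}")
```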
The need for AI will only grow in the years to come. The sooner we figure out how to keep our own biases out of AI’s conclusions, the sooner it can truly help us without hurting others.
Feel free to contact me and chat more about the subject: olivka.sliuka@gmail.com