Bias in Artificial Intelligence (AI) is a systematic anomaly in the output of algorithms produced by trained machines. It can stem from prejudiced assumptions made during the algorithm development process or from prejudices in the training data.
Humans are Biased
As a starting point, it is important to understand that not only software but humans themselves are biased. This can be explained with Joshua Greene’s Moral Tribes theory, which builds on how our brain works – the dual-process theory of cognition. The human brain has two modes. Mode 1 (fast thinking) lets humans make intuitive, emotional, and rule-based decisions. In mode 2 (slow thinking), humans decide in a utilitarian, deliberative, and rational manner. While mode 1 focuses on the moral understanding of our own values and on living within our moral tribe, mode 2 is consensus-oriented and aims at living together prosperously and peacefully. Human bias comes from mode 1. We cannot remove our biases, but we must try to fight and eliminate them.
Different Types of Biases
There are different types of AI bias. Cognitive biases affect how individuals make decisions and can seep into machine training processes: if an algorithm is trained with input data that is already biased, it replicates the bias in that data. In addition, the programmer’s own bias influences the algorithm.
Another type arises from incomplete data. If data is incomplete, it may not be representative of the whole population and may therefore introduce bias.
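To make the replication effect concrete, here is a minimal sketch with invented data: a trivial "model" that learns the majority hiring decision per group from biased historical records inevitably reproduces the bias it was trained on. All names and numbers are hypothetical.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, hired).
# Group B was hired far less often in the past -- the data is biased.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """Learn the majority outcome per group -- and with it, the bias."""
    counts = defaultdict(lambda: [0, 0])  # group -> [negatives, positives]
    for group, hired in records:
        counts[group][hired] += 1
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} -- the learned rule mirrors the biased data
```

Nothing in the training step is malicious; the bias enters purely through the data, which is exactly why biased input leads to biased output.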
Biases in algorithms often lead to discrimination. It becomes problematic when algorithms use personal information such as the kind of device we are using, our search history, our battery status, our skin brightness, etc. In those cases, algorithms undermine the human right to non-discrimination. Such algorithms are used, for example, for personalized algorithmic pricing or in AI recruiting. Often we do not even know that algorithms discriminate, because algorithms are not transparent. Algorithmic transparency is therefore a key feature in the fight against discrimination.
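As a purely hypothetical sketch of what non-transparent personalized pricing could look like, the snippet below adjusts a price based on personal signals such as device type and battery status. No real vendor’s algorithm is shown here; every signal and multiplier is invented for illustration.

```python
BASE_PRICE = 100.0

def quote(device: str, low_battery: bool) -> float:
    """Return a personalized price (hypothetical logic)."""
    price = BASE_PRICE
    if device == "high-end-phone":
        price *= 1.15   # assumed higher willingness to pay
    if low_battery:
        price *= 1.10   # urgency inferred from battery status
    return round(price, 2)

print(quote("high-end-phone", low_battery=True))   # 126.5
print(quote("budget-phone", low_battery=False))    # 100.0
```

From the outside, two users simply see different prices; without transparency about the inputs, the discrimination stays invisible.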
There is no quick fix that removes all biases, but minimizing bias will be critical if artificial intelligence is to reach its potential and increase people’s trust in such systems. Six practical steps can help:
1) Be aware of contexts in which AI can help correct for bias and those in which there is a high risk of AI exacerbating bias.
2) Establish processes and practices to test for and mitigate bias in AI systems.
3) Engage in fact-based conversations about potential biases in human decisions.
4) Fully explore how humans and machines can best work together.
5) Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach.
6) Invest more in diversifying the AI field itself.
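Step 2 – testing for bias – can be sketched with a simple check on invented decision data: the disparate-impact ratio compares the selection rate of a disadvantaged group with that of an advantaged group, where values below roughly 0.8 (the "four-fifths rule" used in US employment practice) are a common red flag.

```python
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical decisions made by an AI system for two groups.
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 80% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% selected

ratio = selection_rate(group_b) / selection_rate(group_a)
print(round(ratio, 2))  # 0.5 -> below 0.8, a warning sign of bias
```

Such a metric does not prove discrimination on its own, but embedding checks like this into development processes is one concrete way to operationalize step 2.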
Bias is everyone’s responsibility. Together we should fight biases in both the real and the digital world. Raising awareness is the first step.
I prefer the term “trained machines” to the term “machine learning”, as it emphasizes something a human does with a machine rather than something the machine does by itself.