
AI Ethics: Understanding the Basis of Bias

I've been involved in a lot of conversations about the ethics of Artificial Intelligence (AI) recently. In my current role as a senior advisor and architect with the HPE AI services team, I have spoken with customers, analysts and multiple vendors, all of whom have their own views on AI ethics. Some organizations have formed task forces to address the issues and reach some form of conclusion about what AI ethics means for them and what actions need to be taken. Many commentators, including some HPE AI experts, have weighed in on deep questions such as whether it is moral to imbue machines with consciousness (see, for example, this article, and check out the video at the end of the piece). Others have argued that all such discussions are just a distraction, or even, as some have suggested, simply the IT industry reacting to criticism of failures such as the privacy leaks that led to the Cambridge Analytica saga. These commentators claim that the industry is determined to show that lessons have been learned, and that users' best interests are now being considered.

I'd like to take a step back from these larger questions and look at one AI issue with important ethical implications: data bias.

The issue largely centers on the trend of using AI, and particularly machine learning algorithms, to make decisions that can profoundly affect people's lives. This might include, for example, financial decisions, such as whether a person is granted a loan. Or it could include even more impactful things, such as a prison inmate's suitability for parole, or the identification of suspects in criminal activity. AI systems don't always deliver appropriate decisions when they're focused on human activities. So what lies behind these sorts of miscalculations? The answer is data, and more importantly, bias in that data.


Where does bias begin?

We cannot be certain which features a deep neural network has decided to prioritize in order to build its model. There are usually so many features involved, and so many potential iterations of the model to choose from, that determining why an inference led to the result it did is a very complex matter. Even when we build in the capability to visualize these weightings, it can be very hard to pin down the exact root of an outcome.
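To make that concrete, one common first step is permutation importance: shuffle each input feature in turn and measure how much the model's accuracy drops. Here's a minimal sketch using scikit-learn; the dataset and model are synthetic stand-ins, not drawn from any example in this article.

```python
# Minimal sketch: estimating which features a trained model leans on.
# The data and model are synthetic stand-ins, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy.
# A large drop suggests the model relies heavily on that feature,
# but it says nothing about *why* the feature predicts the label.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Even with a ranking like this in hand, the hard question remains open: a high-importance feature may be a genuine signal, or it may be an artifact of how the training data was collected.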

One example is a model that was designed to find certain objects in forest terrain. The model was trained to identify when the objects were present, and it worked well, until it didn't. One batch of data produced false positives, and the cause eluded the data scientists involved for some time. Ultimately, when they examined the training data, it became apparent that a certain tree was present in all the images of one object, and it was the tree that was being detected. There are many examples of this type of training-data-induced error, and every data scientist has a story like this one.
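Here's a toy sketch of that failure mode, with all numbers invented: a "tree" feature tracks the label almost perfectly in the training batch, so the model latches onto it, then collapses on a new batch where the correlation disappears.

```python
# Toy sketch of the "tree in every photo" failure. All numbers invented.
# Feature 1 is a spurious proxy that tracks the label only in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

y_train = rng.integers(0, 2, n)
X_train = np.column_stack([
    y_train + rng.normal(0, 2.0, n),   # weak genuine signal
    y_train + rng.normal(0, 0.1, n),   # "the tree": near-perfect proxy
])

y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    y_test + rng.normal(0, 2.0, n),    # genuine signal still present
    rng.normal(0.5, 0.1, n),           # proxy no longer tracks the label
])

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))    # looks excellent
print("new-batch accuracy:", model.score(X_test, y_test))  # near chance
```

The training score alone looks excellent; only data gathered outside the original collection conditions exposes the shortcut.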

As it turns out, many of our historical data archives contain this sort of bias. It can be a result of the data features that were selected for collection at the time: a person, with their own biases, decided what they considered relevant for the purpose. That was a manual assumption based on the technical, environmental and cultural context of the day. The likely consequence is that relevant data was never persisted, while irrelevant data that skews the model is dominant.

Hunting the bias beast

How do we correct those biases? Sadly, the answer is not an easy one. And it involves yet more human assumption and, by extension, more bias.

Let's look at an example of how we might overlook or even inject bias. Take a question close to my heart (especially with the Rugby Union World Cup just around the corner!): who is the best rugby union player in the world? We could look at the number of appearances each player has made, combined with the points scored, and come up with a pretty good idea. The problem with that is the data sources themselves. Are all appearances included? Does the data contain information on assists, or yards run, or passes completed? There are essentially two major issues: we must first agree on what "best" means, and then on what metrics are relevant to that decision. The meaning of "best" is (at best!) an opinion. It may be a collaborative opinion, but it's opinion nonetheless. And as we've established, opinion is bias, good or bad.
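To make the point concrete, here's a toy sketch with entirely fictional players and statistics: three defensible definitions of "best" crown three different players.

```python
# Toy sketch: three defensible definitions of "best" pick three
# different players. Names and statistics are entirely fictional.
players = [
    # (name, appearances, points, assists)
    ("Player A", 120, 640, 15),
    ("Player B",  60, 580, 40),
    ("Player C",  95, 300, 70),
]

by_points_per_game = max(players, key=lambda p: p[2] / p[1])
by_total_points = max(players, key=lambda p: p[2])
by_assists = max(players, key=lambda p: p[3])

print("Best by points per game:", by_points_per_game[0])  # Player B
print("Best by total points:", by_total_points[0])        # Player A
print("Best by assists:", by_assists[0])                  # Player C
```

The code is trivial; the bias is not in the code at all, but in whichever of these metrics we decided counts as "best".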

How about we instead look at a player's record of being selected to represent their country? For this, we could look at similar statistics and train our model on the stats of players who have been selected. However, there are again things we might omit or include that would influence the result. The overall pattern of the national team could influence selection. You could be voted the best player in a local team, or even a league, but that isn't a guarantee that the national coach will see you as a fit for the squad. The style of play of all the players would be a factor, and that is hard to quantify, so capturing that data would require yet more opinion.

The next problem is that we are excluding a large number of players based purely on the availability of data. What about players in countries that do not keep records at that level, or that have no leading national league of note but do have excellent players in regional leagues?
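The same toy setup (again, entirely fictional data) shows how availability alone can change the answer: if detailed statistics are collected only for national leagues, the genuinely best player may never even be in the running.

```python
# Toy sketch of availability bias: detailed statistics exist only for
# national leagues, so the regional star is invisible. Fictional data.
players = [
    {"name": "Player A", "league": "national", "rating": 82},
    {"name": "Player B", "league": "national", "rating": 88},
    {"name": "Player C", "league": "regional", "rating": 93},
]

# Only national-league players have a stats feed we can train on.
observed = [p for p in players if p["league"] == "national"]

best_observed = max(observed, key=lambda p: p["rating"])
best_actual = max(players, key=lambda p: p["rating"])

print("Best we can see:", best_observed["name"])  # Player B
print("Actual best:", best_actual["name"])        # Player C
```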

So, putting aside my own claim to the crown of Best Player, you can see that even with the ability to infer details from all of the provided data, there are multiple opportunities to inject bias, and in some cases it's unavoidable.

This is a somewhat trivial example, but it does build a picture of how, even when trying to correct for bias, we may end up introducing more.

The answer lies with the humans

We live in a world of garbage-in, garbage-out. Our data for making judgements on social or personal questions is woefully skewed and incomplete. We should be using AI systems only to guide us, while always applying a large measure of human common sense to the output. Blaming artificial intelligence for amplifying our own biases or poor judgement is not the way to make things better. We need to build representative and diverse pools of talent to work in this field. That will ensure that we continue to challenge and question the results from these fledgling models and the bias they reflect.

What we should not be doing is shrinking away from artificial intelligence and machine learning in some sort of "techno-panic" (to quote journalist and author Jeff Jarvis). Talking about AI ethics in excessively negative, fear-mongering terms is detrimental and demeaning to those who work in the field. Instead, we need to manage expectations of what AI is currently capable of. We need to be cognisant of its shortcomings, for which we ourselves are ultimately the source. And we need to work together to build processes and communities that can better guide AI innovation in a fair and inclusive way.

The current focus on the ethics of AI should be seen in the context of a larger issue: ethics in technology. These are not new questions; we have faced them at every technological advancement. When early humans first picked up weapons to enable a more efficient hunt, it wasn't long before somebody realized it was more efficient still to turn those weapons on the hunter and take all the spoils! We have always faced the questions: "How can this be used for good, and how will it likely be used otherwise?"

Technology is not inherently good or bad. It's our influence, and our application of it, that makes the difference. Let's make sure we evolve ourselves at the same rate as our technology.

Here are some other resources that may be of interest to you:

HPE article: 4 obstacles to ethical AI (and how to address them)

McKinsey & Company: The ethics of artificial intelligence

MIT News article: Ethics, computing, and AI: Perspectives from MIT

WIRED article: Will AI Enhance or Hack Humanity? Historian Yuval Noah Harari and computer scientist Fei-Fei Li discuss the promise and perils of the technology with WIRED editor in chief Nicholas Thompson.

About the Author

Glyn_Bowden

Senior technology leader and strategist, delivering opportunities, value, and outcomes through applied technology and artificial intelligence.