AI Ethics: Understanding the Basis of Bias
I've been involved in a lot of conversations about the ethics of Artificial Intelligence (AI) recently. In my current role as a senior advisor and architect with the HPE AI Services team, I have spoken with customers, analysts and multiple vendors who all have their own views on AI ethics. Some organizations have formed task forces to address the issues and reach some conclusion about what AI ethics means for them and what actions they need to take. Many commentators, including some HPE AI experts, have weighed in on deep questions such as whether it is moral to imbue machines with consciousness (see, for example, this article, and check out the video at the end of the piece). Others have argued that all such discussions are just a distraction, or even, as some have suggested, simply the IT industry reacting to criticism of failures such as the privacy leaks that led to the Cambridge Analytica saga. These commentators claim that the industry is determined to show that lessons were learned, and that users' best interests are now being considered.
I'd like to take a step back from these larger questions and look at one AI issue with important ethical implications: data bias.
The issue centers largely on the trend of using AI, and particularly machine learning algorithms, to make decisions that can greatly affect people's lives. This might include financial decisions, such as whether a person is granted a loan. Or it could include even more consequential matters, such as a prison inmate's suitability for parole, or the identification of suspects in criminal activity. AI systems don't always deliver appropriate decisions when they're focused on human activities. So, what lies behind these sorts of miscalculations? The answer is data, and more specifically, bias in that data.
Where does bias begin?
We cannot be certain which features a deep neural network has decided to prioritize in order to build its model. There are usually so many features involved, and so many potential iterations of the model to choose from, that determining why an inference led to the result it did is a very complex matter. Even when we build in the capability to visualize these weightings, it can be very hard to pinpoint the root of the outcome.
One example is a model that was designed to find certain objects in forest terrain. The model was trained to identify when the objects were present, and it worked well, until it didn't. One batch of data produced false positives, and the cause eluded the data scientists involved for some time. Ultimately, when they examined the training data, it became apparent that a certain tree was present in all the images of one object, and it was the tree that was being detected. There are many examples of this type of training-data-induced error, and every data scientist has a story like this one.
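The tree story is a classic spurious-correlation failure, and it is easy to reproduce in miniature. The sketch below (entirely invented data, not the actual forestry model) trains a simple hand-rolled logistic regression on two features: a noisy "object texture" cue and a "tree present" flag that happens to correlate perfectly with the label in the training set. The model leans on the tree, so images that contain the tree but no object come back as false positives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical training set: the label is whether the object is present
object_present = rng.integers(0, 2, n).astype(float)
texture = object_present + rng.normal(0, 1.5, n)  # weak, noisy real cue
tree = object_present.copy()                      # confound: tree appears
                                                  # in every object image
X_train = np.column_stack([texture, tree])
y_train = object_present

# Minimal logistic regression via gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / n
    b -= 0.5 * np.mean(p - y_train)

print("weights (texture, tree):", w)  # tree weight dominates

# New images where the correlation breaks: tree present, object absent
X_shift = np.column_stack([rng.normal(0, 1.5, 500), np.ones(500)])
p_shift = 1 / (1 + np.exp(-(X_shift @ w + b)))
print("false-positive rate on tree-only images:", np.mean(p_shift > 0.5))
```

Nothing about the training loop is wrong here; the bias lives entirely in the data the model was given.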
As it turns out, many of our historical data archives contain this sort of bias. It can result from the data features that were selected for collection at the time, when a person, with their own biases, decided what they thought was relevant for the purpose. That was a manual assumption based on the technical, environmental and cultural context of the day, which likely means that relevant data was never persisted, or that irrelevant data, which skews the model, is dominant.
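The effect of deciding up front what is "worth keeping" is easy to demonstrate. In this toy sketch (synthetic numbers, purely illustrative), an archivist persisted only records above an arbitrary relevance threshold; any statistic computed later from the archive is skewed relative to the full population the archive was meant to describe.

```python
import numpy as np

rng = np.random.default_rng(1)

# The full population of some historical measurement
population = rng.normal(50, 15, 10_000)

# A past curation decision: only "relevant-looking" records
# (above an arbitrary threshold) were ever persisted
archived = population[population > 45]

print("true population mean:", round(population.mean(), 1))
print("mean seen by a model trained on the archive:",
      round(archived.mean(), 1))
```

A model trained only on the archive never sees the discarded records, so it inherits the archivist's assumption as ground truth.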
Hunting the bias beast
How do we correct those biases? Sadly, the answer is not an easy one. And it involves yet more human assumption and, by extension, more bias.
Let's look at an example of how we might overlook or even inject bias. Take a question close to my heart (especially with the Rugby Union World Cup just around the corner!): Who is the best rugby union player in the world? We could answer this by looking at the number of appearances each player has made, combined with the points scored, and come up with a pretty good idea. The problem with that is the data sources themselves. Are all appearances included? Does the data contain information on assists, or yards run, or passes completed? There are essentially two major issues: we must first agree on what "best" means, and then on which metrics are relevant to that decision. The meaning of "best" is (at best!) an opinion. It may be a collaborative opinion, but it's opinion nonetheless. And as we've established, opinion is bias, good or bad.
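The way the definition of "best" drives the answer can be shown in a few lines. With invented stats for three hypothetical players, each reasonable-sounding metric crowns a different player; the "data-driven" answer is really the opinion baked into the metric.

```python
# Invented stats for three hypothetical players
players = {
    "Player A": {"caps": 120, "points": 700, "assists": 90},
    "Player B": {"caps": 60,  "points": 600, "assists": 20},
    "Player C": {"caps": 90,  "points": 300, "assists": 150},
}

def rank(metric):
    """Rank players best-first under a chosen definition of 'best'."""
    return sorted(players, key=lambda p: metric(players[p]), reverse=True)

by_points = rank(lambda s: s["points"])                  # career total
by_points_per_cap = rank(lambda s: s["points"] / s["caps"])  # efficiency
by_playmaking = rank(lambda s: s["assists"])             # creativity

print(by_points[0])          # Player A
print(by_points_per_cap[0])  # Player B
print(by_playmaking[0])      # Player C
```

Same data, three defensible metrics, three different "best" players.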
How about we instead look at a player's record of being selected to represent their country? For this, we could gather similar statistics and train our model on the stats of the players who have been selected. However, there are again things we might omit or include that would influence the result. The overall playing pattern of the national team could influence selection: you could be voted the best player in a local team, or even a league, but that is no guarantee that the national coach will see you as a fit for the squad. The style of play of all of the players would be a factor, and that is hard to quantify. So, capturing that data would require yet more opinion.
The next problem is that we are excluding a large number of players based on the availability of data. What about players in countries that do not keep that level of record, or that lack a leading national league of note but have excellent players in regional leagues?
So, putting aside my own claim to the crown of Best Player, you can see that even with the ability to infer details from all of the provided data, there are multiple opportunities to inject bias; in fact, in some cases, it's unavoidable.
This is a somewhat trivial example, but it builds the picture of how, even when trying to correct for bias, we may end up introducing more.
The answer lies with the humans
We live in a world of garbage-in, garbage-out. Our data for making judgements on social or personal questions is woefully skewed and incomplete. We should be using AI systems only to guide us, while always applying a large measure of human common sense to the output. Blaming artificial intelligence for amplifying our own biases or poor judgement is not the way to make things better. We need to build representative and diverse pools of talent to work in this field. That will ensure that we continue to challenge and question the results from these fledgling models and the bias they reflect.
What we should not be doing is shrinking away from artificial intelligence and machine learning in some sort of "techno-panic" (to quote journalist and author Jeff Jarvis). Talking about AI ethics in excessively negative, fear-mongering terms is detrimental and demeaning to those who work in the field. Instead, we need to manage expectations of what AI is currently capable of. We need to be cognisant of its shortcomings, for which we ourselves are ultimately the source. And we need to work together to build processes and communities that can better guide AI innovation in a fair and inclusive way.
The current focus on the ethics of AI should be seen in the context of a larger issue: ethics in technology. These are not new questions; we have faced them with every technological advance. When early humans first picked up weapons to enable a more efficient hunt, it wasn't long before somebody realized it was more efficient still to use those weapons on the hunter and take all the spoils! We have always faced the questions: how can this be used for good, and how will it likely be used otherwise?
Technology is not inherently good or bad. It's our influence, and our application of it, that makes the difference. Let's make sure we evolve ourselves at the same rate as our technology.
Here are some other resources that may be of interest to you:
- HPE article: 4 obstacles to ethical AI (and how to address them)
- McKinsey & Company: The ethics of artificial intelligence
- MIT News article: Ethics, computing, and AI: Perspectives from MIT
- WIRED article: Will AI Enhance or Hack Humanity? Historian Yuval Noah Harari and computer scientist Fei-Fei Li discuss the promise and perils of the transformative technology with WIRED editor in chief Nicholas Thompson