Advancing Life & Work
Curt Hopkins

What is Trustworthy AI and why is everyone talking about it?

OK, perhaps I’m exaggerating. Maybe not everyone has heard of it. So we’ll tell you what it is and why everyone should be, and probably soon will be, discussing it.

The AI Research group at Hewlett Packard Labs, including Paolo Faraboschi, Hewlett Packard Enterprise Fellow and Director of the AI Research Lab, and distinguished technologists Suparna Bhattacharya and Soumyendu Sarkar, has been working on this mission and how to put it into practice. So I asked them to define it.


“It's AI you can trust,” says Faraboschi. “One of the challenges of AI is that, unlike some of the rule-based systems people have used before, in which humans would put in conditional rules and let the program roll, AI suffers from a couple of very distinctive issues.”

First, AI makes decisions based on the data used to train it. Is it the right data? Is it useful? Are there insights you can derive from it? And is it an unbiased expression of the part of the world you wish to examine?

“If we train an AI system to help a company pick the next CEO, it turns out that 93 percent of the top companies’ leaders are men,” according to Bhattacharya. A description of how things are is not the same as a description of how you believe things ought to be.

“If you make decisions based on that, you can easily end up making the wrong decisions,” says Faraboschi. “So Trustworthy AI is basically the set of techniques that make AI trusted, secure, private, robust, and human-focused. In other words, it is the technology that makes AI follow a set of established ethical principles.”

 

The challenge of humanity
Everyone agrees that AI needs a guiding set of ethical principles, according to Faraboschi.

“In the last few years, several of the world's large companies have come up with their own classification,” he says. However, there isn’t a universal set of principles that applies everywhere.

At HPE, the AI Research Lab has been working with the company’s Ethics and Compliance Office to derive a set of five principles of ethical AI.

  1. Privacy-enabled and secure – respect individual privacy and be secure
  2. Human focused – respect human rights and be designed with mechanisms and safeguards to support human oversight and prevent misuse
  3. Inclusive – minimize harmful bias and support equal treatment
  4. Responsible – be designed for responsible and accountable use, inform an understanding of the AI, and enable outcomes to be challenged
  5. Robust – be engineered to allow for quality testing, include safeguards to maintain functionality, and minimize misuse and the impact of failure

 

The human aspect of technology
Most people believe bias is the greatest challenge in AI. Faraboschi, however, believes that an equally important challenge is to make sure humans can remain in the loop in AI-driven decisions.

“It is very tempting to replace experts with an AI engine, but we always have to keep in mind the limitations of the technology in handling corner cases that may never have been seen during training, or captured in the underlying model,” he says.

This is particularly important when AI is used in mission-critical applications, where the effect of a bad decision can be catastrophic. Safeguards need to be in place so that human oversight can always override an AI decision.
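What might such a safeguard look like in practice? Here is a minimal sketch in Python, assuming a scikit-learn-style classifier that exposes predict_proba; the confidence threshold and the escalate_to_human hook are hypothetical placeholders, not HPE's implementation.

```python
# Minimal human-in-the-loop safeguard (illustrative sketch only).
# Assumes a scikit-learn-style classifier exposing predict_proba;
# CONFIDENCE_THRESHOLD and escalate_to_human() are hypothetical.

import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must review the decision

def escalate_to_human(sample, probabilities):
    """Placeholder: route the case to a human reviewer's queue."""
    print(f"Escalating for human review (top probability {probabilities.max():.2f})")
    return None  # decision deferred until a person signs off

def decide(model, sample):
    probs = model.predict_proba(sample.reshape(1, -1))[0]
    if probs.max() < CONFIDENCE_THRESHOLD:
        return escalate_to_human(sample, probs)  # the human path always wins
    return int(np.argmax(probs))                 # confident, automated path
```

The design point is that the override is structural: low-confidence cases never reach the automated branch at all.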

 

The benefits of Trustworthy AI
“Let’s say you’re a physician and you’re using an AI that infers from an MRI image or an x-ray what sort of disease it is detecting,” posits Sarkar. “As a user, you do not just want to know what the disease is. You also want to know how the machine learning model came to that decision.” This is the human element of double-checking the AI’s “thought process.”
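One common, model-agnostic way to peek at that “thought process” is occlusion analysis: mask regions of the image and see how much the prediction changes. The sketch below assumes model(images) returns class probabilities for a batch; it illustrates the general technique, not the Labs’ specific tooling.

```python
# Occlusion-based explanation for an image classifier (generic technique).
# Assumes `model(images)` returns class probabilities for a batch of images.

import numpy as np

def occlusion_map(model, image, target_class, patch=8):
    """Score each patch by how much masking it drops the target probability."""
    h, w = image.shape[:2]
    base = model(image[None])[0][target_class]     # unmasked probability
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0   # occlude one region
            p = model(masked[None])[0][target_class]
            heat[i // patch, j // patch] = base - p  # big drop = important region
    return heat  # high values mark image regions the prediction depends on
```

For a chest x-ray, a physician would expect the hot regions to sit over the anatomy relevant to the diagnosis; a heat map concentrated on a scanner artifact would be a warning sign.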

Consider what you expect when you hire an employee, he says. One thing you require is accountability. Part of the trustworthiness in Trustworthy AI lies in ensuring that the system is doing its job and that there is something resembling accountability, built in as well as human-derived.

If an AI is “untrustworthy” it could give you the wrong answer. In some situations that can be positively fatal. But it may also, notes Bhattacharya, give you the right answer for the wrong reason. Traceability helps limit, or eliminate, the risk of both of those negative outcomes.

Hewlett Packard Labs’ Trustworthy AI research is exploring techniques that enable a user to “embed elements in the AI that allow you to track the decision-making process and thereby to improve trustworthiness overall,” says Bhattacharya, as well as to create AI models that are in and of themselves more trustworthy.

“How do we make it easier for users to build models that are trustworthy?” she asks. “How do you make it easier to look all the way through the data and then provide feedback to actually improve it in practice?”
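A small illustration of that idea is recording provenance at training time, so any later decision can be traced back to the exact data and settings behind the model. The field names and JSON-lines store below are hypothetical stand-ins for a real lineage or ML-metadata system.

```python
# Toy provenance record written at training time (illustrative only).
# Field names and the JSON-lines "store" are hypothetical stand-ins.

import hashlib, json, time

def fingerprint(path):
    """Content hash of a dataset file, so 'which data?' has a checkable answer."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_lineage(model_id, dataset_paths, hyperparams, out="lineage.jsonl"):
    entry = {
        "model_id": model_id,
        "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "datasets": {p: fingerprint(p) for p in dataset_paths},
        "hyperparams": hyperparams,
    }
    with open(out, "a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only audit trail
    return entry
```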

This issue is also closely connected to the notion of bias. Tracking the origin of bias in an AI requires you to understand what Faraboschi calls “the characteristics of the data sets” which you used to train the system.

“It may help to consider a concrete example, such as vaccine drug trials,” he says. “Imagine that someone says, well, this vaccine only works 72 percent of the time. You have to figure out both why and how. What was the population? Was it randomly chosen? Was it self-selected? Was it correlated with some other factor that would skew the results in a particular direction? In regulated spaces like drug trials, data science can track those parameters across the life cycle of the experiment.”

Now imagine doing that in an AI-driven lifecycle, where trillions of samples are used to train deep learning models containing billions of parameters, across multiple organizations and different applications. Advances in novel data foundation mechanisms to capture and use this information intelligently become essential.
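In the spirit of the trial example, a toy composition check might compare each group’s share of a training set against a reference population and flag the skew. The column name and reference shares below are invented for illustration.

```python
# Toy dataset-composition check (illustrative; column names are invented).
# Compares observed group shares against expected population shares.

import pandas as pd

def composition_report(df, column, reference):
    """Flag groups that are over- or under-represented versus a reference."""
    observed = df[column].value_counts(normalize=True)
    report = pd.DataFrame({"observed": observed, "expected": pd.Series(reference)})
    report["skew"] = report["observed"] - report["expected"]
    return report.sort_values("skew")

# Example: a cohort that under-samples older participants shows negative skew.
df = pd.DataFrame({"age_band": ["18-40"] * 70 + ["41-65"] * 25 + ["65+"] * 5})
print(composition_report(df, "age_band", {"18-40": 0.40, "41-65": 0.40, "65+": 0.20}))
```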


The mechanism of Trustworthy AI
Trustworthy AI is built around two things: analysis for trust and design for trust, says Sarkar.

Think of analysis for trust as the AI equivalent of crash-testing a car. There is an entire industry, and related technology, dedicated to safety-testing a car, including virtual simulations and physical crash tests. In the AI space, a model is like a car – it should work, but you really have to stress it to the limit to understand how it will behave in every possible situation. So, analysis for trust includes techniques that look for bias, identify vulnerabilities, explore corner cases, and point to possible remediation approaches, such as augmenting the data set for re-training.
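As a flavor of what that crash-testing can look like in code, here is a toy robustness probe that perturbs inputs with increasing noise and measures how often the model’s predictions flip. It assumes a classifier with a scikit-learn-style predict method; real analysis-for-trust tooling goes much further (bias audits, adversarial search, corner-case mining).

```python
# Toy robustness probe: how stable are predictions under input noise?
# Assumes a scikit-learn-style classifier with a .predict() method.

import numpy as np

def stability_under_noise(model, X, noise_levels=(0.01, 0.05, 0.1), trials=20):
    """Return the fraction of predictions that stay unchanged per noise level."""
    base = model.predict(X)
    results = {}
    for sigma in noise_levels:
        unchanged = 0.0
        for _ in range(trials):
            noisy = X + np.random.normal(0.0, sigma, X.shape)
            unchanged += np.mean(model.predict(noisy) == base)
        results[sigma] = unchanged / trials  # 1.0 means fully stable
    return results
```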

Design for trust is the next step. Here, an AI design platform can automatically build a system that embeds trust-element detectors alongside the machine learning models. These detectors help, during both training and deployment, to find and fix cases where the AI violates trust. In other domains, such as circuit design, almost as many transistors are dedicated to checking a function as to implementing it.
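To make the checker analogy concrete, here is one deliberately simple sketch of shipping a detector alongside a model: a z-score rule that flags inputs unlike anything seen in training. Real embedded trust detectors are far more sophisticated; the class name and threshold below are illustrative assumptions.

```python
# Design-for-trust sketch: bundle an out-of-distribution check with the model,
# analogous to on-chip checker circuitry. The z-score rule is a deliberately
# simple stand-in for real embedded trust detectors.

import numpy as np

class GuardedModel:
    def __init__(self, model, train_X, z_limit=4.0):
        self.model = model
        self.mu = train_X.mean(axis=0)            # training-set feature statistics
        self.sigma = train_X.std(axis=0) + 1e-9   # avoid division by zero
        self.z_limit = z_limit

    def predict(self, x):
        z = np.abs((x - self.mu) / self.sigma)
        if z.max() > self.z_limit:  # input unlike anything seen in training
            raise ValueError("Input flagged as out-of-distribution; needs review")
        return self.model.predict(x.reshape(1, -1))[0]
```

The checker rides along with the model through training and deployment, so a violation surfaces as a refusal rather than a silently wrong answer.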


Ethics in AI is too important to remain static. Trustworthy AI is a daring but necessary step toward making technology reliable from the ground up. The complexity of today’s tech requires an ethical protocol that is every bit as sophisticated as the threats that face it, yet simple to employ. Hewlett Packard Labs provides a way to move from a world of constant threat anxiety to one of measured confidence.

 


Curt Hopkins
Hewlett Packard Enterprise

twitter.com/hpe_labs
linkedin.com/showcase/hewlett-packard-labs/
labs.hpe.com

About the Author

Curt Hopkins

Managing Editor, Hewlett Packard Labs