Cultivating trust in AI
Trust is vital to economics, society, and sustainable development. That's equally true when it comes to artificial intelligence. To develop trusted AI, security should be an integral part of your AI development lifecycle. Learn why and how.
With every technology paradigm shift, attackers are there to exploit new capabilities, and cyber defense patterns have evolved in response. Starting in the 1980s, the security focus moved from the network to endpoints and applications. Now, with digital transformation, I would say the focus revolves around AI, both using it for attacks and using it to strengthen security defense strategies.
You can read more about why security needs to be integrated into every AI solution in my first blog. Today, I'm discussing how to cultivate trust in AI.
Start with the algorithms
Learning algorithms, whether basic regression and classification algorithms or state-of-the-art neural networks, are now an integral part of the applications we use every day in life and business. This includes everything from fitness tracking apps to voice recognition on our phones to the spam filters in our email inboxes.
Along with the benefits they bring, AI features in our applications also increase attack risk and raise privacy issues. Growing innovation in AI has given rise to adversarial attacks on AI: a breed of exploits that manipulate the behavior of algorithms by feeding them corrupted input data.
How are adversarial attacks different from regular attacks?
In general, all software has both common and unique vulnerabilities, which security engineers aim to fix before releasing a product to market. Attackers exploit these vulnerabilities using techniques ranging from basic SQL injection or cross-site scripting to memory-based attacks or DDoS.
AI adversarial attacks are not very different from other cyberattacks. It's the rate of impact, and the damage AI machines can cause when attacked, that defines these attacks as adversarial.
If we look at these attacks, they can generally be classified into three types.
- Data poisoning: attacking and manipulating the input data used for training (see the sketch below)
- Adversarial manipulation: targeting the model parameters or the machine learning (ML) algorithm itself
- Resource exhaustion: abusing the learning process to consume resources beyond a threshold, resulting in denial of service
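To make the first of these concrete, here is a minimal sketch of label-flipping data poisoning. The dataset, model, and 30% flip rate are illustrative assumptions, not drawn from any real incident; NumPy and scikit-learn are assumed to be available.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy two-class dataset: two Gaussian clusters.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression().fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoned copy: an attacker flips 30% of the training labels.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression().fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Even this crude attack measurably degrades accuracy; subtler poisoning can shift a model's behavior while leaving aggregate metrics largely intact.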
Types of AI adversarial attacks
There are three basic types here:
- Manipulation attacks: Adversaries work around expected AI behavior or even make AI systems execute unanticipated tasks. Attackers can mount real-time evasion attacks or adversarial reprogramming of AI systems using maliciously crafted inputs.
- Infection attacks: To undermine the quality of AI decisions and gain stealth control of AI systems, poisoning, Trojan, and backdoor attacks taint the data used for training, plant hidden triggers in AI behavior, and disseminate malicious AI models.
- Exfiltration attacks: In an attempt to steal data from AI systems, attacks like model inversion, membership and attribute inference, or model extraction target the data samples used for AI training, secret AI inputs, and the internals of AI algorithms.
Based on observed trends and past attacks, the most advanced attacks on AI machines come through manipulation or evasion, which aims to push an AI system into unexpected behavior, either by altering the input data used for training or by reprogramming the model. These attacks degrade faith in an intelligent system and its functioning.
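As a hedged illustration of how small an evasion perturbation can be, here is a sketch of a fast-gradient-sign-method (FGSM) style attack against a logistic regression model, where the log-loss gradient has a closed form. The data, the epsilon value, and the model are illustrative assumptions.

```python
# Minimal sketch of an FGSM-style evasion attack (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=1.0):
    """Perturb x in the direction that increases the model's log loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted P(class 1)
    grad = (p - label) * w                   # d(log loss)/dx, closed form
    return x + eps * np.sign(grad)

X_adv = np.array([fgsm(x, lbl) for x, lbl in zip(X, y)])
print("accuracy on clean inputs:    ", model.score(X, y))
print("accuracy on perturbed inputs:", model.score(X_adv, y))
```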
Why AI should be trustworthy
You could say that trust equals reliability. Trust is connected to both the conscious and the unconscious human mind. Trust and distrust evolved to manage the rewards and hazards of social contact: relying on another person might provide benefits, but it also exposes one to exploitation and deception. Game theory research has shown that conditional trust, a method for distinguishing the trustworthy from the untrustworthy, is evolutionarily advantageous. As a result, trust was crucial to our survival, and it still drives our interactions.
Trust, in terms of interaction with solutions built on AI technologies, relates to outcomes that are reliable, accurate, and ethical. Trust is the foundation of society, economics, and sustainable development.
The first initiative and conference on trustworthy AI started in 2019. It took around two years for the focus to shift from basic AI policies to trusted AI, and the trend will certainly continue. Individuals, organizations, and communities will only ever be able to fulfil the full promise of AI if trust can be created in its development, deployment, and use.
Trust needs to be built across physical, cyber, and social layers. Physical trust covers reliability and safety. Cybertrust aligns with IT concerns and covers the three essential security principles: confidentiality, integrity, and availability. And because AI systems based on machine learning are stochastic, trustworthiness also entails fairness in the system's behavior, which corresponds to the absence of bias and to ethical outcomes.
How does AI security differ from typical application security in terms of enhancing trust?
AI systems are complicated, and their limited ability to explain outcomes makes them more complex still. What's more, AI machines and their algorithms can behave differently under the same conditions. This makes maintaining AI systems significantly harder than maintaining general applications, yet their security is even more critical.
As the development and use of AI machines grow, so do the attack surfaces. Every layer in the AI ecosystem needs special focus: from the hardware layer, through the infrastructure layer (on-premises or cloud), to the software and application layers that perform data analysis and deploy AI learning algorithms, often while connecting to live data streams.
How to cultivate trust by enhancing existing security frameworks for AI
In the U.S., the National Institute of Standards and Technology (NIST) has initiated a process to develop an AI Risk Management Framework. It is intended to help designers, developers, users, and evaluators of intelligent systems better manage risk throughout the AI lifecycle, and to guide innovation around the trustworthiness of AI.
According to NIST, trust is established by ensuring that AI systems are cognizant of and are built to align with core values in society, and in ways which minimize harms to individuals, groups, communities, and societies at large.
NIST also published a summary analysis of the RFI responses to the AI Risk Management Framework from leaders across the public and private sectors. It identifies various themes covering AI security from a process, procedure, and guidelines perspective. The key technical themes proposed are correlating risk with the AI system lifecycle and continuously monitoring AI systems and data for risk.
Building trust in AI systems and their outcomes requires an integrated security framework, one comprehensive enough to protect them from all directions: a solution that secures AI systems against both traditional attacks and adversarial attacks.
To build trust, we first have to understand the risk attached to each individual component of the AI architecture: the AI infrastructure; storage; data processing and transformation tools or big data solutions; the libraries used to build machine learning algorithms; and, above all, the data, the critical asset of an intelligent system, along with its estimated impact on outcomes and decisions. If we assess the risk and build security controls that defend every layer, including the hidden layers, we will eventually develop trust in the solution.
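One lightweight way to start such an assessment is a per-component risk register. The components, scores, and likelihood-times-impact weighting below are hypothetical examples for illustration, not an HPE- or NIST-prescribed scheme.

```python
# Illustrative per-component risk register for an AI stack (hypothetical).
COMPONENTS = {
    # component: (likelihood 1-5, impact 1-5)
    "infrastructure": (2, 4),
    "storage":        (2, 5),
    "data_pipeline":  (3, 4),
    "ml_libraries":   (3, 3),
    "training_data":  (4, 5),  # the most critical asset
}

def risk_score(likelihood, impact):
    """Simple likelihood x impact scoring."""
    return likelihood * impact

# Rank components so the highest-risk layers get controls first.
for name, (lik, imp) in sorted(COMPONENTS.items(),
                               key=lambda kv: -risk_score(*kv[1])):
    print(f"{name:15s} risk = {risk_score(lik, imp):2d}")
```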
How to build defenses in AI systems
Implementing measures such as security operations (which validate and transform data inputs and outputs and verify inputs for anomalies), controlling algorithm modifications through signing and encryption, securing model retraining, and evaluating performance metrics from a corruption point of view can all limit the attack surface for AI model attacks.
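As one concrete example of input verification for anomalies, here is a minimal sketch of an inference-time guard that rejects requests falling far outside the training distribution. The z-score threshold and the statistics used are illustrative assumptions; a production system would pair this with richer outlier detection.

```python
# Minimal sketch of an inference-time input guard (illustrative only).
import numpy as np

class InputGuard:
    def __init__(self, X_train, max_z=4.0):
        # Per-feature statistics estimated from the training data.
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0) + 1e-9
        self.max_z = max_z

    def check(self, x):
        """Return True if every feature is within max_z standard deviations."""
        z = np.abs((x - self.mean) / self.std)
        return bool(np.all(z <= self.max_z))

# Usage: fit the guard on training data, screen each request before scoring.
X_train = np.random.default_rng(2).normal(0, 1, (1000, 3))
guard = InputGuard(X_train)
print(guard.check(np.array([0.5, -1.0, 2.0])))   # True: plausible input
print(guard.check(np.array([50.0, 0.0, 0.0])))   # False: likely anomalous
```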
Other practical, executable defenses that build trust in AI solutions include controlling bias in the training data and building explainable AI solutions, so that analysts can understand the model, explain its behavior, and make transparent decisions based on it.
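A simple starting point for bias control is to compare label rates across a sensitive attribute before training. The attribute, the data, and the two-group comparison below are purely illustrative.

```python
# Minimal sketch of a training-data bias check (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
group = rng.choice(["A", "B"], size=2000)   # hypothetical sensitive attribute
# Hypothetical skew: group A receives positive labels twice as often.
label = (rng.random(2000) < np.where(group == "A", 0.6, 0.3)).astype(int)

for g in ("A", "B"):
    rate = label[group == g].mean()
    print(f"group {g}: positive rate = {rate:.2f}")

# A large gap between groups is a signal to rebalance or reweight the
# training data before fitting a model.
```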
Another technical feature to integrate into AI solutions is continuous monitoring. AI systems are in most cases designed to run continuously, and so should the risk detection models that monitor variations in data trends and model learning, as well as changes in data KPIs and the overall risk score.
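As a sketch of what such monitoring might look like, the example below compares live feature values against a training baseline with a two-sample Kolmogorov-Smirnov test (SciPy's ks_2samp). The data and the alert threshold are illustrative assumptions.

```python
# Minimal sketch of continuous drift monitoring (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
baseline = rng.normal(0, 1, 5000)   # feature values seen during training
live = rng.normal(0.8, 1, 500)      # recent production values, drifted

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"drift alert: KS={stat:.3f}, p={p_value:.2e}")
else:
    print("no significant drift detected")
```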
Further threat detection models for AI security boost the defense system of intelligent machines. These methods go hand in hand with vulnerability remediation, and threat intelligence services can provide 360° security coverage for intelligent solutions.
Cybersecurity should enhance our ability to predict attacks and develop defenses against them before they occur. AI-built defense systems should function as a digital immune system for our businesses and assets, much as antibodies in the human body defend against foreign substances.
The right mix of AI security skills and experts
Based on HPEโs established ethics and privacy programs, as well as its commitment to environmental, social, and governance issues, HPE has established principles defining what we believe constitutes the ethical development and application of AI. HPE formed an AI Ethics Advisory Board to help guide our risk assessment and mitigation with respect to the development, use, and deployment of AI. The Advisory Board developed our ethical principles which HPE applies when developing products and solutions, deploying AI applications both for HPE and customer use, and when pursuing business.
Wherever you are within your AI journey, it's never too early to start thinking about security, trust, risk, and compliance requirements.
Advisory and professional services experts with HPE Pointnext Services already work with a number of organizations to assess business needs under the guidance of AI ethics principles. We help architect, design, and implement a secure AI framework in hybrid cloud by integrating security controls at every stage of an AI solution, from edge to cloud. Our experts have many years of experience building and implementing complex security solutions for a wide range of problems across industries and around the world. Our team also partners with leading security solution vendors to protect data, platforms, and data insights as part of our AI security offerings.
As a best practice, HPE experts combine AI, data, cloud, and security expertise to build security-embedded AI solutions and data services that are specially designed to protect AI implementations from attacks, especially adversarial attacks. Our framework is aligned with the NIST and ISO AI and data security standards and policies, as well as the MITRE-proposed threat matrix.
Learn more
- HPE artificial intelligence solutions
- HPE security and digital protection services
- HPE AI and data transformation services
Rohini Chavakula
Hewlett Packard Enterprise
twitter.com/HPE_AI
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/artificial-intelligence.html
R_Chavakula
Rohini is a data scientist in HPE GreenLake Cloud Services where she works on building trustworthy AI machines. Rohini advises and designs responsible AI systems for trusted outcomes. Working with the security practice and building AI solutions to tackle business challenges across domains have combined to foster her interest in AI security.