RichardHatheway

Differences Between Artificial Intelligence Techniques

Not all Artificial Intelligence (AI) is the same; the field actually includes quite a few different techniques, among them Machine Learning (ML), Neural Networks (NN), Deep Learning (DL), Swarm Learning (SL), and Natural Language Processing (NLP). Due to the proliferation of these different techniques, it’s often hard to keep them separate, much less understand where the differences lie.


This blog will provide a brief overview of what these different techniques are and where they are used.

What AI Is and Is Not

Let’s start by defining what AI is and is not.

The formal definition of artificial intelligence is “the field of study of intelligent agents”; that is, any system that perceives its environment and is then able to take actions to maximize its chance of achieving its goals [2]. Translated into English, that means artificial intelligence is the technology that allows machines (e.g., computers) to mimic or simulate human behavior and intelligence. This is done by analyzing data, making decisions, and performing tasks based on that analysis. AI systems also typically have an iterative self-learning capability that allows them to improve the accuracy of their output based on previously collected data.

AI is not every product labeled as “smart,” such as smart TVs and smart watches; most of those labels are really nothing more than marketing ploys designed to separate you from your money. AI is also not evil computers like HAL 9000, killer robots like the Terminators, or sentient control systems that take over the world like Skynet. Most of all, AI is not alive.

In addition, for the purpose of this article, I use the term “AI” as shorthand for all the various techniques we’ll discuss. So now that we have a basic understanding of what AI really is, let’s take a quick look at the various techniques used in the field of artificial intelligence today.

Why Are There So Many Different Types of AI?

To clarify, there aren’t numerous types of AI; rather, there are different AI techniques, which is a subtle but important difference. The four “types” that AI is currently categorized into are reactive machines, limited memory machines, theory of mind machines, and self-aware machines [3]. Today, only the first two types of machines exist.

The variety of techniques used in AI research stems from the beginning of the modern study of artificial intelligence, which effectively began in 1956. In that year, scientists from fields including mathematics, economics, engineering, psychology, and political science came together at the Dartmouth Summer Research Project on Artificial Intelligence [1], where they began discussing the possibility of creating an artificial brain. The genesis of that discussion was recent research in the field of neurology, which had discovered that the human brain functions essentially as an electrical network, with electrical pulses transmitting information between neurons.

Because so many different scientific fields were represented at the Dartmouth conference, numerous different approaches to AI were discussed at that time and throughout the following years. As different types of AI research developed, it became clear that some techniques were better suited for specific types of problem solving than others. In addition, as technology developed, different techniques used different methods to analyze data, categorize data, and make predictions based on that data. These predictions were then fed back into the system as another data input, providing a manner for the system to essentially learn from past experiences. This is known as iterative or self-learning. These different approaches ultimately led to the proliferation of AI techniques in existence today.

Types and Techniques

AI is typically grouped into categories based on the specific application and the type of learning the system needs to do. The following techniques are some of the most common approaches used today.

  • Machine Learning

Machine learning is a branch of artificial intelligence that focuses on computers using data and mathematical algorithms (i.e., models) to imitate the way humans learn. Machine learning automates the process of analytical model building and allows computers to adapt to new scenarios independently, based on analysis of the input data. ML can be used in almost any area that has a defined set of rules or data points to be evaluated. The model goes through an iterative process in which the computer takes in data, evaluates it, performs some type of calculation, and then delivers an output. That output is compared to an expected value to see how close it is, and the computer then adjusts the model based on how close to or far from the expected value the model’s output is.

Over time, machine learning models gradually improve the accuracy of their output by going through this iterative evaluation process and effectively fine-tuning the model. Once the model has been fine-tuned to the point that the output regularly meets or exceeds the expected output parameters, it’s ready to be used in a production or real-world environment. ML models are often used in business in applications such as pattern recognition, retail purchase recommendations, and fraud detection.
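The evaluate-compare-adjust loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production ML workflow: the model is a simple linear function, and the toy data, starting parameters, and learning rate are assumptions chosen purely for the example.

```python
# Toy training data for a known relationship (y = 2x + 1); in practice the
# expected outputs would come from labeled, real-world data.
data = [(x, 2 * x + 1) for x in range(10)]

# Model parameters start at arbitrary values; the loop fine-tunes them.
w, b = 0.0, 0.0
learning_rate = 0.01

for epoch in range(1000):
    for x, expected in data:
        output = w * x + b              # take in data, deliver an output
        error = output - expected       # compare to the expected value
        w -= learning_rate * error * x  # adjust the model accordingly
        b -= learning_rate * error

# After many iterations the parameters approach the true values (2 and 1),
# at which point the model is "fine-tuned" enough to use on new inputs.
```

Each pass through the data is one iteration of the evaluation process; the adjustment step is what gradually improves the accuracy of the output.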

  • Neural Networks

A neural network is a subset of machine learning that uses algorithms to mimic the structure and function of the human brain, where neurons communicate and signal one another. Modeled on the brain, a neural network is typically composed of three types of node layers: an input layer, one or more hidden layers, and an output layer. Each node (i.e., artificial neuron) is connected to others, and each connection has an associated “weight” and “threshold limit” assigned to it. As data enters the network at the input layer, it is assigned a weight and then analyzed by the algorithms in the hidden layer(s). If the weighted input to a node exceeds its threshold limit, the connection is “excited” and data is passed along toward the output layer; if it does not, the connection is “inhibited” and the data is not transferred. Neural networks use training data to learn and improve their accuracy over time. They are at the heart of deep learning models and are typically used where powerful computer processing is required to analyze large quantities of data, such as image recognition or speech processing.
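The layered forward pass described above can be sketched as follows. The layer sizes, random weights, and simple step ("fire or don't fire") activation are illustrative assumptions; a trained network learns its weights from data and typically uses smoother activation functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers: 3 input nodes -> 4 hidden nodes -> 1 output node.
# Connection weights are random here; training would adjust them.
W_hidden = rng.normal(size=(3, 4))
W_output = rng.normal(size=(4, 1))
THRESHOLD = 0.0

def step(x):
    # A node passes data on ("fires", is excited) only when its weighted
    # input exceeds the threshold; otherwise the connection is inhibited.
    return (x > THRESHOLD).astype(float)

def forward(inputs):
    hidden = step(inputs @ W_hidden)   # input layer -> hidden layer(s)
    return step(hidden @ W_output)     # hidden layer -> output layer

result = forward(np.array([1.0, 0.5, -0.2]))
```

The output is a 0-or-1 firing decision per output node, matching the excited/inhibited behavior described above.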

  • Deep Learning

Deep learning is a subset of machine learning that uses neural networks as its backbone to improve the process of how machines learn. Whereas ML relies on a predefined set of rules that a data scientist provides to the algorithms used to analyze data, deep learning allows the system to analyze raw data on its own, without any predefined rules. By removing the human element (i.e., the predefined rules), the system’s algorithms can ingest and process raw, unstructured data. The raw data is analyzed to determine common characteristics, which are then used to determine the categories the data should be sorted into. By processing and characterizing the data on its own, the system turns abstract, unstructured input into a more structured dataset as its output. That output is then fed back into the system as input, and the entire process repeats. This iteration continues until the output reaches an acceptable level of accuracy. Deep learning is often used in applications such as virtual assistants, chatbots, and optimizing user experiences.
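One way to make "no predefined rules" concrete is a tiny autoencoder: a network forced to squeeze raw, unlabeled data through a narrow hidden layer, so it must discover the data's common characteristics on its own. Everything here (the synthetic data, the two-unit bottleneck, the learning rate) is an assumption for illustration; real deep learning models have many more layers and parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Raw" unlabeled data: 100 samples, each one of two hidden patterns plus noise.
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
X = patterns[rng.integers(0, 2, 100)] + rng.normal(0, 0.1, (100, 8))

# Encoder/decoder weights: compress 8 features down to 2, then reconstruct.
W_enc = rng.normal(0, 0.1, (8, 2))
W_dec = rng.normal(0, 0.1, (2, 8))
lr = 0.05

def loss():
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial_loss = loss()
for _ in range(1000):
    code = X @ W_enc                    # compress the raw input
    err = (code @ W_dec) - X            # compare reconstruction to the input
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
final_loss = loss()
# final_loss ends up below initial_loss: the network has extracted structure
# from the raw data without any predefined rules or labels.
```

No human told the system there were two patterns; the iterative compress-reconstruct-adjust cycle discovered that structure by itself.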

  • Swarm Learning

Swarm learning is a decentralized machine learning framework that uses peer-to-peer networking to foster collaboration and blockchain technology to preserve data privacy. What this means in English is that swarm learning unites the edge computing capabilities of multiple networked nodes, combined with blockchain technology, to allow data exchange and collaboration across a network without violating data privacy requirements. It is structured differently from traditional machine learning models, which use a central server to host a trained model, with data fed into that model via a data pipeline.

Swarm learning recognizes that data is created at the edge, so this model takes advantage of edge processing capabilities to eliminate having to send data back and forth to a central server. Instead, each edge processing location (i.e., node) builds an independent AI model of its own where it can analyze data without sending data back and forth to a central processing location, improving efficiency. In addition, each node is also connected to all the other nodes on the network via peer-to-peer networking, making the network dynamically scalable, eliminating the single point of failure problem, and creating a more robust system.

Since blockchain technology is used to safeguard the privacy of the datasets, data and learnings can be shared among the networked nodes regardless of location, providing each node, and the model, with more data to analyze, which improves overall model accuracy and reduces model bias. With numerous nodes sharing data and working on the same problem together, this structure amplifies the capabilities of the individual nodes on the network, and the overall accuracy of the results improves, in much the same way that animals acting together as a group increase their collective intelligence and improve the outcome of their actions. Hence the name swarm learning. Swarm learning is often used in medical and scientific applications, where data is widely dispersed and data trust and security are critical.
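The node-local training plus peer-to-peer merging cycle can be sketched as below. This is a simplified simulation on one machine: three "nodes" each hold private data for the same task, train independently, and then average their model parameters with their peers each round. The blockchain layer that secures the real exchange is omitted, and the task, data ranges, and learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each edge node holds its own private slice of data for the same underlying
# task (here y = 3x - 2); the raw data never leaves its node.
def node_data(lo, hi, n=50):
    x = rng.uniform(lo, hi, n)
    return x, 3 * x - 2

nodes = [node_data(0, 1), node_data(1, 2), node_data(2, 3)]
params = [np.zeros(2) for _ in nodes]   # each node's local model: (w, b)

for _ in range(500):
    # 1. Independent local training: each node fits its own model
    #    to its own data, with no central server involved.
    for i, (x, y) in enumerate(nodes):
        w, b = params[i]
        err = (w * x + b) - y
        grad = np.array([np.mean(err * x), np.mean(err)])
        params[i] = params[i] - 0.1 * grad
    # 2. Peer-to-peer merge: only parameters (learnings) are exchanged
    #    and averaged -- no raw data is transferred between nodes.
    merged = np.mean(params, axis=0)
    params = [merged.copy() for _ in nodes]

w, b = params[0]   # every node ends up with the shared, more accurate model
```

No single node sees the full input range, yet after merging, each node's model is accurate across all of it, which is the swarm effect described above.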

  • Natural Language Processing

Natural language processing is the branch of AI that gives computers the ability to understand both text and the spoken word in a manner similar to how a human does, allowing the computer to interact with humans in their natural language. NLP combines statistical algorithms with machine learning and deep learning models that are programmed with the rules of language, allowing the system to automatically analyze the spoken or written word.

In an NLP system, the computer receives text or speech as input data. This data is then parsed and readied for analysis. The analysis automatically extracts, classifies, and labels elements of the text or spoken-word data. It also compares the placement of words relative to each other and within the larger structure (e.g., a sentence or paragraph). The system then assigns a statistical value to each possible meaning of the various data elements. This process is known as “tagging.” The tagged information is then compared against the ML/DL models, allowing the system to understand what was written or said and derive the meaning of the words. In more sophisticated NLP systems, the models include enough detail that the system is able to understand not only the meaning but also the speaker’s or writer’s intent. NLP systems are used in common everyday tools such as spell-check, autocorrect, predictive text, and automatic language translation, as well as in sentiment analysis and fraud detection.
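The parse-and-tag step can be illustrated with a deliberately tiny example. Real NLP systems assign tags statistically from models trained on large corpora; the hand-built lexicon and simplified tag set below are assumptions made purely for illustration.

```python
# A toy lexicon mapping words to part-of-speech tags; a real system would
# use statistical ML/DL models rather than a fixed table.
LEXICON = {
    "the": "DET", "a": "DET",
    "dog": "NOUN", "ball": "NOUN",
    "chased": "VERB",
    "red": "ADJ",
}

def tag(sentence):
    # Parse the input: lowercase, strip trailing punctuation, split into tokens.
    tokens = sentence.lower().rstrip(".!?").split()
    # Label each element; unknown words get "UNK", whereas a statistical
    # tagger would instead pick the most probable tag from context.
    return [(tok, LEXICON.get(tok, "UNK")) for tok in tokens]

tagged = tag("The dog chased a red ball.")
# tagged -> [('the', 'DET'), ('dog', 'NOUN'), ('chased', 'VERB'),
#            ('a', 'DET'), ('red', 'ADJ'), ('ball', 'NOUN')]
```

Extracting, classifying, and labeling the elements like this is exactly the "tagging" output that downstream models consume to derive meaning.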

Constantly Pushing the Boundaries

While this blog highlights some of the primary techniques used in artificial intelligence, AI includes many additional techniques and technologies not discussed here. AI research is constantly pushing the boundaries, and new techniques are continually being developed.

In addition, multiple techniques are often combined in new and exciting ways. Self-driving cars, for instance, currently include a combination of machine learning and neural network systems, combined with image and pattern recognition systems. Virtual and augmented reality systems are another example where combining several AI techniques provides an enhanced user experience in areas from computer gaming to simulation training.

To learn more about the topics presented in this blog, visit the HPE glossary of enterprise IT terms.

[1] AI Magazine, Volume 27, Number 4, Winter 2006, https://ojs.aaai.org/index.php/aimagazine/article/download/1911/1809

[2] “Artificial intelligence,” Wikipedia, https://en.wikipedia.org/wiki/Artificial_intelligence

[3] “Understanding the Four Types of Artificial Intelligence,” GovTech, November 14, 2016, https://www.govtech.com/computing/understanding-the-four-types-of-artificial-intelligence.html

 

Hewlett Packard Enterprise

HPE Ezmeral on LinkedIn | @HPE_Ezmeral on Twitter

@HPE_DevCom on Twitter 

 

 

About the Author

RichardHatheway

Richard Hatheway is a technology industry veteran with more than 20 years of experience in multiple industries, including computers, oil and gas, energy, smart grid, cyber security, networking and telecommunications. At Hewlett Packard Enterprise, Richard focuses on GTM activities for HPE Ezmeral Software.