Glyn_Bowden

A 5-minute Intro to AI: What it Can (and Can’t) Do

 

Artificial intelligence (AI), particularly the area of AI known as machine learning, is currently the darling of the IT world. Many businesses are touting these technologies as differentiators for their products: be it a mobile device, a search engine, or a photo management site, nothing is seen as complete unless it leverages AI. The close attention that AI is receiving inevitably raises the danger that companies could perceive it as the cure for every ill and the solution to every challenge. That’s far from an accurate perception, however, and to understand why, it’s helpful to have a clear picture of what exactly AI is and what it can – and can’t – do. Armed with that deeper insight, it’s easier to pick out some of the truly spectacular business opportunities that the technology can help you to seize.

Why AI can’t replace human intelligence (any time soon)

First, let’s define artificial intelligence and dispel some common myths about it. It’s important to understand that the goal of AI is not, on the whole, to emulate human intellect. There is a specific research area called artificial general intelligence that focuses on this task, but it’s in its extreme infancy and nowhere near ready for general consumption today. The AI we generally deploy today can be defined as a focused set of rules with a finite number of complex variables that provide context. Essentially, artificial intelligence at this level is mathematics and statistics applied to large data sets with a specific outcome or decision process in mind. It’s often referred to as narrow-focus or single-task AI.

So, what’s the difference between this sort of AI and human intellect? It’s the “context” mentioned in the definition above. In AI algorithms, the input variables are well defined and provide the context within which the data is treated. We may increase the number of input variables, but generally they will be of the same type, or their relationship to the algorithm will be clearly defined. Human intellect, in contrast, has the unique ability to apply context to completely new stimuli and absorb that information into our analytical thought process. We make a cognitive decision about the value of the data provided by the new stimuli and its relationship to other data, and we generate context for it. In this way we can make our decision tree exponentially more complex while continually refining it. It’s how humans learn.

To bring this back to a very basic level, think of how a child learns that touching something that’s too hot causes discomfort. Children learn by doing. They touch an object and react according to its temperature. This is a similar process to AI, which samples and tests actual results against expected results in order to statistically group outcomes. By continually seeing which value of variable x (“hot”) causes outcome y (“discomfort”), the model is trained to know the level at which that occurs and can then assess risk based on the variable without sampling it.
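
To make that concrete, here is a minimal sketch of the idea in Python using scikit-learn; the temperature readings, labels, and the 60-degree test value are invented purely for illustration, not taken from any real dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sampled "experiences": surface temperature in degrees C, and whether touching it caused discomfort
temperatures = np.array([[20], [25], [35], [45], [55], [65], [75], [85]])
discomfort = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Train a simple model on the sampled outcomes
model = LogisticRegression()
model.fit(temperatures, discomfort)

# Once trained, the model can assess risk for a new value without sampling ("touching") it
new_surface = np.array([[60]])
risk = model.predict_proba(new_surface)[0, 1]
print(f"Estimated probability of discomfort at 60 degrees: {risk:.2f}")
```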

However, if you add the variable “height” to that process, the algorithm would not be able to infer context, and so would be unable to predict that falling from a specific value of that variable – say six feet – would also cause discomfort. This is information that the human intellect and learning process would assimilate quite easily; we generate a new model for falling, one that’s associated with the same outcome of discomfort. Work is going on to help algorithms adjust models in this manner, but they are not yet mature enough for mainstream application.

The AI advantage: pulling value from data volume

So, that said, how can AI be useful to organisations if it doesn’t replace anyone’s actual intellect? The answer lies in the sheer volume of data it can repetitively and accurately ingest and process in order to improve its model and predictions.

Artificial intelligence, and in particular machine learning, ultimately deals with matters of probability. This could be binary probability: either yes or no. “Yes” might be returned because the algorithm determined the result exceeded a prescribed certainty threshold. But there’s always the question of accuracy to consider when thinking about where to apply AI. Vast quantities of data allow the model to derive accuracy and even refine itself over time, based on the millions of calculations and tests it performs.
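
As a simple illustration of that thresholding step, the sketch below only returns “yes” when the model’s probability clears a certainty bar; the 0.9 threshold and the example probabilities are arbitrary values chosen just for the example:

```python
def decide(probability: float, threshold: float = 0.9) -> str:
    """Return "yes" only when the model is sufficiently certain."""
    return "yes" if probability >= threshold else "no"

print(decide(0.95))  # "yes" - comfortably above the certainty threshold
print(decide(0.60))  # "no"  - not certain enough to act on
```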

This means that its guesses are much more accurate, or at least supported by much more data, than the typical human “gut reaction.” Our limited memory capacity means that we can only work on the aggregated result of data we have already digested and memorised plus the new data. AI, in contrast, can reprocess and retest all previous assumptions and outcomes against its refined model to reinforce accuracy.

How AI can work even in low-data-storage environments

I mentioned that AI is becoming very pervasive, for example on mobile devices, and it’s also true that it’s becoming more frequently found at the Edge, that place where our systems and interfaces meet the physical environment and external ecosystems. I also stated that vast quantities of data are required to get AI up and running. These statements may appear to be contradictory. Vast amounts of data are not typically readily available on mobile devices, or at the Edge, where often only the real-time output of sensors is available, and very little is stored beyond sensible caching levels. So how do we reconcile these facts?

Let's look at the stages of AI and machine learning. Before AI can be applied, we need to train the model. This is the process of providing data to the algorithm so that it can attempt different variables and test the outcome either against expected results or – when no pre-determined data is available – by clustering the results to determine probability. (Training is called “supervised” when pre-determined data is available to confirm the output, and “unsupervised” when no such data is available). This first phase results in a model that the AI process will use to determine future results based on the best guesses it made and confirmed in this phase.
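
A rough sketch of those two training modes, again in Python with scikit-learn and invented feature values, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# A handful of invented two-feature observations
X = np.array([[0.2, 1.1], [0.4, 0.9], [3.1, 2.8], [2.9, 3.2]])

# Supervised training: pre-determined labels are available to confirm each output
y = np.array([0, 0, 1, 1])
supervised_model = LogisticRegression().fit(X, y)

# Unsupervised training: no labels, so the algorithm clusters the results to group outcomes itself
unsupervised_model = KMeans(n_clusters=2, n_init=10).fit(X)
print(unsupervised_model.labels_)
```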

The model can then be moved to other locations or devices to be used in the second phase: inference. This is the process of taking new data and inferring the result based on the structure of the model. No access to the large data set that was used to train the model is required in this phase. As a result, AI becomes available even in restricted data storage environments like mobile phones and at the Edge. (However, ordinarily that new data and those results will be sent back to the training location to reinforce or fine-tune the model in that environment for later re-release into the inference stage.)
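
As a hypothetical illustration of that split, the sketch below trains a model in one environment, serialises it to a small file, and then loads that file on an “edge” device to run inference on a new reading. The file name, data, and model choice are all assumptions made for the example:

```python
import numpy as np
import joblib
from sklearn.linear_model import LogisticRegression

# Training phase, in the data-rich environment
X_train = np.array([[20], [40], [60], [80]])
y_train = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X_train, y_train)
joblib.dump(model, "discomfort_model.joblib")  # the small artefact that gets shipped to the edge

# Inference phase, on the edge device: only the model file and the new reading are needed
edge_model = joblib.load("discomfort_model.joblib")
new_reading = np.array([[72]])
print(edge_model.predict(new_reading))  # e.g. [1]
```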

Up next: take the leap and start your AI journey

I’ve described what AI is, what it’s looking to achieve, and the basic steps we take in order to train it and implement it. So now the big question comes up: How do we tie all of this knowledge to generating business value and opportunity? That journey typically starts with a look at the datasets that an organisation captures as part of daily business. The value will come from some combination of those data sets, together with an application of business logic, and I’ll take a closer look at how that works in my next blog. Stay tuned!

For a great overview of the potential of AI, see the interview that Beena Ammanath, HPE’s Global Vice President for Big Data, Artificial Intelligence and Innovation, gave to SiliconANGLE journalists Dave Vellante and Peter Burris. And check out Beena’s article Is AI the Magic Bullet for Your Company’s Data Glut?

 


About the Author

Glyn_Bowden

Senior Technology leader and strategist. Delivering opportunities, value, and outcomes through applied technology and artificial intelligence.