A Short History of Artificial Intelligence
Artificial Intelligence (AI): It seems like it's everywhere you look today. From AI-enabled chatbots to AI algorithms identifying patterns in consumer behavior to AI-driven programs analyzing scientific data, artificial intelligence is now mainstream. But when and where did it all start?
This blog provides a brief historical overview to help you understand where and when it all began.
A little history
The concept of artificial intelligence actually goes back thousands of years: stories, myths, and legends of artificial beings imbued with intelligence exist in almost every culture. The assumption underlying all of those tales is that human intelligence and thought can be artificially replicated.
However, the modern understanding of artificial intelligence effectively began in the 1940s and 50s, when scientists from a variety of fields, including mathematics, economics, engineering, psychology, and political science, began discussing the possibility of creating an artificial brain. That discussion was driven by contemporary research in neurology, which had shown that the brain functions essentially as an electrical network, with electrical pulses transmitting information between neurons.
Claude Shannon (who became known as the father of information theory) described digital signals that functioned in the same way as the electrical pulses in the brain. Alan Turing's theory of computation showed that any form of computation could be described digitally. Taken together, these two ideas became the basis for the notion that it might be possible to construct an electronic brain.
The first mention of artificial intelligence
In 1956, the Dartmouth Summer Research Project on Artificial Intelligence1 was held. It brought together researchers from many different fields, and many new ideas, concepts, and papers were submitted for consideration and discussion. It was at this conference, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester (all of whom would become well known in the field of AI), that John McCarthy proposed the term "artificial intelligence," generally considered the first use of the term.
In the years immediately after the Dartmouth conference, research and innovation in the field of AI grew rapidly. This was due in part to the fact that many countries around the globe, including the US, Great Britain, and Japan, were actively funding research. During this time, numerous different approaches to AI were being pursued, including:
- Reasoning as search, where AI programs worked step by step through a space of possible actions toward a goal, backtracking whenever a path failed (see the sketch after this list)
- Natural language, where the goal was for the computer to communicate in a natural language such as English
- Micro-worlds, where research was conducted by focusing on artificially simple situations, such as a set of colored blocks
- Automata, the concept of self-contained basic robots with tactile and visual sensors that would allow them to walk, differentiate and pick up objects, and even communicate via speech
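To make the "reasoning as search" idea concrete, here is a minimal, purely illustrative Python sketch: a breadth-first search that steps through a small, made-up graph of states from a start state toward a goal. The state names, the MOVES graph, and the search() helper are hypothetical and do not reconstruct any actual program of the era.

```python
# A minimal "reasoning as search" sketch: breadth-first search from a start state
# to a goal state over a hand-made graph of moves. Purely illustrative.

from collections import deque

MOVES = {                      # which states can be reached from which
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def search(start, goal):
    """Explore states level by level until the goal is found; return the path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in MOVES[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None               # no path exists

print(search("start", "goal"))   # ['start', 'b', 'goal']
```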
AI growth in the 1960s and 70s
In the 1960s and 70s, the development of industrial robots and new programming languages, as well as popular films, reflected the interest not only of the scientific community, but of the world at large.
Several noteworthy accomplishments that are indicative of the times took place in the 1960s and 70s:
- 1961 – Unimate, the first industrial robot, began working on a General Motors assembly line
- 1961 – the Stanford Cart, a remote-controlled, TV-equipped mobile robot, was created
- 1965 – ELIZA, an interactive computer program, was developed and could functionally converse in English
- 1966 – Shakey the Robot was the first general-purpose mobile robot
- 1968 – the movie "2001: A Space Odyssey" was released, featuring HAL (Heuristically programmed ALgorithmic computer), a sentient computer
- 1970 – WABOT-1, the first anthropomorphic robot, was built at Waseda University in Japan
- 1977 – "Star Wars" was released, featuring intelligent droids such as the humanoid protocol droid C-3PO and the astromech droid R2-D2
- 1979 – the Stanford Cart was updated with a mechanical swivel that moved its TV camera from side to side; it successfully crossed a chair-filled room without assistance in 5 hours, making it one of the earliest examples of an autonomous vehicle
Throughout the 1970s, advancements continued, but they were mainly focused on robotics. This was because much of the research into artificial intelligence was based on the assumption that human activities and thought processes could be automated and mechanized.
The first AI winter
However, the technology of the time had not yet caught up with the vision for what AI could do. Computers simply lacked the power to support the stated goals of AI research, resulting in numerous delays and missed objectives. Additionally, a global recession caused funding for AI research to dry up, leading to what is now known as the first AI Winter (1974-1980)2.
However, as Moore's Law3 predicted, computing power and storage capacity increased rapidly as technology advanced. Both were necessary to support computers being used for AI research, which experienced a resurgence in the 1980s.
AI resurgence in the 1980s
In the 1980s, computer scientist Edward Feigenbaum4 introduced the concept of expert systems, which could mimic the decision-making process of a human expert. In his own words, "An 'expert system' is an intelligent computer program that uses knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solution5."
Expert systems consisted of programs that answered questions by applying logical rules (such as sets of "if-then" statements) to specific domains of knowledge. The simple design of these expert systems made the programs easy to build and modify, and the approach was quickly applied to fields such as medical diagnosis, circuit design, and financial planning.
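As a rough illustration of how those if-then rules worked, here is a minimal Python sketch of forward chaining over a made-up diagnostic domain. The facts, rules, and infer() helper are hypothetical examples; real expert systems of the period were far larger and typically built in Lisp or with dedicated shells.

```python
# A minimal rule-based "expert system" sketch (hypothetical example, not any
# historical system). Facts are simple strings; rules are if-then pairs: when
# all the "if" facts are known, the "then" fact is added (forward chaining).

RULES = [
    ({"engine_cranks", "engine_does_not_start"}, "suspect_fuel_or_spark"),
    ({"suspect_fuel_or_spark", "fuel_tank_empty"}, "diagnosis_out_of_fuel"),
    ({"suspect_fuel_or_spark", "spark_plugs_worn"}, "diagnosis_replace_spark_plugs"),
]

def infer(facts):
    """Repeatedly apply if-then rules until no new facts can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

if __name__ == "__main__":
    observed = {"engine_cranks", "engine_does_not_start", "spark_plugs_worn"}
    print("Derived:", sorted(infer(observed) - observed))
    # Derived: ['diagnosis_replace_spark_plugs', 'suspect_fuel_or_spark']
```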
However, as in the previous decade, artificial intelligence proved not quite ready for prime time. While advances had been made, numerous AI companies were unable to deliver commercial solutions and failed. Then in 1984, John McCarthy (a computer scientist and pioneer in the field of AI) criticized expert systems because "…hardly any of them have certain common sense knowledge and ability possessed by any non-feeble-minded human.6"
This lack of common sense was becoming a recognized issue, as researchers began to realize that AI depends not just on specific domain knowledge, but also on the ability to use a large amount of diverse knowledge in different ways. To address that challenge, a very large database known as "Cyc" was created with the intent that it would contain all the types of mundane facts an average person knows. Effectively, its creators wanted to build an ontological knowledge base encompassing the most basic concepts, rules, facts, and figures about how the world works. However, AI researchers also realized that, given the scope of the project, it would not be completed for decades.
Then in 1987, Jacob T. Schwartz, Director of DARPA/ISTO, reportedly stated that AI had achieved "…very limited success in particular areas, followed immediately by failure to reach the broader goal at which these initial successes seem at first to hint…"
The second AI winter
Shortly thereafter, funding for continued AI research was reduced as the second AI Winter set in (1988-1993). Many factors contributed to this second AI Winter. The availability of relatively cheap yet powerful desktop computers from IBM and Apple meant that the more expensive Lisp machines and other specialized systems used for AI research were no longer required, causing that industry to collapse. In addition, some of the earliest expert systems proved too expensive to maintain and update.
1990 and beyond
As the 1990s dawned, new methods began to gain sway in the field of AI, as expert systems gave way to machine learning (ML) methods such as Bayesian networks, evolutionary algorithms, support vector machines, and neural networks. In addition, the concept of an "intelligent agent," a system that perceives its environment and then takes actions that maximize its chances of success, was introduced. This resulted in a paradigm shift toward studying all types of intelligence rather than focusing only on solving specific problems.
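To illustrate the intelligent-agent idea (perceive the environment, then act to maximize a measure of success), here is a minimal Python sketch. The LineEnvironment, its reward function, and the greedy_agent policy are hypothetical examples, not a description of any specific system.

```python
# A minimal "intelligent agent" loop: perceive the environment's state, choose
# the action expected to maximize reward, act, and repeat. Purely illustrative.

import random

class LineEnvironment:
    """A toy world: the agent sits on a number line and earns reward for nearing 0."""
    def __init__(self, start=5):
        self.position = start

    def perceive(self):
        return self.position

    def step(self, action):           # action is -1 or +1
        self.position += action
        return -abs(self.position)    # reward: closer to 0 is better

def greedy_agent(state):
    """Pick the action that moves the perceived state toward the goal (0)."""
    if state > 0:
        return -1
    if state < 0:
        return +1
    return random.choice([-1, +1])

if __name__ == "__main__":
    env = LineEnvironment(start=5)
    for step in range(5):
        state = env.perceive()
        action = greedy_agent(state)
        reward = env.step(action)
        print(f"step={step} state={state} action={action:+d} reward={reward}")
```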
In 1997, this new approach was demonstrated when IBM's Deep Blue computer beat Garry Kasparov, the reigning world chess champion. This was a significant step forward in the AI world, as it indicated that an artificial intelligence program could be used to make decisions in real time.
Then in the late 1990s, Dr. Cynthia Breazeal7, a robotics scientist at MIT, began development of Kismet8,9, a robotic head that could recognize and simulate emotions, allowing it to interact with humans in an intuitive, almost human-like way based on facial expressions, tones of voice, and various head positions.
A new century
In the two decades since, the field of artificial intelligence research has continued to grow. Cheaper, faster computers and storage systems, coupled with advances in networking technology, made distributed systems for AI research more available and more powerful. Access to massive amounts of data (i.e., "big data") also made the work of AI researchers easier.
In addition, numerous consumer products, such as the Furby toy (1998), the Sony AIBO robotic dog (1999), the first Roomba (2002), Apple's Siri (2011), Microsoft's Cortana (2014), Amazon's Alexa (2014), and Samsung's Bixby (2017), have kept the concept of AI and the benefits it can provide in the public eye. Chatbots, virtual assistants, online purchase recommendations based on previous purchases, and other similar tools also continue to increase our daily interaction with AI, helping to make it more familiar.
Today, the field of artificial intelligence continues to expand, with research into areas such as deep learning, neural networks, natural language processing, artificial general intelligence, and more.
For more information on artificial intelligence, visit:
- HPE Artificial Intelligence Solutions
- What is Artificial Intelligence?
- What is Deep Learning?
- What is Machine Learning?
1 – AI Magazine, Volume 27, Number 4, Winter 2006, https://ojs.aaai.org/index.php/aimagazine/article/download/1911/1809
2 – "AI Winter: The Highs and Lows of Artificial Intelligence," https://www.historyofdatascience.com/ai-winter-the-highs-and-lows-of-artificial-intelligence/, published September 1, 2021
3 – "Moore's Law," https://corporatefinanceinstitute.com/resources/knowledge/other/moores-law/, published July 12, 2020
4 – Edward Feigenbaum, https://en.wikipedia.org/wiki/Edward_Feigenbaum
5 – "Expert Systems in the 1980s," E. A. Feigenbaum, https://stacks.stanford.edu/file/druid:vf069sz9374/vf069sz9374.pdf
6 – "Some Expert Systems Need Common Sense," John McCarthy, Stanford University, 1984, https://www-formal.stanford.edu/jmc/someneed/someneed.html
7 – Dr. Cynthia Breazeal, https://en.wikipedia.org/wiki/Cynthia_Breazeal
8 – Kismet (robot), https://en.wikipedia.org/wiki/Kismet_(robot)
9 – "MIT team building social robot," MIT News, https://news.mit.edu/2001/kismet, published February 14, 2001
Hewlett Packard Enterprise
HPE Ezmeral on LinkedIn | @HPE_Ezmeral on Twitter
@HPE_DevCom on Twitter
RichardHatheway
Richard Hatheway is a technology industry veteran with more than 20 years of experience in multiple industries, including computers, oil and gas, energy, smart grid, cyber security, networking and telecommunications. At Hewlett Packard Enterprise, Richard focuses on GTM activities for HPE Ezmeral Software.