RichardHatheway

A Short History of Artificial Intelligence

Artificial Intelligence (AI): It seems like it's everywhere you look today. From AI-enabled chatbots to AI algorithms identifying patterns in consumer behavior to AI-driven programs analyzing scientific data, artificial intelligence is now mainstream. But when and where did it all start?

This blog provides a brief historical overview to help you understand where and when it all began.

A little history

The concept of artificial intelligence actually goes back thousands of years: stories, myths, and legends of artificial beings imbued with intelligence exist in almost every culture. The assumption underlying all of those tales is that human intelligence and thought can be artificially replicated.

However, the modern understanding of artificial intelligence effectively began in the 1940s and 50s, when scientists from a variety of fields, including mathematics, economics, engineering, psychology, and political science, began discussing the possibility of creating an artificial brain. That discussion was driven by recent research in neurology, which had shown that the human brain functions essentially as an electrical network, with electrical pulses transmitting information between neurons.

Claude Shannon (who became known as the father of information theory) described digital signals that functioned in the same way as the electrical pulses in the brain. Alan Turing's theory of computation showed that any form of computation could be described digitally. Taken together, these two ideas became the basis for the notion that it might be possible to construct an electronic brain.

The first mention of artificial intelligence

In 1956, the Dartmouth Summer Research Project on Artificial Intelligence [1] was held. The project brought together researchers from many different fields, and many new ideas, concepts, and papers were submitted for consideration and discussion. It was at this conference, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester (all of whom would become well known in the field of AI), that John McCarthy proposed the term "artificial intelligence," which is considered to be the first use of the term.

In the years immediately after the Dartmouth conference, research and innovation in the field of AI grew rapidly, due in part to the fact that many countries around the globe, including the US, Great Britain, and Japan, were actively funding research. During this time, numerous approaches to AI were being pursued, including:

  • Reasoning as search, where AI programs followed a defined series of steps, searching through possible moves to reach a goal (see the short sketch after this list)
  • Natural language, where the goal was for the computer to communicate in a natural language such as English
  • Micro-worlds, where research was conducted by focusing on artificially simple situations, such as a set of colored blocks
  • Automata, the concept of self-contained basic robots with tactile and visual sensors that would allow them to walk, differentiate and pick up objects, and even communicate via speech
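
To make the "reasoning as search" approach concrete, here is a minimal, purely illustrative Python sketch (an assumption for this blog, not any historical program): the "world" is a tiny grid maze, and the program reaches its goal by exploring one legal step at a time.

    from collections import deque

    # Tiny maze: S = start, G = goal, # = wall, . = open floor.
    MAZE = [
        "S.#",
        ".#.",
        "..G",
    ]

    def neighbors(pos):
        """Yield the adjacent, non-wall cells of a grid position."""
        row, col = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = row + dr, col + dc
            if 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] != "#":
                yield (r, c)

    def search(start, goal):
        """Breadth-first search: expand states step by step until the goal is found."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in neighbors(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no route to the goal exists

    print(search((0, 0), (2, 2)))  # prints the step-by-step route to the goal

Early reasoning-as-search programs worked in essentially this step-by-step fashion, though over far larger search spaces.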

AI growth in the 1960s and 70s

In the 1960s and 70s, the development of industrial robots and new programming languages, as well as popular films, reflected the interest of not only the scientific community but the world at large.

Several noteworthy accomplishments that are indicative of the times took place in the 1960s and 70s:

  • 1961 – Unimate, the first industrial robot, began working on a General Motors assembly line
  • 1961 – the Stanford Cart, a remote-controlled, TV-equipped mobile robot, was created
  • 1965 – ELIZA, an interactive computer program, was developed and could functionally converse in English
  • 1966 – Shakey the Robot was the first general-purpose mobile robot
  • 1968 – the movie "2001: A Space Odyssey" was released, featuring HAL (Heuristically programmed ALgorithmic computer), a sentient computer
  • 1970 – WABOT-1, the first anthropomorphic robot, was built at Waseda University in Japan
  • 1977 – "Star Wars" was released, featuring both a humanoid protocol droid (C-3PO) and an astromech droid (R2-D2)
  • 1979 – the Stanford Cart was updated with a mechanical swivel that moved its TV camera from side to side; it successfully crossed a chair-filled room without assistance in five hours, making it one of the earliest examples of an autonomous vehicle

Throughout the 1970s, advancements continued, but they were mainly focused on robotics. This was because much of the research into artificial intelligence was based on the assumption that human activities and thought processes could be automated and mechanized.

The first AI winter

However, the technology of the time had not yet caught up with the vision for what AI could do. Computing power was simply insufficient to support the stated goals of AI research, resulting in numerous delays and missed objectives. Additionally, a deepening global recession caused funding for AI research to dry up, leading to what is now known as the first AI Winter (1974-1980) [2].

As Moore's Law [3] predicted, however, computing power and storage capacity increased rapidly as technology advanced. Both were necessary to support the computers used for AI research, and the field experienced a resurgence in the 1980s.

AI resurgence in the 1980s

In the 1980s, computer scientist Edward Feigenbaum [4] introduced the concept of expert systems, which could mimic the decision-making process of a human expert. In his own words, "An 'expert system' is an intelligent computer program that uses knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solution" [5].

Expert systems consisted of programs that answered questions using logical rules (such as sets of "if-then" rules) applied to specific domains of knowledge. The simple design of these expert systems made it easy to build and modify programs, as the sketch below suggests. This approach to AI was quickly applied to numerous fields such as medical diagnosis, circuit design, and financial planning.
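
As a rough illustration of that if-then style, here is a minimal Python sketch of forward chaining over a handful of made-up rules (the facts and rule names are hypothetical, and real expert systems were far larger and more sophisticated):

    # A toy rule-based "expert system": facts are strings, and each rule says
    # "if all of these facts are known, then conclude this new fact."
    RULES = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
    ]

    def infer(facts):
        """Apply the if-then rules repeatedly until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"has_fever", "has_cough", "short_of_breath"}))
    # -> the result includes "possible_flu" and "refer_to_doctor"

Because the domain knowledge lives entirely in the rule list, adding or changing expertise is just a matter of editing the rules, which is what made these systems comparatively easy to build and modify.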

However, as in the previous decade, artificial intelligence proved not quite ready for prime time yet again. While advances had been made, numerous AI companies were unable to deliver commercial solutions and failed. Then in 1984, John McCarthy (a computer scientist and pioneer in the field of AI) criticized expert systems because "...hardly any of them have certain common sense knowledge and ability possessed by any non-feeble-minded human" [6].

This lack of common sense was becoming a known issue, as researchers began to realize that AI depended not just on specific domain knowledge, but also on the ability to use a large amount of diverse knowledge in different ways. To address that challenge, a very large knowledge base known as "Cyc" was created with the intent that it would contain all the mundane facts an average person knows. Effectively, the goal was an ontological knowledge base encompassing the most basic concepts, rules, facts, and figures about how the world works. However, AI researchers also realized that, given the scope of the project, it would not be completed for decades.

Then in 1987, Jacob T. Schwartz, Director of DARPA/ISTO, reportedly stated that AI had seen "...very limited success in particular areas, followed immediately by failure to reach the broader goal at which these initial successes seem at first to hint..."

The second AI winter

Shortly thereafter, funding for continued AI research began to be reduced as the second AI Winter set in (1988-1993). Many factors contributed to it. The availability of relatively cheap yet powerful desktop computers from IBM and Apple meant that the more expensive Lisp machines and other specialized systems used for AI research were no longer required, and that industry collapsed. In addition, some of the earliest expert systems proved too expensive to maintain and update.

1990 and beyond

As the 1990s dawned, new methods began to gain sway in the field of AI, as expert systems gave way to machine learning (ML) methods such as Bayesian networks, evolutionary algorithms, support vector machines, and neural networks. In addition, the concept of an "intelligent agent," a system that perceives its environment and then takes the actions most likely to maximize its success, was introduced (a minimal sketch of this perceive-then-act loop follows below). This resulted in a paradigm shift toward systems that addressed all types of intelligence, rather than focusing only on solving specific problems.
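
To make the agent idea concrete, here is a purely illustrative Python sketch (the thermostat scenario, names, and numbers are assumptions for this blog, not taken from any particular system): the agent repeatedly perceives a toy environment and picks whichever action best serves its goal.

    import random

    class ThermostatEnvironment:
        """A toy environment: a room whose temperature changes each step."""
        def __init__(self, temperature=15.0):
            self.temperature = temperature

        def perceive(self):
            return self.temperature

        def apply(self, action):
            # Heating warms the room, switching off lets it cool, plus a little noise.
            change = 1.5 if action == "heat" else -0.5
            self.temperature += change + random.uniform(-0.2, 0.2)

    class ThermostatAgent:
        """Chooses the action that best moves the room toward its target temperature."""
        def __init__(self, target=21.0):
            self.target = target

        def act(self, percept):
            return "heat" if percept < self.target else "off"

    env, agent = ThermostatEnvironment(), ThermostatAgent()
    for step in range(10):
        percept = env.perceive()     # 1. perceive the environment
        action = agent.act(percept)  # 2. choose the action expected to maximize success
        env.apply(action)            # 3. act on the environment
        print(f"step {step}: temperature {percept:.1f} -> {action}")

Even this trivial loop captures the shift in framing: the question is no longer "can the program solve one specific puzzle?" but "does the system behave intelligently in its environment?"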

In 1997, this new approach was demonstrated when IBM's Deep Blue computer beat Garry Kasparov, the reigning world chess champion. This was a significant step forward in the AI world, as it indicated that an artificial intelligence program could be used to make decisions in real time.

Then in the late 1990s, Dr. Cynthia Breazeal [7], a robotics scientist at MIT, began development of Kismet [8, 9], a robotic head that could recognize and simulate emotions, allowing it to interact with humans in an intuitive, almost human-like way based on facial expressions, tones of voice, and various head positions.

A new century

In the two decades since, the field of artificial intelligence research has continued to grow. Cheaper and faster computers and storage systems, coupled with advances in networking technology, have made the distributed systems used for AI research more available and more powerful. Access to massive amounts of data (i.e., "big data") has also given AI researchers far more raw material to work with.

In addition, numerous consumer products, such as the Furby toy (1998), the Sony AIBO robotic dog (1999), the first Roomba (2002), Apple's Siri (2011), Microsoft's Cortana (2014), Amazon's Alexa (2014), and Samsung's Bixby (2018), have kept the concept of AI and the benefits it can provide in the public eye. Chatbots, virtual assistants, online purchase recommendations based on previous purchases, and other similar tools also continue to increase daily interaction with AI, helping to make it more familiar.

Today, the field of artificial intelligence continues to expand into research areas such as deep learning, neural networks, natural language processing, artificial general intelligence, and more.

For more information on artificial intelligence, visit:

 

1 – AI Magazine, Volume 27, Number 4, Winter 2006, https://ojs.aaai.org/index.php/aimagazine/article/download/1911/1809

2 – "AI Winter: The Highs and Lows of Artificial Intelligence," https://www.historyofdatascience.com/ai-winter-the-highs-and-lows-of-artificial-intelligence/, published September 1, 2021

3 – "Moore's Law," https://corporatefinanceinstitute.com/resources/knowledge/other/moores-law/, published July 12, 2020

4 – Edward Feigenbaum, https://en.wikipedia.org/wiki/Edward_Feigenbaum

5 – "Expert Systems in the 1980s," E. A. Feigenbaum, https://stacks.stanford.edu/file/druid:vf069sz9374/vf069sz9374.pdf

6 – "Some Expert Systems Need Common Sense," John McCarthy, Stanford University, 1984, https://www-formal.stanford.edu/jmc/someneed/someneed.html

7 – Dr. Cynthia Breazeal, https://en.wikipedia.org/wiki/Cynthia_Breazeal

8 – Kismet (robot), https://en.wikipedia.org/wiki/Kismet_(robot)

9 – "MIT team building social robot," MIT News, https://news.mit.edu/2001/kismet, published February 14, 2001

 


About the Author

RichardHatheway

Richard Hatheway is a technology industry veteran with more than 20 years of experience in multiple industries, including computers, oil and gas, energy, smart grid, cyber security, networking and telecommunications. At Hewlett Packard Enterprise, Richard focuses on GTM activities for HPE Ezmeral Software.