The Cloud Experience Everywhere

Interacting with the knowledge base of a device via natural language: An HPE demo

At Hannover Messe, HPE demonstrated human interaction with a co-bot, using a solution based on Large Language Models from Aleph Alpha. Here’s how it works – learn more at HPE Discover 2023.

by Raffaele Tarantino, GTM Strategy Lead, HPE Advisory & Professional Services' Data, Analytics & AI Practice

I’m back from an intense, full week at Hannover Messe, where technology and industry leaders come together to showcase their most advanced implementations and share best practices. Walking the event, you can clearly feel innovation sparking in every corner of the fair. The Data, Analytics and Artificial Intelligence services team from HPE brought to life human interaction with a co-bot, using a solution based on the Large Language Models provided by Aleph Alpha.

The biggest challenge with conversational AI systems is maintaining high performance in domain-specific implementations. We targeted the ability to query the knowledge base of a given device in natural language. Being able to ask about safety status, or for instructions on how to perform a specific task, not only surfaces domain-specific information but also provides an incremental learning path for users who are new to the device under inspection.

There are two exciting features to highlight in this context:

Multimodal AI – the ability to interact with the system in natural language and extract semantic meaning from images, generating text related to them.

Explainable AI – the ability to identify the specific areas of the provided data that most influence the generated answer.
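To make the explainability idea concrete, here is a minimal occlusion-style sketch: remove each piece of the input in turn and measure how much the answer's similarity to the remaining context drops. The `embed()` function below is a toy bag-of-words stand-in (an assumption for illustration); a real deployment would use the embedding and explanation facilities of the LLM platform itself.

```python
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding (assumption): hashed bag-of-words vector,
    standing in for a real embedding model."""
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def explain(context_sentences: list[str], answer: str) -> list[float]:
    """Occlusion-style attribution: score each sentence by how much
    removing it lowers the context-to-answer similarity."""
    answer_vec = embed(answer)
    full_sim = float(embed(" ".join(context_sentences)) @ answer_vec)
    scores = []
    for i in range(len(context_sentences)):
        reduced = " ".join(s for j, s in enumerate(context_sentences) if j != i)
        scores.append(full_sim - float(embed(reduced) @ answer_vec))
    return scores

# Hypothetical manual excerpts, invented for this sketch.
manual = [
    "The base joint of the robot arm can rotate a full 360 degrees.",
    "Keep the controller cabinet closed while the arm is powered.",
]
scores = explain(manual, "The arm can turn 360 degrees.")
# The sentence about rotation should receive the higher influence score.
```

The same idea extends to images: scoring patches instead of sentences yields the heatmap overlay shown in the demo.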

Based on the industrial context, market challenges and features at our disposal, and with the contribution of the HPE Center of Excellence for AI and Data, the system on the fair’s shop floor provided an easy-to-navigate user interface, including the ability to take pictures and ask questions verbally using speech-to-text capabilities. Among the hundreds of visits we received, I would like to document one example that illustrates these capabilities at their best.

[Screenshot: HPE Services AI demo user interface]

The user interface allows the user to take pictures and query the system in natural language. In this example, direct information is provided: the robot arm can turn 360 degrees. To identify which specific joint the system is referring to, we can ask it to “explain the last answer,” applying a scoring heatmap to the original image.

The applications unlocked by the demonstrated features are vast. Quoting Andy Longworth, HPE Global Data Platform Team Lead & Architect, from the podcast on Robotics in the Industry: “As system integrator, we are pulling in all integration of the pipeline.” This process is key for enterprises that want to turn the high potential of advanced AI services into valuable applications, creating an impact at the core of their business.

Let’s take a closer look at the knowledge base use case. Iveta Lohovska, HPE Principal Data Scientist, stated during her talk at the Industrial Transformation Stage that “the difference between the most powerful conversational chatbots and the most powerful knowledge workers is actually how optimal the architecture is.” To achieve the optimal setup for the demo, we applied a 3-step approach that we encourage the industry to adopt, as it provides quick results with limited effort.

  1. Select the area of documentation – such as product manuals or technical specifications – that is most relevant for building the knowledge base. Split it into sections and augment them with real-world images. Each section can be pre-computed, producing an equivalent encoded version to be used in the next steps; the technical name for this encoding is an embedding.
  2. Capture the user’s question and produce its equivalent embedding. It can then be compared with the set of section embeddings prepared in the previous step. This technique is called asymmetric search.
  3. Once the most relevant section is scored – where the system expects to find and generate the answer – a package of the encoded question, the selected section, and additional instructions with the inference parameter configuration is issued via an API call. Preparing this step optimally, which is key to the end result and to ensuring high performance, is called prompt engineering.
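The three steps above can be sketched end to end. This is a minimal illustration, not the production pipeline: the toy bag-of-words `embed()` stands in for a real embedding model (which, for true asymmetric search, would encode documents and queries differently), and the section texts and prompt wording are invented for the example.

```python
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding (assumption): hashed bag-of-words. A real system
    would call the platform's embedding model, with distinct document
    and query representations for asymmetric search."""
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Step 1: split the documentation into sections and pre-compute embeddings.
sections = [
    "Joint 1 at the base of the arm rotates a full 360 degrees.",
    "Always engage the emergency stop before any maintenance work.",
    "The gripper supports payloads of up to 5 kg.",
]
section_vecs = np.stack([embed(s) for s in sections])

# Step 2: embed the user's question and search for the closest section.
question = "How far can the base joint of the arm turn?"
sims = section_vecs @ embed(question)  # cosine similarities (unit vectors)
best = int(np.argmax(sims))

# Step 3: assemble the prompt (prompt engineering); wording is illustrative.
prompt = (
    "Answer the question using only the excerpt below.\n\n"
    f"Excerpt: {sections[best]}\n\nQ: {question}\nA:"
)
# In production, this prompt plus the inference parameter configuration
# (model, maximum tokens, temperature, ...) is sent via an API call.
```

Because the retrieval step narrows the context to one section before generation, the quality of that section and of the prompt template largely determines the quality of the final answer.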

If you are looking to see this demo in action again, be sure to register for HPE Discover 2023 to stay ahead of the trends and technologies that will fast-forward your data-first modernization.

There you can also find out how HPE offers a breadth of technology services consulting that can help you accelerate your digital transformation journey. And you shouldn’t miss out on learning more about HPE GreenLake, the open and secure edge-to-cloud platform you've always wanted … but maybe didn’t know it.

Raffaele Tarantino is the GTM Strategy Lead for the HPE Advisory & Professional Services Data, Analytics & AI Practice. He is responsible for the messaging and sales enablement of the services portfolio. Raffaele has 7 years' experience in HPE roles ranging from private cloud consultant to compute specialist and, most recently, AI architect as a member of the worldwide practice. Over the last year, Raffaele designed the HPE Machine Learning Development Services and contributed to the HPE GreenLake Data Immersion launch.


Services Experts
Hewlett Packard Enterprise

twitter.com/HPE_Services
linkedin.com/showcase/hpe-services/
hpe.com/services

About the Author

ServicesExperts

HPE Services Team experts share their insights on the topics and technologies that matter most for your business.