Transform customer experiences with voice analytics and AI
Learn how voice analytics and AI are turning text and audio files into smart data across a range of industries and, most recently, in COVID-19 research.
Say what? Without a doubt, spoken language can be incredibly complicated for the human ear to decipher and understand. In any scenario, whether written or recorded, the context of speech is critical to comprehension.
Challenges come in many ways. Separating words from background noise is not easy. People talk fast and run words together. (Did he say “dancing and smiling” or “dance, sing, and smile”?) In English alone, homophones abound. (Does she mean “red” or “read”? Does he mean “their” or “there”?) And small mistakes can result in very different meanings. (Did she say “reading in the library is always loud” or “reading in the library is always allowed”?)
These factors all make traditional manual transcription of audio files a costly, labor-intensive activity that is also not very productive when it comes to gleaning meaningful data. What’s needed are more modern ways to extract data from audio files.
Unlocking the value of audio data
The reality is, of all the structured and unstructured data pouring into your organization, audio data can still be the most challenging to handle. Yet it also holds great promise. Here’s a look at how a combination of voice analytics and artificial intelligence (AI) solutions can deliver on that promise, plus a few use cases where companies are already seeing positive business outcomes.
Key components of these solutions, speech and natural language processing (NLP) technologies transform human speech in audio data into a rich set of semantic data that can be searched and analyzed. This in turn enables you to unlock valuable business insights, shave costs from labor-intensive processes, and enhance compliance and fraud detection efforts.
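As a simplified illustration (not HPE’s actual pipeline), the step from raw transcript text to searchable semantic data can be sketched as a small inverted index. The transcripts below are invented examples standing in for the output of a speech-to-text engine:

```python
from collections import defaultdict

def build_search_index(transcripts):
    """Build an inverted index mapping each word to the calls
    (and word positions) where it occurs, so transcripts become
    instantly searchable rather than opaque audio."""
    index = defaultdict(list)
    for call_id, text in transcripts.items():
        for position, word in enumerate(text.lower().split()):
            token = word.strip(".,!?\"'")
            index[token].append((call_id, position))
    return index

# Hypothetical transcripts produced by a speech-to-text engine
transcripts = {
    "call-001": "Caller reports a suspected fraud on their account.",
    "call-002": "Routine balance inquiry, no action required.",
}

index = build_search_index(transcripts)
print(index["fraud"])  # calls mentioning "fraud", with word positions
```

Production systems go much further (entities, intents, timestamps back into the audio), but the same principle applies: once speech is text, it can be indexed and queried like any other data.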
Spanning multiple industries
Today, speech and NLP are proving to be an ideal fit in many areas. Financial services firms use speech and NLP for compliance monitoring, fraud detection, and investigations. Legal operations find them the right solution for both e-discovery and cost reduction. In customer service and contact centers, look for quality-of-service, call monitoring, and service optimization applications. And for emergency services, uses focus on automated call monitoring, sentiment analysis, and business insight.
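The sentiment-analysis use case mentioned for emergency services can be illustrated in a deliberately simplified, lexicon-based form. Real deployments use trained NLP models rather than hand-picked word lists; the lexicons here are invented for the example:

```python
# Illustrative lexicon-based sentiment scoring for call transcripts.
# The word lists are invented for this sketch; production systems
# use trained models instead of fixed lexicons.
NEGATIVE = {"angry", "emergency", "scared", "urgent", "hurt"}
POSITIVE = {"thanks", "resolved", "calm", "helpful", "safe"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]; negative values flag distressed calls."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    hits = [1 if w in POSITIVE else -1
            for w in words if w in POSITIVE | NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("Please hurry, this is urgent, I am scared"))  # negative
```

A score like this lets a call center surface distressed callers for priority handling without a human listening to every call.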
And now: playing an important role in COVID-19 research
HPE AI experts are collaborating to support the COVID-19 Open Research Dataset. They aim to develop AI tools that mine data across thousands of scholarly articles on COVID-19 and related coronaviruses, helping the medical community develop answers to high-priority scientific questions.
NLP is one of the key AI application areas for HPE data scientists and technologists, specifically for the recent COVID-19 Open Research Dataset Challenge, which mines the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a free resource from the Allen Institute and partners, offering the global research community access to more than 29,000 articles and texts about COVID-19 and the coronavirus family of viruses.
Enhancing accuracy, eliminating privacy issues
As hinted at earlier, many current tools are hindered by the same factors as human transcriptionists, relying heavily on properly formatted sentences to “understand” what is in a file. In addition, the heavy processing demands of most voice solutions drive organizations to rely on the cloud to record and store voice data. What’s more, with cloud providers acting as the conduit for all information to and from the consumer, sensitive financial and health information is vulnerable to breaches.
Keenly aware of these challenges customers are facing, HPE is working with different partners such as Intelligent Voice (IV), NVIDIA®, and Microsoft to deliver speech and NLP solutions that are tuned specifically to transform unstructured audio data into a rich, accurate set of semantic data for instant insight and intelligence.
Turning audio into smart data
The combination of HPE’s highly optimized hardware portfolio, NVIDIA’s experience in machine learning and graphics processing unit (GPU) technology, and Intelligent Voice’s speech recognition expertise works to deliver deep connection and processing capabilities. Data can flow directly to your preferred platform, search-ready for review and analysis. Your workforce, customers, and partners can interact with devices with the utmost privacy, for greater confidence and a more robust user experience.
In addition, ready scalability means solutions can be deployed on a single edge-based server for off-network or highly localized use cases—or in a fully redundant, multi-tier environment to maximize scalability and throughput.
Case study: Bringing full text of audio files to the rescue
One European police department sought to use voice analytics to categorize incoming calls at its police call center. The main goal was to reduce unnecessary demand in the call centers and ensure staff were free to answer priority calls. Achieving this goal would have a two-fold benefit: identifying vulnerability in callers and improving staff wellbeing.
The police team chose to work with HPE and HPE Pointnext, NVIDIA, and Intelligent Voice to deliver an operational solution that would provide an ongoing and flexible view of call demand. Using speech and NLP, the solution would identify actions to reduce failure demand and assess the impact of demand reduction measures. The solution also enabled the department to correlate changes in demand with external factors, such as the unavailability of social services, closure of mental health units, and seasonality.
To focus on one aspect of the initiative, custody-related calls, full-text search capabilities allowed queries to search a transcript for keywords, phrases, and combinations of the two. Accuracy was measured against a random sample of 50 calls, with a person listening to confirm that the calls were related to custody issues.
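A minimal sketch of that kind of full-text query (illustrative only; the deployed solution’s query language is not described here) might combine keyword and phrase matching like this:

```python
def matches(transcript: str, keywords=(), phrases=()) -> bool:
    """Flag a transcript if it contains any keyword or any exact phrase."""
    text = transcript.lower()
    words = {w.strip(".,!?") for w in text.split()}
    return (any(k.lower() in words for k in keywords)
            or any(p.lower() in text for p in phrases))

# Invented example transcripts for the sketch
calls = {
    "call-101": "He was taken into custody last night.",
    "call-102": "Enquiry about a lost property report.",
}

custody_calls = [cid for cid, t in calls.items()
                 if matches(t, keywords=["custody"], phrases=["in custody"])]
print(custody_calls)  # → ['call-101']
```

Matching whole words (rather than raw substrings) avoids false positives such as “custodial services” triggering a “custody” query.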
It’s important to note the police department call center received 2,475 custody-related calls across a seven-week period. That equated to a total time of approximately 200 hours, or 4.1 hours per day spent by call center staff answering the calls. The average call length was 5.7 minutes. In this case, the categorization accuracy of these calls was 98%. (The initial target was >80% accuracy.) The business sponsor for the project described the results as “gold dust.”
Following a three-step methodology to speed design and deployment
In engagements such as the one with the police force in Europe, HPE Pointnext uses a three-step methodology to guide clients through each unique AI journey involving voice analytics, speech, and NLP.
- Explore—We work with clients to understand the outcomes and challenges AI brings. We ground teams on common AI terminology, fostering shared understanding and selecting the best use cases. The goal is to clearly align technology with the business, so the initiative benefits from having the business buy-in early on.
- Experience—We identify the data sources that will be required for the use case and create a high-level roadmap for use case implementation. This is followed by a proof of value (POV) as to how the solution would be deployed into a production environment. This POV is tested and the outcome is validated.
- Evolve—Now we are ready to work with clients to evolve and scale the AI solution. Leveraging HPE’s optimized infrastructure, which spans from AI edge to cloud, coupled with HPE GreenLake pay-per-use consumption models, makes this part of the complete journey much easier.
The delivery phases for your AI solution move from workshop to POV, then to design, implementation, and operations. When you engage with HPE AI experts, you can discover ways to apply AI to your specific needs in weeks as opposed to months, so you can quickly identify how to maximize the value of your data. This insight translates into tangible benefits, including limiting scheduled downtime, avoiding defective production, reducing costs through automation, and improving quality.
- Technical white paper: HPE AI Speech and Natural Language Processing Solution – Building an AI Solution from Edge to Core
- Brochure: Unlock insights from audio data
- Here to Help: HPE COVID-19 Response Center
Hewlett Packard Enterprise
Iveta Lohovska is a data scientist with the HPE WW Center of Expertise for Artificial Intelligence, Data and Emerging Technologies. She has a passion for AI, natural language processing, and related areas like machine learning and data mining. She is experienced in project lifecycle development with a focus on data warehouse augmentation and IoT data.