Sean_Sargent

3 Things You Can Do Better and Faster with Memory-Driven Computing


“The perfect answer to compute’s perfect storm” … that’s how I described HPE’s Memory-Driven Computing in my previous blog, the storm in question being the deceleration of advances in the semiconductor industry coupled with the ongoing explosion of data from all sources. I argued that HPE’s revolutionary new compute architecture will enable enterprises to accelerate past that turbulence, taking us to places we’ve hardly even imagined up till now.

In this post, I’ll explore some areas where I expect Memory-Driven Computing to have the greatest initial impact. I’ll group them under three themes:

1. Get deeper insights from your data.

We’re seeing increasing interest in Memory-Driven Computing from companies that want to make the right connections among vast numbers of data points. They’re looking for better ways to find the needles in the data haystack, dig out nuggets of value from the data quarry … whatever metaphor you wish to use.

For example, given the low customer loyalty and intense competition that are typical of today’s markets, many businesses urgently need to generate more revenue from their installed customer base. To do that, they need a comprehensive, 360-degree view of the customer relationship so they’ll know when to offer special promotions, which products the customer has already purchased, when a contract is coming up for renewal, and so on. But customer data is often siloed in various functions, notably sales and finance, and scattered across systems such as contract repositories, customer relationship management (CRM) tools, and enterprise resource planning (ERP) platforms.


Large in-memory systems can bridge the silos and ensure that customer relationship data is available for anyone who needs it. Sales teams have the data at their fingertips, so they can capitalize on upsell and cross-sell opportunities, and provide a smoother, more seamless customer experience. Finance teams can keep a close eye on contracted commitments and ensure that revenue flows develop as expected.
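To make the idea concrete, here is a minimal sketch of what bridging those silos looks like once everything lives in memory: a contract view from a CRM-style store and a billing view from an ERP-style store, joined on the fly into a single customer picture. The record layouts, field names, and figures are all made up for illustration; they don’t come from any HPE product.

```python
# Illustrative in-memory "silos": a CRM-style contract store and an
# ERP-style invoice store, held as plain Python structures.
crm_contracts = [
    {"customer_id": 101, "contract_end": "2025-03-31", "product": "Storage Array"},
    {"customer_id": 102, "contract_end": "2024-11-30", "product": "Backup Suite"},
]
erp_invoices = [
    {"customer_id": 101, "amount": 25000},
    {"customer_id": 101, "amount": 12000},
    {"customer_id": 102, "amount": 8000},
]

def customer_360(customer_id):
    """Join the contract and billing views for one customer, entirely in memory."""
    contracts = [c for c in crm_contracts if c["customer_id"] == customer_id]
    revenue = sum(i["amount"] for i in erp_invoices if i["customer_id"] == customer_id)
    return {"customer_id": customer_id, "contracts": contracts, "total_revenue": revenue}

view = customer_360(101)
print(view["total_revenue"])  # 37000
```

The point isn’t the trivial join itself, but that with a large enough memory pool there is no ETL step in the middle: every team queries the same live copy of the data.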

Health care delivery is another sector that can benefit from a 360-degree view of the “customer” – the patient. By holding personal health data in memory, constantly available on demand, Memory-Driven Computing can make precision medicine a reality. Physicians can draw on that information to prescribe appropriate medication and therapy tailored to the patient’s specific, personalized needs.

Life sciences research is another natural use case for Memory-Driven Computing. In my previous blog, I described how the technology is helping DZNE, a research institution established by the German Federal Ministry of Education and Research, to battle Alzheimer’s. (Read more here: German research institute tests Memory-Driven Computing to fight neurodegenerative diseases.)

Deep levels of life sciences research involve vast quantities of data. Historically, researchers have relied on extract, transform, load (ETL) processes to assemble the data for online analytical processing (OLAP). The analytics processes often bump up against memory limits, so researchers have to save data back down to disk. Long analytics runtimes can cause expensive delays in research programs. With Memory-Driven Computing, the data can be held in memory, eliminating the need to keep loading, storing and moving data around.

2. Answer “what-if” questions.

Memory-Driven Computing helps organizations to run simulations and scenarios to model possible futures and predict the impact of change. A classic example is Monte Carlo simulations, which model processes that are heavily impacted by random variables (hence the name, taken from the famous casino in Monaco). Monte Carlo simulations are useful in many verticals, but perhaps nowhere more so than in the financial services space. An investment bank might use the technique, for example, to better understand its risk position or to model price variation in highly complex derivatives. Large Monte Carlo simulations are often run on high-performance grid environments, and they can take hours or days to generate the most probable outcome from the input variables. Now, if you're in investment banking, that’s far too long. You want to be able to price your derivatives quickly and competitively. You want to understand your risk exposure now, not in 24 hours.
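For readers who haven’t worked with the technique, here is a bare-bones Monte Carlo pricer for a European call option, the kind of calculation an investment bank might run at vastly larger scale. The model (geometric Brownian motion), parameters, and path count are standard textbook choices, not anything specific to HPE’s work.

```python
import math
import random

def monte_carlo_call_price(s0, strike, rate, vol, t, n_paths, seed=42):
    """Price a European call by Monte Carlo under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * t       # risk-neutral drift over the period
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)               # one random draw per simulated path
        s_t = s0 * math.exp(drift + vol * math.sqrt(t) * z)
        payoff_sum += max(s_t - strike, 0.0)  # call payoff at maturity
    return math.exp(-rate * t) * payoff_sum / n_paths  # discounted mean payoff

price = monte_carlo_call_price(s0=100, strike=105, rate=0.02, vol=0.2,
                               t=1.0, n_paths=100_000)
```

Each path is independent, which is exactly why the technique parallelizes so well – and why holding all paths and intermediate results in a single large memory pool, rather than shuttling them through a grid, pays off as the path count climbs into the billions.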

This is where Memory-Driven Computing comes in, with a little extra innovation from Hewlett Packard Labs. Our Labs team has invented a way to pre-populate some algorithms so that a complex Monte Carlo simulation can run within our large memory systems with blazing speed. Instead of taking, say, an hour to run a simulation, it can now take just seconds.

Operational logistics is another area that can benefit from “what-if” scenario planning with Memory-Driven Computing. Think of an airport, for example, and its complex systems of interconnecting services – check-in, baggage handling, package processing. Optimizing flight schedules and routes takes massive amounts of compute. What would happen if a volcano erupts in Iceland, disrupting the usual air traffic lanes? Simulations can help organizations understand the impact of multidimensional problems before they arise, and Memory-Driven Computing can hugely accelerate the output. With all of the data held in memory, a complex simulation that might otherwise take a couple of days to complete may take just a few seconds.

3. Respond in real time to real-time data.

With Memory-Driven Computing and high-performance data analytics, companies can go beyond modeling potential futures – they can take action quickly in response to changes in the real world as they unfold. Take fraud prevention, for example. If you use your Visa or Mastercard online and a bad actor gets hold of your personal details, there’s a need to react very, very quickly. Today, the onus is largely on the cardholder to realize that the data has been stolen and identify fraudulent transactions, which may not happen until he or she receives an account statement. But what if the bank could detect abnormal patterns in real time and shift into prevention mode, potentially even shutting down a fraudulent transaction before it’s completed? Large memory systems can hold huge volumes of data to support deep learning and pattern recognition in the battle against fraud.
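In its simplest form, that kind of real-time pattern check is just a statistical test against the cardholder’s in-memory transaction history. The sketch below uses a crude z-score rule with made-up amounts and an arbitrary threshold; a production system would use far richer features and learned models, but the in-memory lookup-and-score loop is the same shape.

```python
from statistics import mean, stdev

# Recent purchase amounts for one card, held in memory (illustrative data).
history = [42.0, 55.0, 38.0, 61.0, 47.0, 53.0]

def is_suspicious(amount, history, z_threshold=3.0):
    """Flag a transaction that sits more than z_threshold standard
    deviations away from the cardholder's typical spend."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(amount - mu) > z_threshold * sigma

print(is_suspicious(49.0, history))   # False: a typical purchase
print(is_suspicious(900.0, history))  # True: a large outlier
```

Because the decision needs only the card’s history and a few arithmetic operations, it can run inline during authorization – provided the history is already sitting in memory rather than behind a disk read.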

Real-time response capabilities are a natural complement to the kind of “what-if” optimization I described above. In supply chain management, for example, you could use simulations to model the impact of a power outage or a natural disaster at any given hub. Then if that actually happens, you can react, recalculate and re-optimize in real time as well. The ability to hold large volumes of data in memory can hugely accelerate time-to-action.

Prescriptive analytics is the key here – systems that provide guidance on possible outcomes to support decision-making. In manufacturing there’s a growing interest in predictive analytics, which can tell you when a component is likely to fail; with prescriptive analytics, the industry is moving towards systems that can recommend, or “prescribe,” remedial activities. (See Hande Sahin-Bahceci’s post Making Artificial Intelligence Enterprise-Ready: HPE Unveils New AI Solutions.) That might mean, for example, suggesting replacement of a machine part long before risk of failure; or, equally, it could mean recommending a slower output to reduce wear and tear on the component and increase the lifespan of the machine.
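At its core, the prescriptive layer maps predicted conditions to recommended actions. A toy rule-based version might look like the following; the thresholds and action strings are invented for illustration, and a real system would derive them from engineering data and optimization models rather than hard-coded cutoffs.

```python
def prescribe(failure_prob, days_to_maintenance):
    """Turn a predicted component failure probability into a recommended
    action. Thresholds are illustrative, not from any real system."""
    if failure_prob > 0.8:
        return "replace component now"
    if failure_prob > 0.4 and days_to_maintenance > 14:
        return "reduce output rate to extend component life"
    return "continue normal operation"

print(prescribe(0.9, days_to_maintenance=30))  # replace component now
print(prescribe(0.5, days_to_maintenance=30))  # reduce output rate to extend component life
print(prescribe(0.1, days_to_maintenance=5))   # continue normal operation
```

The second branch captures the trade-off described above: sometimes the best prescription is not replacement but running the machine more gently until the next scheduled maintenance window.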

Large in-memory compute can certainly support some compelling use cases in manufacturing, but it’s not hard to imagine how it could transform other verticals, too. After all, the data doesn’t have to come only from sensors or machines. You can just as easily pull in data from any device or system and run an analytics engine on it to provide actionable results. Let’s say you want to improve employee retention in your organization. With Memory-Driven Computing you can hold large volumes of HR data in memory, where AI can identify patterns and advise on appropriate actions. One department is taking excessive amounts of medical leave on Fridays? Do we have a morale problem here? Time to prescribe some recognition programs to acknowledge and reward their hard work.

Vistas of the Future

Memory-Driven Computing opens all kinds of exciting vistas on the future. I’ve been reading recently how major automobile manufacturers are enabling cars to “talk to” each other within a certain radius. With vehicle-to-vehicle communication, if there’s an accident ahead, approaching cars could slow down automatically. Imagine how that kind of intelligence, extended and amplified by Memory-Driven Computing, could transform traffic management in cities. The image that comes to mind is a flock of birds, each one acting individually but with close awareness of its neighbors as they make constant minor adjustments to their environment. Instead of running up against the frustrating random bottlenecks that we all struggle with now, “flocks” of cars could adjust effortlessly to flow around obstacles and avoid dangers. Something to think about on your daily commute!

We invite you to partner with us to help create the future of compute.

Check out the Machine User Group, our community of developers, technologists and industry experts interested in Memory-Driven Computing. You’ll find developer toolkits, training and workshops, and social forums. You can join The Machine User Group here.


Chat with us in Vegas

Want to learn more, in person? At HPE Discover, we’ll be hosting a session on Wednesday, June 20 (3-4pm local time) called “Memory-Driven Computing: Taking The Machine from Hewlett Packard Labs to the enterprise.” Experts from HPE Pointnext and Hewlett Packard Labs will provide real-world examples of how to successfully exploit large memory systems such as the HPE Superdome Flex. Emerging technologies developed by Hewlett Packard Labs are hitting the mainstream and can be exploited by the enterprise right now. Join us to learn about the early fruits of The Machine research project and how your business can benefit from Memory-Driven Computing combined with HPE Pointnext expertise.


About the Author

Sean_Sargent

Chief Architect with 20 years’ industry experience working for leading IT organizations, developing and future-proofing worldwide consulting portfolio strategies and capabilities, with a focus on emerging technologies.