
The era of real-time everything


By Curt Hopkins, Staff Writer, Enterprise.nxt

In the early days of the internet, companies like AOL offered a walled garden of connectivity and content. Soon enough, the internet as a radically open structure began to knock holes in that wall, and the web razed it to the ground.

Somehow, computing companies have looked back on the era of siloed data and services as a golden age and have kept trying to rebuild those ruins, with varying degrees of success. But once again, the tendency toward openness has reasserted itself on a large scale.

“We have a duty to do something radical,” said Hewlett Packard Enterprise CEO Antonio Neri. “Radical in terms of computer architecture, radical in how we build and distribute apps, radical in how we consume IT, and radical in how we protect ourselves.”

As Moore’s Law slows to a crawl, tech companies around the world are seeking solutions that will allow us to keep increasing computing speed while minimizing the energy we spend on it. Possible solutions include quantum computing, chaos computing, non-volatile computing, and, in the case of HPE, Cloudless Computing, photonics, accelerators for focused tasks, and Memory-Driven Computing.

Kill sprawl

But what’s the real solution in terms of an overall architecture? Flattening the internal hierarchy of the computer and razing the walled garden once and for all, especially where data is concerned.

“I know for a fact that we spend more moving data around than we do actually using that data,” says Neri. What’s the solution? “Kill sprawl.”

According to Dave Carlisle, HPE Fellow and HPE’s Global IT CTO, a root problem is the disconnect between the sheer volume of data an enterprise user has to deal with and the foundational technology at enterprises’ disposal today. Perhaps counterintuitively, much of this begins with foundational hardware and infrastructure, not software.

“We have a fundamental kind of paradigm that a given core, a given unit of compute if you will, does not have guaranteed fast access to an entire enterprise’s data set,” says Carlisle. “That breeds a whole bunch of interesting architectural implications. Not just in hardware, but software as well and how we manage our data.”

In essence, we have to cobble together the technology of the past to process the data of the future. Carlisle suggests a blueprint for moving beyond the limitations inherent in this stratospheric jerry-rigging.

“Now we start to think about a world where open, universal interconnect standards like Gen-Z are enabled, where we can actually scale, and where fast-access persistent memory is at the center of the architecture, not the compute,” says Carlisle. “Most importantly, this is a world where we can scale the heterogeneous compute tier, attach it to the fabric, and give everyone access to the same fast-access persistent memory pool. Then we can start to rethink architectures.”
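To make that shift concrete, here is a minimal sketch in Python of what "memory at the center" looks like from software's point of view: several independent worker processes map the same large file, standing in for a fabric-attached persistent memory pool, and read their slices in place rather than each loading a private copy. The file name and sizes are illustrative and are not part of Gen-Z or any HPE product.

```python
# Conceptual sketch only: a memory-mapped file stands in for a shared,
# fast-access persistent memory pool that several compute workers read in place.
import mmap
import os
import struct
from multiprocessing import Process

POOL_FILE = "shared_pool.bin"   # hypothetical stand-in for the fabric-attached pool
RECORDS = 1_000_000             # one million 64-bit counters

def build_pool():
    """Write one large file of 64-bit integers to act as the 'memory pool'."""
    with open(POOL_FILE, "wb") as f:
        for i in range(RECORDS):
            f.write(struct.pack("<q", i))

def worker(start, stop):
    """Each worker maps the same pool and reads its slice in place: no copy, no ETL."""
    with open(POOL_FILE, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as pool:
        total = sum(struct.unpack_from("<q", pool, i * 8)[0] for i in range(start, stop))
    print(f"worker {os.getpid()}: sum of records [{start}:{stop}) = {total}")

if __name__ == "__main__":
    build_pool()
    # Two processes stand in for a heterogeneous compute tier attached to one pool.
    procs = [Process(target=worker, args=(0, RECORDS // 2)),
             Process(target=worker, args=(RECORDS // 2, RECORDS))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```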

A funny thing happens when you reach a certain scale. Today, HPE can provide a Superdome Flex with up to 48 TB of memory. These systems have opened doors that were previously firmly locked, achieving important results in Alzheimer’s research, in genomics, in cybersecurity, and in financial modelling. But that’s just a drop in the bucket.

If you consider problems in terms of petabytes (enough to hold the entire working state of an enterprise), paradoxically, everything gets simpler. You can take out complexity and kill sprawl. The world spends more moving data around than actually using that data. That cost has zero value.
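That claim is easy to demonstrate in miniature. The toy Python sketch below times a full copy of a large array (standing in for replicating data between silos) against a single in-place pass over it (actually using the data). The absolute numbers are machine-dependent and say nothing about any particular HPE system; the point is simply that the copy does no useful work.

```python
# Toy illustration of "moving data vs. using data": a byte-for-byte replica
# (moving) compared with a single reduction over the original (using).
# Timings are machine-dependent and purely illustrative.
import time
import numpy as np

data = np.random.default_rng(0).random(50_000_000)   # roughly 400 MB of float64

t0 = time.perf_counter()
replica = data.copy()       # "moving": a full replica, no insight gained
t1 = time.perf_counter()
total = data.sum()          # "using": one pass over the same bytes
t2 = time.perf_counter()

print(f"copy (move): {t1 - t0:.3f}s   sum (use): {t2 - t1:.3f}s   total={total:.1f}")
```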

Imagine a world where many thousands of diverse compute cores can access any byte across tens of petabytes of ultra-fast memory virtually instantly. Now enterprise-scale architectures and capabilities can look very different. With this level of modularity, instead of tens of thousands of databases and data islands, and instead of so much time and so many resources spent moving and replicating data, an enterprise might have just a handful of enterprise data services and platforms with lightning-fast serverless and cloud-native app components on top.

“I can make the argument that some of the most popular data processing frameworks today have a dead-man-walking element to them, because the very compute paradigms that they’re architected around are being reversed,” says Carlisle. “It’s hardware that is about to drive disruption in software.”

The here and now

This set-up is not just a map for an eventual future, according to Carlisle. You can start changing the way you work today.

“You can start to shift how you build your apps in order to move faster,” he says. “We start doing far more containerized stuff on top of Memory-Driven Computing systems running a Memory-Driven Computing data store.”
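The shape of that idea can be prototyped with nothing exotic. As a rough sketch, assuming only the Python standard library and NumPy rather than any HPE data store API, the snippet below has several short-lived "service" processes attach by name to one shared in-memory block and read it without making copies, which is roughly how containerized components could sit on top of a common memory pool.

```python
# Minimal sketch, assuming nothing about HPE's actual data store: independent
# "service" processes attach by name to one shared in-memory block and read
# it in place, much as containerized components might share one memory pool.
import numpy as np
from multiprocessing import Process, shared_memory

STORE_NAME = "mdc_demo_store"        # hypothetical name for the shared block
SHAPE, DTYPE = (1_000_000,), np.float64

def service(label):
    """A stateless component: attach to the store, compute, detach. No copy made."""
    shm = shared_memory.SharedMemory(name=STORE_NAME)
    view = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    print(f"{label}: mean = {view.mean():.4f}")
    shm.close()

if __name__ == "__main__":
    # One writer populates the store; three independent services read the same bytes.
    shm = shared_memory.SharedMemory(name=STORE_NAME, create=True,
                                     size=SHAPE[0] * np.dtype(DTYPE).itemsize)
    store = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    store[:] = np.random.default_rng(1).random(SHAPE)

    services = [Process(target=service, args=(f"service-{i}",)) for i in range(3)]
    for p in services:
        p.start()
    for p in services:
        p.join()

    shm.close()
    shm.unlink()
```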

This approach will allow users to engage in strategic analytics and reach a different level of understanding of their customer sets and markets, creating unexpected competitive differentiation.

Today we live in the past, working on data sometimes months old. With this approach, that changes, and we enter an era of real-time, big data analytics. Users can realize a real-time supply chain and real-time financials, closing the books every second, not every quarter.
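What "closing the books every second" could look like is sketched below. The transaction feed is simulated and nothing here reflects a real HPE financial pipeline; the point is the cadence, with the ledger total re-closed on every tick rather than at quarter end.

```python
# Sketch of a one-second "book close": a running ledger total updated every tick.
# The feed is simulated; in production it would stream from live transaction systems.
import random
import time

ledger_total = 0.0

def receive_transactions():
    """Stand-in for a live feed: a small burst of signed transaction amounts."""
    return [random.uniform(-500, 2_000) for _ in range(random.randint(1, 20))]

for tick in range(5):                      # five one-second closes for the demo
    batch = receive_transactions()
    ledger_total += sum(batch)
    print(f"close #{tick + 1}: {len(batch)} txns, running total = {ledger_total:,.2f}")
    time.sleep(1)                          # close every second, not every quarter
```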

It’s an era of real-time everything.

“This will change the game from the legacy slow IT world and latent data to high-end, very fast, user-centric analytics,” says Carlisle.

Photo by Tomasz Frankowski on Unsplash

About the Author

Curt Hopkins

Managing Editor, Hewlett Packard Labs
