Building the case for HPE Alletra Storage MP architecture
Discover the architectural benefits of HPE GreenLake for Block Storage with HPE Alletra Storage MP
The latest release of HPE GreenLake for Block Storage built on HPE Alletra Storage MP offers an array of new and improved architectural features – with even more on the horizon.
– By Dimitris Krekoukias, Senior Distinguished Technologist, HPE
Plenty has been said about the many features of HPE Alletra Storage MP platforms and how the flexible new hardware platform manifests into different “personalities” for high end block and file solutions. Now it’s time to take a deeper dive into the architectural benefits of our approach – and how the new R4 software for HPE GreenLake for Block Storage built on HPE Alletra Storage MP enables certain things no other vendor can come close to. You’ll also get a preview of what may be possible in the future given the flexibility of the underlying architecture.
From fractional multi-dimensional scaling (that allows things impossible with other vendors, like adding a single controller node to enhance performance granularly without needing to add capacity) to resiliency in the face of simultaneous failures that would cripple other storage systems, HPE Alletra Storage MP has a lot going on under the hood.
HPE GreenLake for Block Storage built on HPE Alletra Storage MP: Efficiency and scale enhancements
Apart from the recently announced all-SDS offering, the new release delivers two major boosts to storage efficiency plus a scale enhancement:
- Enhanced compression and dedupe, with more intelligence around real-time determination of how best to reduce certain data, delivering a double-digit percentage efficiency benefit in many use cases.
- Up to 25% more usable space with certain small or large configurations, thanks to much more efficient use of spare space and improved heuristics for certain other defaults.
- 5.6PB maximum capacity per system (actual capacity, not counting data reduction).
HPE GreenLake for Block Storage built on HPE Alletra Storage MP offers many other more general architectural possibilities, some of which are enabled by R4, others that have been there since R1, and some that are still on the horizon.
Three technology fundamentals
Now you can count on:
- No more controllers in HA pairs – A fundamental aspect of this architecture is that there is no concept of HA pairs of controllers. “Classic” storage systems normally have pairs of controllers mirroring writes between them, which limits resiliency in the face of failures and also reduces flexibility, since everything must be done in pairs.
- Controllers don’t matter for write cache resiliency – The second fundamental technology is that all the write cache resiliency has been moved out of the controllers. It goes hand-in-hand with not having a concept of HA pairs of controllers. As a result, losing a controller doesn’t reduce write cache integrity, and there’s no need for batteries or supercapacitors to protect write cache. The same can’t be said of most other storage systems.
- No more controller ownership of disks/shelves – The third fundamental technology is that there is no need for controllers to “own” disks or shelves. Indeed, all controllers see all disks and shelves, all the time. Again, this is unlike most storage systems that enforce a strict ownership of disks by controllers – losing both controllers in an HA pair for those systems always means losing access to that data, no matter how “orderly” the loss may be.
Let’s take a look at seven scenarios demonstrating these fundamental technologies in action.
1. The need for just a bit more speed
Let’s say my storage system is getting busy but not busy enough to need upgrading to the next model up or to need 2 more controllers. I just need a bit more speed and/or headroom. Simple: With fractional node scaling, I can just add a single controller node. And since there’s no concept of having to own capacity, I can simply add the new node without having to add more capacity. The system will just balance itself and give workloads to the new controller automatically. And in the future I can add yet another controller – again without needing to add capacity. Very clean and easy. In virtually all other systems, I would normally need to do one of the following:
- Replace both controllers with faster ones, assuming faster ones exist.
- Add two new controllers plus disk, then balance things and deal with the extra complexity and hassle. And if you already have enough space, why should you be forced to buy more capacity? This is also the problem with traditional grid-type HCI solutions.
- In the case of monolithic systems, replace everything.
All of those options are more expensive and time consuming than what HPE provides with HPE Alletra Storage MP. This granular node addition is a unique capability available in the R4 release.
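To make the upgrade granularity concrete, here is a minimal sketch (not HPE code), assuming compute scales roughly linearly with controller count, comparing single-node increments with the pair-based increments that classic dual-controller designs force:

```python
# Toy comparison of scaling increments: single-node steps vs. pair-based steps.
# Assumes compute scales roughly linearly with controller count (illustrative only).

def scaling_steps(start_nodes: int, max_nodes: int, step: int):
    """Yield (node_count, compute relative to the starting configuration)."""
    nodes = start_nodes
    while nodes <= max_nodes:
        yield nodes, nodes / start_nodes
        nodes += step

print("Single-node scaling (HPE Alletra Storage MP R4):")
for nodes, ratio in scaling_steps(start_nodes=2, max_nodes=4, step=1):
    print(f"  {nodes} controllers -> {ratio:.0%} of baseline compute")

print("Pair-based scaling (classic dual-controller designs):")
for nodes, ratio in scaling_steps(start_nodes=2, max_nodes=4, step=2):
    print(f"  {nodes} controllers -> {ratio:.0%} of baseline compute")
```

The point: with single-node steps you can stop at 150% of baseline when that is all the headroom you need, instead of jumping straight to 200% and being forced to buy capacity along the way.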
2. The need to add capacity
Adding capacity is simple, and generally the least interesting thing in storage. The major difference with HPE Alletra Storage MP compared to other block systems is that the capacity doesn’t belong to any specific controller. You just need to plug it into the fabric and all controllers now happily and automatically share the new space. No need to worry about balancing things or assigning anything to anything.
3. The need to add even more speed but not more capacity
If you need even more speed later on, you can just add even more controllers – again without having to add more capacity. This is a natural evolution of the first scenario. Going beyond four nodes is supported by the architecture and is an upcoming feature.
4. Losing a whole shelf
The ability to survive the loss of a whole disk shelf is already available, and you can do this with as few as three shelves without resorting to wasteful mirroring. You can also protect against a dual simultaneous drive shelf failure if you have 8 or more shelves.
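To see why this becomes feasible at those shelf counts without mirroring, here is a toy model (not HPE’s actual data layout), assuming each RAID stripe’s chunks are spread round-robin across shelves with dual parity; a stripe survives as long as the chunks lost with the failed shelves stay within what parity can rebuild:

```python
from math import ceil

def survives_shelf_loss(shelves: int, data_chunks: int, parity_chunks: int,
                        failed_shelves: int) -> bool:
    """Toy check: a stripe's chunks are spread round-robin across shelves, so
    losing `failed_shelves` shelves loses at most
    failed_shelves * ceil(stripe_width / shelves) of its chunks.  The stripe
    survives if that worst case is no more than the parity count."""
    stripe_width = data_chunks + parity_chunks
    worst_case_lost = failed_shelves * ceil(stripe_width / shelves)
    return worst_case_lost <= parity_chunks

# Illustrative stripe geometries only, not HPE's actual layout:
print(survives_shelf_loss(shelves=3, data_chunks=4, parity_chunks=2, failed_shelves=1))  # True
print(survives_shelf_loss(shelves=2, data_chunks=4, parity_chunks=2, failed_shelves=1))  # False
print(survives_shelf_loss(shelves=8, data_chunks=6, parity_chunks=2, failed_shelves=2))  # True
```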
5. Losing multiple nodes simultaneously
Several vendors claim the ability to lose multiple controllers, but since they still rely on mirroring, they don’t tell you that they cannot survive the loss of just any two or more controllers simultaneously. Truly simultaneous loss of the wrong two controllers would be catastrophic in most systems, which is the big downside of mirroring cache between two nodes.
Of course, for this level of protection to happen properly, the right cluster conditions need to exist. (N/2)-1 nodes can be lost at the same time for HPE Alletra Storage MP. This rule exists to avoid cluster split brain issues. So in a 6-node cluster, any two nodes can be lost simultaneously. Although the maximum cluster size with the R4 release of HPE Alletra Storage MP is four nodes, the architecture supports more than four nodes, as is planned for a future release. But the (N/2)-1 code is already in R4, waiting for increased node counts to be certified.
Conceptually, the architecture allows arbitrary node counts. For example, in an eight-node cluster, (8/2)-1=3, so any three nodes could be lost truly simultaneously without issues. The same formula can be used for any cluster size. Note that even with seven nodes you could still lose three controllers simultaneously, since a majority (the remaining four nodes) would still be available. The same goes for five nodes: you could lose two simultaneously. This is worth spelling out because odd node counts are allowed; we don’t force even numbers.
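Here is a small arithmetic sketch of that majority rule; the floor((N-1)/2) form matches (N/2)-1 for even node counts and agrees with the odd-count examples above:

```python
def max_simultaneous_node_losses(nodes: int) -> int:
    """Maximum controllers that can fail at once while a strict majority
    survives to avoid split brain: floor((N - 1) / 2), which equals
    (N/2) - 1 for even N."""
    return (nodes - 1) // 2

for n in (4, 5, 6, 7, 8):
    print(f"{n}-node cluster: up to {max_simultaneous_node_losses(n)} simultaneous node losses")
```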
“Simultaneous” means losing more than one node before the cluster has a chance to “calm down”. So you lose one node, then before the cluster can stabilize again, you lose another one (most clusters take some time to stabilize; there’s no such thing as “instant” in clusters). This scenario describes losing things with the worst possible timing. It’s exactly the kind of failure that trips up systems claiming multiple node loss protection: they really mean rolling node loss protection, not simultaneous.
6. Rolling failure (losing all your controllers, one after another)
Imagine you simultaneously lost a couple of controllers and now you’re gradually losing all the others – but it’s happening slowly, perhaps for environmental reasons, giving the cluster a chance to breathe.
We now allow N/2 rolling failures in R4, and later may potentially allow more. Admittedly it’s not a very probable scenario, but this may help in situations where you need a system to stay up in horrible conditions without the possibility of spares arriving on time (or maybe ever, in some deployments). Add our Active Peer Persistence clustering and you could engineer a system that can survive not just a site disaster but also further massive failures in each site – like multiple controllers or whole disk shelves – and never suffer downtime.
7. Replacing any number of controllers with dissimilar types for non-disruptive tech refresh and lifecycle
Replacing any number (including odd numbers) of controllers with dissimilar ones is impossible for other architectures. With the HPE Alletra Storage MP architecture, there are no node pairs to worry about and you may have an odd number of nodes anyway (three, five, etc.) so anything is possible. The disaggregated, shared everything architecture makes lifecycle refreshes easier since everything is modular and nothing belongs to anything. Why be shackled to a specific enclosure type? (This is not yet available in R4, but it’s coming in a later release.)
Start small and scale big
With this architecture, you get the flexibility and resiliency you need to grow from small to huge solutions with as little as 15TB and as much as 5.6PB. Maybe you don’t need all this ability up front. That’s OK. But you can feel safe knowing that if you need it in the future, you can do those things, with investment protection, instead of being trapped with architectures that can never do them and force you to do forklift upgrades.
Ready to learn more?
Read the paper: HPE GreenLake for Block Storage architecture
Meet Storage Experts blogger Dimitris Krekoukias, Senior Distinguished Technologist, HPE
Dimitris contributes to HPE’s strategy, product and process enhancements, and product launches. Focused on bringing value to HPE’s largest customers, he engages with senior decision makers. He also speaks at industry, competitive, and marketing events.
Storage Experts
Hewlett Packard Enterprise
twitter.com/HPE_Storage
linkedin.com/showcase/hpestorage/
hpe.com/storage