12-11-2014 01:44 AM - edited 12-11-2014 01:49 AM
LeftHand P4500 G2 Recommendations
Good morning everyone!
We have just completed a mammoth 36-hour offline upgrade of our LeftHand SAN. Now that this is complete and everything is back online, I wanted to review the setup to see whether there is anything we could do to improve it and reduce downtime for future upgrades.
The setup:
- 5 * P4500 G2 LeftHand systems.
- Currently a single-site setup, but looking to change this long term (I believe this would require a 6th node and FOMs).
- Each system running 2 × 6-disk RAID 5 arrays with no hot spares.
- Mixture of Network RAID 5 and Network RAID 10 volumes. Some physical LUNs, some VMware RDMs, and a good number of VMware datastores.
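To put rough numbers on this setup, here is a back-of-the-envelope capacity sketch. The drive size is an assumption (P4500 G2 models shipped with different drive capacities), and the Network RAID 5 figure uses an idealized (n-1)/n parity efficiency, which SAN/iQ's actual layout only approximates:

```python
# Idealized effective-capacity math for 5 nodes, each with 2 x 6-disk RAID 5.
# ASSUMPTION: 450 GB drives; actual P4500 G2 drive sizes vary by model.
disk_gb = 450
nodes = 5
arrays_per_node = 2
disks_per_array = 6

# Hardware RAID 5: each 6-disk array loses one disk's worth to parity.
per_node_gb = arrays_per_node * (disks_per_array - 1) * disk_gb
raw_cluster_gb = nodes * per_node_gb

# Network RAID 10 keeps two copies of every block -> 50% efficiency.
nr10_gb = raw_cluster_gb / 2
# ASSUMPTION: idealized (n-1)/n parity efficiency for Network RAID 5;
# SAN/iQ's real NR5 layout is not a traditional RAID 5 stripe.
nr5_gb = raw_cluster_gb * (nodes - 1) / nodes

print(per_node_gb, raw_cluster_gb, nr10_gb, nr5_gb)
```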
Some questions:
- What is the significance of the separate “devices” under the Cluster\Storage System\Node\Storage\RAID Setup tab? Or is this just a way of displaying/identifying the RAID arrays?
- Are the storage node RAID 5 arrays logically separate, or actually running RAID 50 at a node level?
- Is RAID 5 (or 50) at the node level without hot spares wise? (I've always worked on the logic that quick rebuilds with hot spares were key on RAID 5 arrays.)
- Is RAID 5 (or 50) on the node level itself considered a bad idea, even with two separate arrays? (moving to RAID10 on the node level would be possible, but with network RAID 10 that’s going to really eat into our effective space)
- Should firmware updates be a non-disruptive process, or do the network RAID 5 volumes prevent this?
- Are RAID 5 volumes themselves considered a bad idea? (It’s obviously more space efficient, but doesn’t seem a logical way to organize data and must surely add a significant overhead to performance)
- Can network RAID 5 volumes be converted to network RAID 10 on the fly, or is this a disruptive process?
- Are there any other recommendations based on the above, or otherwise?
Thanks in advance for any help with this.
12-12-2014 07:59 AM
Re: LeftHand P4500 G2 Recommendations
Wow. What caused your outage? Unless you are running Network RAID 0 (NR0), all upgrades should be doable live.
The hardware RAID 5 is not RAID 50. HP uses RAID 5 because it provides the balance they wanted between price, performance, capacity, and redundancy. You can change to RAID 10, but that obviously comes at the cost of net capacity; it's a decision you would have to make, though generally RAID 5 is sufficient. You are correct about rebuild-time concerns, but that is why they split it into two RAID 5 groups instead of one massive one. Unless you have a good reason, I would leave it as HP has configured it... it does work.
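The rebuild-exposure point can be shown with simple arithmetic. During a RAID 5 rebuild, a second failure in the *same* group is fatal, while a failure in the other group is a separate, survivable event, so smaller groups shrink the set of disks whose loss would destroy the array:

```python
# Compare the fatal-failure exposure of one 12-disk RAID 5 group
# versus the two 6-disk groups HP actually configures per node.
def fatal_disks_during_rebuild(group_size):
    # While one member rebuilds, losing any surviving member of the
    # same RAID 5 group destroys that group's data.
    return group_size - 1

one_big_group = fatal_disks_during_rebuild(12)  # 11 disks are fatal if lost
two_small_groups = fatal_disks_during_rebuild(6)  # only 5 disks are fatal
print(one_big_group, two_small_groups)
```

The rebuild itself also reads fewer disks in the smaller group, which tends to shorten the exposure window as well.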
Why do you use NR5? The HP documentation is pretty clear that it should ONLY be used for archive data in essentially read-only environments. There are some documents on how it actually lays out the data, but it isn't exactly like a traditional RAID 5 layout. 99% of your data should be on NR10.
You can convert your LUNs from NR5 to NR10 live, without disruption (assuming you have the available capacity in the cluster).
You should move any live data that isn't something like an ISO store or a fixed archive from NR5 to NR10.
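A quick sanity check for whether a conversion will fit, using idealized overheads (NR10 stores two copies; NR5 is approximated here as n/(n-1) of the data, which is not SAN/iQ's exact layout — verify against the free space the CMC actually reports before converting):

```python
# Estimate the extra cluster space a live NR5 -> NR10 conversion needs.
# ASSUMPTION: idealized overheads (NR5 ~ n/(n-1) x data, NR10 = 2 x data).
def conversion_fits(volume_data_gb, cluster_free_gb, nodes=5):
    nr5_footprint = volume_data_gb * nodes / (nodes - 1)
    nr10_footprint = volume_data_gb * 2
    extra_needed = nr10_footprint - nr5_footprint
    return cluster_free_gb >= extra_needed, extra_needed

# Hypothetical example: a 1 TB volume on a 5-node cluster with 500 GB free.
ok, extra = conversion_fits(volume_data_gb=1000, cluster_free_gb=500)
print(ok, extra)
```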
If you are looking to run a multi-site setup in the future, be sure to read that guideline closely. While the feature is included in the software, it adds connectivity requirements which are usually very expensive to meet, and if you don't meet them it will totally kill the performance of the entire cluster.