04-03-2022 04:59 AM - last edited on 04-04-2022 08:44 PM by support_s
Expanding storage on HPE MSA 2050
I have a VMware vSphere HA cluster utilizing a pair of ProLiant DL380 Gen10s and an MSA 2050 SAN enclosure. We currently have 2 pools with one volume each, each pool consisting of one 800GB SAS SSD plus seven 600GB 10k SAS drives, for a total of 16 drives. The VMDKs for our stable of VMs are distributed between the 2 volumes.
We recently ran out of storage in one of the pools, as some shares served by one of the VMs caused the associated VMDK to fill the pool to capacity. We were able to clear off significant space and return to full operational status, but we decided we needed to expand the available pool capacity. We acquired 8x 1.2TB 10k SAS drives to fill the remaining 8 slots in the chassis, knowing that the array has no ability to create a 3rd storage pool. We are planning to do the following:
1. Shut down all of the VMs which are homed on one of the 2 pools (let's say it's pool A)
2. Migrate the VMDKs for the file server VM (OS drive and shares drive) over to pool B (there should be enough space to hold this whole VM plus a bit extra)
3. Remove the 10k drives from pool A and add them to pool B (there will be other VMs here, but they aren't critical and are backed up)
4. Restore the remaining VMs from pool A onto pool B via backups
5. Update the VM configuration to use the VMDKs from the new pool location and make sure they boot
6. Add the 1.2TB drives to pool A
7. Migrate the file shares VMDK from pool B back to the much larger pool A
So we'll end up with pool A having 9 total devices, and pool B having 15, including cache SSD for each pool. My question is this: is there a better way to do this migration? Something like:
1. Add the 1.2TB drives to pool A
2. Remove the 600GB drives from pool A and add them to pool B
3. Migrate all of the VMs' C: drive VMDKs from pool A to pool B (leaving the file share VMDK on pool A)
4. Expand the volume on pool A and allow the file share VMDK to grow to fill the space eventually
Is something like that possible on the MSA 2050? I think it could make the migration/expansion process smoother if so. Another related question: should the SSD cache drive be at least as big as one of the individual pool member drives? Or will keeping the existing 800GB SSD cache be sufficient even for an individual member with 1.2TB capacity?
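For planning purposes, the usable capacities in the two layouts can be sanity-checked with a little arithmetic. This is only a rough sketch: it assumes RAID 6 disk groups (two drives' worth of parity per group, which is not stated in the thread) and ignores spares and metadata overhead.

```python
# Rough usable-capacity estimates for the proposed pool layouts.
# Assumption (not from the thread): RAID 6 disk groups, so each group
# loses two drives' worth of capacity to parity; overhead is ignored.

def raid6_usable(drive_count: int, drive_tb: float) -> float:
    """Usable TB of a RAID 6 disk group: total capacity minus two parity drives."""
    return (drive_count - 2) * drive_tb

# Current pool A data drives: 7x 600 GB 10k SAS
current_a = raid6_usable(7, 0.6)

# Planned pool A data drives: 8x 1.2 TB 10k SAS
new_a = raid6_usable(8, 1.2)

# Pool B after absorbing the old 600 GB drives: 7 + 7 = 14 data drives
# (modelled here as a single 14-drive group purely for a capacity estimate)
new_b = raid6_usable(14, 0.6)

print(f"pool A now:     {current_a:.1f} TB usable")
print(f"pool A planned: {new_a:.1f} TB usable")
print(f"pool B planned: {new_b:.1f} TB usable")
```

Under these assumptions the rebuilt pool A roughly doubles its usable space, which is consistent with the goal of letting the file share VMDK grow there.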
04-03-2022 05:59 AM
Query: Expanding storage on HPE MSA 2050
System recommended content:
1. HPE MSA 2050 Storage - Supported Configurations
Please click on the "Thumbs Up/Kudo" icon to give a "Kudo".
Thank you for being a valued HPE community member.
04-04-2022 12:48 PM
Re: Expanding storage on HPE MSA 2050
@rudrasete
A lot of good questions here and a well-thought-out migration plan.
Please look at the MSA Best Practices Guide, and you can also review the MSA Gen5 Virtual Storage Technical Reference; both can be found with a Google search.
Starting with the second option.
Yes, you can add the 1.2TB drives as a new disk group in Pool A, and the new capacity will be added. The new disk group has some background tasks to run, such as INIT, so it could take some time before you see the new capacity. The system will then start rebalancing (moving data pages to the new disk group). At that point you can remove the old disk group (7x 600GB), but this process can take a LONG time because it runs in the background so as not to interfere with I/O processing. The system is completely online the whole time, and if the DRAIN of the old disk group takes too long, you can revert to your first process.
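As a sketch only, the second option might look like the following from the array's management CLI. The command names come from the MSA 2050 CLI Reference Guide, but the disk IDs, RAID level, and disk-group name here are hypothetical; verify the real disk locations with `show disks` before running anything.

```
# Add the eight new 1.2TB drives (hypothetical IDs 1.9-1.16) as a
# second virtual disk group in pool A -- syntax should be checked
# against the MSA 2050 CLI Reference Guide for your firmware level
add disk-group type virtual disks 1.9-1.16 level raid6 pool a

# Watch the INIT and rebalancing progress
show disk-groups

# Once the new group is ready, drain and remove the old 600GB group
# (group name "dgA01" is a placeholder -- use the name from show disk-groups)
remove disk-groups dgA01
```

The drain triggered by `remove disk-groups` is the long-running background step described above; the pool stays online throughout.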
For Step 2. Migrate VMDKs over to Pool B
Because the MSA is thin provisioned, if you believe there is enough spare capacity on the B pool you could try a VOLUME COPY on the MSA. This is a copy of all the volume data internal to the MSA system; you could even bring up the VMs once you have mapped it all out to the ESX hosts (which might be a good process check to make sure you have your data). You could also take a backup of the entire volume while the VMs and the I/O running to Pool A are offline.
For Step 3. Remove the 600GB 10k drives from Pool A
Preferable to delete the pool from the CLI. It amounts to the same thing, since this is the only capacity disk group, but if the array gets confused by the READ-CACHE when removing a disk group, the process could be delayed. You could also remove the READ-CACHE first, which should take less than a minute.
If you do a volume copy in Step 2, then all the VMs, including the ones you were going to restore from backup, would be available.
The second part of Step 3, 'and add them to Pool B', will return you to the original physical capacity. At this point all of your VMs could be running from Pool B. This would likely give the same or even better performance than you had previously, as the limit to your performance will be HDD spindles, which will be doubled once all the rebalancing is complete.
And you can follow the process you described for migrating the VMs back to the NEW Pool A.
The recommendation is to have your READ-CACHE or performance tier be at least 10% of your capacity. If you are using the default RAID 6, I think you are over 10% on both pools. You can also look at the 'I/O Workload' graph in the SMU (WebUI). This graph is a rough view of your workload breadth: if 80% of your current workload fits within the 800GB SSD capacity, it means that your daily workload 'could' be housed within the SSD (READ-CACHE). Note the words 'rough' and 'could'; there are a lot of details around the specific workload that could render the graph misleading.
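The 10% guideline above is easy to check with quick arithmetic. This is a sketch only; it assumes RAID 6 usable capacity and uses the drive sizes from this thread.

```python
# Check the ~10% read-cache sizing guideline against a pool's usable capacity.
# Assumptions (not from the thread): RAID 6 disk group (two parity drives),
# sizes given in GB, overhead ignored.

def cache_ratio(cache_gb: float, data_drives: int, drive_gb: float) -> float:
    """Read-cache size as a fraction of RAID 6 usable pool capacity."""
    usable = (data_drives - 2) * drive_gb
    return cache_gb / usable

# Existing 800 GB SSD against the planned pool A of 8x 1.2 TB drives
ratio = cache_ratio(800, 8, 1200)
print(f"read-cache is {ratio:.0%} of usable capacity")  # above the 10% guideline
```

So under these assumptions the existing 800GB SSD still just clears the 10% mark even after the pool grows to the 1.2TB drives, consistent with the advice above.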
Hope this helps
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
04-04-2022 09:51 PM
Re: Expanding storage on HPE MSA 2050
The MSA 2050 can grow incrementally to a maximum of 96 LFF drives, 192 SFF drives, or a combination of SFF and LFF enclosures, up to a maximum of 8 total enclosures. Virtual storage disk groups can span multiple enclosures, and virtual storage supports multiple RAID levels.