03-05-2016 05:55 AM
HP VSA On 2 Node Hyper-V Cluster
Hi peeps,
I've been asked to set up an HP VSA on a 2-node Hyper-V cluster.
Servers are HP DL380 Gen9 with 8× 600 GB local disks, plus an additional 4-port NIC adapter, making 8 NICs in total.
The OS is Windows Server 2012 R2 Datacenter (full install for now) with the Hyper-V role and Failover Clustering installed.
1 NIC for Management
3 NIC for VM traffic
3 NIC for iSCSI traffic
1 NIC for Live Migration
Questions:
1) All disks are in one big RAID6 volume on the P440i adapter; the OS partition is 200 GB and the rest is VHD storage. Is this OK?
2) I've created a NIC team for the 3 VM NICs with a vSwitch on it, and also a NIC team for the 3 iSCSI NICs with a vSwitch on that. Is this the best way to go, or should the iSCSI NIC team be removed and the 3 NICs become separate vSwitches?
3) Following on from question 2 and the iSCSI NIC team setup: how does MPIO work in this configuration?
4) Should I create one big VHDX for the VSA or several smaller ones? (I'm thinking one big one for the CSV and a smaller one for the Hyper-V cluster quorum, not the VSA cluster!)
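For reference, the sizing arithmetic behind question 1 works out like this (a quick sketch; the disk count and sizes come from the post above, and real usable figures will be slightly lower after formatting overhead):

```python
# Capacity sketch for the proposed single RAID6 volume (figures from the post).
DISKS = 8
DISK_GB = 600

raw_gb = DISKS * DISK_GB                 # 4800 GB raw across all spindles
raid6_usable_gb = (DISKS - 2) * DISK_GB  # RAID6 loses two disks' worth of parity
os_gb = 200                              # proposed OS partition
vhd_store_gb = raid6_usable_gb - os_gb   # what's left for the VSA's VHDX storage

print(raw_gb, raid6_usable_gb, vhd_store_gb)
```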
Later on I will add a FOM on an older server so we have fully functional VSA cluster failover.
Thanks in advance for reading and replying!
Pete
03-09-2016 04:10 PM
Re: HP VSA On 2 Node Hyper-V Cluster
Install the CMC on a desktop computer and read the help section. HP actually wrote useful information in there, and I would stick to it.
Unless the load on your SAN is going to be really, really light, you are not going to want to share those disks with other VMs; it will likely cause random performance problems unless those disks are really fast and your load is really light. Other than that, install the HP DSM for MPIO and configure it according to the manual. Install the application-aware snapshot programs as well. Beyond that, it's pretty straightforward.
You will need to come up with a third Hyper-V host to run a Failover Manager; I think v12.5 also gives you the option of an NFS share. Do not use a Virtual Manager, as that will not allow you seamless failover during an event. Also, while they give you the option of Network RAID levels NR0, NR10, and NR5, the only real option in production is NR10. Ignore the rest and do NOT touch them with a 10' pole. Yes, the design of this SAN is very wasteful of raw disk space, but the beauty is that you can use cheap disks, and when set up with NR10 you can pretty much achieve 99.9999999% availability. I stopped tracking my cluster uptime after it hit the 4+ year mark, which includes a demolishing and rebuilding of our server room and relocation of our racks... twice. Follow the suggestions in the help guide and you will be fine.
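To put "wasteful of raw disk space" in numbers, here is a rough sketch. The per-node figure is an assumption derived from the original post's 8× 600 GB RAID6 layout minus a 200 GB OS partition; NR10 then mirrors every block across both nodes:

```python
# Rough end-to-end capacity math for Network RAID 10 on a 2-node VSA cluster.
# All inputs are assumptions based on the layout described in the original post.
disks_per_node = 8
disk_gb = 600
os_gb = 200
nodes = 2

raw_gb = nodes * disks_per_node * disk_gb             # raw spindle capacity, both nodes
per_node_gb = (disks_per_node - 2) * disk_gb - os_gb  # after RAID6 parity and OS partition
replicas = 2                                          # NR10 keeps 2 copies of each block
nr10_usable_gb = per_node_gb * nodes // replicas      # usable capacity for the whole cluster

print(f"usable/raw: {nr10_usable_gb}/{raw_gb} GB")
```

So roughly a third of the raw capacity survives the combined RAID6 + NR10 overhead, which is the trade the reply describes: cheap disks and double protection in exchange for usable space.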