How to get more than 1Gbps per StoreVirtual 10.5 VSA
03-25-2013 10:37 AM - edited 03-25-2013 11:40 AM
In planning my HP StoreVirtual SAN design, I read through the latest HP StoreVirtual install and configuration whitepaper and found that a VSA does not support NIC bonding at the LeftHand OS level the way the hardware version of the LeftHand OS does. Because the one NIC attached to the VSA for iSCSI traffic is a VMXNET3 adapter, my understanding is that I should be able to deliver more than 1Gbps to that VSA by using multiple 1Gbps NICs with some type of active-active network configuration at the vSphere vSwitch level. There is little documentation on what HP recommends for a configuration at that level beyond using two 1Gb NICs.
Is there a supported way to get more than 1 Gbps of bandwidth per HP StoreVirtual 10.5 VSA running on vSphere without using 10Gbps?
The closest approach I can think of is to create a two-NIC EtherChannel team using IP-hash load balancing on the vSwitch where the VSA's VM network will reside. I believe this may work because I have four vSphere hosts, each using two NICs with MPIO round robin for iSCSI traffic. If I understand EtherChannel with IP hashing correctly, each of the eight paths to a VSA will hash to an uplink, which should result in both NICs in the EtherChannel team being used for the VSA's VM network.
Does anyone know if I am right or wrong in my understanding of this configuration?
Is there a better way to achieve my goal?
Does anyone know of additional documentation that I haven't seen for this?
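As a rough sketch of the path distribution described above: VMware's "Route based on IP hash" policy hashes the source and destination IP addresses and takes the result modulo the number of active uplinks. The model below is simplified (it uses only the last octet of each address) and all IP addresses are made up for illustration.

```python
# Simplified model of vSphere "Route based on IP hash" uplink selection.
# The real vSwitch hashes the source and destination IPs and takes the
# result modulo the number of active uplinks; this sketch uses only the
# last octet of each address. All IPs here are hypothetical.

def last_octet(ip: str) -> int:
    return int(ip.split(".")[-1])

def select_uplink(src_ip: str, dst_ip: str, num_uplinks: int = 2) -> int:
    # XOR the last octets, then take the modulo over the NIC team size
    return (last_octet(src_ip) ^ last_octet(dst_ip)) % num_uplinks

# 8 initiator vmkernel IPs (4 hosts x 2 iSCSI vmkernels) -> 1 VSA
vsa_ip = "10.0.0.50"
for i in range(11, 19):
    src = f"10.0.0.{i}"
    print(f"{src} -> uplink {select_uplink(src, vsa_ip)}")
```

With eight distinct initiator addresses, the hash spreads the conversations across both uplinks, which is the effect the EtherChannel design above relies on.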
Details of Planned Environment:
Overview:
I am planning a 2-node StoreVirtual SAN using Network RAID-10 with about 8TB of usable space. Each of the two vSphere hosts will run one VSA and nothing else. Four other vSphere 5.1 hosts, configured as a vSphere Enterprise cluster, will consume the storage via iSCSI from the 2-node VSA SAN and will host all of my virtual machine workload. The iSCSI configuration is two NICs per cluster host using VMware MPIO round robin across both iSCSI switches, with no network teaming configured.
2 VSA Nodes:
Hardware:
Server: Dell R710, 8GB RAM, PERC H700i 512MB NV Cache
Internal Storage: (Qty:8 - 300GB, 10k SAS, RAID 5: Capacity: 1953GB, Datastore1)
2- H810 with 1GB NV Cache
Dell MD1220 (Qty: 24-146GB, 15k SAS, RAID 50: Capacity: 3212GB, DataStore2)
Dell MD1220 (Qty: 24-146GB, 15k SAS, RAID 50: Capacity: 3212GB, DataStore3)
Each MD1220 will be hooked to its own PERC H810 RAID Controller & the internal storage will use the PERC H700i
Software:
Hypervisor: vSphere 5.1
VSA VM: HP StoreVirtual VSA 10.5
VSA Hardware:
CPU: 2 vCPU
RAM: 5GB
Storage: ~8TB
Disk 0: OS drive for VSA, SCSI 0:0, Capacity: default deployment size
Disk 1: DataStore1, SCSI 1:0, Capacity: ~1900GB
Disk 2: DataStore2, SCSI 1:1, Capacity: ~1600GB
Disk 3: DataStore2, SCSI 1:2, Capacity: ~1600GB
Disk 4: DataStore3, SCSI 1:3, Capacity: ~1600GB
Disk 5: DataStore3, SCSI 1:4, Capacity: ~1600GB
iSCSI Initiators:
4-Node vSphere 5.1 Enterprise Cluster
4-Dell R710, 2 CPUs, 144GB RAM, 8 1-Gbps NIC Ports (per node)
Mgmt/vMotion, vSwitch0, 2-NICs, Server Stack, Etherchannel Team with IP Hashing
VM Networks, vSwitch1, 2-NICs, Server Stack, Etherchannel Team with IP Hashing
iSCSI, vSwitch2, 2-NICs, iSCSI Stack (separate switches), No Teaming, Jumbo Frames enabled
--2 iSCSI VMKernels
--each iSCSI VMKernel will be statically assigned to different NICs
--VMware MPIO round robin enabled
--Jumbo Frames enabled
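For reference, the vmkernel binding and round-robin policy listed above would typically be applied with esxcli commands along these lines. The adapter, vmkernel, and device names below are placeholders, not values from this environment.

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter
# (vmhba33, vmk1, vmk2 are placeholder names -- substitute your own)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Set round robin as the path selection policy for a VSA-backed device
# (the naa identifier is a placeholder)
esxcli storage nmp device set --device naa.6000eb3xxxxxxxxx --psp VMW_PSP_RR
```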
iSCSI Stack:
2-Cisco 3750x Switches (Configured in a Stack and only used for iSCSI)
--Jumbo Frames & Flow Control Enabled
Server Stack:
6-Cisco 3750x Switches (Configured in a Stack and not used for iSCSI)
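For completeness, the switch-side pieces mentioned above (jumbo frames, flow control, and an EtherChannel for the IP-hash team) look roughly like this on a 3750-X. Interface and VLAN numbers are examples only; note that IP-hash teaming on a standard vSwitch requires a static EtherChannel (`channel-group ... mode on`), not LACP or PAgP.

```
! Jumbo frames on a 3750-X are set stack-wide (requires a reload)
system mtu jumbo 9000
!
! Static EtherChannel for the two uplinks carrying the VSA's VM network
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 100
 flowcontrol receive desired
 channel-group 1 mode on
```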
04-02-2013 07:58 AM
I ran across this HP Communities post that I believe answers the question I had.
http://h30499.www3.hp.com/t5/HP-StoreVirtual-HP-LeftHand/VSA-ESX-Multiple-PNIC-s/m-p/6001243#M6508
Using an EtherChannel with two or more physical NICs for the VM network on the vSphere host that runs the VSA should let me load balance I/O traffic across multiple NICs coming into the VSA. The better the load balancing achieved, the better the chance of exceeding 1Gbps of bandwidth per VSA.
For IP hash to balance well, the more iSCSI initiator IP addresses communicating with the VSA, the better the distribution gets. The two best ways I see to do that are:
1. Adding more vSphere hosts that use the VSA storage.
2. Configuring MPIO (round robin) on each vSphere host whose iSCSI initiators use the VSA storage. The more physical NICs used in the MPIO configuration per host, the better the load balancing gets.