Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > Re: VSA Nics
06-10-2013 08:56 AM
VSA Nics
I've recently upgraded to a 4-node system. I want to move from 1GbE to 10GbE, and I have a couple of questions around this for the VSA. I find the documentation for the VSA versus the physical StoreVirtual range problematic, as HP doesn't adequately distinguish between the two, particularly around some of the VSA's limitations.
I want to know how many of you use both eth0 and eth1 on your VSAs?
Do you use the same subnet for both?
I'm a VMware shop, so do you split eth0 onto its own vSwitch and run eth1 as the iSCSI NIC through another vSwitch? I have a dual-port 10GbE NIC; would you map the ports 1:1 to the VSA's NICs, or would you dedicate a 1GbE NIC to the VSA's VM Network and use both 10GbE ports for iSCSI? If you could share a capture of your VM networking screen, it would be very helpful.
Thanks
- Tags:
- NIC
06-11-2013 03:49 PM
Re: VSA Nics
Unfortunately, you can only use one of the two NICs. I tried to use the second NIC in versions 9 and 9.5 and could never get it to work. If you have multiple 1GbE or 10GbE ports, you "may" get some performance benefit from the VMXNET3 adapter and jumbo frames. I don't have any numbers to back that up, but I have been playing with 10GbE and trying to use the VSA in a high-performance environment.
Depending on your workload you might consider separating the VSA and VMware iSCSI vmkernel port(s) on separate vswitches and pNICs if you run into latency issues.
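For anyone wanting to try the separation described above, here is a minimal esxcli sketch (ESXi 5.x syntax). The vSwitch, port group, vmk, vmnic names, and IP address are all assumptions for illustration; substitute your own:

```shell
# Sketch only: vSwitch/port-group/vmk names, the vmnic number, and the
# IP address are placeholders -- adjust for your environment.

# Dedicated vSwitch for iSCSI on its own 10GbE uplink
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard uplink add \
    --vswitch-name=vSwitch-iSCSI --uplink-name=vmnic4

# Port group and vmkernel port for the software iSCSI initiator
esxcli network vswitch standard portgroup add \
    --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI-1
esxcli network ip interface add \
    --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static

# Jumbo frames -- only if every hop end-to-end supports MTU 9000
esxcli network vswitch standard set --vswitch-name=vSwitch-iSCSI --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

Note that jumbo frames only help if the physical switch ports and the VSA itself are also set to MTU 9000; a mismatch anywhere on the path can hurt more than it helps.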
06-12-2013 08:10 AM
Re: VSA Nics
That is exactly how I have approached it. I have never used both eth0 and eth1 but I'm curious to see whether trying it would help.
I experimented with jumbo frames in the 9.x days and determined there was little benefit, if any, so I have abandoned them for this setup on 10.5.
I've set mine up like this:
I have a 10GbE card with dual ports. I've created one vmkernel port (vmk1) in a vSwitch (vSwitch4), bound to port 1 of the pNIC, which plugs into a dedicated 10GbE port on a physical switch for iSCSI. I've created a second vmkernel port (vmk2) in the same vSwitch4, bound to the second port of the pNIC and connected to a different 10GbE physical switch. I've done this for resiliency's sake on all 4 nodes. For each node, I've used a separate vSwitch and pNIC (1GbE) for the VSA's VM Network.
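The two-vmk layout above can be sketched in esxcli roughly like this (port-group names, vmk/vmnic numbers, and the software iSCSI adapter name vmhba33 are assumptions; your values will differ):

```shell
# Sketch only: names and adapter numbers are placeholders.

# Pin each iSCSI port group to exactly one 10GbE uplink so each vmk
# rides a single physical port (and therefore a single physical switch)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-1 --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-2 --active-uplinks=vmnic5

# Bind both vmkernel ports to the software iSCSI adapter so the
# initiator can multipath across them
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

iSCSI port binding requires each vmk's port group to have exactly one active uplink, which is what the failover override provides.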
My question is: am I actually better off in this configuration? Or, given the VSA's limitations versus the physical StoreVirtual models, is my setup pointless, and should I just connect one pNIC to one pSwitch and call it a day?
Cheers
06-14-2013 08:31 AM
Re: VSA Nics
From what you've described, I suspect you cannot configure any type of link aggregation across both of your physical switches. Does your VMware environment support distributed virtual switches?
Are there any other guests on your hosts besides the VSA?
06-14-2013 10:48 AM
Re: VSA Nics
Yes, I have other VM guests on the hosts with the VSAs, but they use separate pNICs, pSwitches, vNICs, and vSwitches.
I could probably set up a trunk, but I don't think I could stack them (although I could be wrong). I am using an 8-port 10GbE module in each of two E5406 chassis, which are physically separated in our building.
I don't have Enterprise Plus, just Enterprise, so no distributed vSwitches.
This is what it looks like right now, but I could easily change it. I'm considering adding the VM Network port group into the same vSwitch (vSwitch4) so that it has a 10GbE connection. I doubt this would do much, though.