05-13-2009 05:39 AM
Virtual Connect + iSCSI + VMware... best practices?
Traditionally, I would just run a DL380 with a multi-port NIC in a PCI slot and create a vSwitch with a LOM NIC plus an expansion NIC for the iSCSI initiator. I would then connect those pNICs to a Cisco 3750G stack (cross-stack EtherChannel) and run a teaming policy of route based on IP hash. That gave me maximum throughput and redundancy, but I've been scratching my head over how to achieve the same thing with Virtual Connect. I know I can do it with a pair of 3120s or pass-throughs, but I'd like to avoid the complexity, hassle, cost and cabling associated with either option if I can.
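For context, that traditional setup looks roughly like this from the ESX service console (a minimal sketch only; the vmnic names, port group name and addressing are made up for illustration):
  esxcfg-vswitch -a vSwitch1                        # new vSwitch for iSCSI
  esxcfg-vswitch -L vmnic0 vSwitch1                 # LOM NIC as first uplink
  esxcfg-vswitch -L vmnic2 vSwitch1                 # expansion-card NIC as second uplink
  esxcfg-vswitch -A iSCSI vSwitch1                  # VMkernel port group for the initiator
  esxcfg-vmknic -a -i 10.10.10.21 -n 255.255.255.0 iSCSI
The IP-hash policy itself is set on the vSwitch's NIC Teaming tab in the VI Client, and it only works if the two 3750G ports are in a static (channel-group mode on) cross-stack EtherChannel.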
It is my understanding that with Virtual Connect, while I could technically set up two side-by-side modules with shared uplink sets to the same core switches, one of the shared uplink sets will go into a blocking state (to prevent loops). In that case, does the traffic that would have gone out of that VC route (i.e. LOM 2 on I/O Module 2) simply not pass, or does it get routed through the 10GbE internal cross-connect and out of the shared uplink set on I/O Module 1?
Also, as there is no downlink teaming available, is there a recommended VMware teaming policy that should be applied to vSwitches that use VC? I'm assuming the recommended method would be route based on originating virtual port ID?
Any thoughts, suggestions or insight would be greatly appreciated.
Thanks,
Tyler
05-14-2009 01:36 PM
Re: Virtual Connect + iSCSI + VMware... best practices?
I'm not aware of any docs that talk about iSCSI + VC + VMware. The only thing I would do is try to have a dedicated VLAN for iSCSI traffic.
From above:
"In that case, does the traffic that would have been set to go out of that VC route (I.e. - LOM 2 on I/O Module 2) not pass traffic or would it then route traffic through the 10Gig-E internal cross connect and out of the shared uplink set on I/O module 1?"
You are talking about a single Shared Uplink Set with physical uplink ports on two or more VC modules. With a single Shared Uplink Set you end up with only one port or channel active and all the others in standby. If your active port or channel is on Bay 1, then the NICs on Bay 2, for example, will use the internal 10Gb stacking link and go out the active port or channel on Bay 1.
You can implement two Shared Uplink Sets in what is called an Active/Active uplink configuration (see the VC Ethernet Cookbook); a rough sketch follows. The stacking links then aren't used, and each NIC uses an uplink port or channel on the module it directly connects to.
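In the Virtual Connect Manager CLI an active/active pair looks something like the following. This is only a rough sketch: the uplink set and network names are made up, the port naming (enclosure:bay:port) depends on the module, and the exact syntax should be checked against the cookbook for your firmware version.
  add uplinkset SUS-A
  add uplinkport enc0:1:1 uplinkset=SUS-A
  add network Prod-A uplinkset=SUS-A vlanid=10
  add uplinkset SUS-B
  add uplinkport enc0:2:1 uplinkset=SUS-B
  add network Prod-B uplinkset=SUS-B vlanid=10
In the server profile you then map LOM 1 to Prod-A (Bay 1) and LOM 2 to Prod-B (Bay 2), so each NIC exits through its own module and failover is handled by the ESX teaming layer.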
As far as teaming in ESX goes, the only policy that isn't supported is route based on IP hash, because it requires switch-side configuration (a static EtherChannel) that you can't do on VC downlinks. The other teaming methods will work; mileage may vary depending on your environment's traffic patterns.
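In ESX 3.x/4.x the policy is picked per vSwitch or port group on the NIC Teaming tab in the VI Client. On later ESXi releases (5.x and up) the same setting can also be scripted, e.g. (a sketch; the vSwitch name is assumed):
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=portid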
05-15-2009 02:47 AM
Re: Virtual Connect + iSCSI + VMware... best practices?
Each VC pairing has an active/standby setup for uplinks.
Whilst our setup works (and, by the way, we use NFS because the iSCSI performance was awful), we have run into problems during VC firmware upgrades that need further investigation (storage disconnects during VC failover).
I'm also unclear where I should be doing the teaming. Should I team at the VC level and present them to one NIC on the ESX server, or present two unteamed connections and team at the ESX level (e.g. 1x10GbE from VC in Bay 5 to vmnic2 and 1x10GbE from VC in Bay 6 to vmnic3, software-teamed for storage connectivity)?
I'm also keen to hear of others' experiences.
Regards,
Julian.
07-06-2009 01:28 PM
Re: Virtual Connect + iSCSI + VMware... best practices?
Tyler, I saw your comment on 'trends in infrastructure'. Kleinch mentioned a whitepaper on iSCSI, iSCSI boot and BladeSystem; has that materialised?
11-06-2009 05:13 AM
Re: Virtual Connect + iSCSI + VMware... best practices?
and
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01471917/c01471917.pdf
In our case, we made a bond (NIC team) out of the two LOMs using LACP and it works. On top of that, we created VLAN interfaces on the bond.
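For anyone wanting to replicate that, it looks roughly like this on a Linux host (a sketch only; the OS isn't stated above, and the interface names, VLAN ID and address are made up):
  modprobe bonding
  ip link add bond0 type bond mode 802.3ad                # LACP bond over the two LOMs
  ip link set eth0 down
  ip link set eth0 master bond0
  ip link set eth1 down
  ip link set eth1 master bond0
  ip link set bond0 up
  ip link add link bond0 name bond0.20 type vlan id 20    # tagged VLAN 20 on top of the bond
  ip addr add 192.168.20.10/24 dev bond0.20
  ip link set bond0.20 up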