
Configuring 6120XG in c7000 10GB context

 
Jer2224
Collector


Hi all,

So I am new to HP networking (and networking in general), and I have been landed with a pile of hardware to configure. I have the following kit:

  • 4 x DL385G7 ESXi 5.5 hosts
  • 1 x c7000 chassis with 16 x BL460C ESXi 5.5 hosts
  • 1Gb network: 2 separate 2920 stacks for 1Gb storage and 1Gb data (5 switches)
  • 10Gb network: 3 x Aruba 3810M SFP+ switches in a single stack
  • Mixture of 1Gb and 10Gb iSCSI storage arrays

So the storage is all working fine, as are the DL385G7 hosts. The hosts can see both the 1Gb storage and the 10Gb storage LUNs for the VMs (the networking for this was done by someone else no longer with the company). The issue I'm having is with the c7000 and the uplinks to the Arubas.

The c7000 has 8 interconnects (2 x 6120XGs and 6 x 3020s). The 3020s connect to one of the 2920 stacks for 1Gb iSCSI connectivity - again, working fine. The 6120XGs have 4 x 10Gb connections each. This is the part that is not working.

My objective is to configure the BL460c blades to access 10Gb iSCSI storage and vMotion through the 6120XGs, and 1Gb iSCSI storage and data through the 3020s.

I have tried all kinds of combinations of tagged, untagged and trunked uplinks, and the closest I have got is the ESXi hosts being able to see the 10Gb iSCSI devices but not the LUNs themselves. The devices are the same ones the DL385G7 hosts see, and those hosts can read the VMFS partitions fine; the blades cannot.
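For what it's worth, this is roughly how I have been checking from the blades (the vmk number and target IP below are placeholders for my actual values):

    # Rescan all adapters and list the iSCSI devices ESXi can see
    esxcli storage core adapter rescan --all
    esxcli storage core device list

    # Check whether the LUNs are being held back as unresolved/snapshot VMFS volumes
    esxcli storage vmfs snapshot list

    # Test jumbo frames end to end: 8972 bytes of payload + headers = 9000, -d forbids fragmentation
    vmkping -I vmk2 -d -s 8972 192.168.140.10

The rescans find the devices on the blades, but the VMFS volumes never appear.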

Re: VLANs - I have configured 2 VLANs on the Arubas and the 6120XGs - 5 for vMotion (10.255.255.0/24) and 140 for storage (192.168.140.0/24). Jumbo frames are configured everywhere in the chain that I can see.
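For reference, the VLAN and jumbo side on the 3810s and 6120XGs currently looks something like this (the VLAN names are just what I picked):

    vlan 5
       name "vMotion"
       jumbo
       exit
    vlan 140
       name "iSCSI-10G"
       jumbo
       exit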

I'd be very grateful if someone could point me in the right direction for configuring the uplinks from the 6120XGs to the 3810s (a trunk? LACP? 8 single 10Gb connections?), plus which ports I should tag or untag where so the blades can access the 10Gb network via the onboard mezz cards.
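To make the question concrete, is something along these lines the right shape? Port numbers are placeholders - I'm assuming the external SFP+ ports on the 6120XG and whichever 3810 ports the DACs land on:

    ; On each 6120XG - bundle its four uplinks into an LACP trunk
    trunk 17-20 trk1 lacp
    vlan 5 tagged trk1
    vlan 140 tagged trk1

    ; On the 3810M stack - a matching LACP trunk on the facing ports
    ; (the second 6120XG would need its own trk2 on this side)
    trunk 1/1-1/4 trk1 lacp
    vlan 5 tagged trk1
    vlan 140 tagged trk1

    ; On the 6120XG internal blade-facing ports (1-16), tag both VLANs
    ; so the ESXi port groups can tag VLAN 5 and 140 themselves
    vlan 5 tagged 1-16
    vlan 140 tagged 1-16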

One additional point - all the ESXi versions on the blades and DL385s are the same, built from the HP ESXi ISO, and all use the same pNICs for 10Gb (NC523SFP in the DLs, Qlogic NC532i on the blades). Hosts are regularly patched by VUM. vCenter is a VM running version 6 with the latest patches.

Many thanks.

Jeremy.

 

1 REPLY
Ian Vaughan
Honored Contributor

Re: Configuring 6120XG in c7000 10GB context

Hmmm,

I think your effort so far is to be commended, but it would be perfectly reasonable to present the case to your boss that your organization either -

needs to give you the time and training to get the skills

or (if you need a quick fix)

give you some budget to bring in a time-served consultant to do the heavy lifting over a couple of days, and let you shadow them to get to grips with this new environment. It's not just about doing "enough" to make it work; there will surely be best practices around stability, performance and security that someone who does this kind of thing daily can add to make the environment production grade.

Data and storage networking across mixed platforms, with interoperability between multiple vendor environments, is a harsh infrastructure landscape for cutting your teeth.

If you pull it off and get the traffic flowing end to end, please let us know what the "gotcha" was with the XGs and how you configured your way around it.

Thanks

Ian
