Array Setup and Networking

Connect a 10Gb Nimble Array to an HP C7000 Blade Chassis

 
SOLVED
mleach26
New Member

Connect a 10Gb Nimble Array to an HP C7000 Blade Chassis

After a few weeks of stumbling through this, we still have issues with how to connect the 10Gb iSCSI Nimble to the 10Gb iSCSI HBA ports on a C7000. There are several ways to make the iSCSI connection, but the end result is that neither side is happy, and the Nimble will not fail over either. Any hints or advice would be appreciated.

Mike

greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Hi Mike,

What interconnect modules, blade server models, and 10GbE mezzanine cards are you using in the C7000?

Greig

mleach26
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

  • BL460c blades: 5 x Gen7, 2 x Gen8

We have not been able to configure the Nimble so that it will fail over; we get a message related to VLANs.

  • Did you configure VLANs for the storage?

  • For VMware, do you use the built-in iSCSI adapter or install an iSCSI hardware HBA?

  • Realizing that there is a 10Gb throughput limitation, was there any special load balancing?

Thanks,

Michael Leach

Cardiovascular Research Foundation

111 East 59th Street

New York, NY 10022

Tel: (646) 434-4564

www.crf.org

michael_cowart
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

We have a similar setup with no issues:

C7000 HP blade chassis

2 x HP 6120XG 10Gb switches

BL490c G7 blades with 2 x 10Gb embedded FlexFabric converged network adapters

I've connected successfully via the hardware HBA, the Windows software initiator, and the ESXi software initiator. I didn't have to do any special configuration other than making the ports on the 6120XG that are connected to the Nimble carry only iSCSI VLAN traffic (the ports connected to the blades are trunks).
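
For reference, here is a minimal sketch of what the ESXi software-initiator piece can look like when driven through esxcli. It is an illustration only, not configuration captured from this deployment: the adapter name (vmhba33), the bound vmkernel ports (vmk1/vmk2), and the Nimble discovery address are placeholders, and option syntax can vary slightly between ESXi releases.

import subprocess

ISCSI_HBA = "vmhba33"              # software iSCSI adapter (see `esxcli iscsi adapter list`)
BOUND_VMKS = ["vmk1", "vmk2"]      # vmkernel ports that sit on the iSCSI VLAN/subnet
DISCOVERY = "192.168.50.10:3260"   # placeholder Nimble discovery IP and port

def esxcli(*args):
    """Run one esxcli command on the host and echo it for the record."""
    cmd = ["esxcli", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Enable the software iSCSI initiator.
esxcli("iscsi", "software", "set", "--enabled=true")

# 2. Bind the vmkernel ports so each NIC provides its own path/session.
for vmk in BOUND_VMKS:
    esxcli("iscsi", "networkportal", "add", f"--adapter={ISCSI_HBA}", f"--nic={vmk}")

# 3. Point send-target (dynamic) discovery at the Nimble discovery address.
esxcli("iscsi", "adapter", "discovery", "sendtarget", "add",
       f"--adapter={ISCSI_HBA}", f"--address={DISCOVERY}")

# 4. Rescan so the discovered Nimble volumes show up as devices.
esxcli("storage", "core", "adapter", "rescan", f"--adapter={ISCSI_HBA}")

After the rescan, the Nimble volumes should appear as new devices under that adapter.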

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

I've got my CS260G working on the HP C3000. I am using the Flex-10 modules in the back of the chassis, and the array is plugged directly into them. I've got it working, but there are some nuances that I have to deal with. If this is what you are trying to do, let me know and I will elaborate on my configuration.

Damien Tijerina

mleach26
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Yes, this is exactly what we are doing.

Any information would be greatly appreciated.

- Michael

greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

HP typically does not support storage devices connected directly to the Flex-10 VC modules. VC modules are not switches; they are aggregation devices.

Technically, however, it should be possible. I have not done it myself. Looking forward to Damien's reply.

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

My configuration is not optimal. I would prefer to have a 10Gb switch, but I don't, so this is how I made it work. This configuration does have its consequences, which I will state below.

That being said, here is what I am working with:

  • HP C3000 Chassis
  • (2) HP Virtual Connect Flex-10 Ethernet Modules
  • (1) Nimble CS260G
  • (2) HP BL460c Gen8
    • HP FlexFabric 10Gb 2-port 554FLB Adapter
    • HP FlexFabric 10Gb 2-port 554M Adapter


I set up a VC domain on the two VC modules and created two Ethernet networks, each with two ports (one from each module). I took port 4 from each module for one network and port 5 from each module for the other. These networks force one of the two ports into standby. On both networks I made the Bay 2 port primary.

I plugged Nimble interface tg1 on controller A into Bay 1: Port 5 and tg1 on controller B into Bay 2: Port 5. Then I did the same for tg2 on port 4 for both controllers.

My two networks:

  • vNET_Port4
    • Bay 1: Port 4 → tg2 (controller B)
    • Bay 2: Port 4 (primary) → tg2 (controller A)
  • vNet_Port5
    • Bay 1: Port 5 → tg1 (controller A)
    • Bay 2: Port 5 (primary) → tg1 (controller B)

Then I created a server profile for each blade in the chassis and created two iSCSI HBAs using the networks that I created. In this configuration, if I lose a VC module, the HBAs still have another module to work with in the other bay. If I lose a Nimble controller, I still have an active link ready because of the active/passive configuration of the vNET networks. (Essentially, I will always have an active link on a controller that is in standby.)
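
As a quick sanity check of that redundancy claim, here is a small sketch of my own that encodes the port mapping listed above and confirms that losing any single VC bay or any single Nimble controller still leaves at least one connected link. It is a simplified connectivity check only; it ignores which controller is actively serving data at the time.

links = [
    # (vNet,        VC bay, Nimble controller, Nimble port)
    ("vNET_Port4", 1, "B", "tg2"),
    ("vNET_Port4", 2, "A", "tg2"),   # primary
    ("vNet_Port5", 1, "A", "tg1"),
    ("vNet_Port5", 2, "B", "tg1"),   # primary
]

def surviving(dead_bay=None, dead_ctrl=None):
    """Links still cabled and powered after the given single failure."""
    return [l for l in links if l[1] != dead_bay and l[2] != dead_ctrl]

for bay in (1, 2):
    assert surviving(dead_bay=bay), f"no link survives losing VC bay {bay}"
for ctrl in ("A", "B"):
    assert surviving(dead_ctrl=ctrl), f"no link survives losing controller {ctrl}"

print("every single bay or controller failure leaves at least one link connected")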

In my case, my blades all have VMware ESXi 5.1 on them. I had to load drivers to support the HBAs, and they are on their own subnet because of the isolation that this configuration creates. It's working great, and I have the ability to lose a VC module or a controller without interruption.
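
A rough way to see how ESXi views the resulting paths, including the expected standby/dead ones described below, is to tally path state per device from esxcli output. This is my own sketch, run on the host; the field names follow typical "esxcli storage core path list" output and may vary between ESXi builds.

import subprocess
from collections import defaultdict

out = subprocess.run(["esxcli", "storage", "core", "path", "list"],
                     check=True, capture_output=True, text=True).stdout

states = defaultdict(lambda: defaultdict(int))   # device -> path state -> count
device = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Device:"):
        device = line.split(":", 1)[1].strip()
    elif line.startswith("State:") and device:
        states[device][line.split(":", 1)[1].strip()] += 1

for device, counts in states.items():
    summary = ", ".join(f"{n} {s}" for s, n in sorted(counts.items()))
    print(f"{device}: {summary}")   # e.g. "naa.xxxx: 2 active, 2 dead"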

There are many things that I do not like about this configuration. (I plan on adding two 10Gb switches to deal with this.)

  • The active/passive state of the ports in the networks forces me down to 10Gb total for my HBAs.
    • Due to having a link in the passive state, Nimble controller upgrades fail their prerequisite check. I worked with support, and they were able to modify the controller to bypass that failure.
  • The 10Gb interfaces on the Nimble are isolated and can only be presented to the blades in the chassis.
    • I need to connect other servers to the array, so I converted one of the management ports to data so I could use that interface. The obvious problem with this is that I have no redundancy on the controller for management and data ports, and it's only 1Gb.
  • ESXi reports that half of my paths are down. This is due to the active/passive state of the ports in the vNETs. Remember, I only have one active interface on the Nimble controller at a time. The standby controller has the other active interface and is ready to respond if a failure occurs.
  • Both iSCSI HBAs are on the same physical card in my blades due to the VC automatic assignment. (This may vary depending on your mezzanine cards.)
    • There are a couple of ways to fix this. I plan to move away from iSCSI HBAs and use the software HBA built into ESXi. Then I can use vSphere for the NIC assignments.
    • You could plug in more interfaces and/or add additional HBAs that would use the other card.
  • My iSCSI HBAs are only 5Gb apiece; see the sketch just below this list. (This would change if you tried to add more or to fix the single-card issue above.)
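
To make the 5Gb-apiece point concrete, here is a toy calculation of how a Flex-10 physical port is carved into FlexNICs. The split shown is an illustrative assumption, not the exact server profile used here.

PORT_BW_GB = 10.0                  # one Flex-10 / FlexFabric physical port

# Example carve-up of that one physical port in a server profile (assumed split).
flexnics = {
    "iSCSI FlexHBA a": 5.0,
    "iSCSI FlexHBA b": 5.0,
    # any remaining FlexNICs on this port would share whatever is left (0 here)
}

assert sum(flexnics.values()) <= PORT_BW_GB, "carve-up exceeds the physical port"

for name, bw in flexnics.items():
    print(f"{name}: {bw:g} Gb of the shared {PORT_BW_GB:g} Gb port")
print(f"aggregate iSCSI ceiling on this port: {sum(flexnics.values()):g} Gb")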


I welcome any recommendations that anyone has. This was the best I could come up with, and it's working for me.

Damien Tijerina

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Another downside to the 1Gb management port that I converted to data:

If your 10Gb links drop and your 1Gb data link doesn't, the Nimble controller doesn't fail over, which leaves all of your blades with all paths down.

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

I just discovered that my array is now reporting a failed health check for failover. I guess this came with the latest firmware update. This is preventing me from failing over. I'm going to contact support about this now.

tlee29
New Member
Solution

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Hi Damien,

I've just responded to your case. I'll also share the information here for anyone else who is watching this.

You are correct about the problems and disadvantages of your current setup, and purchasing additional 10Gb switches will address the current shortcomings.

We've seen similar deployments on the HP C3000 with Virtual Connect modules, and we have confirmed with an HP resource that the Virtual Connect interconnects do not function like a switch; they simply do not have the capability of passing ARP between ports. So in the existing environment we will always have this problem. The only solution is to plug the Nimble into a switch (or switches) and then plug the switch(es) into the Virtual Connect modules.

In a typical environment, the ARP checking on the Nimble is intentionally built in as a safety mechanism to prevent an accidental failover on a misconfigured network, which could cause an outage. To assist in your situation, Tech Support can log into the Nimble and set a flag to bypass the upgrade pre-check and allow failover, but it would be wise to perform this during your maintenance window in case an unforeseen network problem leaves the array unavailable to your servers.
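
To illustrate the idea only (this is a conceptual sketch, not Nimble's actual implementation): before taking over, the standby side can probe the initiator addresses it would need to serve and refuse to fail over if the fabric cannot pass traffic between ports. The host list and the ICMP probe below are placeholders.

import subprocess

INITIATOR_IPS = ["192.168.50.21", "192.168.50.22"]   # placeholder blade iSCSI IPs

def reachable(ip: str, timeout_s: int = 1) -> bool:
    """Best-effort probe: a single ICMP echo; a real check might use ARP instead."""
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), ip],
                            capture_output=True)
    return result.returncode == 0

unreachable = [ip for ip in INITIATOR_IPS if not reachable(ip)]
if unreachable:
    print("refusing failover; standby cannot reach:", ", ".join(unreachable))
else:
    print("all initiators reachable from standby; failover allowed")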

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

OK, Tien was able to change the health check script, and I can now fail over the controllers. This works for now, but I will definitely be purchasing some 10Gb switches soon. This is turning out to be a configuration that you can really only make work with the help of Nimble support. At first it was just the upgrades that I needed assistance with, and now, with the latest firmware, they have to disable the failover health check as well.

greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Damien,

It is clear that using Flex-10 is not the ideal way to set up 10Gb iSCSI.

There are two options for adding 10Gb switches:

1. Internal switches:

a) replace the Flex-10 modules with HP 6120XG interconnects

2. External switches (e.g. 2 x HP 5920)

Note: uplink with low-cost DACs (e.g. J9281B); do not use expensive transceivers.

a) configure your vNets so that each 10Gb server port maps to an individual external module port. (This limits the number of servers you can connect to storage to the available ports on the Flex-10.)

b) replace the Flex-10 modules with 10Gb pass-throughs.

In the case of 1(a) and 2(b), I suggest the Flex-10 modules can be repurposed by moving them to C3000 bays 3 and 4 to carry LAN traffic (add another 2 x HP FlexFabric 10Gb 2-port 554M adapters to blade server mezzanine slot 2). This will significantly increase the bandwidth available to your servers for both iSCSI and LAN traffic and puts the Flex-10s to maximum use.

BTW, in your case a really cheap way to add 10Gb switching is to deploy 2 x HP 2920 switches with 10Gb SFP+ modules (4 ports per switch). The HP 2920s have 11.25 MB of port buffering, so they make quite good, reliable iSCSI switches. This solution is limited to connecting only two servers.

greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Michael,

Based on Damien's comments, I would suggest adding another 10Gb HBA in mezzanine slot 1 of each blade server to carry dedicated 10Gb iSCSI traffic, and then either:

  1. add 2 x 6120XG switches in C7000 bays 3 and 4, or
  2. add 10Gb pass-through interconnect modules and 2 x external HP 5920 switches (connect with DACs)

The Flex-10 modules in bays 1 and 2, connected to the blade FlexLOMs, can then be dedicated to LAN traffic.

Note: beware that the C7000 and C3000 blade chassis have completely different port mappings, so my suggestion to Damien (Jan 4, 2014, 11:16 AM) does not exactly apply to you.

See attachment for port map info.