Array Setup and Networking

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

 
SOLVED
mleach26
New Member

Connect a 10Gb Nimble Array to an HP C7000 Blade Chassis

After a few weeks of stumbling through this, we still have issues connecting the 10Gb iSCSI ports on the Nimble to the 10Gb iSCSI HBA ports on a C7000.  There are several ways to make the iSCSI connection, but the end result is that neither side is happy, and the Nimble will not fail over either.  Any hints or advice would be appreciated.

Mike

13 REPLIES
greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Hi Mike,

What interconnect modules, blade server models, and 10GbE mezzanine cards are you using in the C7000?

Greig

mleach26
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

• BL460c blades: 5 x Gen7, 2 x Gen8

We have not been able to configure the Nimble so that it will fail over; we get an error message related to VLANs.

• Did you configure VLANs for the storage?

• For VMware, do you use the built-in iSCSI adapter or install an iSCSI hardware HBA?

• Realizing that there is a 10Gb throughput limitation, was there any special load balancing?

Thanks,

Michael Leach


michael_cowart
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

We have a similar setup with no issues:

c7000 HP Blade chassis

2 x HP 6120XG 10Gb switches

BL490c G7 blades with 2 x 10Gb embedded FlexFabric converged network adapters

I've connected successfully via the hardware HBA, the Windows software initiator, and the ESXi software initiator. The only special configuration needed was making the ports on the 6120XG switches that connect to the Nimble carry only iSCSI VLAN traffic (the ports connected to the blades are trunks).
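For anyone taking the software-initiator route, here is a rough sketch of that side of the setup. It is wrapped in Python only so the steps read as one script; the esxcli calls are what matter, and the adapter name, VMkernel NIC, and discovery address below are placeholders for your environment, not values from this thread.

# Rough sketch: enable the ESXi software iSCSI initiator and point it at a
# Nimble discovery address. Adapter, vmk NIC, and IP are placeholders.
import subprocess

ADAPTER = "vmhba33"                 # software iSCSI adapter (see 'esxcli iscsi adapter list')
VMK_NIC = "vmk1"                    # VMkernel port on the iSCSI subnet/VLAN
DISCOVERY = "192.168.50.10:3260"    # Nimble discovery (group) IP -- placeholder

def esxcli(*args):
    """Run an esxcli command on the ESXi host and return its output."""
    return subprocess.check_output(["esxcli"] + list(args)).decode()

# Enable the software iSCSI initiator (harmless if it is already enabled).
esxcli("iscsi", "software", "set", "--enabled=true")

# Bind the iSCSI VMkernel port to the software adapter.
esxcli("iscsi", "networkportal", "add", "--adapter", ADAPTER, "--nic", VMK_NIC)

# Add the Nimble discovery address as a send target, then rescan for volumes.
esxcli("iscsi", "adapter", "discovery", "sendtarget", "add",
       "--adapter", ADAPTER, "--address", DISCOVERY)
esxcli("storage", "core", "adapter", "rescan", "--adapter", ADAPTER)

print(esxcli("storage", "core", "path", "list"))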

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

I've got my CS260G working on an HP C3000. I am using the Flex-10 modules in the back of the chassis and the array is plugged directly into them. It works, but there are some nuances that I have to deal with. If this is what you are trying to do, let me know and I will elaborate on my configuration.

Damien Tijerina

mleach26
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Yes, this is exactly what we are doing.

Any information would be greatly appreciated

- Michael

greig_ebeling
New Member

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

HP typically does not support storage devices connected directly to the Flex-10 VC modules.  VC modules are not switches; they are aggregation devices.

Technically, however, it should be possible.  I have not done it myself.  Looking forward to Damien's reply.

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

My configuration is not optimal. I would prefer to have a 10Gb switch, but I don't, so this is how I made it work. This configuration does have its consequences, which I will state below.

That being said, here is what I am working with:

  • HP C3000 Chassis
  • (2) HP Virtual Connect Flex-10 Ethernet Modules
  • (1) Nimble CS260G
  • (2) HP BL460c Gen8
    • HP FlexFabric 10Gb 2-port 554FLB Adapter
    • HP FlexFabric 10Gb 2-port 554M Adapter


I set up a VC domain on the 2 VC modules and created 2 Ethernet networks, each with 2 ports (one from each module).  I took port 4 from each module for one network and port 5 from each module for the other. These networks will force one port into standby. On both networks I made the Bay 2 ports primary.

I plugged Nimble interface tg1 on controller A into Bay 1: Port 5 and tg1 on controller B into Bay 2: Port 5. Then I did the same for tg2 on port 4 for both controllers (crossed the other way, as shown in the list below).

My two networks:

  • vNET_Port4
    • Bay 1: Port 4 -> tg2 on controller B
    • Bay 2: Port 4 (primary) -> tg2 on controller A
  • vNet_Port5
    • Bay 1: Port 5 -> tg1 on controller A
    • Bay 2: Port 5 (primary) -> tg1 on controller B

Then I created a server profile for each blade in the chassis and created two iSCSI HBAs using the networks I created. In this configuration, if I lose a VC module, the HBAs still have another module to work with in the other bay. If I lose a Nimble controller, I have an active link ready because of the active/passive configuration of the vNET networks. (Essentially, I will always have an active link on the controller that is in standby.)
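To make that failover logic easier to follow, here is a tiny hand-written sketch (nothing generated by Virtual Connect or the array) that just encodes the uplink mapping from the list above and confirms that each controller keeps one active and one standby link:

# Hypothetical encoding of the vNET uplink mapping described above.
uplinks = [
    # (vNET,        bay, port, primary, Nimble interface)
    ("vNET_Port4",  1,   4,    False,   "controller B tg2"),
    ("vNET_Port4",  2,   4,    True,    "controller A tg2"),
    ("vNet_Port5",  1,   5,    False,   "controller A tg1"),
    ("vNet_Port5",  2,   5,    True,    "controller B tg1"),
]

for ctrl in ("controller A", "controller B"):
    active = sum(1 for u in uplinks if u[4].startswith(ctrl) and u[3])
    standby = sum(1 for u in uplinks if u[4].startswith(ctrl) and not u[3])
    print(ctrl, "- active uplinks:", active, "standby uplinks:", standby)

# Each controller ends up with exactly one active and one standby uplink, which is
# why whichever Nimble controller takes over always has a usable 10Gb path.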

In my case my blades all run VMware ESXi 5.1. I had to load drivers to support the HBAs, and they are on their own subnet because of the isolation this configuration creates. It's working great, and I have the ability to lose a VC module or a controller without interruption.

There are many things that I do not like about this configuration. (I plan on adding (2) 10Gb switches to deal with this)

  • The active/passive state of the ports in the networks limits me to 10Gb total for my HBAs.
    • Because one link is always in the passive state, Nimble controller upgrades fail their prerequisite check. I worked with support and they were able to modify the controller to bypass that failure.
  • The 10Gb interfaces on the Nimble are isolated and can only be presented to the blades in the chassis.
    • I need to connect other servers to the array, so I converted one of the management ports to data so I could use that interface. The obvious problem with this is that I have no redundancy on the controller for the management and data ports, and it's only 1Gb.
  • ESXi reports that half of my paths are down. This is due to the active/passive state of the ports in the vNETs; remember, I only have one active interface on the Nimble controller at a time. The standby controller has the other active interface and is ready to respond if a failure occurs. (See the sketch after this list for one way to sanity-check that.)
  • Both iSCSI HBAs are on the same physical card in my blades due to the VC automatic assignment. (This may vary depending on your mezzanine cards.)
    • There are a couple of ways to fix this. I plan to move away from iSCSI HBAs and use the software HBAs built into ESXi; then I can use vSphere for the NIC assignments.
    • You could plug in more interfaces and/or add additional HBAs that would use the other card.
  • My iSCSI HBAs are only 5Gb apiece. (This would change if you tried to add more or to fix the single-card issue above.)
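If it helps, here is a rough sketch of one way to confirm from the ESXi shell that the dead paths line up with the active/standby vNET design rather than a real fault. It simply tallies path states per device; Python is used only for illustration, and the underlying command is 'esxcli storage core path list'.

# Sketch: tally iSCSI path states per device by parsing 'esxcli storage core path list'.
# In this setup, expect roughly half the paths per Nimble volume to show as dead.
import subprocess
from collections import defaultdict

raw = subprocess.check_output(
    ["esxcli", "storage", "core", "path", "list"]).decode()

states = defaultdict(lambda: defaultdict(int))
device = None
for line in raw.splitlines():
    line = line.strip()
    if line.startswith("Device:"):
        device = line.split(":", 1)[1].strip()
    elif line.startswith("State:") and device:
        state = line.split(":", 1)[1].strip()
        states[device][state] += 1

for device, counts in sorted(states.items()):
    print(device, dict(counts))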

       

I welcome any recommendations anyone has. This was the best I could come up with, and it's working for me.

Damien Tijerina

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

Another downside to the 1Gb management port that I converted to data:

If the 10Gb links drop but the 1Gb data link doesn't, the Nimble controller doesn't fail over, which leaves all of your blades with all paths down.

damientij122
Occasional Advisor

Re: Anyone successfully connect a Nimble to an HP C7000 blade chassis?

I just discovered that my array is now reporting a failed health check for failover. I guess this came with the latest firmware update. This is preventing me from failing over. I'm going to contact support about it now.