HPE StoreVirtual Storage / LeftHand

Connectivity Issue

SOLVED
Paul Hutchings
Super Advisor

Connectivity Issue

I'm having a bit of an odd issue; I suspect the firewall, but I'm struggling to locate a spare one to test with.

We have a pair of P4500 nodes connected to an iSCSI VLAN.

Also connected to the VLAN is a basic Netgear router (for testing).

Each P4500 node has the internal (i.e. connected to the same VLAN) IP of the Netgear as its default gateway on both NICs.

What I'm seeing is that when I select a node and do a ping test, the nodes can ping the gateway IP, but not beyond.

If I connect a laptop to the same VLAN I can happily ping beyond the gateway.

The firewall is wide open (all outbound traffic is allowed), and it's a NAT'ing firewall.

I'm running the SAN/iQ release from the 9.0 Quick Recovery ISO image.

Does anyone have any suggestions other than the firewall please?

I'm assuming I do want the default gateway to be set on BOTH of the NICs on each P4500 node?

Next step will probably be I'll download a firewall VM and try using that to test with, I just want to be sure I'm not missing anything blindingly obvious.
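
For reference, this is roughly the kind of test I'm doing from the laptop (the addresses and interface name are illustrative, not exact):

```
# From a test host on the iSCSI VLAN:
ping -c 3 172.16.100.1    # the Netgear's inside interface (the nodes' gateway) - works
ping -c 3 192.168.1.10    # a host beyond the gateway - works from the laptop,
                          # but the same destination fails from a node's CMC ping

# If you can capture on the far side of the Netgear, this shows whether the
# node's ICMP is arriving and being translated at all:
tcpdump -ni eth0 'icmp and host 172.16.100.50'
```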

Thanks,
Paul
13 REPLIES
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Would I be right in thinking that the preferred way of setting up the NICs would be to simply assign each NIC an IP/Subnet Mask and then create a bond from the two NICs?

Not sure it should make any difference to the issue at hand but since it's there to play with for now...
Fred Blum
Regular Advisor

Re: Connectivity Issue

"Would I be right in thinking that the preferred way of setting up the NICs would be to simply assign each NIC an IP/Subnet Mask and then create a bond from the two NICs?"

Did you assign different IPs to the individual SAN nics?

Before creating the SAN NIC bond, set the hardware device settings such as flow control and jumbo frame size per SAN NIC. Then bond the NICs; the IP and subnet are set on the bonded NIC instance (eth0).
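
On the P4000 this is all done through the CMC, but the ordering is the same as a generic Linux bond (a sketch only — interface names, mode, and addresses here are assumptions, and balance-alb is the Linux analogue of ALB):

```
# Per-NIC hardware settings first (flow control, jumbo frames):
ethtool -A eth0 rx on tx on
ethtool -A eth1 rx on tx on
ip link set eth0 mtu 9000
ip link set eth1 mtu 9000

# Then create the bond and put the IP/gateway on the bond, not on the slaves:
ip link add bond0 type bond mode balance-alb
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up
ip addr add 172.16.100.50/24 dev bond0
ip route add default via 172.16.100.1   # gateway IP is illustrative
```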
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Thanks for the reply.

So on node 1 I have:

NIC1 172.16.100.50/24
NIC2 172.16.100.51/24

What I'm not entirely clear on is whether I should set each NIC's default gateway, or just set it on the ALB bond?

Also it's not entirely clear in the documentation whether I should be able to ping beyond the local subnet using the "ping" option in the CMC?

Thanks,
Paul
Fred Blum
Regular Advisor

Re: Connectivity Issue

Can you check with only one NIC configured?
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Will do when I'm in the office tomorrow.

I'm leaning towards the firewall as NTP sync works fine and that's on the other side of the firewall so clearly routing is happening.

SMTP alerts seem hit and miss when I use the apply and test option - sometimes I get the test email but I still see an error on the CMC (I forget the exact message sadly).

Looking at the best practices I should be bonding using ALB anyway.
Fred Blum
Regular Advisor
Solution

Re: Connectivity Issue

"What I'm not entirely clear on is whether I should set each NIC's default gateway, or just set it on the ALB bond?"

Only set the per-NIC hardware settings (flow control and jumbo frame size, if you use them). The IP and gateway are set on the bonded instance (eth0).

You cannot change the NICs' individual settings once the node has been added to a cluster; you have to remove the node from the cluster, break the bond, set flow control or jumbo frame size, re-bond the NICs, and re-add the node to the cluster.

Paul Hutchings
Super Advisor

Re: Connectivity Issue

Thanks Fred, that has to be "Plan A" for tomorrow then.

Not quite sure how I missed the bonding in all the documentation/best practice material I've been reading.

I left it downloading patches for SAN/iQ 9. It's a bit odd that it says it's downloading 4.5GB when the node is already on 9.0, but I suspect it just downloads everything again.
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Fred, just to say a huge thank you. I came in early today (peace and quiet!), deleted everything (for speed), set up the NICs with just an IP/subnet mask, created the bonds, added the nodes to a new management group/cluster, and it's all working.

I'm still not convinced the firewall I was using wasn't suspect, but either way I needed to sort out the bond issue.

I may also need to break the bond and enable flow control on the NICs, as I didn't spot that part of the best practice guide and I'm not sure (I'm not in the office now) whether flow control is enabled by default on the NICs.

Thanks again,
Paul
Fred Blum
Regular Advisor

Re: Connectivity Issue

Best practice network configuration for the SAN is NIC jumbo frames, flow control, and ALB. It needs corresponding configuration on your switch (enable jumbo frames, flow control, and RSTP) and on the server NICs attached to the SAN subnet (jumbo frames and flow control). http://h10032.www1.hp.com/ctg/Manual/c01750150.pdf
With ALB you get 2Gb Tx and 1Gb Rx. In the HP NCU it is now called TLB (transmit load balancing).

On the P4000 SAN you need to enable a virtual IP address and load balancing.

On the server node, to see both NICs engaging in SAN traffic you need to change the MPIO setting in the iSCSI initiator: on the Targets tab, click Devices, click MPIO, and set load balancing from "Vendor Specific" to "Round Robin". You will see a passive failover connection change to active. Click OK, click OK, and check the NICs' traffic.

If you get an error like "Not Supported" in MPIO, you have a misconfiguration; for steps see: http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1459582

Test your configuration with SQLIO or IOmeter to check that you are achieving the expected throughput.
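
As a rough sanity check on what those line rates mean in MB/s when you benchmark (simple arithmetic only — decimal units, ignoring TCP/IP and iSCSI protocol overhead):

```shell
# Convert a line rate in Gb/s to an approximate ceiling in MB/s (decimal):
# Gb/s * 1000 / 8
gbps_to_mbps() {
  echo $(( $1 * 1000 / 8 ))
}

echo "ALB Tx ceiling: $(gbps_to_mbps 2) MB/s"   # prints 250
echo "Rx ceiling:     $(gbps_to_mbps 1) MB/s"   # prints 125
```

Real iSCSI throughput will land somewhat below these ceilings once protocol overhead is accounted for.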
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Thanks Fred, this is where different HP documents seem to contradict each other a little.

If I have:

P4000 nodes
2x 2910al with 10Gbps inter-switch connection
vSphere hosts running a mix of VMs
Wish to use MPIO on vSphere and within Windows VMs

Where *exactly* do I need to enable Flow Control?

Some documents suggest globally, others suggest only on the ports the P4000 uplinks to.
Fred Blum
Regular Advisor
Paul Hutchings
Super Advisor

Re: Connectivity Issue

Thanks Fred.

Sorry if I appear to be pedantic (or just thick, quite open to that too) but I'm still not clear which ports I would enable flow control on.

Suppose I have:

Switch 1 - Ports 1-6 P4000 NICs
Switch 1 - Ports 7-10 vSphere iSCSI NICs (mix of host and guest MPIO)
Switch 1 - Port A1 10gbps link to Switch 2
|
10gbps link - tagged iSCSI and vMotion VLANs
|
Switch 2 - Ports 1-6 P4000 NICs
Switch 2 - Ports 7-10 vSphere iSCSI NICs (mix of host and guest MPIO)
Switch 2 - Port A1 10gbps link to Switch 1
Fred Blum
Regular Advisor

Re: Connectivity Issue


Switch: flow control on all ports and jumbo support per VLAN.

Flow control is a best practice recommendation; jumbo frames are an optional recommendation.
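
On a 2910al that translates to something like the following (a ProCurve CLI sketch — the VLAN ID and port ranges are illustrative, not taken from your setup):

```
! ProCurve 2910al sketch - VLAN ID and port numbers are illustrative
configure
interface 1-10 flow-control      ! flow control on the SAN-facing ports
interface A1 flow-control        ! and on the inter-switch link
vlan 100 jumbo                   ! jumbo frames are enabled per VLAN
spanning-tree                    ! enable spanning tree (RSTP/MSTP per your design)
write memory
```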