Connectivity Issue
12-09-2010 10:17 AM
We have a pair of P4500 nodes connected to an iSCSI VLAN.
Also connected to the VLAN is a basic Netgear router (for testing).
Each P4500 node has the internal (i.e. connected to the same VLAN) IP of the Netgear as its default gateway on both NICs.
What I'm seeing is that when I select a node and do a ping test, the nodes can ping the gateway IP, but not beyond.
If I connect a laptop to the same VLAN I can happily ping beyond the gateway.
The firewall is wide open, so all outbound traffic is allowed; it's also a NAT'ing firewall.
I'm running the SAN/iQ release on the 9.0 Quick Recovery ISO image.
Does anyone have any suggestions other than the firewall please?
I'm assuming I do want the default gateway to be set on BOTH of the NICs on each P4500 node?
My next step will probably be to download a firewall VM and test with that; I just want to be sure I'm not missing anything blindingly obvious.
Thanks,
Paul
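The symptom (the gateway answers pings but nothing beyond it does) comes down to which destinations are on-link versus routed. A minimal sketch of that distinction using Python's stdlib `ipaddress` module; the node address is from the post, but the gateway IP `172.16.100.1` is an assumption for illustration, since the post doesn't give the Netgear's address:

```python
import ipaddress

# Node NIC address from the post; the iSCSI VLAN is a /24.
node = ipaddress.ip_interface("172.16.100.50/24")

def next_hop(dest: str, gateway: str = "172.16.100.1") -> str:
    """Return 'on-link' if dest is in the local subnet, else the gateway IP.

    Traffic to an on-link destination is ARPed for directly; anything else
    is handed to the default gateway. A missing or wrong gateway therefore
    only breaks off-subnet pings, which is exactly the symptom described.
    """
    if ipaddress.ip_address(dest) in node.network:
        return "on-link"
    return gateway

print(next_hop("172.16.100.51"))  # other node NIC: on-link, no gateway needed
print(next_hop("8.8.8.8"))        # beyond the VLAN: goes via the gateway
```

Since the laptop on the same VLAN can ping past the gateway, the routed path itself works, which points at a per-node gateway setting rather than the gateway device.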
Solved!
12-09-2010 10:38 AM
Re: Connectivity Issue
Not sure it should make any difference to the issue at hand but since it's there to play with for now...
12-09-2010 10:55 AM
Re: Connectivity Issue
Did you assign different IPs to the individual SAN NICs?
Before creating the SAN NIC bond, set the per-NIC hardware settings such as flow control and jumbo frame size. Then bond the NICs; the IP and subnet are set on the bonded NIC instance Eth0.
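To see what the ALB bond buys you: transmit is load-balanced across the member NICs per peer, while receive stays on a single NIC. A toy sketch of that idea (this is not the SAN/iQ implementation; the NIC names and the hash are purely illustrative):

```python
# Toy model of ALB (adaptive load balancing) bonding: transmit is
# distributed across the slave NICs per destination, receive uses one NIC.
SLAVES = ["eth0", "eth1"]

def tx_nic(dest_mac: str) -> str:
    """Pick the transmit NIC by hashing the destination MAC (illustrative hash)."""
    return SLAVES[sum(dest_mac.encode()) % len(SLAVES)]

RX_NIC = SLAVES[0]  # receive traffic lands on the primary slave

# Different peers get spread across the two NICs on transmit.
for mac in ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"]:
    print(mac, "->", tx_nic(mac))
```

This is why the later reply describes ALB as roughly 2 Gb transmit but 1 Gb receive on two 1 Gb NICs.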
12-09-2010 11:03 AM
Re: Connectivity Issue
So on node 1 I have:
NIC1 172.16.100.50/24
NIC2 172.16.100.51/24
What I'm not entirely clear on is whether I should set each NIC's default gateway, or just set it on the ALB bond?
Also it's not entirely clear in the documentation whether I should be able to ping beyond the local subnet using the "ping" option in the CMC?
Thanks,
Paul
12-09-2010 11:58 AM
Re: Connectivity Issue
12-09-2010 12:04 PM
Re: Connectivity Issue
I'm leaning towards the firewall, as NTP sync works fine and the NTP server is on the other side of the firewall, so routing is clearly happening.
SMTP alerts seem hit and miss when I use the apply-and-test option: sometimes I get the test email but still see an error in the CMC (I forget the exact message, sadly).
Looking at the best practices I should be bonding using ALB anyway.
12-09-2010 12:05 PM
Solution
Only set the per-NIC settings (flow control, jumbo frame size) if you use them. The IP and gateway are set on the bonded instance Eth0.
You cannot change a NIC's individual settings once the node has been added to a cluster; you have to remove the node from the cluster, break the bond, set flow control or jumbo frame size, re-bond the NICs, and re-add the node to the cluster.
12-09-2010 12:10 PM
Re: Connectivity Issue
Not quite sure how I missed the bonding in all the documentation/best-practice material I've been reading.
I left it downloading patches for SAN/iQ 9; a bit freaky, as it says it's downloading 4.5 GB when it's already on 9.0, but I suspect it just downloads everything again.
12-10-2010 10:08 AM
Re: Connectivity Issue
I'm still not convinced the firewall I was using wasn't suspect, but either way I needed to sort out the bond issue.
I may also need to break the bond and enable flow control on the NICs, as I didn't spot that part of the best practice, and I'm not sure (I'm not in the office now) whether flow control is enabled by default on the NICs.
Thanks again,
Paul
12-11-2010 02:39 PM
Re: Connectivity Issue
With ALB you get 2 Gb Tx and 1 Gb Rx. In the HP NCU it is now called TLB (transmit load balancing).
On the P4000 SAN you need to enable a virtual IP address and load balancing.
On the server, to see both NICs engaging in SAN traffic you need to set the MPIO policy in the iSCSI initiator: on the Targets tab, click Devices, click MPIO, and change load balancing from "Vendor Specific" to "Round Robin". You will see a passive failover connection change to active. Click OK, click OK again, and check the NICs' traffic.
If you get an error like "Not Supported" in MPIO, you have a misconfiguration; for steps see: http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1459582
Test your configuration with SQLIO or IOmeter to check that you are achieving the expected throughput.
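The difference between the failover-style policy and Round Robin described above can be sketched as a toy path selector; the two "paths" stand in for the two iSCSI sessions, one per NIC, and the names are illustrative:

```python
from itertools import cycle

# Two iSCSI sessions, one per NIC (names are made up for illustration).
PATHS = ["nic1-session", "nic2-session"]

def failover_selector():
    """Active/passive: every I/O goes down the first path; the second sits idle."""
    while True:
        yield PATHS[0]

def round_robin_selector():
    """Round Robin: I/Os alternate across all paths, so both NICs carry traffic."""
    return cycle(PATHS)

rr = round_robin_selector()
print([next(rr) for _ in range(4)])
# -> ['nic1-session', 'nic2-session', 'nic1-session', 'nic2-session']
```

With the failover selector only one NIC ever shows traffic, which matches the "passive failover connection changing to active" you see in the initiator when you switch the policy.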