Community Home > Storage > HPE Nimble Storage > Array Setup and Networking > Re: Need assistance with iSCSI reconfiguration on ...
10-18-2016 09:33 AM
Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
Here is the current network:

Three hosts, each with 2x 1 GbE links, manually set up on vSphere 5.5, connected to a single SAN. The objective was to upgrade the iSCSI links to 2x 10 GbE, so pNICs were installed in the hosts, a vSwitch was created, VMkernel NICs and port groups were created with the same setup as the existing connections (except the names and IPs), and the new vmknics were bound to the software initiator. The old links were then unbound from the initiator, and all connections to the SAN dropped. I re-scanned using the same discovery IP (same subnet), and it did find the targets in the discovered/manual target list. However, on the Storage Adapters page there are no devices and no paths; the connections simply aren't there. I tried to add the datastores from Storage > Add Storage, but nothing is detected (I've re-scanned everything). I checked the Nimble, and CHAP is disabled. What can I do to get connected on the new NICs?

EDIT: I've read that VMware can be slow to refresh, and I am starting to see the devices and paths now, but the NICs still show "Not Used" status in the initiator. Rebooting the host now to see if that resolves things.

EDIT2: The reboot didn't resolve the issue. Same problem.
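For anyone hitting the same "Not Used" status, the port binding and rescan state can be checked from the ESXi shell. A minimal sketch, assuming the software iSCSI adapter is vmhba33 and the new interface is vmk4 (adjust both to match your host):

```shell
# List the vmknics currently bound to the software iSCSI adapter;
# the new interfaces should appear here with a compliant status
esxcli iscsi networkportal list --adapter=vmhba33

# If a vmknic is missing from the list, bind it to the adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4

# Confirm the dynamic (send targets) discovery address is still set
esxcli iscsi adapter discovery sendtarget list --adapter=vmhba33

# Rescan all adapters so new paths and devices show up
esxcli storage core adapter rescan --all
```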
10-20-2016 06:19 AM
Re: Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
Hey Josh;
What I do when I get into a spot like this is go back to the basics.
Are the new 10 GbE NICs on the same subnet as the old 1 GbE NICs?
If not, has the new subnet been added to the Nimble?
Are the IQNs for the VMware hosts in a Nimble initiator group?
Can the VMware and Nimble NICs talk to each other? i.e. SSH into both sides and make sure they can ping.
VMware iSCSI initiator properties: is the discovery IP address still valid? Are there existing static discovery entries, and if so, are they valid?
I think you get the idea . . .
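A quick way to run through these basics from the ESXi shell (the discovery IP below is a placeholder; substitute your array's iSCSI discovery address):

```shell
# Show all VMkernel interfaces with their IP/netmask and MTU,
# to confirm the new 10 GbE vmknics landed on the iSCSI subnet
esxcfg-vmknic -l

# Ping the Nimble discovery IP from the host
vmkping 192.168.50.10
```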
10-20-2016 07:06 AM
Re: Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
My first thought is initiator groups and access; I would double-check that the initiator is a member of the appropriate initiator group and that the initiator group has read/write access to the volume.
After that, I would check the switch port configurations: are you using the same switch ports as before? Do you have to apply a VLAN ID to the VMkernel interface? Do the MTU settings match?
Then SSH into the ESXi host and run vmkping to validate connectivity.
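The VLAN and MTU settings on the host side can be inspected without leaving the ESXi shell; a sketch of the relevant commands:

```shell
# Show standard vSwitch configuration, including uplinks and MTU
esxcli network vswitch standard list

# Show port-group VLAN assignments
esxcli network vswitch standard portgroup list

# Show VMkernel interface details (MTU must match end to end)
esxcli network ip interface list
```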
10-20-2016 08:19 AM
Re: Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
A few things I'll add to the list:
- verify your cables are rated for 10 Gb and your transceivers (GBICs/SFP+) are compatible
- when pinging, note the latency
- VLAN and switch config
- MTU/frame size (if not 1500, more validation is needed)
- subnets and default router/gateway
- dynamic discovery addresses and initiator groups: verify both settings match on the Nimble and in vSphere
Check with the Nimble support folks too... they're quite good at troubleshooting these initial connectivity issues.
10-21-2016 11:41 AM
Re: Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
vmkping is a great first step after verifying cabling and switch config. You can specify the vmk interface as well, so I would use the following to ensure the host and Nimble interfaces are on the same network (using your screenshot as reference):
vmkping -I vmk4 <nimble iscsi interface IP>
vmkping -I vmk5 <nimble iscsi interface IP>
Make sure your VMkernel interfaces are able to ping each of the Nimble array iSCSI addresses. Once you get that far, you can increase the MTU size to make sure jumbo frames are enabled end to end.
Let us know what you figure out!
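To test jumbo frames end to end, the payload size can be forced along with the don't-fragment flag (8972 bytes of ICMP payload plus 28 bytes of headers makes a 9000-byte packet); vmk4 here is a placeholder for whichever VMkernel interface you bound:

```shell
# Send a full-size jumbo packet that may not be fragmented;
# this only succeeds if every hop in the path supports MTU 9000
vmkping -I vmk4 -d -s 8972 <nimble iscsi interface IP>
```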
01-03-2017 09:02 AM
Re: Need assistance with iSCSI reconfiguration on VMware vSphere 5.5
Hi all, just following up for everybody who may be facing a similar issue. We tested just about everything, from switch configuration to physical cables, connectors, etc. What we found is that after reconfiguring the networking in our vSphere environment, things often would not work properly until the host itself was rebooted. We even brought in external expertise to verify, and we could not locate a problem with our setup or determine why it was failing, other than that a reboot of the host fixed everything. Still scratching my head at this one, but I'd recommend bouncing a host after changing VMware networking (even when in maintenance mode!). Thanks for the input.
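For anyone wanting to script that "bounce the host" step, a rough sketch of a clean reboot from the ESXi shell (migrate or power off VMs before entering maintenance mode; the reason string is just an example):

```shell
# Put the host into maintenance mode, then reboot it cleanly
esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "iSCSI network reconfiguration"
```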