11-15-2018 09:21 AM - edited 11-15-2018 10:47 PM
SV3200 Best Practice setting up iSCSI initiator and MPIO
After repairing the SV3200 with a new storage controller (NodeB-SC1), HPE Support advised removing the VIP from the management group and direct-attaching the server iSCSI NICs to the SV3200 Bond0 NICs, as according to HPE Support there is a known issue with the VIP on the SV3200.
We are running 13.6.00.024..0 with patches: 136-004-00, 6-9-2018 14:20:58; 136-005-00, 6-9-2018 14:36:44; 136-007-00, 6-9-2018 16:35:47; 100-017-00, 6-9-2018 16:03:46; 136-001-00, 6-9-2018 13:40:45; 136-006-00, 6-9-2018 15:14:27; 136-003-00, 6-9-2018 13:29:36.
The system is currently a clean install and I am testing it. We have a 3-node FOC with 4 iSCSI NICs per node. I have currently connected them to the Bond0 IP addresses, but I am seeing these events:
Windows
- System event ID 20, error, iScsiPrt: Connection to the target was lost. The initiator will attempt to retry the connection.
- System event ID 1, error, iScsiPrt: Initiator failed to connect to the target. Target IP address and TCP port number are given in dump data.
- System event ID 46, MPIO: Path 77030028 was removed from \Device\MPIODisk6 due to a PnP event. The dump data contains the current number of paths.
SV3200
E000E0101:EID_ISCSI_TRANSPORT_LINK_STATUS_DOWN - Events - StoreVirtual
- The iSCSI port on 'NEMHSA02B-SC2' is DOWN, iSCSI communication on this storage system is down.
E00000B01:EID_ISCSI_TARGET_DATA_ACCESS_STATUS_DEGRADED - Events - StoreVirtual
- There are DOWN iSCSI ports on target 'NEMMG02'.
E00060500:EID_S_FAILOVER_STATUS_FAILED_OVER - Events - StoreVirtual
2018-11-15 5:28:27 PM (2018-11-15T16:28:27Z) - System
Storage controller 'NEMHSA02B-SC2' is failed over.
- Storage controller 'NEMHSA02B-SC2' in storage pool 'NEMSP02' is failed over.
E00060207:EID_S_SERVER_STATUS_DOWN_NV - Events - StoreVirtual
2018-11-15 5:28:27 PM (2018-11-15T16:28:27Z) - System
Storage system 'NEMHSA02B-SC2' status = 'Down'.
- The storage system 'NEMHSA02B-SC2' status in storage pool 'NEMSP02' is 'Down'.
It eventually comes back up. My problem is that NodeB-SC1 was the controller that was replaced, yet now I am seeing this on NodeB-SC2. According to the supplier that installed it, they also had to replace a controller before getting the system to work in this node.
I do not want to bring it back into production, as I saw some really erratic system behavior when the degraded system (NodeB-SC1 down, failed over to NodeB-SC2 before the repair) was under load during backups, bringing down Windows 2012 server nodes and VMs.
Is there a best practice for setting up iSCSI initiators without the VIP?
And how do I test it to make sure it is working properly?
TIA,
Fred
11-15-2018 10:43 PM - edited 11-15-2018 10:45 PM
Re: SV3200 Best Practice setting up iSCSI initiator and MPIO
I checked the logs this morning: no FOC cluster events, but more iScsiPrt errors in the system log, so I wanted to log in to the SV3200.
No luck; all 4 Bond0 IP addresses are not responding.
11-26-2018 10:26 AM - last edited on 11-26-2018 08:34 PM by Parvez_Admin
Re: SV3200 Best Practice setting up iSCSI initiator and MPIO
Hi,
It appears that there was a controller replacement on the SV3200.
There could be down iSCSI transport ports that still point to the old controller; these may need to be removed manually.
Also, if this is a scaled-out system, that is, two SV3200s in a cluster, it is best to use the VIP as the storage target on the host/server.
Otherwise, for a single unit, you can use the 4 x Bond0 IPs as the storage targets.
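If you do go the per-portal route on a Windows host, a minimal sketch of registering the Bond0 IPs with the Microsoft iSCSI initiator and claiming the disks for native MPIO could look like the following. The 10.0.0.x addresses and the target IQN are placeholders; substitute your own values.

```shell
:: Enable Microsoft native MPIO for iSCSI-attached disks
:: (-i installs the claim, -r schedules the reboot this change needs)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: Register each SV3200 Bond0 IP as a target portal (placeholder addresses)
iscsicli QAddTargetPortal 10.0.0.11
iscsicli QAddTargetPortal 10.0.0.12
iscsicli QAddTargetPortal 10.0.0.13
iscsicli QAddTargetPortal 10.0.0.14

:: Log in to the target (placeholder IQN). QLoginTarget picks a portal itself;
:: to get one session per portal, use LoginTarget with explicit portal
:: arguments, or add the sessions in the iSCSI Initiator GUI.
iscsicli QLoginTarget iqn.2015-11.com.hpe:storevirtual.example

:: Verify that the MPIO disks now show multiple paths
mpclaim -s -d
```

This is host-side configuration, so it has to be run on the Windows server itself; afterwards, check with `mpclaim -s -d` that each LUN shows one path per portal before putting load on it.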
The VIP issue was resolved with the 13.6 upgrades.
However, if a controller is randomly going down, we have to check the logs to see what is going on.
For MPIO on a Windows host, it is recommended to use Windows native MPIO. The Windows events that you have mentioned point to a controller going offline for some time, or to its ports flapping, which we can confirm after checking the logs.
Or call us at the HPE support number for assistance: 1-800-633-3600.
Regards,
Sudip