Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage
11-29-2010 09:41 PM
Do NOT put the DSM/MPIO stuff on a VCB/VADP box. (When Upgrading to 9.0)
Hi People.
I run the CMC on a Windows box that is also used for backing up our DMZ ESX farm (Commvault VADP).
Where I went wrong was blindly following what the CMC told me to do before the upgrade, which was to install the DSM/MPIO provider and the VSS provider. The upgrade itself went well, apart from a couple of the LUNs disappearing and re-appearing as the VIP moved around, and then all hell broke loose.
My ESX servers started dropping into the greyed-out 'not responding' state in vCenter, although the VMs they were running appeared to still be up. After two days of troubleshooting with the help of HP and VMware, I was finally pointed to VMware KB 1030129, which states 'HP Lefthand DSMs for MPIO can cause locking and LUN accessibility issues on LeftHand arrays'. After I removed the DSM from the Windows box that had the LUNs presented, all was well.
My point to the whole story: do NOT put the DSM/MPIO provider on a Windows box that has ESX LUNs presented to it, even though it appears to be a prerequisite (if you have the CMC on that Windows box).
Anyway, I hope this might save somebody from making the same mistake I did and tearing their hair out for a couple of days.
BTW: the ESX boxes are running 4.0U2; I am trying to find out whether 4.1 with VAAI will still have this issue.
Cheers to All.
2 REPLIES
11-30-2010 12:35 PM
Re: Do NOT put the DSM/MPIO stuff on a VCB/VADP box. (When Upgrading to 9.0)
Hi Jon,
The Upgrade Now installer did not tell you to install the DSM MPIO and VSS provider. It told you that, if you are using that software, it is a prerequisite to update that software first, before attempting an update of the SAN.
I had DSM MPIO installed on a two-node Windows Server 2008 R2 cluster and ran into this problem as well. So did kghammond with Windows Cluster Server. It is not limited to ESX LUNs.
It is a painful bug, and in my case HP support neither knew of nor acknowledged the cause.
12-01-2010 08:08 PM
Re: Do NOT put the DSM/MPIO stuff on a VCB/VADP box. (When Upgrading to 9.0)
Does anyone know if there are compatibility issues using vRanger's iSCSI offload feature with the HP Lefthand MPIO DSM?
We previously ran the HP MPIO DSM 8.5 on our vRanger box using iSCSI offload. All iSCSI offload LUNs were mounted read-only, and everything seemed to work fine. We only had one NIC, though, so we were not doing MPIO.
We have since moved vRanger to a new box and loaded the 9.0 DSM on it, and this box has four NICs for iSCSI. We have been debating whether to use MPIO on all four NICs for each LUN. We have about 50 LUNs, so this could be painful. Alternatively, we were considering a poor man's load balancing: log on with just one NIC per LUN and let the backup system balance load across the NICs. As of right now we are just doing over-the-LAN backups.
Any thoughts or suggestions?
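For what it's worth, the one-NIC-per-LUN idea above can be sketched as a simple round-robin assignment. This is only an illustration of the placement scheme, not how you would actually configure it (the NIC and LUN names are made up, and real session placement happens through the iSCSI initiator, not a script):

```python
from collections import Counter

# Sketch of the "poor man's load balancing" idea: instead of MPIO-ing every
# NIC to every LUN (4 NICs x 50 LUNs = 200 iSCSI sessions), log each LUN on
# through exactly one NIC, spreading the LUNs evenly across the NICs.

def assign_luns_round_robin(luns, nics):
    """Map each LUN to a single NIC, round-robin."""
    return {lun: nics[i % len(nics)] for i, lun in enumerate(luns)}

nics = ["iSCSI-NIC1", "iSCSI-NIC2", "iSCSI-NIC3", "iSCSI-NIC4"]  # hypothetical names
luns = [f"LUN{n:02d}" for n in range(50)]                        # hypothetical LUNs

assignment = assign_luns_round_robin(luns, nics)

# With 50 LUNs over 4 NICs, two NICs carry 13 LUNs and two carry 12.
per_nic = Counter(assignment.values())
print(per_nic)
```

The trade-off versus full MPIO is that a NIC failure takes its 12-13 LUN sessions down with it instead of failing over to another path, so this only makes sense where the backup job can tolerate and retry a lost session.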
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP