06-08-2005 06:18 PM
Problems with LVM running under 11.0 <-> EMC Clarrion
We have been using HP-UX 11.0 on our file server for a couple of years now. EMC wanted to update the FLARE code of our FC4700 CLARiiON to the newest release.
After this update we ran into serious trouble.
We are using PV-links to attach the LUNs of the RAIDs. If I move an LV from one LUN to another, the whole system starts to trespass heavily between both paths. Even a reboot of one storage processor was the result of a pvmove!!
All parameters are adjusted according to EMC's recommendations, but the behavior is still the same. EMC is working on the problem, but seems not to know how to stop the extended trespassing.
However, I cannot work on the file server without running into serious trouble (one file system was already corrupted!).
So, does anybody have knowledge in this field? Maybe someone has had the same trouble and knows a workaround...
Hardware: L-Class, 2 A5158 adapters, 2 Brocade 2800, 2 FC4700, ~10 TB disk space, 16 LUNs (8 on each CLARiiON). Using HP-UX 11.0, LVM, OnlineJFS.
Thanks, Peer
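For readers unfamiliar with the PV-links setup described above: on HP-UX 11.0, alternate paths to a LUN appear as "Alternate Link" entries in the volume group. A minimal sketch of how to inspect them (volume group, LV, and device names here are examples, not the poster's actual configuration):

```shell
# List the volume group with its physical volumes; configured PV-links
# show up as "Alternate Link" entries in the output.
vgdisplay -v /dev/vg01

# Show the path LVM currently uses for a given PV and any alternate link.
pvdisplay /dev/dsk/c5t0d0

# A pvmove of the kind that triggers the trespassing, moving one LV's
# extents from a source PV to a destination PV (commented out on purpose):
# pvmove -n lvol1 /dev/dsk/c5t0d0 /dev/dsk/c10t0d1
```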
06-08-2005 08:09 PM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
Upfront: I'm not familiar with the issue you have. However, there are two things you could check:
1. Did you implement single-initiator zoning on the Brocades? If not, this would be the first thing I'd suggest.
2. pvmove is similar to an lvextend -m 1 followed by an lvreduce -m 0. So you might try such a two-step approach instead of using pvmove, and see if you get the same problem when simply adding and then removing an LVM mirror.
Regards,
Bernhard
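The two-step alternative to pvmove suggested above can be sketched as follows. This assumes MirrorDisk/UX is installed (LVM mirroring is an optional product on HP-UX); LV and device names are examples only:

```shell
# Step 1: add a mirror copy of the LV, placing the new copy on the
# target PV (the LUN you want to move to).
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d1

# Wait for the mirror resynchronization to complete, then
# Step 2: reduce back to a single copy, dropping the extents on the
# original source PV.
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c5t0d0
```

Unlike pvmove, this leaves a fully redundant copy in place until you explicitly drop the old one, so it can be aborted more safely mid-way.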
06-08-2005 08:20 PM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
You mean the zoning of the SPs and the HBAs, right? Every HBA is in one zone, seeing only one SP. As mentioned before, it worked for nearly four years WITHOUT a single fault...
For the moment I will not test any further. I'm waiting for a response from EMC. The risk is much too high; I would always be playing with our MAIN file server...
Thanks, Peer
06-09-2005 12:59 AM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
Regards,
Bernhard
06-09-2005 02:17 AM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
06-09-2005 04:47 PM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
I have some knowledge in this area, as we used both the FC4500 and the FC4700. However, we have now upgraded to a CX600 (due to the somewhat inherent instability of the FCs), so I don't know if my knowledge is up to date.
Things to check:
- Your agent.config file on the host. Does it contain the "OptionsSupported AutoTrespass" option?
- Are you using PowerPath in combination with Navisphere Agent? If so, things are pretty tricky. There's a manual specifically for that combination.
- How is your failovermode (navicli command) set?
These three things interact with each other, and the settings for each may cause the results you are describing. However, if you are seeing corruption you should have EMC on-site, and they have probably already checked these things...
Oh yes, is the physical volume you are "pvmoving" visible on the same SP as the physical volume you are moving to?
Hope this at least gives you a pointer on where to start looking...
Regards,
Tom
P.S. EMC should have checked this before doing the upgrade, but is the combination of FC4700 / microcode / A5158A adapter still in their support matrix?
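A quick way to check the first item in the list above, plus a basic agent query. The agent.config path shown is the usual HP-UX location but may differ per installation, and navicli output varies by FLARE/Navisphere release, so treat this as a sketch:

```shell
# Look for the AutoTrespass option in the host's Navisphere agent config
# (path is an assumption; adjust to where agent.config lives on your host).
grep -i "OptionsSupported" /etc/Navisphere/agent.config

# Query the array's view of the host agent; replace <sp_hostname> with
# the hostname or IP of one of the storage processors.
navicli -h <sp_hostname> getagent
```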
06-20-2005 07:19 PM
Re: Problems with LVM running under 11.0 <-> EMC Clarrion
We have now "solved" the problem by switching from PV-links to PowerPath. Everything was reconfigured: the HBA driver, the Navisphere client, and the PDC were updated, and nearly everything is running smoothly now. (One machine has a load of 1 although the system is idling at 100%, and I cannot find the reason. No pending I/O or anything like that.)
However, when we did the reconfiguration, one Navisphere agent used old settings and switched the setting to HP-Trespass. PowerPath was then no longer able to see the device. We found the problem and fixed it, and the device was accessible again, but the file systems on both LVs were corrupt (fsck -o full ...).
We lost more than 100 GB again. The file system was not reachable: no I/O, nothing.
So why are the file systems broken?
A few years ago we had to experiment a lot with a different FC RAID, and we often used the following trick to fix problems on it: we simply disconnected the FC link, all I/Os were queued, and after 1 or 2 hours we re-established the link and everything worked fine.
Any ideas?
EMC's fault or HP's fault?
Bye, Peer
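For anyone ending up in the same PowerPath configuration described in this thread, the per-device path state can be verified with the standard powermt commands (output format varies by PowerPath version; this is a generic sketch, not specific to the poster's setup):

```shell
# Show all PowerPath-managed devices and the alive/dead state of each path;
# a device hidden by a wrong trespass setting shows up as dead paths here.
powermt display dev=all

# Re-test dead paths after fixing the agent/trespass configuration.
powermt restore

# Persist the current PowerPath configuration across reboots.
powermt save
```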