LVM timeout and SANs
02-22-2006 01:52 AM
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=362426
(I thought I should start a new one instead of hijacking that old thread! :))
Pat (or whoever else is knowledgeable in this area),
I am just setting up the LUNs for our servers and have been thinking about the LVM issue discussed in that thread.
Say I want to create a 100GB logical volume. I had intended to create a single 200GB raw LUN with RAID 1 on the SAN, bringing the total usable space to 100GB, and then present it directly.
From your experiences it sounds like this could be hazardous, so I need to use LVM and increase the timeout value.
Now, do I need to create two separate 100GB raw LUNs and then use LVM, rather than the SAN, to mirror them and give me the usable 100GB?
Thanks as ever for your sage counsel.
Guy
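For reference, the LVM-mirrored alternative being weighed here would look roughly like this on HP-UX. This is only a sketch: it assumes MirrorDisk/UX is installed, that the two 100GB LUNs are already in a volume group named /dev/vgdata, and that every device and lvol name is a placeholder.
# lvcreate -m 1 -L 100000 -n lvdata /dev/vgdata   # ~100GB lvol with one mirror copy; default strict allocation keeps the copies on separate PVs
# lvdisplay -v /dev/vgdata/lvdata                 # verify each logical extent maps to two physical extents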
Solved!
02-22-2006 01:55 AM
Re: LVM timeout and SANs
In my experience, hardware mirroring outperforms software mirroring hands down, so I would say no, you don't want to use LVM to do your mirroring.
Pete
02-22-2006 02:04 AM
Re: LVM timeout and SANs
If, however, you are using multiple "controller-centric" arrays (again, say, the EVA) or "cache-centric" arrays (like the XP or Hitachi line), then you will get the best performance by striping (not mirroring, since each presented LUN is already protected at the SAN/array level) across LUNs on different array controllers or pairs of arrays (EVAs). If you have 4 EVAs, I'd stripe my lvol/volume on the host across 4 disks, with one disk coming from each EVA, or 4 to 8 disks if they come from an XP.
Again, for LVM, follow the recommended PV timeout (TOV) values, if there are any.
Hope this helps.
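A rough illustration of that striping layout on HP-UX, assuming a volume group named /dev/vgeva already built from four LUNs, one presented by each EVA; the names and sizes below are placeholders, not a recommendation.
# vgdisplay -v /dev/vgeva | grep "PV Name"              # confirm the four PVs, one LUN per EVA
# lvcreate -i 4 -I 64 -L 102400 -n lvapp /dev/vgeva     # ~100GB lvol striped across all four PVs with a 64KB stripe size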
02-22-2006 04:09 AM
Solution
Now to your question. The others really answered it, so I am left to summarize. Two EVAs with host mirroring is the safest, though I still say never use RAID 0. I always use RAID 5, don't have the funds to mirror EVAs, and don't really care to do RAID 1 within the EVA.
Now, of course, all the above experience is with an EVA5K. I am demoing an EVA8K with AutoPath, and AutoPath has a timeout value which I think does the same as LVM's, but I'm not sure.
I get my best performance by creating two 100GB LUNs, preferring each to opposite controllers, and then host-striping them together into a 200GB volume and filesystem.
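A minimal sketch of that two-LUN layout on HP-UX, assuming VxFS, a /data mount point, and placeholder device files; the vgcreate extent options are chosen so that a 100GB PV fits within the per-PV extent limit.
# pvcreate /dev/rdsk/c10t0d1                            # 100GB LUN preferred to controller A
# pvcreate /dev/rdsk/c10t0d2                            # 100GB LUN preferred to controller B
# mkdir /dev/vgdata
# mknod /dev/vgdata/group c 64 0x010000                 # group file with a unique minor number
# vgcreate -s 16 -e 8192 /dev/vgdata /dev/dsk/c10t0d1 /dev/dsk/c10t0d2
# lvcreate -i 2 -I 64 -L 200000 -n lvdata /dev/vgdata   # stripe across both LUNs, 64KB stripe size, ~200GB
# newfs -F vxfs /dev/vgdata/rlvdata
# mkdir /data
# mount /dev/vgdata/lvdata /data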
02-22-2006 08:36 PM
Re: LVM timeout and SANs
I had always intended to use RAID 1 from the SAN (which is a lone EVA 5000 - we can't afford to mirror ours either!).
What is tripping me up is Pat's mention of the timeout problem. This problem forces me to use LVM - not that I am averse to LVM per se, you understand - but I wanted to step away from the old and move to the new, the EVA.
Can I get some clarification on what the problem is, just so I am doubly sure I understand it?
As I see it, the problem is that if a disk fails in the SAN it can lead to loop failures, where the SAN tries to find the failed disk and run self-diagnostics - this process takes longer than the default SCSI timeout value on the host, and so data is lost!
If I am wrong with this assumption, please tell me.
Now, what I am confused about is: will the SAN not recognise that the failed disk is part of a RAID 1 set and just use the other mirrored disk automatically? Or does it have to go through these diagnostics BEFORE it swaps over to the other disk?
Cheers
Guy
02-28-2006 07:37 AM
Re: LVM timeout and SANs
02-28-2006 12:17 PM
Re: LVM timeout and SANs
Regarding Pat's post:
- LVM mirroring / not mirroring has no effect on PV timeouts, except that the chance of a path/channel failure is 50% lower.
- man pvchange shows a -t option that sets the timeout applied to a specific physical volume under LVM control. Use
pvchange -t 180 /dev/rdsk/cXtYdZ to set a fairly high timeout value that will cover, for example, a full core-switch reboot, so that your disk accesses can sustain 'connectivity' during SAN issues.
As already stated, mirroring is better done in hardware, the only exception being *some* D/R scenarios (i.e. when you have no SRDF ;)
Pat also, unfortunately, seems to be running an FC-AL (loop) config, which is prone to trespassing storms, especially with HP-UX.
Adjusting MAX_FCP_REQUESTS to below the vendor recommendation takes care of that and even *raises* performance for most people running FC-AL. :)
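A minimal sketch of rolling that timeout out to every PV in a volume group and verifying it afterwards; the volume group name vgsan and the device path are placeholders, and the 180-second value simply follows the suggestion above.
# for pv in $(vgdisplay -v vgsan | awk '/PV Name/ {print $3}'); do pvchange -t 180 $pv; done
# pvdisplay /dev/dsk/c10t0d1 | grep -i "IO Timeout"     # should now report 180 instead of "default"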
02-28-2006 08:41 PM
Re: LVM timeout and SANs
Cheers
Guy
03-01-2006 01:36 AM
Re: LVM timeout and SANs
The rogue drive syndrome is that a bad disk will start a storm on a backend loop from one of the ports on the defective drive. This storm causes the loop to fail, leaving you a single loop to half the shelves on that backend loop. At this point I have seen two scenarios:
1) The defective drive will then begin a storm on the remaining loop, causing it to fail also. There are several drives in each loop called quorum disks, and when quorum is lost, both controllers of the EVA will begin a reboot sequence. During this reboot you will not have host access to the storage. Depending on firmware version, this may be seconds or minutes.
2) The defective disk will fail and disappear, at which point the EVA diagnostics scan the backend loops, find the defective disk, and attempt to take ownership, during which the drive will fail again. During this internal activity, host I/O becomes spotty at best. In my case it took 13 minutes for the EVA to mark the drive bad and stabilize itself. Some I/O will complete and some will not during this time frame. From experience, Oracle was vastly upset.
The 13-minute delay has been reduced to 50 seconds in VCS 3.020, and I understand it is better still in more current versions.
This behavior in the dual-loop backend of the EVA5K is the nemesis of this product, which HP recognized when redesigning the EVA8K. They have added "E-PBC-IO" chips to the shelf I/O modules or the EMU to bypass a shelf with such a bad drive, and there have been drive firmware upgrades with just about every VCS update I have done, with the exception of VCS 1.21.
Now, a few weeks ago I did have two different EVAs (VCS 3.020) lose host access (one for 2 seconds and the other for 12 seconds) because of a single bad drive in each case. I have almost finished converting non-LVM volumes to LVM volumes, and the systems attached to these EVAs logged a few EMS messages, but Oracle did not even know it occurred. Including these latest two events, I think I have seen 4-5 in about the same number of years, and I now have over a dozen EVA5Ks and two 8Ks inbound.
03-01-2006 01:59 AM
Re: LVM timeout and SANs
Cheers
Guy
11-01-2006 11:18 AM
Re: LVM timeout and SANs
This info is quite late but may help someone else.
Regarding increasing a LUN's size on the EVA-3000 and not being able to grow it on HP-UX:
I talked to support in 2005 and they gave me a script, "vgmodify", to do just what you were asking for:
increase the LUN size on the EVA-3000
from 5GB to 10GB,
all from the HP-UX command line.
#vgchange -a n vgtest
#./vgmodify -d 10g /dev/vgtest
#vgchange -a y vgtest
Works very well for me and saves me a ton of time.
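A follow-up sketch, under stated assumptions: once the volume group sees the extra space, an existing logical volume and its filesystem still have to be grown. The lvol name lvol1, the /test mount point, a VxFS filesystem, and an OnlineJFS license (needed for the online fsadm resize) are all assumptions, not part of Tom's procedure.
# vgdisplay /dev/vgtest                      # confirm the additional free physical extents are visible
# lvextend -L 10240 /dev/vgtest/lvol1        # grow the logical volume to 10GB (size in MB)
# fsadm -F vxfs -b 10485760 /test            # grow the mounted VxFS filesystem to 10GB (size in 1KB sectors)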
Cheers,
Tom