Operating System - HP-UX

mark vosberg
Advisor

Parallel IO requests to SAN.

I have a database running on top of OnLine JFS/LVM connected to an EMC 4700 SAN. I am accessing a 6-disk RAID 5 group on the SAN using 128KB stripes.

My problem is that I am not getting parallel requests sent down to the SAN. I will see 100% busy on one LUN but the SAN disks are only around 15% busy. Certain queries have all their data in one filesystem. I want HP-UX to allow multiple outstanding read requests on a filesystem.
I have set the JFS tunable read_nstream = 6.
I used scsictl -m queue_depth=64 on the LUN's device file.
No effect. HP-UX uses the sdisk driver.

Any ideas how I can send multiple outstanding read requests to the SAN to get all of its disks busy? I am assuming HP-UX is the holdup. It seems to treat the LUN as a dumb disk: send a read request, wait for the data, then send the next read.

I am hoping to avoid having to use LVM striping to balance the load across all 6 LUNs.
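
For reference, this is roughly what I did (the mount point and device file below are just examples):

# JFS read-ahead on the filesystem holding the data
vxtunefs -o read_nstream=6 /data/fs1
vxtunefs -p /data/fs1                  # verify current tunables

# SCSI queue depth on the LUN's device file
scsictl -m queue_depth=64 /dev/rdsk/c10t0d1
scsictl -a /dev/rdsk/c10t0d1           # verify queue_depth took effect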
9 REPLIES
Steven E. Protter
Exalted Contributor

Re: Parallel IO requests to SAN.

Shalom mark,

I don't see how Online JFS is going to factor into this.

What you are looking for is load balancing, and I don't think Online JFS or PV links are designed to load balance across LUNs or SAN disks.

I'd like to know what doc you are using as a guide for doing this.

If you have PV links (alternate links) set up and the SAN configured for load balancing, that might work.

I might not be aware of the Online JFS feature, just have not heard of it being used in this way.

I think you are right that HP-UX is the holdup. Disk is disk. There are products from Veritas for JFS that may be designed to handle load balancing the way you wish.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Mridul Shrivastava
Honored Contributor

Re: Parallel IO requests to SAN.

Are there any other servers also using that SAN and storage?
Time has a wonderful way of weeding out the trivial
mark vosberg
Advisor

Re: Parallel IO requests to SAN.

We have some Windows servers but they are in different disk groups. My UNIX servers have the highest response-time needs for random reads. I had hoped that setting the disk queue depth for each LUN to 64 would allow a large number of read requests to be sent down to the SAN at once. It still seems like HP-UX is sending one at a time.
Alain Tesserot
Frequent Advisor

Re: Parallel IO requests to SAN.

Mark,
I want you to try this.

Use dd to generate I/O to one LUN at a time and note the I/O bandwidth in Kbits to each of the 6 LUNs.

Now use dd to generate I/O to all 6 LUNs at the same time and note the aggregate I/O to all 6 LUNs in Kbits.
You only need about 60 seconds or so.

Depending on how you have configured your lvols and whether the LUNs are properly balanced on the array, you should notice that the aggregate I/O equals the I/O per LUN times 6.
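
Something like this would do it (the device files are placeholders -- use your own raw LUN device files, and read only, don't write to them):

# one LUN at a time; adjust count for roughly 60 seconds of I/O,
# and note the KB/s reported by sar -d or glance
timex dd if=/dev/rdsk/c10t0d1 of=/dev/null bs=128k count=4000

# then all 6 LUNs at once and compare the aggregate
for lun in c10t0d1 c10t0d2 c10t0d3 c10t0d4 c10t0d5 c10t0d6
do
  dd if=/dev/rdsk/$lun of=/dev/null bs=128k count=4000 &
done
wait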

Let us know what you find and give us more info on your setup.
TwoProc
Honored Contributor

Re: Parallel IO requests to SAN.

Well Mark, I think you already have the answer.

" I am hoping to avoid having to use lvm striping to balance the load across all 6 luns. "

I'd recommend using lvm striping to balance the load across all 6 luns.
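
For what it's worth, something along these lines would do it (the VG name, LVOL name and size are just placeholders), assuming all 6 LUNs are already PVs in the volume group:

# stripe the LVOL across all 6 LUNs in 128KB chunks
lvcreate -i 6 -I 128 -L 20480 -n lvol_data /dev/vgdata
newfs -F vxfs -o largefiles /dev/vgdata/rlvol_data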

We are the people our parents warned us about --Jimmy Buffett
Hein van den Heuvel
Honored Contributor

Re: Parallel IO requests to SAN.


>>> I am accessing a 6 disk SAN R5group

http://www.baarf.com/


>>> Certain queries have all thier data in one filesystem. I want hpux to allow multiple outstanding read requests on a filesystem.

May HPUX is perfectly willing, but you appplication/database never asks it?

You give no indication as to how you achieve parallel requests in the application.
Concurrent users?
Parallel query servers?

Unless you explicitly request or organize multiple activities, most databases will just do a single read, maybe with a read-ahead, and deal with the data before moving on to the next chunk. That will never build up much of a queue.
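
One quick way to check (just a suggestion) is to watch the per-device queue while one of those queries runs; if avque stays near 1 on the busy LUN, the host is only ever being handed one read at a time:

sar -d 5 60        # watch %busy, avque, avwait, avserv for the busy LUN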

fwiw,
Hein.
Alzhy
Honored Contributor

Re: Parallel IO requests to SAN.

Mark,

If the behaviour you are seeing is that only one LUN is busy, then you are NOT striping your LVM volumes.

Can you post the output of vgdisplay -v please?

If you are indeed striping those 6 EMC LUNs under LVM, how did you find out that only one LUN is getting busy?
Hakuna Matata.
mark vosberg
Advisor

Re: Parallel IO requests to SAN.

I want to thank everyone for their responses. It appears that for my situation I will need to use LVM distributed striping to increase my throughput to the SAN. I had hoped that increasing the queue depth for a LUN would allow more outstanding read requests to be sent down to the SAN, but it did not seem to have much effect.

HP support believes striping would be best, but they suggested seeing what the forum had to say. I plan to see if striping gives me a throughput increase and will post my results on this thread when I have them. Thanks again everyone.
A. Clay Stephenson
Acclaimed Contributor

Re: Parallel IO requests to SAN.

I'll tell you what my general strategy is --- and it works essentially independently of the array. First of all, you will see minimal gains with extent-based striping, simply because the smallest possible PE (1MB) is still too large to be an efficient stripe size, and moreover using 1MB PEs severely limits the maximum size of the LUNs.

The key to getting good performance out of intelligent arrays is to throw the data at them just as fast as you can and let them worry about which physical disks are actually used. That being said, the more separate SCSI paths from your host to the array, the better. For simplicity's sake, let's assume that you have 2 SCSI paths to your array. Suppose you need a 300GB VG. Rather than creating a single 300GB LUN, you create 2 150GB LUNs; 2 because you have 2 separate data paths to the array. More would be better still. LUN1 would use SCSI path A (alternate B) and LUN2 would use SCSI path B (alternate A). You then conventionally stripe each LVOL in this VG across both LUNs, typically in 64-128KB stripes. This will implicitly load balance independent of the number of LVOLs in the VG and will efficiently distribute the I/O.

Don't get hung up on the relatively small number of LUNs, because the idea is that you only need as many LUNs as you have separate data paths. You might have many filesystems within this 2-LUN VG, but you are still spreading the I/O well and not having to worry about what data is where. The downside to a small number of LUNs is that it APPEARS that you have bottlenecks, but only because host-based performance tools see a tremendous volume of I/O going through what they see as a very small number of disks --- never mind that your 2 LUNs may be made up of 20 physical disks.
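
As a rough sketch of that layout (the device files, VG name, LVOL name and sizes below are only placeholders):

# volume group on the two LUNs, each with a different primary path
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000
pvcreate /dev/rdsk/c10t0d1                # LUN1, primary on path A
pvcreate /dev/rdsk/c12t0d2                # LUN2, primary on path B
vgcreate /dev/vgdata /dev/dsk/c10t0d1 /dev/dsk/c12t0d2
vgextend /dev/vgdata /dev/dsk/c12t0d1 /dev/dsk/c10t0d2    # alternate (pvlink) paths

# stripe each LVOL across both LUNs in 64KB chunks
lvcreate -i 2 -I 64 -L 10240 -n lvol_db /dev/vgdata
newfs -F vxfs -o largefiles /dev/vgdata/rlvol_db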
If it ain't broke, I can fix that.