
VXFS read ahead question

 
SOLVED
John Oberlander
Frequent Advisor

VXFS read ahead question

Specs - HP 9000 rp8420 (PA-RISC) running 11i v1 and the UniVerse database, Cisco SAN, EVA5000.

Question - We constantly have issues with I/O and would like to try turning off buffer cache/VxFS read-ahead to reduce the amount of I/O hitting the EVA. Our database I/O seems to be very random, so the read-ahead will be wrong most of the time anyway.

In my last HP performance class I was told I can turn this off, but I cannot decrease read_pref_io below 4K from the default of 65536.

Where do I turn this off completely?

Thanks,
John
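For readers hitting the same wall: read_pref_io and read_nstream control the read-ahead size, not whether read-ahead happens at all. Later JFS releases expose a separate read_ahead tunable through vxtunefs; a sketch, assuming the DB filesystem is mounted at /db (a made-up mount point) and a JFS version new enough to have the tunable - on the JFS 3.3 that ships with 11i v1 it may simply be absent, which would explain the 4K floor:

```shell
# List the current VxFS tunables for the filesystem
# (mount point /db is an assumption -- substitute your own).
vxtunefs -p /db

# Where the read_ahead tunable is supported, 0 disables read-ahead
# outright and 1 is normal sequential detection. The change takes
# effect immediately and lasts only until unmount.
vxtunefs -s -o read_ahead=0 /db
```

Whether your JFS build reports read_ahead in the `vxtunefs -p` output is the quickest way to tell if this option exists on your patch level.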
9 REPLIES
Alzhy
Honored Contributor
Solution

Re: VXFS read ahead question

John,

Instead of messing with the FS tunables (are you using vxtunefs?), why don't you try just mounting your DB filesystem(s) with Direct I/O so it avoids double buffering?

We use:

log,largefiles,mincache=direct,convosync=direct

as our VxFS/OnlineJFS mount options on "cooked" storage for any database.
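For illustration, as an /etc/fstab entry those options would look roughly like this (volume and mount-point names are made up):

```
/dev/vg01/lvol_db  /db  vxfs  log,largefiles,mincache=direct,convosync=direct  0  2
```

mincache=direct bypasses the buffer cache for reads and convosync=direct does the same for writes, so the database's own cache is the only cache in play.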



Hakuna Matata.
Steven E. Protter
Exalted Contributor

Re: VXFS read ahead question

Shalom,

You can try setting it to zero. That will either turn it off or make it dynamic. Dynamic would be bad.

Perhaps reduce the buffer cache tunables dbc_max_pct and dbc_min_pct. Take those two numbers very low and close to one another. Note that it's very expensive in CPU terms to let the system resize the buffer cache, and many databases do not benefit from large buffer caches. On your OS, Oracle on cooked filesystems apparently does benefit from a large buffer cache; databases on raw volumes do not benefit at all.
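To put numbers on that: dbc_max_pct is a percentage of physical RAM, so a quick sketch of what a given setting allows (the 28 GB RAM figure is an assumption for illustration; on 11i v1 the tunables themselves are changed with kmtune and need a reboot):

```shell
# Translate dbc_max_pct into an absolute buffer-cache ceiling.
ram_mb=28672        # assumed 28 GB box -- substitute your own
dbc_max_pct=22
bc_mb=$((ram_mb * dbc_max_pct / 100))
echo "buffer cache may grow to ${bc_mb} MB"

# Persisting the change on 11i v1 (reboot required):
#   kmtune -s dbc_max_pct=22
#   kmtune -s dbc_min_pct=20
```

Keeping dbc_min_pct close to dbc_max_pct avoids the CPU cost of the kernel constantly growing and shrinking the cache.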

I/O problems should be approached as follows:

1) EVA configuration. This has nailed me and our customers very often.
2) OS patches and kernel configuration. SCSI patches are key and HP will want the system patched prior to providing support to the EVA.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Alzhy
Honored Contributor

Re: VXFS read ahead question

John, one culprit may be that you have too much buffer cache allocated. The recipe is still to keep your buffer cache between 800 MB and 1.6 GB of total memory - that is, IF your machine mostly does DB serving..

Couple that with Direct I/O operation on your cooked filesystems and you should be good.

BTW, what VRAID levels are you using on your EVA as storage for your DB? If your DB is really I/O intensive -- you may want to replace each VDISK with a VRAID1 -- which is easily done with an EVA and LVM/VxVM...
Hakuna Matata.
John Oberlander
Frequent Advisor

Re: VXFS read ahead question

Nelson...
Yes, vxtunefs. We've tried Direct I/O and the performance was substantially worse. This change is per the HP performance class.

Steve..
It won't go lower than 4K; I've tried 0. Also tried to set read_nstream to 0, but 1 is the lowest value.

Our DB is not Oracle, it's UniVerse, and it requires a large amount of buffer cache to get our read hits at 90+. We currently have 6G of bc; dbc_max is 22%.

System was just patched with the DEC06 bundle, plus a few misc patches per HP that were I/O or FCP related.

Our EVA is oversubscribed, so we're trying to reduce the amount of unneeded I/O coming from the system. If we find that read-ahead does in fact help, then we'll turn it back on. For now we would like it off.

Thanks,
John
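One note on persistence while experimenting with these values: settings made with `vxtunefs -s` last only until the filesystem is unmounted. An /etc/vx/tunefstab entry reapplies them at mount time - a sketch with a made-up device name:

```
/dev/vg01/lvol_db  read_pref_io=65536,read_nstream=1
```

This keeps whatever combination you settle on from silently reverting after the next remount or reboot.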
John Oberlander
Frequent Advisor

Re: VXFS read ahead question

We've done the small bc and large bc. Large bc definitely makes a huge difference for our performance. With 6G our read hits are at 95-99%; 6G seems to be our sweet spot for best performance. Direct I/O does not work for us at all since we're not using Oracle. UniVerse does not have its own built-in cache like Oracle does.


We're using RAID 5 now. I created a RAID 1 LUN within a new VG and we're testing that this weekend.
Alzhy
Honored Contributor

Re: VXFS read ahead question

Hmmm,

Do you use very large filesystems on very large LUNs/VDisks from your EVAs? For one client of mine, with a single 1.6TB EVA VRAID1 disk serving as the Informix data store, increasing scsi_max_qdepth to 128 helped tremendously...

Did you check whether you have significant disk queuing? (sar -d)
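A small filter for that check - the column position assumes the HP-UX `sar -d` layout (device, %busy, avque, r+w/s, blks/s, avwait, avserv), so adjust if your output differs:

```shell
# Flag devices whose average queue length (avque, 3rd column of
# HP-UX 'sar -d' output) exceeds 1. Header lines are skipped because
# "avque" fails the numeric test. Feed it live data with:
#   sar -d 5 3 | awk '...'
awk 'NF >= 7 && $3 ~ /^[0-9.]+$/ && $3 > 1 { print $1, "avque =", $3 }'
```

Any device it prints is queuing at the host, which on an EVA usually points at the array rather than the filesystem layer.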

Hakuna Matata.
John Oberlander
Frequent Advisor

Re: VXFS read ahead question

We have 8 x 128G LUNs with a queue depth of 24. We tried more but it didn't help.

SAN numbers are great: <5 queue, ~10 avg service time. It goes up once in a while when the syncer daemon kicks in to flush the writes.

All I need to know is how to turn off VxFS read-ahead so we don't have data in the bc that we don't necessarily need.
Alzhy
Honored Contributor

Re: VXFS read ahead question

John... do you mean your disk queue depths are <5 or <0.5? If you are hitting queue depths of more than 1 on an EVA -- then it really appears your I/O woes may actually be your EVA...

Do you have EVAperf installed?
Hakuna Matata.
John Oberlander
Frequent Advisor

Re: VXFS read ahead question

Yes, I know our EVA is oversubscribed; I said that in an earlier post. This is exactly why I'm trying to remove any I/O coming from the system that we don't need. I was told read-ahead can be turned off, and it's something we probably don't need since the database doesn't do ANYTHING sequential.