Operating System - HP-UX

Re: Does OnLine JFS help?

SOLVED
Pavel Hampl
Occasional Advisor

Does OnLine JFS help?

Would buying OnLine JFS solve our problem of too much time being spent in the filesystem cache? Of course, only if I use it with the mincache=direct mount option.

Here is an overview of the situation:
We have an L-class HP-UX 11 server with JFS filesystems that reside on EMC storage. The only thing running on it is an Oracle data warehouse, and we have a read-performance problem. Read throughput from the EMC sometimes reaches 80 MB/s, but is mostly only around 20 MB/s.
I found that when we achieved the high throughput, the wait state of the heavily reading processes was below 10% (I observed this in Glance; it means those processes are blocked on the filesystem cache). Usually their wait state is between 50-70%. I know that is far too much, but I have not been able to reduce it.

Because I do not have OnLine JFS, I am not able to switch off OS filesystem caching with the mincache VxFS mount option. The filesystem cache is currently 80 MB on this system (dbc_max_pct=2, dbc_min_pct=2, nbuf=0, bufpages=0; physical memory is 4 GB).
7 REPLIES
harry d brown jr
Honored Contributor

Re: Does OnLine JFS help?

OnlineJFS will not help you here. OnlineJFS allows you to "MANAGE" your filesystems on the fly, without unmounting and such. It will not enhance the performance of your I/O.


live free or die
harry
Live Free or Die
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Does OnLine JFS help?

Hi Pavel:

In almost all cases I have found a significant improvement using the OnlineJFS mount options -o mincache=direct,convosync=direct,nodatainlog,delaylog.
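As a sketch of how those options would be applied (the volume and mount-point names below are examples, not from this thread; mincache/convosync require the OnlineJFS license):

```shell
# Hypothetical /etc/fstab entry for an Oracle data filesystem:
#   /dev/vg01/lvol_ora  /oradata  vxfs  mincache=direct,convosync=direct,nodatainlog,delaylog  0 2

# Or remount an already-mounted filesystem on the fly:
mount -F vxfs -o remount,mincache=direct,convosync=direct,nodatainlog,delaylog /oradata
```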

If you have a bit of spare disk, you could actually test this by moving your data to raw devices. Raw devices and these mount options are essentially equivalent.

I would test it by choosing a few heavily used database files and dd'ing each one to a raw device. Then rename the original database file and set up a symbolic link from the database file name to the raw device. This way you can test the effects without making any Oracle changes and without having to spend any money.
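That test could look roughly like this (datafile and device names are hypothetical; shut the database down, or take the tablespace offline, before copying):

```shell
# Copy one datafile onto a raw logical volume that is at least as large
dd if=/oradata/users01.dbf of=/dev/vg01/rlvol_users bs=1024k

# Keep the original file around, then point the old name at the raw device
mv /oradata/users01.dbf /oradata/users01.dbf.save
ln -s /dev/vg01/rlvol_users /oradata/users01.dbf
```

Oracle follows the symlink, so no init.ora or control-file changes are needed for the test; to back out, remove the link and rename the saved file back.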

The other side of this is that with these mount options, you can greatly decrease the UNIX buffer cache and increase the size of the SGA buffers, which is where Oracle likes to do its caching.

From my perspective, OnlineJFS is worth the price simply because it makes an Admin's job much easier.



Regards, Clay
If it ain't broke, I can fix that.
Thierry Poels_1
Honored Contributor

Re: Does OnLine JFS help?

Hi,

If filesystem performance is a problem, you can always opt for raw-device datafiles, which should give you maximum performance.

regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Thierry Poels_1
Honored Contributor

Re: Does OnLine JFS help?

Hi again,

Veritas has a product, "VERITAS Database Edition for Oracle on HP-UX", which they claim is as fast as raw devices but still gives you "plain" filesystems (it comes with a price tag, however).

regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Roger Baptiste
Honored Contributor

Re: Does OnLine JFS help?


If the processes are blocked on the FS cache, you can try increasing the buffer cache size rather than decreasing or disabling it! 2% is too little for a read-intensive database; I think that is the problem.

Increase dbc_max_pct to 10% and dbc_min_pct to 5%. That would give you a comfortable 400 MB of buffer cache, which should improve performance. I have an all-filesystem database with a similar configuration running fine, and I didn't need to use the advanced VxFS options. I would suggest you try this and monitor the performance before going down other routes.
Also, see whether the problem is limited to specific filesystems or occurs consistently across all of them.
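On HP-UX 11 these are kernel tunables; one way to inspect and change them is with kmtune (a sketch; the new values only take effect after a kernel rebuild and reboot):

```shell
# Show the current buffer-cache tunables
kmtune -q dbc_max_pct -q dbc_min_pct

# Stage the suggested values (10% / 5% of 4 GB is roughly 400 MB / 200 MB)
kmtune -s dbc_max_pct=10
kmtune -s dbc_min_pct=5

# Rebuild the kernel, then reboot for the change to take effect
mk_kernel -o /stand/vmunix
```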

-raj
Take it easy.
Soren Morton
Advisor

Re: Does OnLine JFS help?

Out of curiosity, what is your logical volume size? What is the block size of the filesystem? And how many I/Os per second are you seeing on the filesystem?

The default block size is usually 1024, but we use 8192 since we generally have larger reads/writes. Check with fstyp -v /dev/vg??/lvol?? (it is the field marked f_frsize).
No Egos, No Politics, No Games