
Danny Petterson - DK
Trusted Contributor

Re: Direct io (mincache and convosync) not working?

Hi Duncan!

I have not logged a call yet.

However - the "-o cio" option looks very interesting, and as the system is installed with OnlineJFS B.05.00.01, it would be worth a try (it's the HA-edition of the OS). However, it tells me that I can't mount using "-o cio", as it is not licensed. Does this require some other license than the normal OnlineJFS?
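For reference, this is roughly the command that fails for me - the device and mount point below are just placeholders, not my real ones:

# mount -F vxfs -o cio /dev/vgXX/lvolX /mountpoint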

Greetings
Danny

Re: Direct io (mincache and convosync) not working?

Danny,

OK, I have access to a test system now, so a couple of points:

1. I see the same in tusc as you do: regardless of what mount options are set on the filesystem, or the value of filesystemio_options, the datafiles are always opened O_DSYNC (there's a note at the end of this post on how to check this yourself)

2. However, if you have the mount options mincache=direct,convosync=direct on your filesystem, I'm pretty confident that _any_ IO, regardless of open flags, will use direct IO... here's how I established that:

- I checked how the filesystem was mounted:

# mount -p | grep oracle
/dev/vg01/lvol1 /oracle vxfs ioerror=mwdisable,largefiles,mincache=direct,delaylog,convosync=direct 0 0

- I ran dd to read a large file on my VxFS filesystem to /dev/null (in my case an old Oracle patch):

# timex dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000
100000+0 records in
100000+0 records out

real 12.98
user 0.31
sys 5.06

- I then checked how the dd command was opening the file using tusc:

# tusc -s open dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000 | grep /oracle

open("/oracle/p6810189_10204_HPUX-64.zip", O_RDONLY|O_LARGEFILE, 010000) .......................... = 3
100000+0 records in
100000+0 records out

- So no special flags in there...

- I then ran my dd in a loop to just get the real elapsed time out of it:

while true
do
timex dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000 2>&1
done | grep real

- This was giving me reasonably consistent times of around 13-14s for each read of the file:

real 13.84
real 14.21
real 13.76
real 12.76
real 12.55
real 16.87
real 14.01
real 13.10
real 13.06
real 13.27
real 13.08

If this were buffered IO you might expect the buffer cache to help bring this down, but there's no sign of that...

So then, with the dd loop still running, I altered the filesystem mount options from the command line to remove direct IO:

# mount -F vxfs -o remount,largefiles,delaylog /dev/vg01/lvol1 /oracle


And this is what happened:

real 13.08
real 13.50
real 12.63
real 14.29
real 13.11
real 13.06
real 15.42
real 13.54
real 13.33
real 12.77
real 13.07
real 13.20
real 14.48
real 5.77 <<= here's where I changed the mount options
real 2.44
real 2.30
real 2.38
real 2.29
real 2.30
real 2.31
real 2.29
real 2.29
real 2.29


As you can see, as soon as I changed the mount options to start using the buffer cache, the elapsed time dropped significantly. That is consistent with what one would expect when moving from direct to buffered IO for most file operations (of course, if the file were _really_ huge, direct IO might end up being better, and dd doesn't have much in the way of its own buffering, unlike Oracle).
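If you want a second opinion that the buffer cache is now doing the work, watching buffer activity with sar while the loop runs should show it - after the remount, the read cache hit rate (%rcache) ought to sit close to 100% (the 5-second interval and count below are just examples):

# sar -b 5 20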

This to me proves that direct IO is working on my filesystem, _regardless_ of what flags are used on the open command.
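To put the filesystem back the way it was once you're done testing, the same remount mechanism should work - the options below simply mirror the original mount output shown above:

# mount -F vxfs -o remount,largefiles,delaylog,mincache=direct,convosync=direct /dev/vg01/lvol1 /oracle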

If I find the time, I might take this a little further and use kitrace to prove that oracle itself is doing direct IO.
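As for checking point 1 on your own instance: I believe tusc will also attach to a running process by PID, so something along these lines should show the O_DSYNC opens as the datafiles are (re)opened - the PID below is just a placeholder for whichever Oracle shadow or DBWR process you pick:

# tusc -s open 12345

And the instance parameter itself can be confirmed from sqlplus:

SQL> show parameter filesystemio_options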

HTH

Duncan

I am an HPE Employee

Re: Direct io (mincache and convosync) not working?

Danny,

What I meant to go on to say was that, based on these findings, I'm trying to establish whether the Oracle Metalink article is inaccurate for HP-UX - I'll report back what I find...

Regarding concurrent IO (cio) - you have:

OnlineJFS B.05.00.01

That, I'm afraid, is VxFS 5.0

What you need is:

OnlineJFS B.05.01.01

Which is actually VxFS 5.0.1... The part number for this is B3929GB. If you have a support contract and system handle linked to your ITRC ID, you should be able to get a copy through Software Update Manager (see the grey sidebar on the left of any ITRC web page)

Confused? I am! Especially when the man page on 05.00.01 mentions the cio mount option!

I'm asking the product manager if we can get a bit of clarity on this versioning...

HTH

Duncan

I am an HPE Employee
Danny Petterson - DK
Trusted Contributor

Re: Direct io (mincache and convosync) not working?

Hi Duncan!

Thanks a lot for all your efforts, I really appreciate it. Your test is very accurate - I tried the same thing and got a very similar result.

As for OnlineJFS you are absolutely right - I found the same information looking at previous posts, going through all our media and licenses, and reading some of the HP-UX marketing material - and, as you point out, it is all in all quite confusing. But as the system is running 11.31, 1003, HA-edition, which for some reason comes with 05.00.01, it should be legitimate to install 05.01.01 on top of it, as the license should be included in the HA, VSE and DC editions of the OS.
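For anyone following along, the upgrade itself should just be standard SD-UX - something like the below, where the depot path and bundle tag are placeholders (swlist -d @ /path/to/depot will show the exact tag to use):

# swlist -l bundle | grep -i jfs
# swinstall -s /tmp/onlinejfs_depot B3929GB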

I will keep you posted when I'm done testing the cio.

Again, lots of thanks for your time.

Kind regards
Danny
Jamen
New Member

Re: Direct io (mincache and convosync) not working?

Please do share the results of your direct I/O and concurrent I/O comparison. So far, some customers have seen great improvements, achieving either the same as or better than LVM raw performance.

Please let me know how I can help, as the product manager for OnlineJFS.