Operating System - HP-UX
Direct io (mincache and convosync) not working?
06-16-2010 09:30 AM
We have an 11.31 box with some Oracle databases. The DBA wants direct I/O. I mounted the database disk with mincache and convosync:
/oracle/ORADB1/databases on /dev/vg21/lvol2 ioerror=mwdisable,largefiles,mincache=direct,delaylog,nodatainlog,convosync=direct,dev=80210002
So far so good, however, this is what tusc tells me about direct IO:
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDONLY|0x800, 01210) ................................................. = 17
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDONLY|0x800, 07645) ................................................. = 17
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDWR|0x800|O_DSYNC, 030) ............................................. = 17
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDONLY|0x800, 01210) ................................................. = 17
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDONLY|0x800, 07645) ................................................. = 17
[17520] open("/oracle/ORADB2/databases/XTL/oradata/test01.dbf", O_RDWR|0x800|O_DSYNC, 030) ............................................. = 17
...indicating that direct I/O is NOT used. And the DBA tells me the same.
How come? What can I do?
Thanks in advance.
Greetings
Danny
06-16-2010 10:48 AM
Re: Direct io (mincache and convosync) not working?
06-16-2010 11:05 AM
Re: Direct io (mincache and convosync) not working?
Sorry...
06-16-2010 11:19 AM
Re: Direct io (mincache and convosync) not working?
> Tim: Disregard my response.. I cannot find proof of the comment anywhere.. I must have been dreaming...
No, you're not :-) The OnlineJFS requirement for 'convosync=direct' and 'mincache=direct' is documented in the mount_vxfs(1M) manpages.
However, if I recall correctly, without OnlineJFS the mount would fail if these options were used. A 'cat /etc/mnttab' will show which mount options are in force.
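For example, either of these should show the options actually in force on your mountpoint (path taken from your opening post):
# grep /oracle/ORADB1/databases /etc/mnttab
# mount -p | grep /oracle/ORADB1/databases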
Regards!
...JRF...
06-16-2010 02:43 PM
Re: Direct io (mincache and convosync) not working?
Thanks for your answers - and it's true, these mount options only work with OnlineJFS, which is installed on the system. The line in my initial post showing the mount options for the mountpoint is in fact output from the mount command, showing that convosync and mincache are already in effect.
But still, tusc should show direct I/O for the database files, not O_RDONLY.
So any ideas would be appreciated.
Thanks.
Danny
06-16-2010 11:19 PM
Re: Direct io (mincache and convosync) not working?
Are you sure you are interpreting what you see correctly here?
Because what I see is the same file (/oracle/ORADB2/databases/XTL/oradata/test01.dbf) being opened read-only (maybe as part of the header checks Oracle does at startup), followed by it being opened read-write with the O_DSYNC flag. If you look at the mount_vxfs man page, you will see that when you mount with mincache=direct,convosync=direct, any file opened with O_DSYNC will bypass the filesystem buffer cache and do direct I/O.
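If you want to watch those opens live, you can attach tusc to a database writer - a sketch only; check the tusc(1) man page for the exact attach-by-PID syntax on your version:
# ps -ef | grep '[o]ra_dbw'
# tusc -s open <dbw_pid>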
HTH
Duncan
I am an HPE Employee

06-17-2010 01:22 AM
Re: Direct io (mincache and convosync) not working?
Thanks a bunch for your reply.
Hm, you have a valid point there, no doubt. I was trusting MetaLink document 555601.1, which describes how to check this. That note says that if a datafile is opened with direct I/O, the open statements from tusc should include O_DIRECT.
To verify this, I tried to:
1. Create a database file on a filesystem mounted without convosync=direct and mincache=direct. tusc still shows O_DSYNC.
2. Alter the database to not use direct I/O (see the sketch below). Still, database files on filesystems both with and without the aforementioned mount options are opened O_DSYNC.
I might be misinterpreting something, but at first sight this suggests that direct I/O is not working.
I'm pretty sure I'm missing something crucial here, I just don't know what.
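For step 2, the change looked roughly like this - a sketch; filesystemio_options is a static parameter, so it needs an spfile change plus an instance restart:
$ sqlplus -s "/ as sysdba" <<EOF
alter system set filesystemio_options=none scope=spfile;
EOF
(then restart the instance)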
Again, thanks a lot so far.
Greetings
Danny
06-17-2010 02:27 AM
Re: Direct io (mincache and convosync) not working?
Read the mount_vxfs man page carefully, and also have a look at what the VX_DIRECT caching advisory does when direct I/O is enabled (see the vxfsio(7) man page).
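Also worth knowing: VxFS converts sufficiently large transfers to direct I/O on its own once they exceed the discovered_direct_iosz tunable. You can inspect the per-filesystem values - a sketch, tunable names as I recall them from vxtunefs(1M):
# vxtunefs /oracle/ORADB1/databases | grep -i direct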
HTH
Duncan
I am an HPE Employee

06-17-2010 03:00 AM
Re: Direct io (mincache and convosync) not working?
If Oracle succeeds in getting the database file opened using direct I/O, will tusc then show the O_DIRECT flag?
Sorry for all the stupid questions.
Greetings
Danny
06-17-2010 03:00 AM
Re: Direct io (mincache and convosync) not working?
To get O_DIRECT in the trace, the application must pass that flag in its open() call. I suppose open(2) could modify the system call based on the filesystem mount options.
06-17-2010 05:48 AM
Re: Direct io (mincache and convosync) not working?
Just read an Oracle metalink article (555601.1 if you or your DBA has access).
Now I don't have 100% faith in the accuracy of metalink articles, but this one does suggest you should see O_DIRECT in the open call...
So a couple of questions...
1. This is a tusc trace of a database writer process isn't it? (ora_dbw*_*)
2. What is the Oracle parameter "filesystemio_options" set to?
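(For question 2, a quick check would be something like this sketch:)
$ sqlplus -s "/ as sysdba" <<EOF
show parameter filesystemio_options
EOF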
Sorry I can't test this myself until next week...
HTH
Duncan
I am an HPE Employee

06-17-2010 05:54 AM
Re: Direct io (mincache and convosync) not working?
1) Yes, precisely
2) filesystemio_options has been tried with "setall" and "directio" - no difference.
Greetings
Danny
06-17-2010 06:36 AM
Re: Direct io (mincache and convosync) not working?
What does pfiles say? E.g. if your dbw PID is still 17520:
pfiles 17520
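If there are several writers, a loop like this sketch grabs the flags for each (the [o]ra_dbw pattern keeps the pipeline from matching itself):
# for p in $(ps -ef | awk '/[o]ra_dbw/ {print $2}')
> do
>   pfiles $p | grep -i flags
> done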
HTH
Duncan
I am an HPE Employee

06-17-2010 01:41 PM
Re: Direct io (mincache and convosync) not working?
I'm also not so sure what tusc should show, but I do know that the tusc output doesn't correspond directly with the mount output in the opening post.
I.e. the mount output shows /oracle/ORADB1/databases and the tusc output shows /oracle/ORADB2/databases, which, unless the two directories are linked somehow, look like different mountpoints.
Also, for Oracle databases you normally only want direct I/O for Oracle database access, and buffered I/O for all other access. And for direct I/O for Oracle database access only, convosync=direct is enough (see the sketch below).
NOTE: what will definitely show whether direct I/O is used is kitrace output, but to be able to interpret the kitrace output you will need a call logged with HP support.
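For example, a mountpoint carrying only datafiles could be remounted with just - a sketch, reusing the device and options from the opening post:
# mount -F vxfs -o remount,delaylog,nodatainlog,convosync=direct /dev/vg21/lvol2 /oracle/ORADB1/databases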
Greetz,
Chris
06-17-2010 11:44 PM
Re: Direct io (mincache and convosync) not working?
Is this open flag still relevant with async I/O in place? If async is used, db_wr talks to the async driver, which should do the effective I/O depending on the mount options.
I think you might only see the expected options in open() when you disable async and configure the old-fashioned io_slaves???
(guessing)
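To see whether async and I/O slaves are configured at all, the relevant init parameters could be checked - a sketch, parameter names from the standard Oracle reference:
$ sqlplus -s "/ as sysdba" <<EOF
show parameter disk_asynch_io
show parameter dbwr_io_slaves
EOF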
Volker
06-17-2010 11:55 PM
Re: Direct io (mincache and convosync) not working?
I don't think async can be in place... unless I'm mistaken, async either happens on raw/ASM or when using ODM, which requires VxVM.
This appears to be a filesystem on an LVM logical volume, so I can't see that async could be used...
HTH
Duncan
I am an HPE Employee

06-18-2010 12:32 AM
Re: Direct io (mincache and convosync) not working?
I really appreciate all the comments.
Duncan - pfiles for the database writer shows this for all database files using this mountpoint:
flags = O_RDWR|O_DSYNC|O_LARGEFILE|O_EXCL
...and yes, it's a VxFS filesystem on an lvol.
Chris - we only use convosync=direct on mountpoints with database files.
Hope we are getting there :-)
Greetings
Danny
06-18-2010 12:57 AM
Re: Direct io (mincache and convosync) not working?
Is that still true? I thought that restriction has become outdated in the meantime. But I have not checked recently directly with Oracle.
Volker
2004: http://www.oracle.com/technology/deploy/performance/pdf/TWP_Oracle_HP_files.pdf
See 5.1.1.
On HP-UX, asynchronous IO is only supported with a raw device (raw disk partition or raw logical volume), although this will change with HP-UX 11i v3 (internally known as 11.31).
2006: http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=992511&admit=109447626+1274960107832+28353475
06-18-2010 01:54 AM
Re: Direct io (mincache and convosync) not working?
Yes and no!
I think that WP is possibly old enough to have expected 11iv3 to ship AdvFS, which I believe supported async IO - of course we never did get AdvFS on HP-UX.
We do now have concurrent IO with VxFS 5.0.1 on 11iv3, which delivers near-raw performance. I've not had the opportunity to study CIO properly and work out whether it is just another name for AIO, but it does sound similar to my (simple) mind. There's a WP on it here:
http://www.hp.com/go/ojfsperf
Of course in this case Danny isn't using cio, as it would have shown up as one of his mount options...
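(For reference, trying cio would look roughly like this sketch - and note that, as I understand it, cio already implies direct I/O, so the mincache/convosync options would be dropped:)
# mount -F vxfs -o remount,cio /dev/vg21/lvol2 /oracle/ORADB1/databases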
HTH
Duncan
I am an HPE Employee

06-18-2010 01:55 AM
Re: Direct io (mincache and convosync) not working?
Sorry but I really can't do anything else now until I can get back to my test system in the UK - that will be next week.
Did you get a call logged with the RC?
HTH
Duncan
I am an HPE Employee

06-18-2010 03:40 AM
Re: Direct io (mincache and convosync) not working?
I have not logged a call yet.
However, the "-o cio" option looks very interesting, and as the system is installed with OnlineJFS B.05.00.01 (it's the HA edition of the OS), it seemed worth a try. However, it tells me that I can't mount using "-o cio", as it is not licensed. Does this require some other license than the normal OnlineJFS?
Greetings
Danny
06-20-2010 02:28 AM
Solution
OK, so I have access to a test system now... so a couple of points:
1. I see the same in tusc as you do: regardless of what mount options are set on the filesystem, or the value of filesystemio_options, the datafiles are always opened O_DSYNC.
2. However, if you have the mount options mincache=direct,convosync=direct on your filesystem, I'm pretty confident that _any_ IO, regardless of open flags, will use direct IO... here's how I established that:
- I checked how the filesystem was mounted:
# mount -p | grep oracle
/dev/vg01/lvol1 /oracle vxfs ioerror=mwdisable,largefiles,mincache=direct,delaylog,convosync=direct 0 0
- I ran a dd command of a large file on my VxFS filesystem to /dev/null (in my case an old oracle patch):
# timex dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000
100000+0 records in
100000+0 records out
real 12.98
user 0.31
sys 5.06
- I then checked how the dd command was opening the file using tusc:
# tusc -s open dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000 | grep /oracle
open("/oracle/p6810189_10204_HPUX-64.zip", O_RDONLY|O_LARGEFILE, 010000) .......................... = 3
100000+0 records in
100000+0 records out
- So no special flags in there...
- I then ran my dd in a loop to get just the real elapsed time out of it:
while true
do
timex dd if=/oracle/p6810189_10204_HPUX-64.zip of=/dev/null bs=1k count=100000 2>&1
done | grep real
- This was giving me reasonably consistent times of around 13-14s for each read of the file:
real 13.84
real 14.21
real 13.76
real 12.76
real 12.55
real 16.87
real 14.01
real 13.10
real 13.06
real 13.27
real 13.08
If this were buffered IO you might expect the buffer cache to help bring this down, but there's no sign of that...
So then I altered the filesystem mount options from the command line to remove direct IO, with the dd loop still running:
# mount -F vxfs -o remount,largefiles,delaylog /dev/vg01/lvol1 /oracle
And this is what happened:
real 13.08
real 13.50
real 12.63
real 14.29
real 13.11
real 13.06
real 15.42
real 13.54
real 13.33
real 12.77
real 13.07
real 13.20
real 14.48
real 5.77 <<= here's where I changed the mount options
real 2.44
real 2.30
real 2.38
real 2.29
real 2.30
real 2.31
real 2.29
real 2.29
real 2.29
As you can see, as soon as I changed the mount options to start using the buffer cache, the IO time dropped significantly - consistent with what one would expect when moving from direct to buffered IO for most file operations (of course, if the file were _really_ huge, direct IO might end up being better, and dd doesn't have much in the way of its own buffers the way Oracle does).
This to me proves that direct IO is working on my filesystem, _regardless_ of what flags are used on the open call.
If I find the time, I might take this a little further and use kitrace to prove that oracle itself is doing direct IO.
HTH
Duncan
I am an HPE Employee

06-20-2010 02:46 AM
Re: Direct io (mincache and convosync) not working?
What I meant to go on to say was that, based on these findings, I'm trying to establish whether the Oracle metalink article is inaccurate for HP-UX - I'll report back what I find...
Regarding concurrent IO (cio) - you have:
OnlineJFS B.05.00.01
That is I am afraid VxFS 5.0
What you need is:
OnlineJFS B.05.01.01
Which is actually VxFS 5.0.1 ... the part number for this is B3929GB; if you have a support contract and system handle linked to your ITRC ID, you should be able to get a copy through Software Update Manager (see the grey sidebar on the left of any ITRC web page).
Confused? I am! Especially when the man page on 05.00.01 mentions the cio mount option!
I'm asking the product manager if we can get a bit of clarity on this versioning...
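In the meantime, one way to check which JFS/VxFS products are actually installed is a quick SD-UX query - a sketch; exact product names differ between releases:
# swlist -l product | grep -i -e onlinejfs -e vxfs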
HTH
Duncan
I am an HPE Employee

06-21-2010 12:39 AM
Re: Direct io (mincache and convosync) not working?
Thanks a lot for all your efforts, I really appreciate it. Your test is very accurate; I tried the same thing with a very similar result.
As for OnlineJFS, you are absolutely right. I found the same information by looking at previous posts, looking through all our media and licenses, and reading some of the HP-UX marketing material - and, as you point out, it is all in all quite confusing. But as the system is running the 11.31 1003 HA edition, which for some reason comes with 05.00.01, it should be legitimate to install 05.01.01 on top of it, as the license should be included in the HA, VSE and DC editions of the OS.
I will keep you posted when I'm done testing the cio.
Again, lots of thanks for your time.
Kind regards
Danny
06-21-2010 07:59 AM
Re: Direct io (mincache and convosync) not working?
Please let me know how I can help, as the product manager for OnlineJFS.