Operating System - OpenVMS
02-14-2008 02:12 PM
Re: About IO_PERFORM
Roberto,
A simple thought experiment... work out the relative time costs of CPU, memory access and disk access. If we scale up a CPU register-to-register ADD operation and call it 1 second, how long do the other operations take? VERY rough numbers (worked out a few years ago, so potentially changed):
ADD operation 1 second
Chip Cache memory access 10-50 seconds
Main memory access 5-20 minutes
Disk access 8-12 MONTHS
So the cost of an I/O operation is WAY, WAY dominated by the physical access to the disk. Yes, used correctly, FASTIO can shave off CPU overhead, but if all it's doing is trimming a few minutes out of a timeframe of months, it's not really worthwhile.
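The scaling is easy to reproduce. A quick Python sketch, using assumed round-number latencies (ADD ~1 ns, on-chip cache ~30 ns, main memory ~600 ns, disk ~25 ms — illustrative figures in the spirit of the post, not measurements):

```python
# Rough latency figures in nanoseconds -- assumed, era-appropriate values;
# actual numbers vary widely by hardware generation.
latencies_ns = {
    "register ADD": 1,
    "on-chip cache access": 30,
    "main memory access": 600,
    "disk access": 25_000_000,  # ~25 ms of seek + rotation
}

# Scale so that one ADD equals one "second" in the analogy.
scale = 1.0 / latencies_ns["register ADD"]

def humanize(seconds):
    """Convert a scaled duration in seconds to a readable unit."""
    for unit, size in [("months", 2_592_000), ("days", 86_400),
                       ("hours", 3_600), ("minutes", 60)]:
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.0f} seconds"

for op, ns in latencies_ns.items():
    print(f"{op:22s} -> {humanize(ns * scale)}")
```

With these assumed inputs the disk access lands at roughly 9–10 "months", right in the 8–12 month range quoted above.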
If you can spend CPU time eliminating the need to go to the physical disk, your overall throughput will be higher. That's where things like caching and write-behind come in. RMS, Rdb and the OpenVMS file system can automatically provide you with caching, which you'll lose if you replace RMS with your own low-level code.
To improve write performance, you need to minimise the number of I/O operations that go all the way to disk. This sometimes needs to be traded off against data security: delaying a write and keeping data in memory extends the window of vulnerability to crashes and power failures.
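To make that trade-off concrete, here is a minimal sketch with plain Python file I/O standing in for RMS (`batch_size` is an assumed tuning knob, not an RMS parameter): records accumulate in memory and reach the disk in one I/O per batch, so a crash loses at most one unflushed batch.

```python
import os

class BatchedWriter:
    """Accumulate records in memory and write them in one I/O every
    `batch_size` records -- the write-behind idea described above.
    Records still in the buffer are lost if the process crashes,
    which is exactly the durability window being traded away."""

    def __init__(self, path, batch_size=16):
        self.f = open(path, "ab")
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0          # count of physical write operations

    def put(self, record: bytes):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.f.write(b"".join(self.buffer))
            self.f.flush()
            os.fsync(self.f.fileno())   # force it all the way to the platter
            self.flushes += 1
            self.buffer.clear()

    def close(self):
        self.flush()
        self.f.close()
```

Writing 160 records with `batch_size=16` costs 10 synchronous I/Os instead of 160, which is the whole point of the months-vs-minutes argument above.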
As others have said, analyze your data flow, determine the bottlenecks, and use your knowledge of the data to move the bottlenecks to more acceptable places.
Before diving into the depths of low-level programming, make sure you've tried all the easy things, like checking you've chosen the most appropriate data structures, enabling global buffers and choosing optimal bucket sizes.
A crucible of informative mistakes
02-15-2008 04:37 AM
Re: About IO_PERFORM
Again, thanks for your advice....
I'm reading all the books I can find about performance management........
I'm using MONITOR RMS, MONITOR IO, MONITOR DISK and SET FILE/STAT.
I'm starting to investigate bottlenecks, and with MONITOR RMS for my target file I got these results:
"Loc Buf Cache Hit Percent" --> 93
"Loc Buf Cache Attempt Rate" --> 60
The disk responds very well, because
"I/O Request Queue Length" --> < 1 always...
and
"I/O Operation Rate" for the target disk is > 2000...
If I'm only inserting records in my relative file (record size 302, bucket size 10, and no deferred write because I need maximum consistency in the data), I would suppose every access is a local cache miss, because every insert is new and I don't have it in the cache.
Could it be that when I write a record to disk, RMS retrieves several blocks (my whole bucket), and these blocks in local memory are valid for future inserts?
Is this the reason for my hits in the local cache?
All inserts are with RRN = RRN + 1.
Always one after the other....
Thanks to everybody in advance....
02-15-2008 05:58 PM
Re: About IO_PERFORM
>> Again, thanks for your advice....
Good. Btw... say thanks with 'points' every now and then to give an indication as to which reply helped and which was 'thanks for playing'. :-)
>> I'm reading all the books I can find about performance management........
Excellent!
>> I├В┬┤m using MONITOR RMS, MONITOR IO, MONITOR DISK and SET FILE/STAT
Excellent.
Next step: T4, and also my RMS_STATS, or ANAL/SYS... SHOW PROC/RMS=(FSB,BDBSUM)
http://h71000.www7.hp.com/freeware/freeware60/rms_tools/rms_stats.exe
Send Email for most recent sources
(hmmm, rms_stats expects a file opened shared, so that might not work)
>> "Loc Buf Cache Hit Percent " --> 93
ho hum.
>> "Loc Buf Cache Attempt Rate "--> 60
Low.
>> The disk responds very well, because
>> "I/O Request Queue Length "--> < 1 always...
Good news and bad news.
Good: Yeah it keeps up.
Bad: There probably is no concurrency at all.
Are the writer tasks using RAB$V_ASY = 1 + SYS$WAIT just before the $PUT to make sure the last write is done?
>> "I/O Operation Rate" for the target disk is > 2000...
That does not jibe with the RMS stats above unless there are 2000/~60 =~ 30 active files.
>> If I'm only inserting records in my relative file (record size 302, bucket size 10, and no deferred write because I need maximum consistency in the data)
So you can fit 16 records in a bucket. Every 16 records you'll see 16 writes, 1 read and 17 cache attempts. That's 94% hit rate as observed.
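That arithmetic can be checked directly. A small sketch of the back-of-envelope numbers (ignoring the few bytes of per-bucket and per-record overhead a real relative file carries):

```python
BLOCK_BYTES = 512      # one OpenVMS disk block
RECORD_BYTES = 302     # record size from the post
BUCKET_BLOCKS = 10     # bucket size from the post

# How many records fit in one bucket (overhead bytes ignored):
records_per_bucket = (BLOCK_BYTES * BUCKET_BLOCKS) // RECORD_BYTES
print(records_per_bucket)        # 16, as stated above

# Sequential inserts: the first $PUT into a fresh bucket has to read it
# (1 miss); the next 15 find it already in the local buffer (hits).
# So 16 of every 17 cache attempts hit:
hit_percent = 100 * records_per_bucket / (records_per_bucket + 1)
print(round(hit_percent))        # ~94, matching the MONITOR RMS figure
```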
However... please realize you are writing 10 blocks of data every time!
Going to 1 block buckets is no solution as this will cause a 1 read + 1 write per $PUT.
Going to $WRITE, with 2-block faked buckets, will probably be optimal if the (excessively!?) stringent 1 write I/O per $PUT is maintained. A real speedup would come from postponing the writes to 1 per N records or 1 per M milliseconds, whichever comes first.
And a real-real speedup could come from a 'group commit': grouping writes from all streams for N milliseconds, then committing all streams in 1 I/O and giving an ACK back to each stream in the group.
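The group-commit idea can be sketched portably with threads: writers queue a record and block; one flusher thread commits everything queued each window in a single write, then ACKs all the waiters. Plain Python here stands in for the real RMS/AST machinery, and `do_write` is an assumed stand-in for the one combined bucket write:

```python
import threading, time

class GroupCommit:
    """Sketch of a 'group commit': writer threads queue their records
    and block; a single flusher wakes every `window_ms`, writes the
    whole queue in one combined I/O, then releases every waiter.
    (Shutdown with writers still pending is not handled -- sketch only.)"""

    def __init__(self, do_write, window_ms=10):
        self.do_write = do_write
        self.window = window_ms / 1000.0
        self.lock = threading.Lock()
        self.pending = []          # (record, ack_event) pairs
        self.running = True
        self.thread = threading.Thread(target=self._flusher, daemon=True)
        self.thread.start()

    def put(self, record):
        done = threading.Event()
        with self.lock:
            self.pending.append((record, done))
        done.wait()                # caller gets its ACK only after the I/O

    def _flusher(self):
        while self.running:
            time.sleep(self.window)
            with self.lock:
                batch, self.pending = self.pending, []
            if batch:
                self.do_write([r for r, _ in batch])   # one combined I/O
                for _, done in batch:
                    done.set()

    def stop(self):
        self.running = False
        self.thread.join()
```

Many concurrent writers then share far fewer physical writes than records, while each still blocks until its own data is durable.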
>> I suppose every access would be a local cache miss, because every insert is new and I don't have it in the cache.
Depends a little on pre-allocation.
RMS will read the bucket and page-fault at that time.
>> Could it be that when I write a record to disk, RMS retrieves several blocks (my whole bucket), and these blocks in local memory are valid for future inserts?
yes, but as indicated, RMS writes the whole bucket, including previously written records.
>> Is this the reason for my hit in local cache ?
See above.
>> All inserts are with RRN = RRN + 1.
Are you using 1 file per connection?
Are you using SHARING? APPEND (RAB$V_EOF=1) ?
Where do you get the first RRN?
Maybe you want a shared memory section + $updsec to bring it to the disk?
Good luck!
Hein.
03-12-2008 03:10 AM
Re: About IO_PERFORM
Hello, guys..... thanks for all your advice!!!
I was on holiday, and I returned to my job last week. I had several meetings with other people in my organization. Finally it was possible for the application people to change the application, so now we can use deferred write, because we have a more robust recovery.
Speed has increased with this RMS option.
We do an RMS flush every block of messages, not every message as in the previous version.
As well, we update the lock (we use a lock, and other processes use a blocking AST to notice the insertion of a new record in the file) every block of messages.
We have changed the bucket size to a higher value....
We are investigating further to improve the application...
Thanks to everybody for everything....
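The signal-once-per-block change generalizes beyond the VMS lock plus blocking-AST mechanism. A rough sketch with a condition variable standing in for the lock value block (`batch` is an assumed block-of-messages size): readers are woken once per batch of inserts instead of once per insert.

```python
import threading

class BatchNotifier:
    """Sketch of the change described above: instead of signalling
    readers on every inserted record, bump a shared counter and signal
    only once per `batch` records -- analogous to updating the lock
    (and firing the blocking ASTs) once per block of messages."""

    def __init__(self, batch=16):
        self.batch = batch
        self.count = 0         # records inserted so far
        self.signals = 0       # wake-ups actually delivered
        self.cond = threading.Condition()

    def insert(self, record):
        with self.cond:
            self.count += 1
            if self.count % self.batch == 0:
                self.signals += 1
                self.cond.notify_all()   # one wake-up per batch
```

With `batch=16`, inserting 160 records delivers 10 wake-ups instead of 160, cutting the lock/AST traffic the same way the deferred write cut the disk I/Os.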