About performance with RMS relative files...
10-13-2008 01:52 AM
Re: About performance with RMS relative files...
20 years ago I had a program that used the record numbers of a relative file to walk through the file, forwards and back. It was a kind of read-only editor with no work file: you could jump to a certain line and continue scrolling from there.
Maybe they keep the record numbers in some kind of database for later use.
fwiw
Wim
10-16-2008 12:41 AM
Re: About performance with RMS relative files...
I'm running my tests on hardware with no other activity, no users...
The test consists of allocating a relative file large enough to avoid extensions, always on the same disk, so that the file always has a single allocated extent. I allocate about 21000 blocks, and on this disk the average free extent is 100000 blocks. Once I have created the file, and before starting the test, I check whether the file has more than one extent; it always has exactly one. I then do 20000 writes, in ascending sequence order, and I take the time before and after to know how long these writes take.
In my tests I found that the best bucket size is 2.
But I see some odd behaviour. Across the 10 runs per bucket size, I get 4, 5 or 6 "good" times and 6, 5 or 4 "bad" times.
For instance, with a bucket size of 4 (remember that the record size is 500 bytes), I get these figures:
20000 writes --> "good" times around 8.85 seconds
20000 writes --> "bad" times around 10.89 seconds
That is a difference of 2 seconds!
The best rate is 2290 writes per second, and the worst rate is 1814 writes per second...
I see this difference with all bucket sizes: in all my tests there are always 2 seconds or a bit more between the good and the bad times.
The relative file has the write-through cache attribute enabled....
What could be the reason for this behaviour? Something in the XFC? What data in the XFC traces would be useful?
I welcome any ideas about this behaviour.....
Thank you all in advance.....
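(A minimal sketch, in C against the RMS services, of the kind of timed write loop described above; the file name, record contents, and the coarse one-second timer are assumptions for illustration, not the poster's actual program:)
#include <rms.h>
#include <starlet.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define REC_SIZE 500        /* fixed record size from the thread      */
#define NUM_RECS 20000      /* writes per timed run                   */
#define BKS      4          /* bucket size under test                 */

int main(void)
{
    struct FAB fab = cc$rms_fab;    /* RMS default FAB */
    struct RAB rab = cc$rms_rab;    /* RMS default RAB */
    char record[REC_SIZE];
    time_t t0, t1;
    int i, sts;

    memset(record, 'X', REC_SIZE);      /* dummy record contents      */

    fab.fab$l_fna = "TEST.REL";         /* hypothetical file name     */
    fab.fab$b_fns = strlen("TEST.REL");
    fab.fab$b_org = FAB$C_REL;          /* relative organization      */
    fab.fab$b_rfm = FAB$C_FIX;          /* fixed-length records       */
    fab.fab$w_mrs = REC_SIZE;
    fab.fab$b_bks = BKS;
    fab.fab$l_alq = 21000;              /* preallocate; no extends    */
    fab.fab$b_fac = FAB$M_PUT;

    sts = sys$create(&fab);
    if (!(sts & 1)) return sts;

    rab.rab$l_fab = &fab;
    rab.rab$b_rac = RAB$C_SEQ;          /* ascending record numbers   */
    rab.rab$l_rbf = record;
    rab.rab$w_rsz = REC_SIZE;
    sts = sys$connect(&rab);
    if (!(sts & 1)) return sts;

    time(&t0);                          /* coarse 1-second timer; a   */
    for (i = 0; i < NUM_RECS; i++) {    /* finer clock (SYS$GETTIM)   */
        sts = sys$put(&rab);            /* would be used in practice  */
        if (!(sts & 1)) return sts;
    }
    time(&t1);

    printf("%d writes in about %ld seconds\n", NUM_RECS, (long)(t1 - t0));
    sys$close(&fab);
    return 1;
}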
10-16-2008 01:02 AM
Re: About performance with RMS relative files...
Benchmarking is both an art and a science. After working with various benchmarks for over thirty years, one particular item strikes my eye.
The variance in the results (2 seconds) is 20% of the total time. I have two short-term suggestions:
- increase the size of the benchmark (by at least a factor of ten, if not more)
- run the benchmark for each bucket size multiple (say, ten) times
The question is whether the variation stays at two seconds (20%) or whether it changes. While using an idle system eliminates some variation, it does not eliminate ALL variation. Much of laboratory science is about reproducibility within the technique's margin of error. As mentioned earlier, running benchmarks is most definitely an experimental science.
A full spreadsheet of the results from this more extensive benchmark series should be illuminating as to whether these variations are random noise, some facet of the bucket sizes, or some other effect not yet identified. Without the additional data, remote conclusions as to the source of the variation are at best educated guesses, if not random speculation.
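(As an illustrative sketch of the arithmetic behind such a spreadsheet; the run times below are just the two example figures quoted earlier in the thread, not measured data:)
#include <math.h>
#include <stdio.h>

/* Mean and sample standard deviation over n repeated runs.  If the
   spread stays large while the mean barely moves with bucket size,
   the variation is noise; if the mean moves and the spread is
   small, the bucket-size effect is real signal.                     */
static void run_stats(const double *secs, int n, double *mean, double *sd)
{
    double sum = 0.0, sq = 0.0;
    int i;

    for (i = 0; i < n; i++)
        sum += secs[i];
    *mean = sum / n;
    for (i = 0; i < n; i++)
        sq += (secs[i] - *mean) * (secs[i] - *mean);
    *sd = (n > 1) ? sqrt(sq / (n - 1)) : 0.0;
}

int main(void)
{
    /* illustrative values only: the two times quoted in the thread */
    double secs[10] = {  8.85, 10.89,  8.85, 10.89,  8.85,
                        10.89,  8.85, 10.89,  8.85, 10.89 };
    double mean, sd;

    run_stats(secs, 10, &mean, &sd);
    printf("mean %.2f s, stddev %.2f s\n", mean, sd);
    return 0;
}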
- Bob Gezelter, http://www.rlgsc.com
10-16-2008 03:23 AM
Re: About performance with RMS relative files...
for (bks = 1; bks <= 16; bks++) {
    for (i = 1; i <= max_loops; i++) {
        run_test(bks);   /* one timed pass at this bucket size */
        sleep(5);        /* let the system settle between runs */
    }
}
btw.. this _might_ illustrate the dangers in setting up these benchmarks. Because you ran all repetitions at a constant bucket size, the variation stood out for what it was, and you did not attribute it to the bucket size.
But if, by chance, you had run it like:
for (i = 1; i <= max_loops; i++) {
    for (bks = 1; bks <= 16; bks++) {
        run_test(bks);   /* bucket sizes interleaved within each loop */
    }
}
then you might well have attributed the same variance to the bucket size being tested.
So a bucket size of 2 was best (for the writer)?
By what margin, and with what variance? Would it help or hinder readers?
Like Bob, I'd like to see some details, at least the averages for bucket sizes 1 .. 16.
Cheers,
Hein
10-16-2008 09:14 AM
Re: About performance with RMS relative files...
And the EVA can see performance variances; that's one of the reasons at the heart of why HP recommends 16- or 32-block cluster factors. The EVA has a second level of "fun" lurking, too: it's not obvious what else an EVA might be doing during a testing pass within a typical mixed FC SAN environment.
The usual background activities on an AlphaServer GS80, or on most any other normally-configured multiprocessor, can also contribute some variance to a performance test; particularly on small runs.
But then, 2 seconds of variance in 10 isn't something I'd be looking at in much detail. An equivalent variance on a run that matches application reality -- 2,000,000 records? -- would warrant some investigation, if it's outside your performance window.
When testing with what I must assume is a representative data set (and the data set you're using here looks small, given previous references to 2,000,000 entries), there had better be a gain between bucket sizes that justifies the tuning time and effort, or the approach is a loser. Or, if the difference is significant, then the configuration is likely already so close to its processing windows that the approach is soon to be a loser anyway.
In performance work you do need good data, and the tools cited in this thread will help, but you also need a broader view; carefully tuning and tweaking to dig an application deeper into an existing bad design is seldom a viable performance strategy.
As for alternatives, most any reasonable Integrity Itanium box will outrun an AlphaServer GS80, too. I'd test application performance against a bottom-end Integrity box here; against a port running on a reasonably configured Integrity rx2660 or rx3600.
10-21-2008 12:58 AM
Re: About performance with RMS relative files...
For everyone, here is the table with my results.
The first column is the bucket size; the second is the elapsed time in seconds.
Bucket size    Seconds
 1              9.90
 2              9.21
 4              9.67
 6             10.12
 8             10.05
10             10.61
12             10.69
14             10.92
16             11.41
18             11.67
20             11.66
22             12.34
24             12.89
26             13.00
28             13.05
30             13.92
62             19.82
I ran every test 10 times, in ascending order: first with a bucket size of 1 (10 times), finishing with a bucket size of 62 (10 times)....
Hein, thank you for the advice about waiting several seconds between tests. I'll repeat the tests in the next few days and post a new data table.
I'm testing with 20000 insertions; in my next tests I'll increase the number of records to 100000 or 500000...
Thanks....
10-21-2008 02:17 AM
Re: About performance with RMS relative files...
You state: "Only inserts, no updates".
No deletions either? Read accesses during the writes (by other processes)? Is the file opened with, or without, sharing?
Just a suggestion: IF this file is non-shared and used just for storing this number of 500-byte records, I think you could save the overhead and use a sequential file to store the records, then convert it to relative organization once it is closed, as sketched below.
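(A minimal sketch of that suggestion in C; the file names and the REL.FDL descriptor are hypothetical, and the conversion back to relative organization would be a separate CONVERT step after the load:)
#include <rms.h>
#include <starlet.h>
#include <string.h>

/* Load the 500-byte records into a plain sequential file first;
   no bucket structure is maintained during the load.            */
void open_seq_log(void)
{
    struct FAB fab = cc$rms_fab;      /* RMS default FAB          */

    fab.fab$l_fna = "LOG.SEQ";        /* hypothetical file name   */
    fab.fab$b_fns = strlen("LOG.SEQ");
    fab.fab$b_org = FAB$C_SEQ;        /* sequential, not relative */
    fab.fab$b_rfm = FAB$C_FIX;        /* same fixed-length layout */
    fab.fab$w_mrs = 500;
    fab.fab$b_fac = FAB$M_PUT;
    sys$create(&fab);
    /* ...connect a RAB and $PUT the records as usual...          */
    /* after closing, convert with an FDL describing the target:  */
    /* $ CONVERT/FDL=REL.FDL LOG.SEQ LOG.REL                      */
}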
OpenVMS Developer & System Manager
10-21-2008 02:51 AM
Re: About performance with RMS relative files...
10-21-2008 03:22 AM
Re: About performance with RMS relative files...
I hope that, at the very least, you have requested full sharing, even though there is no active sharing. You want those bucket and record locks taken, just like in the real usage; see the fragment below.
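(For instance, in a C/RMS test program like the one sketched earlier, full sharing is requested by setting the FAB share mask before the file is created or opened:)
/* Request full sharing so the bucket and record locks are taken,
   just as in real shared usage; set before sys$create/sys$open.  */
fab.fab$b_shr = FAB$M_SHRGET | FAB$M_SHRPUT |
                FAB$M_SHRUPD | FAB$M_SHRDEL;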
Thanks for the concrete data. Like Hoff suggested, the variation is not big enough to worry about, IMHO. There are other, bigger, not yet understood forces in play.
The variation for the reader may well be more significant, so what looks good for the writer in isolation may be worse for the system as a whole.
You also want to run a few samples (bucket sizes 1, 4, 8?) and look at the read/write IO counts, with RMS_STATS, SET WATCH FILE, SDA PROCIO, or SDA SHOW PROC/RMS=FSB.
Hein.
10-21-2008 03:57 AM
Re: About performance with RMS relative files...
And you could play with the max_write_cached_transfer_size of the unit, to make sure your I/Os get into the cache and the big ones do not.
And is the battery OK? Because otherwise the write-back cache will not be used.
fwiw
Wim
10-21-2008 04:00 AM
Re: About performance with RMS relative files...
10-21-2008 05:50 AM
Re: About performance with RMS relative files...
Garbage in.. garbage out.
Why measure a random activity?
Sharing forces RMS to write out any update.
The results may well be dramatically different.
Performance is NOT easy.
Jeez, what a waste of time.
Cheers,
Hein.
10-21-2008 07:20 AM
Re: About performance with RMS relative files...
10-21-2008 07:46 AM
Re: About performance with RMS relative files...
Just what it says.
The input to the measurement process used is garbage.
It is a valid test for some problem, but NOT for the problem that you indicated you are concerned with.
So the output (the elapsed-time table) is garbage as well... in the context of the concern about the performance of the production box.
It is a fine table and may help solve some problem, just not this problem.
In a feeble attempt at a car analogy: you measured the time it took buses of various sizes to pick up and deliver a given number of passengers... but you did not make the buses actually stop, and you did not put any other cars on the road.
Or, as my personal quote in this forum has been saying for the past months:
"Anything not worth doing is not worth doing well"
That's mostly advice to myself, but it applies to others as well.
Grins,
Hein.
10-21-2008 01:26 PM
Re: About performance with RMS relative files...
I would ask "system management" how small they want it (or what they heard). 12 blocks = 6 KB IS small already. If "system management" was told that this causes "the EVA less stress with small IOs", then what's the delay... they asked, you reduce it. I would change it to 8 and give it a run the next day.
I would not expect a tremendous change either way.
Is there really a performance problem with this application? If yes, the application should be changed to use a sequential file with fixed-length 512-byte records. It should be easy to merge that change with the current relative-file usage.
/Guenther
10-22-2008 06:56 AM
Re: About performance with RMS relative files...
/Guenther
10-22-2008 07:12 AM
Re: About performance with RMS relative files...
Imho Bob has to step way back to the basics.
The _assumption_ is that writing to the relative file is the slowdown.
But maybe readers are holding (bucket) locks too long!
Bob wrote> The best rate is 2290 writes per second, and the worst rate is 1814 writes per second...
So, now, how many writes/second does the application need to be able to do?
This could be a nice little project to work on!
Cheers,
Hein.
10-22-2008 07:24 AM
Re: About performance with RMS relative files...
And how fast do block-level virtual I/O writes go?