02-12-2003 01:16 AM
write io performance problem
sar tells me that disk write I/O is running at top speed.
I have attached some sar data.
Could someone please tell me if there is anything tweakable left.
Any help would be greatly appreciated.
00:00:01 runq-sz %runocc swpq-sz %swpocc
00:15:00 1.8 100 0.0 0
00:30:00 1.9 100 0.0 0
00:45:00 2.1 100 0.0 0
01:00:01 2.0 100 0.0 0
01:15:00 2.0 100 0.0 0
01:30:00 2.2 100 0.0 0
01:45:00 1.9 100 0.0 0
02:00:01 1.9 100 0.0 0
02:15:00 2.2 100 0.0 0
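Nine 15-minute samples is only a small excerpt, but even by eye the run queue hovers around 2. As a quick sketch, the runq-sz column can be averaged with awk (the heredoc simply reproduces the sample rows above for illustration):

```shell
# Average runq-sz (2nd column) from a sar -q sample; the header row is
# skipped because its 2nd field is not numeric.
awk '$2 ~ /^[0-9]/ { runq += $2; n++ }
     END { printf "avg runq-sz = %.2f over %d samples\n", runq / n, n }' <<'EOF'
00:00:01 runq-sz %runocc swpq-sz %swpocc
00:15:00 1.8 100 0.0 0
00:30:00 1.9 100 0.0 0
00:45:00 2.1 100 0.0 0
01:00:01 2.0 100 0.0 0
01:15:00 2.0 100 0.0 0
01:30:00 2.2 100 0.0 0
01:45:00 1.9 100 0.0 0
02:00:01 1.9 100 0.0 0
02:15:00 2.2 100 0.0 0
EOF
```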
02-12-2003 01:41 AM
Re: write io performance problem
# sysdef | grep dbc_max_pct
What is the value?
regards,
U.SivaKumar
02-12-2003 01:48 AM
Re: write io performance problem
dbc_max_pct 50 - - -
(mowdb016)/tmp # sysdef | grep dbc_min_pct
dbc_min_pct 5 -
RAM = 1280 MB
THX
02-12-2003 01:56 AM
Re: write io performance problem
As I see from your sar output, you don't have any problem. A problem occurs when runq-sz > 4 or %swpocc > 5, for example. Anyway, you can run:
# sar -u
and look at the %wio column; if %wio > 7, you have an I/O bottleneck.
Also try using HP Glance - it is better for tracing system performance.
Regards, Stan
02-12-2003 01:59 AM
Re: write io performance problem
Increasing the dynamic buffer cache will improve write I/O operations to a great extent, at the cost of memory.
Increase dbc_max_pct to 80% if you have enough memory left for your other applications.
regards,
U.SivaKumar
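For scale, a quick calculation with the numbers from this thread (the machine's 1280 MB of RAM and the 80% ceiling suggested above) shows how much memory such a setting would allow the buffer cache to claim:

```shell
# How much RAM an 80% dbc_max_pct could hand to the buffer cache
# (figures taken from this thread).
ram_mb=1280        # physical memory in this machine
dbc_max_pct=80     # suggested ceiling
cache_mb=$((ram_mb * dbc_max_pct / 100))
echo "max buffer cache: ${cache_mb} MB, leaving $((ram_mb - cache_mb)) MB for everything else"
```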
02-12-2003 06:00 AM
Re: write io performance problem
Here is a result from sar -u:
HP-UX mowdb016 B.11.00 A 9000/810 02/12/03
00:00:01 %usr %sys %wio %idle
00:15:00 0 1 50 49
00:30:00 0 1 45 54
00:45:00 0 1 47 52
05:45:01 0 1 49 50
06:00:01 1 1 52 46
06:15:00 6 2 58 34
06:30:01 0 1 46 53
06:45:01 1 1 46 52
07:00:00 18 13 38 31
07:15:01 17 6 46 31
07:30:01 2 2 48 49
07:45:00 28 5 33 34
08:00:00 6 2 41 51
08:15:01 2 4 48 47
08:30:01 2 2 47 50
08:45:03 2 1 49 48
09:00:00 2 1 46 51
09:15:00 3 2 47 48
13:00:00 1 1 37 61
13:15:00 2 1 37 60
13:30:00 2 1 35 62
13:45:01 2 1 39 59
Average 3 2 44 51
Together with the DBA we switched off 4 unused databases. It seems to look better now.
According to our monthly sar data, I/O is the bottleneck in this machine.
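As a sketch, the %wio column of an excerpt like this can be averaged with awk; rows whose 4th field is not numeric (the system line, the header, and the trailing Average row) are skipped. The heredoc reproduces the rows above for illustration:

```shell
# Average %wio (4th column) from a sar -u report, skipping non-data rows.
awk '$1 ~ /^[0-9]/ && $4 ~ /^[0-9]+$/ { wio += $4; n++ }
     END { printf "avg %%wio = %.1f over %d samples\n", wio / n, n }' <<'EOF'
HP-UX mowdb016 B.11.00 A 9000/810 02/12/03
00:00:01 %usr %sys %wio %idle
00:15:00 0 1 50 49
00:30:00 0 1 45 54
00:45:00 0 1 47 52
05:45:01 0 1 49 50
06:00:01 1 1 52 46
06:15:00 6 2 58 34
06:30:01 0 1 46 53
06:45:01 1 1 46 52
07:00:00 18 13 38 31
07:15:01 17 6 46 31
07:30:01 2 2 48 49
07:45:00 28 5 33 34
08:00:00 6 2 41 51
08:15:01 2 4 48 47
08:30:01 2 2 47 50
08:45:03 2 1 49 48
09:00:00 2 1 46 51
09:15:00 3 2 47 48
13:00:00 1 1 37 61
13:15:00 2 1 37 60
13:30:00 2 1 35 62
13:45:01 2 1 39 59
Average 3 2 44 51
EOF
```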
02-12-2003 06:36 AM
Re: write io performance problem
Your original stats show run queue occupancy consistently at 100%:
00:00:01 runq-sz %runocc swpq-sz %swpocc
00:15:00 1.8 100 0.0 0
00:30:00 1.9 100 0.0 0
00:45:00 2.1 100 0.0 0
01:00:01 2.0 100 0.0 0
01:15:00 2.0 100 0.0 0
01:30:00 2.2 100 0.0 0
01:45:00 1.9 100 0.0 0
02:00:01 1.9 100 0.0 0
02:15:00 2.2 100 0.0 0
and some swapping going on. My take on this is that you need more CPU power and more memory.
Pete
02-12-2003 11:17 AM
Re: write io performance problem
There's lots you can do to increase your I/O to disk; but most will cost you some cash.
It looks like you're using JBOD (from your response times). Typical response times for newer disk arrays are 1-5 ms, not >20 ms, so upgrading your disk array would improve your speed dramatically.
Unfortunately, since you are doing a little swapping, and that run queue is full... as soon as you open up the disk bottleneck, it's probable that you will hit a processor and memory bottleneck.
What does your disk subsystem look like? JBOD? FC60? 12H? FC or SCSI? Perhaps we can make some recommendations for improving that.
Good luck,
Vince
02-13-2003 06:00 AM
Re: write io performance problem
Thanks for your input.
This is an old (1997) D380 with 1280 MB RAM and a 180 MHz CPU.
It holds 6 disks: 2 x 4 GB, 2 x 9 GB, 2 x 18 GB.
The 18 GB disks are my Oracle disks; these are the fastest disks. All are RAID 1 (see ioscan.txt attached). These are also the c0t3d0 disks which I see as the write I/O bottleneck in sar -d.
I am playing with dbc_min_pct/dbc_max_pct.
I calculated that the min value had to be 5.
For the minimum cache size in Mbytes, use the following formula: (number of system processes) * (largest file-system block size) / 1024.
To determine the value for dbc_min_pct, divide the result by the number of Mbytes of physical memory installed in the computer and multiply by 100 to obtain the value in percent.
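A worked example of that formula, using hypothetical inputs (say 1000 system processes and a 64 KB largest file-system block size) against the 1280 MB of RAM in this machine:

```shell
# dbc_min_pct worked example. nproc and blk_kb are hypothetical figures;
# ram_mb is the 1280 MB from this thread.
nproc=1000; blk_kb=64; ram_mb=1280
awk -v n="$nproc" -v b="$blk_kb" -v r="$ram_mb" 'BEGIN {
    min_mb = n * b / 1024          # minimum cache size in MB
    pct    = min_mb / r * 100      # as a percentage of physical memory
    printf "min cache = %.1f MB -> dbc_min_pct ~ %.0f\n", min_mb, pct
}'
```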
What should I do with dbc_max_pct?
I know the results, if any, are marginal but I'd like to keep this machine in the best performance mode possible.
Ta,
F
02-13-2003 07:50 AM
Re: write io performance problem
The more you spread the DB's I/O load over more drives, the faster it will get.
Depending on your disk utilization and LVM configuration, this may be very easy to do; but it can also mean a full backup and restore of all the application data.
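For illustration, striping a new logical volume across several disks with HP-UX LVM might look like the following sketch (the volume group name, disk count, stripe size, and volume name are all hypothetical):

```shell
# Hypothetical example: create a 4096 MB logical volume striped across
# 4 physical volumes with a 64 KB stripe size, to spread the I/O load.
lvcreate -i 4 -I 64 -L 4096 -n lvol_oradata /dev/vg01
```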
Post back if you need more direction.
Good luck,
Vince
02-13-2003 08:34 AM
Re: write io performance problem
True, almost every query will cause deactivations and page-outs for many, many programs, but at least you don't have to upgrade the system. In other words, there is nothing the OS or the disk design can do for an overloaded machine.
You need 8 to 12 GB of RAM, 8 to 16 processors, and Fibre Channel connections to a large disk array with gigabytes of cache, and then your 24 instances of Oracle (which must be upgraded to 64-bit versions) will perform as expected. Since the D-class computer can't be upgraded to these kinds of features, it's up to management to decide how much the wait time is worth: having dozens of users wait for an underpowered machine every day, versus a current-technology machine (that will cost less than the original D-class, yet be 10x faster). That is the question.
The OS and I/O changes that might be made can't fix a bad design. And running 24 instances on a single machine without any backup (i.e., a clustered ServiceGuard configuration) is a massive risk. A single failure in the D-class takes everything down. Two or three L1000 systems, each with 4 GB of RAM, plus a current-technology disk array with ServiceGuard, is a very cost-effective and reliable solution. Performance will be unbelievable.
Bill Hassell, sysadmin
02-13-2003 08:55 AM
Re: write io performance problem
I know, you are right. But management is thinking about a new system.
Also, I don't care much. The machine is my baby and I try to keep her in tip-top condition, even if the DBAs and management are using her as a mule.
Vincent (and of course Bill),
I am still considering my dbc_max_pct/dbc_min_pct settings.
Should I set them equal, so that the buffer cache isn't dynamic?
Should I set it to 250 MB, i.e. something like 20%?
Should I set it to 80% as was suggested earlier in this thread?
Should I consider playing with the "swapmem_on" parameter?
Thx again!
F
02-17-2003 02:19 AM
Re: write io performance problem
For your database filesystems I would try a different approach:
You actually want Oracle to be doing the caching (in the SGA), as Oracle knows best how its data is stored and accessed. In comparison, the OS can only do relatively 'dumb' caching.
Using the buffer cache for your database files only means a lot of extra memory-to-memory copying, and the larger your buffer cache is, the more overhead HP-UX has managing it.
You should mount your database filesystems with -o mincache=direct (I believe this requires OnlineJFS) and keep your buffer cache to a minimum, leaving more memory available for the Oracle SGA.
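A sketch of what such a mount could look like (the volume and mount point are hypothetical; convosync=direct is the write-side companion option often paired with mincache=direct, and both require OnlineJFS):

```shell
# Hypothetical example: mount an Oracle data filesystem with direct I/O
# semantics, bypassing the HP-UX buffer cache (requires OnlineJFS).
mount -F vxfs -o mincache=direct,convosync=direct \
    /dev/vg01/lvol_oradata /oracle/data
```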
Hth,
Stanley.