Operating System - HP-UX

Re: adding memory to alleviate disk bottleneck

 
SOLVED
Larry Basford
Regular Advisor

adding memory to alleviate disk bottleneck

I have 9 servers that are (EMC) disk I/O bound,
with high CPU time waiting on I/O.
Not unusual for our application on UNIVERSE at end of month.

Management wants to throw memory at the problem. How do I take advantage of the extra memory? Add it all to buffer cache?

I have 4GB now, going to 16GB.

Total VM: 972.9mb   Sys Mem: 856.7mb   User Mem: 768.8mb   Phys Mem: 4.00gb

Disaster recovery? Right!
Victor BERRIDGE
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Hi,
Yes, I would add some memory, but I wonder whether reducing your buffer cache now wouldn't already help, since "at end of month" probably means big batch jobs. Bring it down to a more reasonable size, say 500 MB, to start with and give it a try. Then there are the JFS tuning options, and why not striping?

All the best
Victor
Pete Randall
Outstanding Contributor

Re: adding memory to alleviate disk bottleneck

Larry,

Before throwing money at a very expensive solution, I would want to be very sure that it would help. Do you have Glance available? If so, check the buffer cache statistics in the Reports > System Info > System Tables report.

If you do go for more memory, you might want to increase buffer cache gradually and monitor it the same way to see how well it is being utilized.
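If Glance isn't handy on every box, sar can give a rough view of the same thing; a minimal sketch (the interval and count are arbitrary):

sar -b 5 12    # %rcache and %wcache are the read and write buffer cache hit rates

Hit rates sitting well above 90% suggest the cache is already big enough; a low %rcache under load is a hint that growing it may help.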


Pete
harry d brown jr
Honored Contributor
Solution

Re: adding memory to alleviate disk bottleneck

You are not going to be using dynamic buffer caching if you have bufpages = 204800; the dbc_min_pct/dbc_max_pct values are then ignored.

I'd change bufpages to 0 (and nbuf to 0) UNLESS the vendor has explicitly required a fixed bufpages= setting, which basically turns off dynamic buffer caching.
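A minimal sketch of that change on 11.x with kmtune (the dbc percentages below are assumptions, not recommendations; check the Universe vendor's requirements first, and a kernel rebuild plus reboot is needed afterwards):

kmtune -s nbuf=0
kmtune -s bufpages=0
kmtune -s dbc_min_pct=5     # assumed floor for the dynamic cache
kmtune -s dbc_max_pct=10    # assumed ceiling; tune to what you actually see
mk_kernel                   # then kmupdate and reboot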

What kind of EMC disk array do you have and what kind of EMC monitoring/managing tools do you have available?

live free or die
harry d brown jr
Jeff Schussele
Honored Contributor

Re: adding memory to alleviate disk bottleneck

This is a *typical* management response - Let's throw money at the problem and *hope* it goes away.
You REALLY need to use glance/gpm to look at the problem because high wio% is *frequently* due to crappy coding.
First you need to look at what the *rest* of the CPU usage is - system OR user.
If it's user - THEN you might just need more horsepower - NOT memory. IF it's system then the detective work needs to be done & this *will* take some work & time.
You need to look at the following MeasureWare (OVPA) metrics:

GBL_PRI_QUEUE
GBL_RUN_QUEUE
GBL_CPU_INTERRUPT_UTIL
GBL_CPU_CSWITCH_UTIL
GBL_CPU_SYSCALL_UTIL
PROC_CPU_CSWITCH_UTIL
PROC_CPU_SYSCALL_UTIL
PROC_CPU_SYS_MODE_UTIL

These - and others - can clue you in to whether more RAM is going to help.
And I *seriously* think that unless you're paging out NOW - it will.
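If MeasureWare/OVPA isn't collecting on all nine boxes, plain sar gives a rough first cut at the same questions (a sketch; interval and count are arbitrary):

sar -u 5 12    # %usr vs %sys vs %wio vs %idle
sar -q 5 12    # run queue length and occupancy
sar -w 5 12    # swapping and process switch rates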

My 2 cents,
Jeff

P.S. NON-IT mgmnt should *never* make expensive technical decisions BY THEMSELVES.
Sheesh - that's WHY they hire the techs in the FIRST place!
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
B. Hulst
Trusted Contributor

Re: adding memory to alleviate disk bottleneck

Hi,

You mean adding memory in the EMC disk array
or in the server(s)?

In the server you can allocate more memory cache and buffers to the database for example.

In the EMC you can already change the ratio of the read and write cache sizes now, with the 4GB you have.

But a little analysis before you add the memory can't hurt.

It would be good to have statistics for the 4GB situation, then run the statistics again with 16GB and see if it really helps...

br,
B
TwoProc
Honored Contributor

Re: adding memory to alleviate disk bottleneck

What kind of database is it?
We are the people our parents warned us about --Jimmy Buffett
Steven E. Protter
Exalted Contributor

Re: adding memory to alleviate disk bottleneck

More memory never hurts.

Your database can allocate a larger cache and have more reads come out of memory instead of disk.

It's also important to see how the I/O is spread across the disks.

If you have a particular disk with lots of I/O on it, look at what sits on it. If there is a way to re-arrange things to balance the I/O, that's a good idea.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Larry Basford
Regular Advisor

Re: adding memory to alleviate disk bottleneck

MORE INFO:
EMC 8530, 96 drives, 6,551.19GB
4GB cache

UNIVERSE database

N4000 servers, 4 x 440MHz CPUs, 4GB memory

Buffer cache: 800MB, fixed
%rcache above 90%

Many database selects for reports are the cause of the high I/O
along with some EMC disk contention.


Disaster recovery? Right!
Larry Basford
Regular Advisor

Re: adding memory to alleviate disk bottleneck

The EMC disks cannot go any faster.
They are 80GB striped metas,
10 spindles in each filesystem.
Disaster recovery? Right!
Jeff Schussele
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Of course I MEANT...

will NOT.

Ooooops,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Vincent Fleming
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Everyone here is correct in that you need to determine what the bottleneck is before deciding on a course of action to correct the problem.

Your high wio% can be caused by many things...

Here's some tips:

Look at your FC or SCSI (however you're connecting to the EMC) bus utilization. If utilization is high, add some more busses.

Note that HP-UX likes a lot of LUNs. You have 80GB Metas, but how many? 1 per FS? If so, you would be better off not using MetaVolumes, and presenting several LUNs to HP-UX instead, letting LVM do the striping. The reason for this is that HP-UX does a better job managing the I/O queues when it has more queues to manage.
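If you go that way, a minimal sketch of LVM striping across several LUNs (the volume group name, LUN count, stripe size and LV size are all made up; adjust to your layout):

lvcreate -i 4 -I 64 -L 20480 -n lvdata01 /dev/vgemc    # 4-way stripe, 64KB stripe size, 20GB
newfs -F vxfs -o largefiles /dev/vgemc/rlvdata01       # put a VxFS filesystem on it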

Take a close look at the LVM layout and MetaVolume layout of the EMC. If you place more than one FS on the same set of disks, they could be causing a lot of contention and thrashing the heads on those drives. Separate your most heavily used FS's onto different sets of disks.

Avoid mixing logs (sequential access) and tablespaces (random I/O) on the same set of disks. If you have to combine them, put only tablespaces that aren't used heavily with the logs.

Adding that memory to the database's SGA will likely (not guaranteed) lower the amount of disk I/O the database needs to do. This should at least help a little. Note that it typically doesn't solve problems like this, but it can help. Some database I/O needs to be synchronous (such as log writes), so it bypasses any SGA caching anyway...

If you have some performance monitoring software for the EMC, go and use it to make sure you're not overloading an internal bus or cpu, or causing a hot-spot on some of the drives.

I hope this helps,

Good Luck,

Vince

No matter where you go, there you are.
Bill Hassell
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Unfortunately, Universe is a VERY old database design and it does not have options to make use of more memory in each instance of the program. HOWEVER, you can make massive improvements in performance by increasing the Universe config parameter called MBUFS. If it is 100, make it 300 or 500, even 1000. But before you restart Universe apps, you must compute the increase needed for nfiles and maxfiles in the kernel.

Universe uses hundreds to thousands of files, depending on the database design. Since it was designed during the days when 128 megs of RAM = really big, the MBUFS value would control the number of file handles that could be opened at the same time inside the program. Then, as other files were needed, less-used files would be closed and new files opened. You probably notice that the system overhead is fairly high (20%-40%). That's because the files are being opened and closed hundreds of times per second. You can verify this with:

sar -a 2 5

This will produce a report of directory operations (lookup filenames, directory blocks read, etc). Normal numbers might be single and double digits, while a busy Universe system might have 4 digit numbers and higher (thousands). To reduce these numbers, you need to increase MBUFS dramatically.

So to compute the needed changes to the kernel, make maxfiles (number of simultaneously open files per process) at least MBUFS + 50. So if MBUFS is 500, then maxfiles should be 550 or higher. maxfiles is just a runaway program protection so you can set it to 1000 and forget it if you want--no extra memory is used.

Then you must increase nfile using the formula:

NUMPROC = maximum number of Universe processes
nfile = MBUFS * NUMPROC + NUMPROC * 50

In other words, the maximum number of Universe processes (NUMPROC) times the maximum number of files opened at the same time in each process plus about 50 files that are always opened in each process.

Don't be alarmed at the size of nfile. If you have a Universe license for 500 users (really, 500 instances of Universe at the same time) and you make MBUFS=500, then nfile must be at least 500 * 500 + 500 * 50, or 275,000. Don't worry, HP-UX can scale up to several million files opened at the same time. You may need to add just another 4GB (8GB total).
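A quick back-of-the-envelope check of the same formula in the shell (MBUFS and NUMPROC here are just the example numbers above):

MBUFS=500; NUMPROC=500
echo "maxfiles >= $(( MBUFS + 50 ))"                        # 550
echo "nfile    >= $(( MBUFS * NUMPROC + NUMPROC * 50 ))"    # 275000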

NOTE: The buffer cache has an asymptotic performance curve. 200 megs is way better than 100 megs, 500 megs improves things a bit more, 1000 megs shows little improvement, and beyond 1000 megs you will see essentially no further gain. Set the maximum DBC % to the equivalent of 500-700 megs and you'll be at the top of the performance improvement curve.
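To translate those megabytes into a dbc_max_pct value, a rough sketch with integer shell arithmetic (numbers rounded down):

echo $(( 100 * 700 / 4096 ))     # about 17% of the current 4GB
echo $(( 100 * 700 / 16384 ))    # about 4% of the planned 16GB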


Bill Hassell, sysadmin
Ted Buis
Honored Contributor

Re: adding memory to alleviate disk bottleneck

No one mentioned the possibility of increasing the queue depth. Striping across multiple HBAs will help this, but you can often improve further by using an ioctl command. I don't mean to dilute the message that you should measure first and understand the bottleneck before experimenting with possible cures, but there are many potential cures because there are many potential problems. You can pay HP or others to do a performance analysis if you don't want to do it yourself. Also, if you solve the I/O bottleneck, it sounds like the problem will shift to CPU. What is this with less than a GB of total VM when physical memory is 4GB? Are you even using all you have now?
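One common way to inspect and raise the per-LUN queue depth on HP-UX is scsictl (a sketch; the device file below is made up, find yours with ioscan -fnC disk, and the setting does not survive a reboot):

scsictl -m queue_depth /dev/rdsk/c4t0d1        # show the current queue depth
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1     # raise it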
What OS? What are your kernel parameters?
swapinfo -tam?
Mom 6
Larry Basford
Regular Advisor

Re: adding memory to alleviate disk bottleneck

We installed an extra 12GB of memory.
The kmtune output was attached to the first message.
We have an EMC 8530, dual path with PowerPath, and sar data occasionally shows up to 50,000 I/Os per second (they are maxed).
There is no fixing that except with a new EMC D2000P, which we will be getting next month.
Is there any way to use this extra 12GB of memory? More processors?
Tunable parameters?

I never found the MBUFS tunable in UNIVERSE, but:

nfile 149984 - (320*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY+NSTRTEL))

swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol2
reserve       -    1506   -1506
memory    12758     488   12270    4%
total     16854    1994   14860   12%       -       0    -
Disaster recovery? Right!
Bill Hassell
Honored Contributor

Re: adding memory to alleviate disk bottleneck

You'll definitely need to find this variable. Universe won't use any of the extra memory and you won't see any performance gains. It might also be called MBUF, but the parameter is in the Universe configuration file and controls the maximum number of open files per instance of the program. The Universe docs will point out the value name. nfile in the kernel is large because you have a formula based on maxusers, and maxusers is probably set to several hundred. Again, none of the kernel parameters will provide any significant improvement in speed. You need to reduce the directory and file open/close activity.


Bill Hassell, sysadmin
B. Hulst
Trusted Contributor

Re: adding memory to alleviate disk bottleneck

Hi,

If you want to use the 12GB extra memory then set the nbuf value, regardless of the application. ;-)

(from the man pages)
nbuf:
The number of file-system buffer cache buffer headers. If both nbuf and bufpages are set to 0, the kernel allocates ten percent of available memory to buffer space. If only nbuf is 0, it will be computed from bufpages, assuming 4096 bytes per buffer. If both variables are non-zero, the kernel attempts to adhere to both requests, but if necessary, nbuf is changed to correspond to bufpages.

Regards,
Bob
Ted Buis
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Someone might check me on this, but it appears to me from your swapinfo output that you have 4GB of device swap space. However, with 16GB of RAM you need to enable pseudo-swap, which appears to be off (swapmem_on=0 now); it should be 1 for pseudo-swap to be enabled. I don't think the system can use all the RAM unless you have pseudo-swap enabled or create more physical device swap.
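Checking and changing it is quick (a sketch; the change needs a kernel rebuild and reboot to take effect):

kmtune | grep swapmem_on    # see the current value
kmtune -s swapmem_on=1      # enable pseudo-swap, then mk_kernel, kmupdate and reboot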
Mom 6
Larry Basford
Regular Advisor

Re: adding memory to alleviate disk bottleneck

Thanks Ted,
Yes, I did set swapmem_on to 1 after adding the RAM (16GB total).

Glad to know you are checking out my config that close.

The only other thing I think might help: the system is an N4000 and we added 2 carriers, which increases the memory bandwidth.
Disaster recovery? Right!
Ted Buis
Honored Contributor

Re: adding memory to alleviate disk bottleneck

You need 4 carriers total for maximum bandwidth to memory, which matters most if you have more than 4 CPUs, ideally with memory spread out symmetrically across the carriers. Some would argue that buffer cache is the best approach, others might suggest a RAM disk for tmp space. You might want to review this web page. I have not tried any of them.

http://www.unixguide.net/hp/faq/5.3.2.shtml
Mom 6
Bill Hassell
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Just a note about the kernel parameter nbuf: read the Help On Context for nbuf and also for bufpages. You'll see that the interaction between these two parameters is extremely complicated. Always leave nbuf=0. Then use the bufpages kernel parameter to set the size of the buffer cache. If bufpages=0, then the dynamic buffer cache will be used. In your case, with so much memory the maximum dynamic buffer cache percentage will be used at all times. If you want to fix the cache at a particular size, set bufpages to the number of memory pages (4k each) to equal about 600-800 megs.
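A quick conversion between megabytes and 4KB pages for bufpages (a sketch; integer shell arithmetic):

echo $(( 700 * 1024 / 4 ))       # 179200 pages is roughly 700 megs
echo $(( 204800 * 4 / 1024 ))    # 800 megs -- what the current fixed bufpages=204800 gives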


Bill Hassell, sysadmin
Steve Lewis
Honored Contributor

Re: adding memory to alleviate disk bottleneck

I suspect that your database code is 32-bit, in which case any shared memory use will be limited to a system-wide 1GB regardless of how much RAM you installed and how many shmmni you have. Hopefully it is 64-bit.
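A quick way to check is the file command (a sketch; the path to the Universe executable is a guess, substitute your own):

file /usr/uv/bin/uv    # 64-bit binaries report ELF-64; 32-bit ones do not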

With 16GB of memory I would gradually increase dbc_max_pct, first up to 1600MB, then more and more until the performance stops getting better. That is on 11i; on 11.00 you won't get better performance with larger values.
I really recommend that you go to 11i if you aren't already on it, as the filesystem and buffer management is much better.

On the down-side, end-of-month processing traditionally does full-table scans, possibly several times over, which can wipe out a lot of the caching benefits of large buffers. Vincent's posting may become more relevant with that in mind. Check your avwait/avserv/busy values in sar -d 5 12 or use Glance.

Are you still using only HFS filesystems? I would recommend going to vxfs for performance and read the excellent filesystem and hp tuning guides referenced here:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=712921

This has some pertinent points regarding your values of ninode and vxfs_ninode.

Ted Buis
Honored Contributor

Re: adding memory to alleviate disk bottleneck

Steve raises a good point. Memory Windows were implemented in 11.0 so that in some cases 32-bit applications could work around some of those limitations. There was a white paper at one time at docs.hp.com.
Mom 6