System Administration

Buffer and Filecache 11.31 IA

Stefan Huber
Frequent Advisor

Buffer and Filecache 11.31 IA

Hi

I have a BL860 with 16 GB of memory.
I found out via glance that 2.9 GB are used for the buffer cache and 3.0 GB for the file cache.
So I tried to lower filecache_max to 10%, but it's still using that much cache...
The server is connected to an EVA, and it seems the kernel needs a bit of fine-tuning.
A very old COBOL application that uses flat files is running on the box.
Any ideas?
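
For reference, this is how I checked and lowered the tunable with kctune (a sketch; filecache_max is dynamic, so no reboot should be needed):

  kctune filecache_min       # current floor for the file cache
  kctune filecache_max       # current ceiling (default is 50% of RAM)
  kctune filecache_max=10%   # lower the ceiling to 10% of RAM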


I'm from Switzerland, but somehow ended up in Winnipeg
12 REPLIES
RickT_1
Valued Contributor

Re: Buffer and Filecache 11.31 IA

Viktor Balogh
Honored Contributor

Re: Buffer and Filecache 11.31 IA

Hi Stefan,

> So I tried to lower filecache_max to 10%, but it's still using that much cache...

The system only lowers the size of the file cache if it finds another purpose for that memory. It won't be deallocated until it is needed for something else.
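
You can see the difference between the setting and the actual usage (a sketch, assuming Glance is available):

  kctune filecache_max   # reports the new ceiling immediately
  # the actual file cache usage is what glance shows in its memory report,
  # and that number only drops once the memory is claimed by something else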

Regards,
Viktor
****
Unix operates with beer.
Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

So there is no need to tune the kernel?
I'm from Switzerland, but somehow ended up in Winnipeg
RickT_1
Valued Contributor

Re: Buffer and Filecache 11.31 IA

Stefan,

From what I read, I would leave it as you have it. Then, if the system needs the memory for something else, it can grab back the half that you said it could have.

Rick
Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

Hmmm... that's what I thought too.
Somehow I am running out of memory sooner or later. This application uses that much file and buffer cache; it is now using 95% of memory. And I am thinking of mounting the volume with the direct I/O mount option.
This is a very, very old application, and it does seeks, selects, reads and writes.
So within roughly 2.5 minutes there are:
2.1 million reads
1.6 million writes
700k open/close
1.5 million selects

I still believe that the system can be tuned.
I mean especially at the VxFS level (probably).

I'm from Switzerland, but somehow ended up in Winnipeg
RickT_1
Valued Contributor

Re: Buffer and Filecache 11.31 IA

Stefan,

Which OS are you running on your blade?
Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

Oh I completely forgot:

It's 11.31 on IA.
I'm from Switzerland, but somehow ended up in Winnipeg
Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

And I've read most of the guides on performance tuning and so on, but I should mention: I never had an 11.31 refresher training, so I don't really know what the big differences are...
I'm from Switzerland, but somehow ended up in Winnipeg
Hein van den Heuvel
Honored Contributor

Re: Buffer and Filecache 11.31 IA



>> A very old COBOL application that uses flat files is running on the box.

How big are the flat files? Any chance that the cache can hold them completely?


>> And I am thinking of mounting the volume with the direct I/O mount option.

I suspect that the application makes intense use of the buffer cache, and if direct I/O were enabled, it would slow down to a crawl.
But you probably cannot enable direct I/O anyway:

http://docs.hp.com/en/B2355-90684/vxfsio.7.html
"VX_DIRECT
Indicates that data associated with read and write operations is to be transferred directly to or from the user supplied buffer, without being cached. When this options is enabled, all I/O operations must begin on block boundaries and must be a multiple of the block size in length. The buffer supplied with the I/O operations must be aligned to a page boundary."
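
If you did want to experiment anyway, direct I/O can also be requested per mount via the OnlineJFS mount options rather than per file (a sketch; the device and mount point are illustrative):

  mount -F vxfs -o mincache=direct,convosync=direct /dev/vg01/lvol1 /data

The alignment constraints above still matter, and given your I/O pattern this would most likely hurt rather than help.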


>> So within roughly 2.5 minutes there are:
:
>> 700k open/close

I suspect that if you really want to make an impact, you will have to stop the open/close behaviour and keep the files open all the time, or at least across a larger number of operations: 'a batch of work'.
Of course, that would be an application change and not a quick kernel tune. Application changes might not be feasible and might take a long time to get implemented and tested.
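
In shell terms, just to illustrate the idea (the real change would of course be in the COBOL code; the file name is made up):

  exec 3< /data/master.dat   # open once, before the batch
  # ... run the whole batch of operations against descriptor 3 ...
  exec 3<&-                  # close once, after the batch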

>> 2.1 million reads
>> 1.6 million writes
That is 14,000 reads per second (2.1 million over 150 seconds).
What is the disk I/O rate during all of this?
I suspect this will tell you that the cache is doing its job, not just double-buffering and slowing things down.
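
For example with sar (a sketch; interval and sample count are arbitrary):

  sar -d 5 30    # per-device I/O rates, 30 samples at 5-second intervals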


Hope this helps some,
Hein van den Heuvel
Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

The biggest flat files are around 4 GB.
The whole lvol (on an EVA) is around 60 GB, and there are around 400 flat files.
The disk I/O rate is around 300/sec.

I've attached some performance stats...
My glance version is A.05.00; I think that should be adequate.

I assume that adding more memory to the box would help a lot.

I'm from Switzerland, but somehow ended up in Winnipeg
Dennis Handly
Acclaimed Contributor

Re: Buffer and Filecache 11.31 IA

>I found out via glance that 2.9 GB are used for the buffer cache and 3.0 GB for the file cache.

There is something seriously wrong if the buffer cache is that large on 11.31. There it should be vestigial and limited to a few cases of reading LVM metadata. The large size should be in the filecache.

>I tried to lower filecache_max to 10%, but it's still using that much cache.

It may take a while for the cache to be flushed to 10%.

>700k open/close

Wow!

>Oh I completely forgot:

(Not really, it's in the subject above. :-)

>Viktor: The system only lowers the size of the file cache if it finds another purpose for that memory.

I would assume that if you order it to lower the size with filecache_max, it will eventually do so.


Stefan Huber
Frequent Advisor

Re: Buffer and Filecache 11.31 IA

Yeah, in the beginning filecache_max was set to the default, so I lowered it to 20%. But somehow it doesn't help.
At night the whole application is stopped for backups, but even then the cache doesn't get freed up.
So I attached another file with the kernel settings; maybe somebody can have a quick look at it?
The application does small I/Os, but a lot of them.
In LVM I saw that the PE size is 4 MB. And:
The filesystem is filled up to 94%; maybe that explains why the inode cache doesn't get cleaned up.
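
A sketch of what could be checked on the VxFS side (assuming VxFS 4.1 or later; the mount point is illustrative):

  vxfsstat -i /data    # VxFS inode cache statistics
  kctune vx_ninode     # inode cache sizing tunable (0 means auto-sized)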

I'm from Switzerland, but somehow ended up in Winnipeg