
cachefs

 
SupraTeam_1
Regular Advisor

cachefs

Hi,
I have an HP-UX 11i workstation that uses CacheFS over NFS to access data on a "code share server".

I want to improve performance: the times I get are much worse than with a local file system.
Do you have any ideas?

Thanks
AwadheshPandey
Honored Contributor

Re: cachefs

This will help you:
http://docs.hp.com/en/5992-0715/ch03s03.html

Regards,

Awadhesh
It's kind of fun to do the impossible
SupraTeam_1
Regular Advisor

Re: cachefs

No, I already know that document, but nothing in it improves the performance on my workstation.
A. Clay Stephenson
Acclaimed Contributor

Re: cachefs

Have you asserted the rpages option in your cachefs mount? I would also use the local-access option; and from "code share server" I can probably infer that the contents of the NFS-exported directory change very infrequently, so you should also be able to safely bump up the actimeo and related values.

You should note that cachefs only helps on subsequent loads of the same executable; it can do nothing for the first time the code is loaded. Another thing to be aware of is shared libraries. Not only should your executables use the cachefs, but the shared libraries used by those executables should also come from the cachefs (or a local filesystem), which means that SHLIB_PATH needs to reflect the cachefs mount point.
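
A minimal sketch of what such a mount might look like (the cache directory, server name, mount points, and library path below are made-up examples; check "man cfsadmin" and "man mount_cachefs" on your box for the exact syntax your patch level supports):

# create the cache directory once
cfsadmin -c /var/cachefs/cache1

# mount the NFS export through CacheFS with rpages, local-access,
# and a long attribute cache timeout (600 seconds)
mount -F cachefs -o backfstype=nfs,cachedir=/var/cachefs/cache1,rpages,local-access,actimeo=600 codeserver:/export/code /opt/code

# make sure shared libraries are resolved through the cachefs mount point
export SHLIB_PATH=/opt/code/lib:$SHLIB_PATH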
If it ain't broke, I can fix that.
SupraTeam_1
Regular Advisor

Re: cachefs

Thanks for your answer, I will check which libraries are linked...

But can you explain how to assert the rpages option and use the local-access option?
A. Clay Stephenson
Acclaimed Contributor

Re: cachefs

I can but it would be much better if you simply do a "man mount_cachefs" where all the -o options are explained.
If it ain't broke, I can fix that.
SupraTeam_1
Regular Advisor

Re: cachefs

I searched but found nothing in the mount_cachefs manual about local-access and rpages.
A. Clay Stephenson
Acclaimed Contributor

Re: cachefs

Well, I checked 2 11.11 boxes and an 11.23 box and my mount_cachefs man pages describe the rpages option. "rpages If specified when mounting a CacheFS file system, a binary will be read and populated in the cache the first time it is loaded. Subsequent access to the binary will be satisfied from the cache.".

Because you do not have this man page version, it suggests that you have not applied patch bundles very often. I suspect that if the rpages option is not listed in the man page then the option will not be recognized. In any event, you should probably apply a recent QPK bundle.
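
As a quick sanity check, you can see what is already installed and whether the man page on your box documents the option (standard SD-UX and man commands; the grep patterns below are only examples):

# list installed NFS and kernel patches
swlist -l product | grep -E 'PHNE|PHKL'

# see whether the installed mount_cachefs man page mentions rpages
man mount_cachefs | col -b | grep -i rpages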
If it ain't broke, I can fix that.
SupraTeam_1
Regular Advisor

Re: cachefs

I will check this and report back.
SupraTeam_1
Regular Advisor

Re: cachefs

Hi,
I checked this but things are no better...
SupraTeam_1
Regular Advisor

Re: cachefs

I updated my system with these patches:

s700_800 11.11 mountall, Dev IDs enabler, iSCSI support PHCO_33205

s700_800 11.11 Buffer cache performance improvement patch PHKL_34926

s700_800 11.11 ONC/NFS General Release/Performance Patch PHNE_32477

I got a better result but it's still not enough for what I am doing...
A. Clay Stephenson
Acclaimed Contributor

Re: cachefs

I am inferring from your term "code share server" that you are using NFS for the executables. Where are the data that these executables act upon stored? Local or NFS? It may well be that your real problem is the data files, and possibly lock contention.

I would divide your problem into separate pieces so that it becomes possible to analyze it better.
1) If the code (including shared libraries) is local, how is the performance?
2) If the code is NFS and the data are local how is the performance?
3) If both are local, how is the performance?
4) If both are NFS, how is the performance?
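
A rough way to put numbers on each of those cases (the program and data paths below are placeholders for your own application):

# run the same job against each combination of code/data location and
# compare the real, user, and sys times reported by timex
timex /opt/code/bin/myapp /local/data/input.dat
timex /opt/code/bin/myapp /nfs/data/input.dat
timex /local/copy/bin/myapp /local/data/input.dat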

Ideally, you would profile your code to see where the actual bottlenecks are. It might also help to describe your application; I assume it is some sort of analysis. Finite Element Analysis? Computational Dynamics? Statistics? Distributed Computing?
If it ain't broke, I can fix that.