Operating System - HP-UX

Resource allocation @ SRD deploy

 
prakash_22
Occasional Advisor

Hello All,

I have two rx7620 machines, each with 1 nPar and 4 vPars. The scenario is that the two machines should get additional cores under the TiCAP license when there is extra resource demand. I'm using gWLM to achieve that, and everything works fine.

Then I planned to upgrade the machines and software. Now I have one rx8640 machine, partitioned into 2 nPars with 4 vPars each. I am able to achieve the same scenario as before. The problem is that now, when I deploy the SRD, as soon as the deployment starts it allocates some resources to one machine and then returns the unnecessary ones after about 5 seconds. This didn't happen in the old setup, and I don't want this behaviour in the new one. I got the following log in the new machine's syslog.

Jul 5 02:10:19 ap01 vparmodify[21473]: user root: /usr/sbin/vparmodify -p ap01 cpu::7
Jul 5 02:10:20 ap01 vparmodify[21473]: exit status 0
Jul 5 02:10:20 ap01 vmunix: 0/124 processor
Jul 5 02:10:20 ap01 vmunix: 2/121 processor
Jul 5 02:10:20 ap01 vmunix: 2/122 processor
Jul 5 02:10:20 ap01 vmunix: 2/123 processor
Jul 5 02:10:20 ap01 vmunix: 2/124 processor
Jul 5 05:10:20 ap01 sfd[2247]: started 'insf' to create device special files for newly found devices.
Jul 5 05:10:20 ap01 sfd[2247]: execution of 'insf' completed.
Jul 5 02:10:28 ap01 vparmodify[21508]: user root: /usr/sbin/vparmodify -p ap01 -d cpu::5
Jul 5 02:10:28 ap01 vparmodify[21508]: exit status 0
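For what it's worth, the two vparmodify entries above pin down how long the extra cores were held. A quick sketch, using the log lines verbatim and assuming both events fall on the same day:

```python
from datetime import datetime

# The two vparmodify events, copied verbatim from the syslog excerpt above.
add_line = "Jul 5 02:10:19 ap01 vparmodify[21473]: user root: /usr/sbin/vparmodify -p ap01 cpu::7"
del_line = "Jul 5 02:10:28 ap01 vparmodify[21508]: user root: /usr/sbin/vparmodify -p ap01 -d cpu::5"

def stamp(line):
    # syslog omits the year; both events are on the same day, so the
    # default year is fine for taking a difference
    return datetime.strptime(" ".join(line.split()[:3]), "%b %d %H:%M:%S")

held_for = (stamp(del_line) - stamp(add_line)).total_seconds()
print(held_for)   # 9.0 -- seconds between adding and removing the cores
```

So the cores were active for roughly 9 seconds, which matches the "about 5 seconds" observation within syslog's one-second resolution.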

New machine configuration
gWLM - A.03.00.01.05
vPar - A.05.02
iCOD - B.11.31.08.02.00.127

Old Machine configuration
gWLM - A.02.50.00.04
vPar - A.04.02.10
iCOD - B.11.23.08.00.00.95

Please suggest any configuration change that would avoid this problem.

Thanks,
Prakash.A

Re: Resource allocation @ SRD deploy

What do you have the resource allocation interval set to for the SRD you are having trouble with?

You can check this on the gWLM management server using:

gwlm export --srd="..."

replacing "..." with the name of your shared resource domain.

Look for the "interval=" value; it's expressed in seconds. I wouldn't set it to less than 30 seconds for test, or less than 120 in production.
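If you'd rather script the check, something like this works; the attribute layout below is only a guess at what `gwlm export` emits, so adjust the pattern to your actual output:

```python
import re

# Hypothetical fragment of `gwlm export --srd=mySRD` output; the real
# format may differ -- the goal is just to locate the interval value.
exported = '<srd name="mySRD" interval="30">'

m = re.search(r'interval="(\d+)"', exported)
interval_s = int(m.group(1))
print(interval_s)   # seconds between gWLM allocation passes
```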

HTH

Duncan

I am an HPE Employee
Mark Criss
New Member

Re: Resource allocation @ SRD deploy

Prakash,
I do not understand what you mean by the statement "it tries allocate some resource's to one machine and return the un-necessary one's after say about 5 seconds." Do you mean it's allocating TiCAP cores?

We probably need more information: for example, your policy details, what core allocations you are seeing, and which of them you think are erroneous.

Without understanding the details, one thing that pops into my head is that you MUST have all of your TiCAP cores unassigned when you deploy the SRD. Otherwise, gWLM will assume they are licensed and continue to allocate them.

Thanks
Mark
prakash_22
Occasional Advisor

Re: Resource allocation @ SRD deploy

Hello,

Duncan - Yes, the resource allocation interval is 30 seconds.

Mark - I'll give a little more explanation of the setup. It involves 3 machines (x, y, z). The SRD was created such that x and y are allocated resources based on CPU utilization, and z has a fixed CPU allocation. The maximum number of CPUs a system can have is 5. Initially each machine has 2 cores, and there are 2 more cores under the TiCAP license.

During the test run, when the CPU utilization of x or y reaches 90%, one CPU from the other machine (y or x) is allocated to it. If that still doesn't resolve the shortage, the TiCAP cores are used.
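The policy described above can be summarized in a toy model. The core counts come from the post; the exact order in which gWLM decides between borrowing and TiCAP is my assumption:

```python
# Toy model of the policy: x and y borrow an idle core from each other
# first, then fall back to the 2 TiCAP cores. The counts come from the
# post; the precise gWLM decision order is a guess.
cores = {"x": 2, "y": 2, "z": 2}   # z has a fixed allocation
ticap_free = 2
MAX_PER_SYSTEM = 5

def demand_core(busy, donor):
    global ticap_free
    if cores[busy] >= MAX_PER_SYSTEM:
        return                      # per-system ceiling reached
    if cores[donor] > 1:            # peer can spare a core
        cores[donor] -= 1
        cores[busy] += 1
    elif ticap_free > 0:            # otherwise activate a TiCAP core
        ticap_free -= 1
        cores[busy] += 1

demand_core("x", "y")               # x hits 90% utilization, borrows from y
demand_core("x", "y")               # y can no longer donate, TiCAP kicks in
print(cores, ticap_free)            # {'x': 4, 'y': 1, 'z': 2} 1
```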

I am able to achieve the same behaviour. My only concern is that when I deploy the SRD, before the deployment completes, more than 7 CPU cores are allocated for about 5 seconds. We didn't have this case in the earlier setup (version information is given in my previous post); its syslog showed no such output, whereas the new setup produces the log entries given above.

I've attached the SRD and icapstatus output.

I hope I've explained the setup better; if anything is unclear, please let me know.

Please advise me on this.

Thanks,
Prakash.A
Bill Blanding
Occasional Advisor

Re: Resource allocation @ SRD deploy

Prakash -

If I'm understanding your setup correctly, you went from two SRDs both using TiCAP on two separate machines to two SRDs both using TiCAP on two different nPARs of the same machine. This is not a supported configuration. iCAP and TiCAP must be managed across all of the nPars of a machine, since it is the total number of active cores across all of the nPars which determines whether TiCAP is being consumed. It is also advantageous to have a single SRD in this case, since sharing of usage rights across nPars can reduce TiCAP consumption.

The spike in resource consumption which you noticed could just be normal gWLM operation. gWLM performs its initial resource allocation during deployment of the SRD. This first pass of allocation has only a short period of utilization measurement, so the initial utilizations may be inaccurate and thus affect the initial resource allocations. If this happens, it will be corrected during the next management interval. We could investigate this further if you sent me log files from all of the agents in the SRD, with the logging level set to FINEST.
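Bill's point about the short initial measurement window can be illustrated with a toy model; all the numbers and the stand-in policy below are made up for illustration:

```python
import math

# Made-up numbers illustrating the effect: the deploy-time measurement
# window is short, so the first utilization sample can read high; the next
# full interval sees the real load and the extra cores are handed back.
def cores_needed(utilization, owned=2, max_cores=5):
    # naive stand-in policy: one core per 40% of measured utilization,
    # never below the owned count, never above the system maximum
    return min(max_cores, max(owned, math.ceil(utilization / 0.4)))

first_sample = 0.95    # momentary spike seen during deployment
steady_state = 0.30    # load averaged over a full management interval

print(cores_needed(first_sample))   # 3: over-allocates on the short sample
print(cores_needed(steady_state))   # 2: corrected at the next interval
```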
prakash_22
Occasional Advisor

Re: Resource allocation @ SRD deploy

Hi,

Bill - No, I have only one SRD, which manages the resources of the 3 machines (2 by CPU utilization, 1 fixed). I've attached my gwlmagent log.

Thanks,
Prakash