<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: a nice enigma! in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730623#M836263</link>
    <description>Tim&lt;BR /&gt;&lt;BR /&gt;How about renicing the process to what it should be and monitoring it?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I use a script that picks up certain logins and nices them down, as their routines can cause load problems on my main server.&lt;BR /&gt;&lt;BR /&gt;I am sure that you can modify it to monitor the nice value of your process.&lt;BR /&gt;&lt;BR /&gt;--------------------------------------------&lt;BR /&gt;#!/bin/ksh&lt;BR /&gt;# Automatically nice down the ftpbbs universe routines&lt;BR /&gt;######################################################&lt;BR /&gt;# PJFC 2001&lt;BR /&gt;######################################################&lt;BR /&gt;# Get parent pids&lt;BR /&gt;######################################################&lt;BR /&gt;p=`who -u | grep ftpbbs | awk '{print $7, $15 }'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Separate each pid to a string&lt;BR /&gt;######################################################&lt;BR /&gt;a=`echo $p | awk '{print $1}'`&lt;BR /&gt;b=`echo $p | awk '{print $2}'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Pick up pid of universe process and nice value&lt;BR /&gt;######################################################&lt;BR /&gt;y=`ps -efl | grep $a | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $4}'` # PID&lt;BR /&gt;z=`ps -efl | grep $a | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $8}'` # Nice value&lt;BR /&gt;######################################################&lt;BR /&gt;# Check nice value&lt;BR /&gt;# If nice value = 20 then a restart has occurred, so nice it down&lt;BR /&gt;######################################################&lt;BR /&gt;if [ "$z" = 20 ]&lt;BR /&gt;then&lt;BR /&gt;renice -n 19 $y&lt;BR /&gt;fi&lt;BR /&gt;######################################################&lt;BR /&gt;# Do it all again for the other ftpbbs login&lt;BR /&gt;######################################################&lt;BR /&gt;w=`ps -efl | grep $b | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $4}'`&lt;BR /&gt;x=`ps -efl | grep $b | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $8}'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Check nice value&lt;BR /&gt;# If nice value = 20 then a restart has occurred, so nice it down&lt;BR /&gt;######################################################&lt;BR /&gt;if [ "$x" = 20 ]&lt;BR /&gt;then&lt;BR /&gt;renice -n 19 $w&lt;BR /&gt;fi&lt;BR /&gt;echo "Renice ran"&lt;BR /&gt;exit 0&lt;BR /&gt;&lt;BR /&gt;---------------------------------------------&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Paula</description>
    <pubDate>Fri, 24 May 2002 12:01:52 GMT</pubDate>
    <dc:creator>Paula J Frazer-Campbell</dc:creator>
    <dc:date>2002-05-24T12:01:52Z</dc:date>
    <item>
      <title>a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730605#M836245</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I have two computers which are configured "exactly" the same (you know what I mean).  However, when I do "top" I sometimes see that one is using lots of "nice" CPU &amp;amp; virtually no "user" CPU, while the other is reversed, namely lots of "user" &amp;amp; no "nice"!  It is not always consistent, which only adds to the puzzle.&lt;BR /&gt;&lt;BR /&gt;My first thought was that my processes were suffering from priority degradation, which will only get worse with time.  However, I thought "nice" &amp;amp; HP-UX priorities were separate entities - could be wrong here.&lt;BR /&gt;&lt;BR /&gt; o Can anyone set me straight on these issues?  Explain the issues at hand (in simple, low-syllable terms that management have a chance of understanding)?&lt;BR /&gt; o Is there a way of fixing the priorities of these processes (say 154 or something) or stopping them degrading with time?  (I cannot use rtprio or rtsched to give them a Real Time or POSIX priority [&amp;lt;127] as this will/may cause ServiceGuard failover at the busy periods!!! Trust me, I HAVE seen this before.)&lt;BR /&gt; o I do give all my advisers points; the more advice, the more points (check out my stats).&lt;BR /&gt;&lt;BR /&gt;Any takers?&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Thu, 23 May 2002 18:53:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730605#M836245</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-23T18:53:22Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730606#M836246</link>
      <description>Assuming that you mean the hardware is identical, are they running the same applications/databases?  What software is on there?  Have you checked that the kernel parameters are the same?&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;mark</description>
      <pubDate>Thu, 23 May 2002 19:12:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730606#M836246</guid>
      <dc:creator>Mark Greene_1</dc:creator>
      <dc:date>2002-05-23T19:12:30Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730607#M836247</link>
      <description>Hi Tim,&lt;BR /&gt;&lt;BR /&gt;I believe the nice column is only showing values that have been "nice-altered" or deviate up or down from the default (20).&lt;BR /&gt;I could be wrong here.&lt;BR /&gt;It could be as simple as a good number of users having started procs in the background, which imposes a nice hit of 5, I think.&lt;BR /&gt;Almost everything runs @ 20 by default except some of the logging daemons.&lt;BR /&gt;What do the actual process lines show - do you have a lot of procs that aren't at 20?&lt;BR /&gt;&lt;BR /&gt;I usually only pay attention to the user &amp;amp; system columns anyway - they tell the story.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff&lt;BR /&gt;</description>
      <pubDate>Thu, 23 May 2002 19:30:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730607#M836247</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2002-05-23T19:30:03Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730608#M836248</link>
      <description>Thanks for the replies.&lt;BR /&gt;&lt;BR /&gt;1 - The computers are built identically: filesystems, kernels, software, patches, storage, network, cards in slots with the same instance numbers - the whole shooting match.  There are occasionally "slight" differences, but this is normally administrator error.&lt;BR /&gt;2 - There are no "users" as such.  It runs an application that deals with phone call routing/processing/handling; the only people who fiddle are administrators.&lt;BR /&gt;3 - By way of an example, there is a "daemon" process called "pmd".  On one box it had a nice value of 20 &amp;amp; on the other 24.  These processes SHOULD (I will not discount admin error) start automatically using various "identical" configuration files.&lt;BR /&gt;4 - The only major variation is that the "services/daemons" (pmd etc.) get restarted at different times, so one machine/set of daemons could have been running for 7 days untouched, whereas the other may only have run for 1 day.&lt;BR /&gt;5 - There is a database (Informix), but this runs on its own server/computer &amp;amp; is connected to via the network.&lt;BR /&gt;&lt;BR /&gt;Basically I'm 70% sure there is some priority degradation going on, BUT I thought this had nothing to do with the nice value, as I believe/understand they are separate entities!  If I'm right, this implies that someone may be using "renice" on running processes, in which case I need to "re-educate" them urgently.  If I'm wrong, then I need to explain why processes seem to have a nice value of &amp;gt; 20, and hopefully fix it (if possible).&lt;BR /&gt;&lt;BR /&gt;Tim&lt;BR /&gt;&lt;BR /&gt;Any more suggestions?&lt;BR /&gt;</description>
      <pubDate>Thu, 23 May 2002 20:06:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730608#M836248</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-23T20:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730609#M836249</link>
      <description>The system will adjust a process's priority based on how long it has run, whether it has been waiting for a while, whether it has been waiting on I/O, etc.  It is difficult to say what could be causing the nice'ing of the process, but this could certainly be it.&lt;BR /&gt;&lt;BR /&gt;Also, you could have a processor failing, or some other type of bottleneck on the one system that you have not yet seen.  This could also cause contention for the processes.&lt;BR /&gt;&lt;BR /&gt;It's just a thought.&lt;BR /&gt;&lt;BR /&gt;Hope it helps&lt;BR /&gt;&lt;BR /&gt;John</description>
      <pubDate>Thu, 23 May 2002 20:12:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730609#M836249</guid>
      <dc:creator>John Payne_2</dc:creator>
      <dc:date>2002-05-23T20:12:12Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730610#M836250</link>
      <description>&amp;gt;&amp;gt;4 - The only majour variation is that the "services/daemon" (pmd etc) are/have/do get restarted at different times, so one machine/set of daemons could have been running for 7 days un-touched, whereas the other may only have run for 1 day. &amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;This, most likely, is your culprit.  Remember that the nice values are not intrinsic measurements of any one thing, they are relative values of processing time compared to the other processes on the system. Unless you are seeing other symptoms like swap issues or i/o binding, I wouldn't worry too much about it.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Mark</description>
      <pubDate>Thu, 23 May 2002 20:14:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730610#M836250</guid>
      <dc:creator>Mark Greene_1</dc:creator>
      <dc:date>2002-05-23T20:14:30Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730611#M836251</link>
      <description>Hi (again) Tim,&lt;BR /&gt;&lt;BR /&gt;Well, if an admin restarts the daemon from the command line &amp;amp; in the background using "&amp;amp;", it would be at a nice of 24.&lt;BR /&gt;I was incorrect - "&amp;amp;" imposes a nice hit of 4, not 5.&lt;BR /&gt;That would be my educated guess, &amp;amp; the solution would be to "instruct" the admins not to start/restart it using the executable but to run the startup script in /sbin/init.d ... hopefully it has one.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff</description>
      <pubDate>Thu, 23 May 2002 20:25:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730611#M836251</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2002-05-23T20:25:19Z</dc:date>
    </item>
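Jeff's "&amp;" explanation can be checked directly. A minimal sketch (not from the thread; assumes a POSIX shell with nice(1) and ps(1) available): start a child with an explicit +4 offset, which mimics the hit ksh's bgnice option gives background jobs, then read the value back with POSIX `ps -o nice=`. On HP-UX, ps reports nice as 20 plus the offset (so 24); on Linux it reports the raw offset.

```shell
# Sketch: reproduce the "+4 nice hit" and read it back.
# "nice -n 4" stands in for what ksh's bgnice does to background jobs.
n=$(nice -n 4 sh -c 'ps -o nice= -p $$' | tr -d ' ')
echo "child nice offset: $n"
```

On an HP-UX box the same check against `ps -efl` column 8 would show 24 rather than 4.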
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730612#M836252</link>
      <description>Hi again&lt;BR /&gt;&lt;BR /&gt;Many thanks for the responses... Here is some more info, attached in the nice.txt file.  I have supplied 3 things for each node:&lt;BR /&gt; o top, showing how the CPU is split&lt;BR /&gt; o glance, showing pmd (the "Daddy" daemon proc)&lt;BR /&gt; o glance, showing pmd (the "Daddy" daemon proc) cumulatively&lt;BR /&gt;&lt;BR /&gt;From your answers there seem to be two possibilities:&lt;BR /&gt; 1 - INITIATION METHOD: node 1 was started as a background process and node 2 was not.  As the nice value is inherited (I believe), this would explain the difference.&lt;BR /&gt; 2 - PRI &amp;amp; NICE DEGRADE: there is some priority degradation, which also degrades the nice value (which I did not think happened, but we live and learn).&lt;BR /&gt;&lt;BR /&gt;Unfortunately there is evidence for either, as:&lt;BR /&gt; o The "niced" node has had pmd running for over 7 weeks and the other has only been running for two weeks.&lt;BR /&gt; o A quick check by myself shows that all the procs I checked do indeed have a nice value of 24.  These are child procs of pmd.&lt;BR /&gt;&lt;BR /&gt;I will be digging a bit deeper....&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 24 May 2002 08:13:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730612#M836252</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-24T08:13:29Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730613#M836253</link>
      <description>&lt;BR /&gt;Hi Tim,&lt;BR /&gt;&lt;BR /&gt;No two servers are identical.  Just to do a quick check, is the output from&lt;BR /&gt;&lt;BR /&gt;swlist -l fileset | wc -l&lt;BR /&gt;&lt;BR /&gt;the same on both servers?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;Stefan</description>
      <pubDate>Fri, 24 May 2002 08:33:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730613#M836253</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2002-05-24T08:33:56Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730614#M836254</link>
      <description>I did check the patch levels, I was going to bet the farm on them being the same, but.... they are not.  Our quality standards are slipping.  &lt;BR /&gt;&lt;BR /&gt;I also awarded you 3 points, in retrospect this should be more (7), sorry... put a dummy reply in and I'll give you 4 more...&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 24 May 2002 09:23:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730614#M836254</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-24T09:23:41Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730615#M836255</link>
      <description>&lt;BR /&gt;Hi Tim,&lt;BR /&gt;&lt;BR /&gt;aha, so they do have different numbers of filesets (patches+software) installed. The only way to ensure the software install is identical is to start by ensuring the same number of installed filesets. Just curious - how many filesets different were they ?&lt;BR /&gt;</description>
      <pubDate>Fri, 24 May 2002 09:32:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730615#M836255</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2002-05-24T09:32:35Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730616#M836256</link>
      <description>sn1c --&amp;gt; 1361&lt;BR /&gt;sn2b --&amp;gt; 1734&lt;BR /&gt;&lt;BR /&gt;I have checked "patches", which is probably more important, and there are many differences.  I'm not wholly convinced of the patch stuff, but I will dig a bit deeper.&lt;BR /&gt;&lt;BR /&gt;On a slightly different tack, I looked at another cluster running similar (but different-version) software and found that despite the fact it had been running for some 7-8 weeks, the nice values are 20.&lt;BR /&gt;&lt;BR /&gt;My current favorite is the background-process theory, as ALL the processes that are fathered by pmd have a nice value of 24, even the ones with a priority of 0 (zero)...&lt;BR /&gt;&lt;BR /&gt;Any more thoughts, anyone? Generosity is my middle name....&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 24 May 2002 09:53:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730616#M836256</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-24T09:53:48Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730617#M836257</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;From your 'top' and 'glance' samples, pmd is only active on the 2nd node.  It may be interesting to know which processes on the 1st node are consuming CPU "nicely".&lt;BR /&gt;&lt;BR /&gt;Also, if the patches on the two nodes are different, that *may* be the cause.  Have you also checked with 'swlist -l fileset -a state' if all patches are configured?&lt;BR /&gt;&lt;BR /&gt;Mladen</description>
      <pubDate>Fri, 24 May 2002 11:07:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730617#M836257</guid>
      <dc:creator>Mladen Despic</dc:creator>
      <dc:date>2002-05-24T11:07:55Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730618#M836258</link>
      <description>Hi Tim,&lt;BR /&gt;&lt;BR /&gt;Nice values don't change over time.  They are set when a process starts or, indeed, inherited from the parent.&lt;BR /&gt;The thing that does change is priority (see top).  When (time-shared) processes run, they lose priority, and they regain priority as they wait their turn to run.  A process's nice value is used as a factor in calculating how fast it regains priority.&lt;BR /&gt;Priority queues:&lt;BR /&gt;-32 to -1 : Real time (POSIX)&lt;BR /&gt;0 - 127 : HP-UX real time (rtprio)&lt;BR /&gt;128 - 251 : Time-share procs&lt;BR /&gt;252 - 255 : Swapped processes&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;&lt;BR /&gt;Mark</description>
      <pubDate>Fri, 24 May 2002 11:27:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730618#M836258</guid>
      <dc:creator>Mark van Hassel</dc:creator>
      <dc:date>2002-05-24T11:27:48Z</dc:date>
    </item>
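Mark's distinction - priority moves while a process runs, nice does not - can be illustrated with a quick check (a generic sketch, not from the thread; assumes POSIX ps(1)): sample the nice value of the same running process twice. The two readings should be identical even though PRI may have drifted between them.

```shell
# Sketch: a process's nice value is static for its whole life
# (unless someone runs renice on it), while priority fluctuates.
sleep 3 &
pid=$!
ni1=$(ps -o nice= -p "$pid" | tr -d ' ')   # first sample
sleep 1
ni2=$(ps -o nice= -p "$pid" | tr -d ' ')   # second sample, 1s later
echo "nice before: $ni1  after: $ni2"
wait
```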
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730619#M836259</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;You can also check the differences between the files /var/adm/sw/swagent.log on the two nodes.&lt;BR /&gt;Another useful check may be the output from 'kmtune'.  Any differences may point you further in terms of how the two nodes are different.&lt;BR /&gt;&lt;BR /&gt;As for the CPU utilization, can you list top 2 or 3 processes that consume most of the CPU on each system?&lt;BR /&gt;&lt;BR /&gt;Mladen</description>
      <pubDate>Fri, 24 May 2002 11:30:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730619#M836259</guid>
      <dc:creator>Mladen Despic</dc:creator>
      <dc:date>2002-05-24T11:30:42Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730620#M836260</link>
      <description>Tim&lt;BR /&gt;&lt;BR /&gt;For two identical machines running the same jobs to show matching usr/sys/nice figures, they would have to have "exactly" the same processes running, at the same point of execution, at the same time.&lt;BR /&gt;&lt;BR /&gt;Even this is unlikely, as the hardware throughput of devices (CPU/memory/etc.), whilst rated the same, is not.&lt;BR /&gt;&lt;BR /&gt;So even if you have processes "NICED" exactly the same on both machines, the value of nice from top or glance will never match.&lt;BR /&gt;&lt;BR /&gt;Paula</description>
      <pubDate>Fri, 24 May 2002 11:35:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730620#M836260</guid>
      <dc:creator>Paula J Frazer-Campbell</dc:creator>
      <dc:date>2002-05-24T11:35:12Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730621#M836261</link>
      <description>pmd is running on both nodes - if not (believe me) we would be in deeeeep do-do's.  I do appreciate that pmd does not run continuously; it does very little (spawns, re-spawns, starts, halts &amp;amp; monitors its children).  It may well not show much in the first glance (immediate), but you will see that it has consumed some 91.3 seconds of CPU since 11 May.&lt;BR /&gt;&lt;BR /&gt;As far as the configured state of the software goes, everything is "configured"; there are a few items in the "installed" state, but I can explain these - nothing is "partial" or "corrupt".&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 24 May 2002 11:36:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730621#M836261</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-24T11:36:01Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730622#M836262</link>
      <description>Mladen - I do not need to do a top to tell you that it is fsdlexe | errord; they are ALWAYS top-of-the-pops - if not, we're doing no work.&lt;BR /&gt;&lt;BR /&gt;I have however done the following:&lt;BR /&gt;# ps -el | awk '$8=="24"{print $0}'&lt;BR /&gt;&lt;BR /&gt;This shows that ALL the processes started by pmd have a nice value of 24.  As I believe nice is an inherited value, I think this is damning evidence that someone either re-niced pmd or started the application as a background process.&lt;BR /&gt;&lt;BR /&gt;Paula - I'm not sure what you are saying.&lt;BR /&gt; a) No two machines are alike, therefore you would not expect to see usr/nice the same.  I partially agree, but I would not expect to see the pattern in the nice.txt file, which is totally reversed.&lt;BR /&gt; b) The machines are different, so the nice values will be different.  I disagree; I would expect to see a nice value of 20 across the board - it is the same software/binaries (with some minor exceptions).&lt;BR /&gt;&lt;BR /&gt;I'm still figuring that someone started the application in the background or re-niced pmd.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 24 May 2002 11:52:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730622#M836262</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-05-24T11:52:04Z</dc:date>
    </item>
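Tim's one-liner generalises to any deviation from the HP-UX default of 20, not just 24. A hedged sketch (the field positions assume HP-UX `ps -el` output, where field 4 is the PID and field 8 is NI; the header line is skipped and non-numeric NI fields are ignored):

```shell
# List PID, nice value and command for every process whose nice
# value deviates from the HP-UX default of 20.
ps -el | awk 'NR > 1 && $8 ~ /^[0-9]+$/ && $8 != 20 {print $4, $8, $NF}'
```

Running this on each node and diffing the output would show at a glance whether one node's daemons are uniformly sitting at 24.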
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730623#M836263</link>
      <description>Tim&lt;BR /&gt;&lt;BR /&gt;How about renicing the process to what it should be and monitoring it?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I use a script that picks up certain logins and nices them down, as their routines can cause load problems on my main server.&lt;BR /&gt;&lt;BR /&gt;I am sure that you can modify it to monitor the nice value of your process.&lt;BR /&gt;&lt;BR /&gt;--------------------------------------------&lt;BR /&gt;#!/bin/ksh&lt;BR /&gt;# Automatically nice down the ftpbbs universe routines&lt;BR /&gt;######################################################&lt;BR /&gt;# PJFC 2001&lt;BR /&gt;######################################################&lt;BR /&gt;# Get parent pids&lt;BR /&gt;######################################################&lt;BR /&gt;p=`who -u | grep ftpbbs | awk '{print $7, $15 }'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Separate each pid to a string&lt;BR /&gt;######################################################&lt;BR /&gt;a=`echo $p | awk '{print $1}'`&lt;BR /&gt;b=`echo $p | awk '{print $2}'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Pick up pid of universe process and nice value&lt;BR /&gt;######################################################&lt;BR /&gt;y=`ps -efl | grep $a | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $4}'` # PID&lt;BR /&gt;z=`ps -efl | grep $a | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $8}'` # Nice value&lt;BR /&gt;######################################################&lt;BR /&gt;# Check nice value&lt;BR /&gt;# If nice value = 20 then a restart has occurred, so nice it down&lt;BR /&gt;######################################################&lt;BR /&gt;if [ "$z" = 20 ]&lt;BR /&gt;then&lt;BR /&gt;renice -n 19 $y&lt;BR /&gt;fi&lt;BR /&gt;######################################################&lt;BR /&gt;# Do it all again for the other ftpbbs login&lt;BR /&gt;######################################################&lt;BR /&gt;w=`ps -efl | grep $b | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $4}'`&lt;BR /&gt;x=`ps -efl | grep $b | grep -v grep | grep -v sh | grep root | grep uv | awk '{print $8}'`&lt;BR /&gt;######################################################&lt;BR /&gt;# Check nice value&lt;BR /&gt;# If nice value = 20 then a restart has occurred, so nice it down&lt;BR /&gt;######################################################&lt;BR /&gt;if [ "$x" = 20 ]&lt;BR /&gt;then&lt;BR /&gt;renice -n 19 $w&lt;BR /&gt;fi&lt;BR /&gt;echo "Renice ran"&lt;BR /&gt;exit 0&lt;BR /&gt;&lt;BR /&gt;---------------------------------------------&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Paula</description>
      <pubDate>Fri, 24 May 2002 12:01:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730623#M836263</guid>
      <dc:creator>Paula J Frazer-Campbell</dc:creator>
      <dc:date>2002-05-24T12:01:52Z</dc:date>
    </item>
    <item>
      <title>Re: a nice enigma!</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730624#M836264</link>
      <description>Processes don't just get niced for no reason.&lt;BR /&gt;&lt;BR /&gt;They have to be niced when they start, via the command line, or changed with renice after they have started running.&lt;BR /&gt;&lt;BR /&gt;Nicing is a people thing - HP-UX does not just nice processes because it feels like it.</description>
      <pubDate>Fri, 24 May 2002 12:08:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/a-nice-enigma/m-p/2730624#M836264</guid>
      <dc:creator>John Bolene</dc:creator>
      <dc:date>2002-05-24T12:08:54Z</dc:date>
    </item>
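John's second case - a nice value changed after the process has started - can be demonstrated on any running PID you own (a generic sketch, not from the thread; raising a nice value needs no special privilege, lowering one does):

```shell
# Sketch: change a running process's nice value with renice,
# and confirm the change with ps.
sleep 5 &
pid=$!
before=$(ps -o nice= -p "$pid" | tr -d ' ')
renice -n 5 -p "$pid" >/dev/null 2>&1      # bump nice after the fact
after=$(ps -o nice= -p "$pid" | tr -d ' ')
echo "nice: $before -> $after"
kill "$pid" 2>/dev/null
```

This is exactly the trail Tim suspects: a renice on pmd would propagate nothing to already-running children, but a renice before they were spawned (or a background start) would show up in every child, which matches his `ps -el` findings.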
  </channel>
</rss>

