<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: server 100% in system mode in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417917#M14830</link>
    <description>hi sajeesh,&lt;BR /&gt;&lt;BR /&gt;I have seen similar problems under 2.0 and 2.1 kernels. You should not worry if all that shows up is a huge nice level. I think the kernel should be issuing idle calls so the CPUs know there is nothing to do, but it doesn't make those calls, so the system merely looks busier than it really is. If your system runs fine, hope that a newer kernel will fix this bug.&lt;BR /&gt;&lt;BR /&gt;best regards,&lt;BR /&gt;&lt;BR /&gt;johannes</description>
    <pubDate>Wed, 10 Nov 2004 07:15:36 GMT</pubDate>
    <dc:creator>Johannes Krackowizer_1</dc:creator>
    <dc:date>2004-11-10T07:15:36Z</dc:date>
    <item>
      <title>server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417912#M14825</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I have a Linux box running RHAS 2.1.&lt;BR /&gt;The configuration is 4x1.5 GHz CPUs and&lt;BR /&gt;4 GB RAM.&lt;BR /&gt;&lt;BR /&gt;My top output shows the server is 100% in system mode, and the load is also high. Please see the output below.&lt;BR /&gt;&lt;BR /&gt;273 processes: 261 sleeping, 12 running, 0 zombie, 0 stopped&lt;BR /&gt;CPU0 states: 100.0% user,  0.0% system, 100.0% nice,  0.0% idle&lt;BR /&gt;CPU1 states: 100.0% user,  0.0% system, 100.0% nice,  0.0% idle&lt;BR /&gt;CPU2 states: 100.0% user,  0.0% system, 100.1% nice,  0.0% idle&lt;BR /&gt;CPU3 states: 92.0% user,  7.0% system, 88.0% nice,  0.0% idle&lt;BR /&gt;Mem:  3831076K av, 1600676K used, 2230400K free,       0K shrd,  215044K buff&lt;BR /&gt;Swap: 2044056K av,       0K used, 2044056K free                  493872K cached&lt;BR /&gt;&lt;BR /&gt;My kernel parameter settings are below.&lt;BR /&gt;net.ipv4.conf.default.rp_filter = 1&lt;BR /&gt;net.ipv4.ip_forward = 0&lt;BR /&gt;kernel.sysrq = 0&lt;BR /&gt;kernel.msgmax = 16384&lt;BR /&gt;kernel.msgmnb = 32768&lt;BR /&gt;kernel.msgmni = 2052&lt;BR /&gt;kernel.shmmax = 2147483648&lt;BR /&gt;fs.file-max = 417145&lt;BR /&gt;kernel.sem = 250  32000  128  128&lt;BR /&gt;net.ipv4.tcp_fin_timeout = 10&lt;BR /&gt;net.ipv4.tcp_keepalive_time = 300&lt;BR /&gt;net.core.rmem_default = 262143&lt;BR /&gt;net.core.rmem_max = 262143&lt;BR /&gt;net.core.wmem_default = 262143&lt;BR /&gt;net.core.wmem_max = 262143&lt;BR /&gt;net.ipv4.tcp_rmem = 4096  262143  262143&lt;BR /&gt;net.ipv4.tcp_wmem = 4096  262143  262143&lt;BR /&gt;net.ipv4.tcp_sack = 0&lt;BR /&gt;net.ipv4.tcp_timestamps = 0&lt;BR /&gt;vm.freepages = 1024  2048  3072&lt;BR /&gt;net.ipv4.tcp_max_syn_backlog = 8192&lt;BR /&gt;&lt;BR /&gt;Can somebody help me figure out what the problem could be?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Sajeesh</description>
      <pubDate>Tue, 09 Nov 2004 04:36:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417912#M14825</guid>
      <dc:creator>Sajeesh O.K</dc:creator>
      <dc:date>2004-11-09T04:36:32Z</dc:date>
    </item>
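A side note on the tuning list in the post above: on RHEL-era systems such values normally live in /etc/sysctl.conf and are applied with the sysctl tool. A minimal sketch using two of the poster's own settings (run as root; nothing here is specific to this incident):

```shell
# Apply a setting live; it takes effect immediately but is lost on reboot.
sysctl -w net.ipv4.tcp_fin_timeout=10
sysctl -w kernel.shmmax=2147483648

# Settings written into /etc/sysctl.conf are re-applied at boot;
# "sysctl -p" re-reads that file on demand.
sysctl -p
```

Note that none of these values would by themselves explain CPUs pegged at 100%.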
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417913#M14826</link>
      <description>Oops! It is 100% in nice mode!</description>
      <pubDate>Tue, 09 Nov 2004 05:05:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417913#M14826</guid>
      <dc:creator>Sajeesh O.K</dc:creator>
      <dc:date>2004-11-09T05:05:19Z</dc:date>
    </item>
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417914#M14827</link>
      <description>Do you have many user tasks? [Please post the "top" output.]</description>
      <pubDate>Tue, 09 Nov 2004 05:32:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417914#M14827</guid>
      <dc:creator>Vitaly Karasik_1</dc:creator>
      <dc:date>2004-11-09T05:32:11Z</dc:date>
    </item>
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417915#M14828</link>
      <description>What are those 12 running processes? What are they doing?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Nov 2004 07:43:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417915#M14828</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-11-09T07:43:17Z</dc:date>
    </item>
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417916#M14829</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;What application is running on the system, and what are the top processes? Has it always behaved this way since the application was loaded, or have you recently made changes, such as installing something or modifying kernel parameters, that might affect performance? On top of everything, there might be a real resource crunch, in which case you might have to consider adding more resources. More can be said with more details.&lt;BR /&gt;&lt;BR /&gt;Rgds&lt;BR /&gt;&lt;BR /&gt;HGN</description>
      <pubDate>Tue, 09 Nov 2004 08:48:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417916#M14829</guid>
      <dc:creator>HGN</dc:creator>
      <dc:date>2004-11-09T08:48:27Z</dc:date>
    </item>
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417917#M14830</link>
      <description>hi sajeesh,&lt;BR /&gt;&lt;BR /&gt;I have seen similar problems under 2.0 and 2.1 kernels. You should not worry if all that shows up is a huge nice level. I think the kernel should be issuing idle calls so the CPUs know there is nothing to do, but it doesn't make those calls, so the system merely looks busier than it really is. If your system runs fine, hope that a newer kernel will fix this bug.&lt;BR /&gt;&lt;BR /&gt;best regards,&lt;BR /&gt;&lt;BR /&gt;johannes</description>
      <pubDate>Wed, 10 Nov 2004 07:15:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417917#M14830</guid>
      <dc:creator>Johannes Krackowizer_1</dc:creator>
      <dc:date>2004-11-10T07:15:36Z</dc:date>
    </item>
    <item>
      <title>Re: server 100% in system mode</title>
      <link>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417918#M14831</link>
      <description>You're running HP daemons, right? You probably have several disks too...&lt;BR /&gt;&lt;BR /&gt;I just posted this in the "cmascsid process taking up all of CPU" thread. I bet it's the same...&lt;BR /&gt;&lt;BR /&gt;-------------------------------------------&lt;BR /&gt;&lt;BR /&gt;The explanation is somewhat involved...&lt;BR /&gt;&lt;BR /&gt;If you trace cma*d you'll find that it doesn't do anything but open the device, ioctl, close. Admittedly rather more times than should be necessary, but that's just incidental bad design.&lt;BR /&gt;&lt;BR /&gt;You'll find the delay - and system time consumption - seems to happen on the close. From here you need a fairly good working knowledge of the Linux kernel...&lt;BR /&gt;&lt;BR /&gt;OK, still with me?&lt;BR /&gt;&lt;BR /&gt;Run oprofile for a while and you'll find the CPU time is being consumed by invalidate_bdev. Which is interesting :-).&lt;BR /&gt;&lt;BR /&gt;invalidate_bdev is called from kill_bdev, and kill_bdev is called from the block device release code. Release is what happens on the last close. Now, the monitoring daemon is opening the unpartitioned disk device, which it is pretty certain nothing else has open. (Offhand I'm not sure whether even having an fs on the device counts as it being open. There are subtle differences, and I *think* I'm right in saying that block device access and fs access are considered different at this level. Don't quote me or blame me!)&lt;BR /&gt;&lt;BR /&gt;So, each close triggers invalidate_bdev. Why is this so bad? Well, the idea is that when the last close happens on a device you need to flush any cached data because, with much PC hardware, you can't be sure when the media gets changed. invalidate_bdev isn't *meant* to be called often. It works by scanning through the entire list of cached data for block devices to find and drop data related to the device being closed. So it consumes system time, and the amount is proportional to how much cached block-device data you have (from any device).&lt;BR /&gt;&lt;BR /&gt;WORKAROUND:&lt;BR /&gt;All you need to do is make sure that each time the cma*d daemon closes the device, its close isn't the *last* close - i.e. some other process has the device open. The other process doesn't even need to *do* anything. Try something along the lines of:&lt;BR /&gt;&lt;BR /&gt;sh -c 'kill -STOP $$' &amp;lt; /dev/cciss/c0d0 &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;BR /&gt;&lt;BR /&gt;Hope that's all clear! (As mud... :-) )&lt;BR /&gt;&lt;BR /&gt;(HP: As well as blind debugging I do Linux &amp;amp; OSS consultancy. I happen to know the answer to this one as it came up at a major investment bank...)</description>
      <pubDate>Sun, 05 Dec 2004 17:13:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/server-100-in-system-mode/m-p/3417918#M14831</guid>
      <dc:creator>Mike Jagdis</dc:creator>
      <dc:date>2004-12-05T17:13:31Z</dc:date>
    </item>
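The workaround in the post above can be sketched as a tiny "holder" process. This is a minimal sketch, assuming the unpartitioned device node is /dev/cciss/c0d0 (a hypothetical path taken from the thread; adjust for your controller). It simply opens the device read-only and sleeps, so the monitoring daemon's close is never the last close on the device and the invalidate_bdev scan never fires:

```python
import os
import signal

# Hypothetical device node from the thread; adjust for your controller.
DEVICE = "/dev/cciss/c0d0"

def hold_device_open(path):
    """Open `path` read-only and block until a signal arrives, so that
    any other process's close() on the device is never the *last* close
    and the expensive invalidate_bdev() cache scan is never triggered."""
    fd = os.open(path, os.O_RDONLY)
    try:
        signal.pause()  # sleep until any signal is delivered
    finally:
        os.close(fd)

# Typical use: hold_device_open(DEVICE), started in the background
# before the cma*d monitoring daemons begin polling the disk.
```

The original one-liner achieves the same thing with a stopped shell holding a file descriptor; either way, the only requirement is that some process keeps the device open.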
  </channel>
</rss>

