<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: kernel panic dumpfile in red hat linux 6.2, use crash to analyse in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/kernel-panic-dumpfile-in-red-hat-linux-6-2-use-crash-to-analyse/m-p/6736053#M58807</link>
    <description>&lt;P&gt;This looks like a known panic, fixed in kernel 2.6.32-358 (RHEL 6.4) and later.&lt;/P&gt;&lt;P&gt;See:&lt;/P&gt;&lt;P&gt;&lt;A href="http://h20564.www2.hp.com/hpsc/doc/public/display?docId=mmr_kc-0119858" target="_blank"&gt;Red Hat Enterprise Linux 6 - Kernel Panic Divide by Zero at thread_group_times+86&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 22 Apr 2015 19:12:08 GMT</pubDate>
    <dc:creator>Jason Orendorf</dc:creator>
    <dc:date>2015-04-22T19:12:08Z</dc:date>
    <item>
      <title>kernel panic dumpfile in red hat linux 6.2, use crash to analyse</title>
      <link>https://community.hpe.com/t5/operating-system-linux/kernel-panic-dumpfile-in-red-hat-linux-6-2-use-crash-to-analyse/m-p/6691213#M58806</link>
      <description>&lt;P&gt;I have a vmcore dump file from Red Hat Linux 6.2 and am using crash to analyse it.&lt;/P&gt;&lt;P&gt;I see "find_busiest_group" in the bt output.&lt;/P&gt;&lt;P&gt;I have attached the command log from my crash session.&lt;/P&gt;&lt;P&gt;Can anyone suggest how to interpret and fix this?&lt;/P&gt;&lt;P&gt;Below is the output of "bt" from crash:&lt;/P&gt;&lt;P&gt;KERNEL: ./vmlinux&lt;BR /&gt;DUMPFILE: ./vmcore [PARTIAL DUMP]&lt;BR /&gt;CPUS: 24&lt;BR /&gt;DATE: Sun Jan 4 21:25:44 2015&lt;BR /&gt;UPTIME: 43 days, 23:51:28&lt;BR /&gt;LOAD AVERAGE: 0.08, 0.02, 0.01&lt;BR /&gt;TASKS: 945&lt;BR /&gt;NODENAME: avayaacr.XXXXXXX.com.hk&lt;BR /&gt;RELEASE: 2.6.32-220.el6.x86_64&lt;BR /&gt;VERSION: #1 SMP Wed Nov 9 08:03:13 EST 2011&lt;BR /&gt;MACHINE: x86_64 (2294 Mhz)&lt;BR /&gt;MEMORY: 16 GB&lt;BR /&gt;PANIC: ""&lt;BR /&gt;PID: 0&lt;BR /&gt;COMMAND: "swapper"&lt;BR /&gt;TASK: ffff8804352e0080 (1 of 24) [THREAD_INFO: ffff88043530a000]&lt;BR /&gt;CPU: 17&lt;BR /&gt;STATE: TASK_RUNNING (PANIC)&lt;/P&gt;&lt;P&gt;crash&amp;gt; bt&lt;BR /&gt;PID: 0 TASK: ffff8804352e0080 CPU: 17 COMMAND: "swapper"&lt;BR /&gt;#0 [ffff880036823900] machine_kexec at ffffffff81031fcb&lt;BR /&gt;#1 [ffff880036823960] crash_kexec at ffffffff810b8f72&lt;BR /&gt;#2 [ffff880036823a30] oops_end at ffffffff814f04b0&lt;BR /&gt;#3 [ffff880036823a60] die at ffffffff8100f26b&lt;BR /&gt;#4 [ffff880036823a90] do_trap at ffffffff814efda4&lt;BR /&gt;#5 [ffff880036823af0] do_divide_error at ffffffff8100cfff&lt;BR /&gt;#6 [ffff880036823b90] divide_error at ffffffff8100be7b&lt;BR /&gt;[exception RIP: find_busiest_group+1477]&lt;BR /&gt;RIP: ffffffff81054ad5 RSP: ffff880036823c40 RFLAGS: 00010246&lt;BR /&gt;RAX: 0000000000000000 RBX: ffff880036823e64 RCX: 0000000000000000&lt;BR /&gt;RDX: 0000000000000000 RSI: ffff88003672f860 RDI: 0000000000000000&lt;BR /&gt;RBP: ffff880036823dd0 R8: 
ffff88003672fda0 R9: 0000000000000040&lt;BR /&gt;R10: 0000000000000001 R11: 0000000000000000 R12: 00000000ffffff01&lt;BR /&gt;R13: 0000000000015fc0 R14: ffffffffffffffff R15: 0000000000000000&lt;BR /&gt;ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018&lt;BR /&gt;#7 [ffff880036823dd8] rebalance_domains at ffffffff8105fc52&lt;BR /&gt;#8 [ffff880036823ea8] run_rebalance_domains at ffffffff81060153&lt;BR /&gt;#9 [ffff880036823ee8] __do_softirq at ffffffff81072161&lt;BR /&gt;#10 [ffff880036823f58] call_softirq at ffffffff8100c24c&lt;BR /&gt;#11 [ffff880036823f70] do_softirq at ffffffff8100de85&lt;BR /&gt;#12 [ffff880036823f90] irq_exit at ffffffff81071f45&lt;BR /&gt;#13 [ffff880036823fa0] smp_call_function_single_interrupt at ffffffff8102a255&lt;BR /&gt;#14 [ffff880036823fb0] call_function_single_interrupt at ffffffff8100bdb3&lt;BR /&gt;--- &amp;lt;IRQ stack&amp;gt; ---&lt;BR /&gt;#15 [ffff88043530bdb8] call_function_single_interrupt at ffffffff8100bdb3&lt;BR /&gt;[exception RIP: intel_idle+238]&lt;BR /&gt;RIP: ffffffff812c4a6e RSP: ffff88043530be68 RFLAGS: 00000202&lt;BR /&gt;RAX: 0000000000000000 RBX: ffff88043530bed8 RCX: 0000000000000000&lt;BR /&gt;RDX: 0000000000001b08 RSI: 0000000000000000 RDI: 000000000069985f&lt;BR /&gt;RBP: ffffffff8100bdae R8: 0000000000000005 R9: 000000000000006d&lt;BR /&gt;R10: 000d811401194129 R11: ffff88043530be78 R12: ffff880036830f40&lt;BR /&gt;R13: 0000000000000000 R14: 000d81122686c740 R15: ffff880036831040&lt;BR /&gt;ORIG_RAX: ffffffffffffff04 CS: 0010 SS: 0018&lt;BR /&gt;#16 [ffff88043530bee0] cpuidle_idle_call at ffffffff813f9f67&lt;BR /&gt;#17 [ffff88043530bf00] cpu_idle at ffffffff81009e06&lt;BR /&gt;crash&amp;gt; log | tail -10&lt;BR /&gt;[&amp;lt;ffffffff8100bdb3&amp;gt;] call_function_single_interrupt+0x13/0x20&lt;BR /&gt;&amp;lt;EOI&amp;gt;&lt;BR /&gt;[&amp;lt;ffffffff812c4a6e&amp;gt;] ? intel_idle+0xde/0x170&lt;BR /&gt;[&amp;lt;ffffffff812c4a51&amp;gt;] ? 
intel_idle+0xc1/0x170&lt;BR /&gt;[&amp;lt;ffffffff813f9f67&amp;gt;] cpuidle_idle_call+0xa7/0x140&lt;BR /&gt;[&amp;lt;ffffffff81009e06&amp;gt;] cpu_idle+0xb6/0x110&lt;BR /&gt;[&amp;lt;ffffffff814e5f43&amp;gt;] start_secondary+0x202/0x245&lt;BR /&gt;Code: d0 b8 01 00 00 00 48 c1 ea 0a 48 85 d2 0f 45 c2 41 89 40 08 66 90 4c 8b 85 e0 fe ff ff 48 8b 45 a8 31 d2 41 8b 48 08 48 c1 e0 0a &amp;lt;48&amp;gt; f7 f1 48 8b 4d b0 48 89 45 a0 31 c0 48 85 c9 74 0c 48 8b 45&lt;BR /&gt;RIP [&amp;lt;ffffffff81054ad5&amp;gt;] find_busiest_group+0x5c5/0xb20&lt;BR /&gt;RSP &amp;lt;ffff880036823c40&amp;gt;&lt;/P&gt;</description>
      <pubDate>Wed, 07 Jan 2015 02:25:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/kernel-panic-dumpfile-in-red-hat-linux-6-2-use-crash-to-analyse/m-p/6691213#M58806</guid>
      <dc:creator>chuiking</dc:creator>
      <dc:date>2015-01-07T02:25:16Z</dc:date>
    </item>
    <item>
      <title>Re: kernel panic dumpfile in red hat linux 6.2, use crash to analyse</title>
      <link>https://community.hpe.com/t5/operating-system-linux/kernel-panic-dumpfile-in-red-hat-linux-6-2-use-crash-to-analyse/m-p/6736053#M58807</link>
      <description>&lt;P&gt;This looks like a known panic, fixed in kernel 2.6.32-358 (RHEL 6.4) and later.&lt;/P&gt;&lt;P&gt;See:&lt;/P&gt;&lt;P&gt;&lt;A href="http://h20564.www2.hp.com/hpsc/doc/public/display?docId=mmr_kc-0119858" target="_blank"&gt;Red Hat Enterprise Linux 6 - Kernel Panic Divide by Zero at thread_group_times+86&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2015 19:12:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/kernel-panic-dumpfile-in-red-hat-linux-6-2-use-crash-to-analyse/m-p/6736053#M58807</guid>
      <dc:creator>Jason Orendorf</dc:creator>
      <dc:date>2015-04-22T19:12:08Z</dc:date>
    </item>
  </channel>
</rss>