06-09-2009 11:19 AM
BSoD 0x19 BAD_POOL_HEADER on DL380 G6
I am receiving a BSoD on one of my new DL380 G6 servers -- the bugcheck is: 0x00000019 (0x0000000000000021, 0xfffffa802f817000, 0x00000000000014d0, 0x0000000000000000).
The server hardware is:
491316-001 - DL380 G6 performance model
+ 6 x 516423-B21 8 GB memory kits
+ 1 x 512485-B21 Advanced iLO
+ 2 x 507127-B21 300 GB SAS 6G HDs
+ 3 x 412648-B21 NC360T NICs
This server is running Windows Server 2008 Datacenter Edition SP2 x64 as a Server Core installation. The Hyper-V role and the failover clustering, MPIO, and SNMP features are installed. I have ProLiant Support Pack version 8.20 installed (I realize this is not the latest version as of today; upgrading is on the radar).
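For reference, here is roughly how those roles and features get added on a Server Core box. The ocsetup package names below are the ones I believe apply to Windows Server 2008; they are case-sensitive, so confirm them against the output of oclist before running anything:
oclist | more
start /w ocsetup Microsoft-Hyper-V
start /w ocsetup FailoverCluster-Core
start /w ocsetup MultipathIo
start /w ocsetup SNMP-SC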
The server is connected to an HP MSA 2012i iSCSI SAN using MPIO. Many LUNs are presented, since each VM needs to sit on its own LUN for Hyper-V failover clustering to work cleanly.
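To double-check the iSCSI sessions and the MPIO claim state from the Server Core console, commands along these lines can be used (a sketch only; the exact output depends on the initiator and DSM versions installed):
iscsicli ListTargets
iscsicli SessionList
mpclaim -s -d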
I am working with Microsoft on getting to the bottom of this issue. They had me enable the driver verifier utility (verifier.exe) for all third-party drivers. Once I enabled driver verifier, the server went into an endless bluescreen loop, throwing bugcheck 0x000000c4 (0x0000000000000062, 0xfffff98007ff6fe0, 0xfffff98007ff6f40, 0x0000000000000008).
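For anyone wanting to reproduce this, driver verifier can also be driven entirely from the command line; the driver name below is only an illustration, a reboot is needed for settings to take effect, and verifier /reset (run from Safe Mode if necessary) is the way out of a boot loop:
verifier /querysettings                   (show what verifier is currently checking)
verifier /standard /driver cpqteam.sys    (enable standard checks for specific drivers)
verifier /standard /all                   (enable standard checks for every driver)
verifier /reset                           (clear all verifier settings, then reboot)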
I've used the Debugging Tools for Windows to analyze the memory.dmp from the latest bugcheck. Here are the detailed results:
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************
DRIVER_VERIFIER_DETECTED_VIOLATION (c4)
A device driver attempting to corrupt the system has been caught. This is
because the driver was specified in the registry as being suspect (by the
administrator) and the kernel has enabled substantial checking of this driver.
If the driver attempts to corrupt the system, bugchecks 0xC4, 0xC1 and 0xA will
be among the most commonly seen crashes.
Arguments:
Arg1: 0000000000000062, A driver has forgotten to free its pool allocations prior to unloading.
Arg2: fffff98007e62fe0, name of the driver having the issue.
Arg3: fffff98007e62f40, verifier internal structure with driver information.
Arg4: 0000000000000008, total # of (paged+nonpaged) allocations that weren't freed.
Type !verifier 3 drivername.sys for info on the allocations
that were leaked that caused the bugcheck.
Debugging Details:
------------------
BUGCHECK_STR: 0xc4_62
IMAGE_NAME: cpqteam.sys
DEBUG_FLR_IMAGE_TIMESTAMP: 49a65b79
MODULE_NAME: cpqteam
FAULTING_MODULE: fffffa6004d34000 cpqteam
VERIFIER_DRIVER_ENTRY: dt nt!_MI_VERIFIER_DRIVER_ENTRY fffff98007e62f40
+0x000 Links : _LIST_ENTRY [ 0xfffff980`07ee6f50 - 0xfffff980`05036f40 ]
+0x010 Loads : 1
+0x014 Unloads : 0
+0x018 BaseName : _UNICODE_STRING "cpqteam.sys"
+0x028 StartAddress : 0xfffffa60`04d34000
+0x030 EndAddress : 0xfffffa60`04d6f000
+0x038 Flags : 1
+0x040 Signature : 0x98761940
+0x050 PoolPageHeaders : _SLIST_HEADER
+0x060 PoolTrackers : _SLIST_HEADER
+0x070 CurrentPagedPoolAllocations : 0
+0x074 CurrentNonPagedPoolAllocations : 8
+0x078 PeakPagedPoolAllocations : 0
+0x07c PeakNonPagedPoolAllocations : 0x19
+0x080 PagedBytes : 0
+0x088 NonPagedBytes : 0x1570
+0x090 PeakPagedBytes : 0
+0x098 PeakNonPagedBytes : 0x20910
DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
PROCESS_NAME: System
CURRENT_IRQL: 0
LAST_CONTROL_TRANSFER: from fffff80001ca954d to fffff800018ba450
STACK_TEXT:
fffffa60`01da2788 fffff800`01ca954d : 00000000`000000c4 00000000`00000062 fffff980`07e62fe0 fffff980`07e62f40 : nt!KeBugCheckEx
fffffa60`01da2790 fffff800`01cb6557 : fffffa60`00000000 00000000`00000012 00000000`00000000 00000000`00000000 : nt!VerifierBugCheckIfAppropriate+0x3d
fffffa60`01da27d0 fffff800`01bd6b86 : fffff980`07e58f50 fffff980`07e58f50 00000000`00000000 00000000`00000018 : nt!ViCheckDriverUnloading+0xb7
fffffa60`01da2810 fffff800`01bd7711 : 00000000`00000000 00000000`00000080 00000000`00000000 fffff800`018b0200 : nt!VerifierDriverUnloading+0x26
fffffa60`01da2840 fffff800`01c96191 : 00000000`00000000 00000000`00000000 fffff980`07edce00 00000000`00000018 : nt!MmUnloadSystemImage+0x2d2
fffffa60`01da28b0 fffff800`018beee3 : fffff980`00466dc0 fffff980`07edce00 fffff980`07edce70 00000000`00000100 : nt!IopDeleteDriver+0x41
fffffa60`01da28e0 fffff800`01ab5806 : 00000000`00000010 00000000`00010286 00000000`00000000 00000000`00000018 : nt!ObfDereferenceObject+0x103
fffffa60`01da2970 fffff800`018beee3 : fffffa60`01da2860 fffff980`4dfe6fe0 00000000`00000016 fffff980`46686f80 : nt!IopDeleteDevice+0x36
fffffa60`01da29a0 fffff800`0198d571 : fffff980`4dfe6fe0 ffffffff`fcd58000 00000000`fffffff3 00000000`00000000 : nt!ObfDereferenceObject+0x103
fffffa60`01da2a30 fffff800`01c93aa4 : fffff980`01d10de0 00000000`00000000 00000000`00000002 00000000`00000018 : nt!PnpRemoveLockedDeviceNode+0x241
fffffa60`01da2a80 fffff800`01c93bc0 : 00000000`00000000 fffff980`01d10d01 fffff980`44bb8fe0 fffff800`3f051397 : nt!PnpDeleteLockedDeviceNode+0x44
fffffa60`01da2ab0 fffff800`01c98277 : 00000000`00000002 00000000`00000000 00000000`00000000 fffff980`00000000 : nt!PnpDeleteLockedDeviceNodes+0xa0
fffffa60`01da2b20 fffff800`01c988ac : fffffa60`00000000 00000000`00010200 fffffa60`01da2c00 00000000`00000000 : nt!PnpProcessQueryRemoveAndEject+0xbe7
fffffa60`01da2c70 fffff800`01b9790a : 00000000`00000001 fffff980`4abe6fe0 fffff980`4ad2ef40 fffff800`01ab7800 : nt!PnpProcessTargetDeviceEvent+0x4c
fffffa60`01da2ca0 fffff800`018c18c3 : fffff800`01ab7850 fffff980`4ad2ef40 fffff800`019f18f8 fffff980`002babb0 : nt! ?? ::NNGAKEGL::`string'+0x502d7
fffffa60`01da2cf0 fffff800`01ac4f37 : fffff980`4abe6fe0 00000000`00000000 fffff980`002babb0 00000000`00000080 : nt!ExpWorkerThread+0xfb
fffffa60`01da2d50 fffff800`018f7616 : fffffa60`01900180 fffff980`002babb0 fffffa60`01909d40 fffff980`003c4ca8 : nt!PspSystemThreadStartup+0x57
fffffa60`01da2d80 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16
STACK_COMMAND: kb
FOLLOWUP_NAME: MachineOwner
FAILURE_BUCKET_ID: X64_0xc4_62_VRF_IMAGE_cpqteam.sys
BUCKET_ID: X64_0xc4_62_VRF_IMAGE_cpqteam.sys
Followup: MachineOwner
---------
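(For anyone who wants to repeat this kind of analysis, the text above is essentially the output of !analyze -v in WinDbg with a symbol path set; the follow-up commands it suggests look roughly like the following, where cpqteam is the module flagged in this particular dump.)
.symfix
.reload
!analyze -v
!verifier 3 cpqteam.sys    (lists the leaked allocations referred to in the bugcheck text)
lmvm cpqteam               (shows the driver's path, timestamp, and version information)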
As you can see, it looks as though the network teaming driver is causing the issue. I am still waiting for confirmation from Microsoft, but I am thinking of disabling NIC teaming until a fix is provided by HP.
I considered upgrading the network drivers and the Network Configuration Utility, but the release notes do not list any fix that matches what I am experiencing.
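In the meantime, one thing worth checking is exactly which cpqteam.sys build is loaded; something like this from the command line works (the wmic path assumes a default %SystemRoot%, so adjust it if needed):
driverquery /v | findstr /i cpqteam
wmic datafile where name='C:\\Windows\\System32\\drivers\\cpqteam.sys' get Version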
Is anyone else seeing this issue? What do you think?
06-15-2009 09:16 AM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
06-18-2009 02:16 AM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
06-18-2009 09:28 AM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
Can you please provide more specifics?
08-26-2010 06:57 AM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
http://support.microsoft.com/kb/974201
Also, if you haven't already, run the latest ProLiant Support Pack, check the box to "check ftp.hp.com", and install all available updates.
08-26-2010 07:03 AM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
If it's any consolation, that server has been running for 6 months now without the driver verifier enabled and has never blue screened.
08-26-2010 12:45 PM
Re: BSoD 0x19 BAD_POOL_HEADER on DL380 G6
The iSCSI issue occurred with and without driver verifier enabled. I think the cpqteam.sys error went away once I updated my PSP. I also received another Hyper-V bugcheck with driver verifier enabled on this server, but Microsoft assured me that the nature of that Hyper-V bugcheck was such that it would never occur unless driver verifier was enabled, so they wanted me to disable driver verifier while they fixed the Hyper-V bug (to be rolled up in a later hotfix).