11-14-2005 08:20 PM
Kernel Panic on QLogic Fibre HBA
Hi,
We are experiencing a problem with one of our DL585 servers where, every so often, it fails with a kernel panic. On checking the console, it reports a stack trace from the qla2300 module when reading a fibre-attached tape drive. I have searched Google and found references to this problem where people increased the maximum queue depth from 16 to 64, which cured it.
What I would like to understand, though, is whether I have actually enabled this. In my modules.conf I have the following settings:
options qla2300 ConfigRequired=0 displayConfig=1 ql2xuseextopts=1 ql2xmaxqdepth=64 qlport_down_retry=64 qlogin_retry_count=16 ql2xfailover=1 ql2xlbType=0 ql2xexcludemodel=
I have checked /var/log/messages and I cannot see which options have been loaded for the module, but the SCSI devices report:
Nov 14 22:37:03 skprod2 kernel: scsi(0:0:0:0): Enabled tagged queuing, queue depth 16.
Nov 14 22:37:03 skprod2 kernel: scsi(0:0:1:0): Enabled tagged queuing, queue depth 16.
Nov 14 22:37:03 skprod2 kernel: scsi(0:0:2:0): Enabled tagged queuing, queue depth 16.
Nov 14 22:37:03 skprod2 kernel: scsi(0:0:3:0): Enabled tagged queuing, queue depth 16.
Nov 14 22:37:03 skprod2 kernel: scsi(0:0:3:1): Enabled tagged queuing, queue depth 16.
How can I ensure that the correct options are being loaded? Is there potentially another problem here? The cards check out as working okay.
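In case it is relevant, one thing I was planning to try is re-checking how the options actually get applied. My assumption (please correct me if wrong) is that the modules.conf options only take effect when modprobe loads the module, and that if qla2300 is loaded from the initrd at boot the image would need rebuilding first. Roughly (a Red Hat style setup is assumed here, and the mkinitrd step is my guess):
modprobe -c | grep qla2300    # confirm modprobe actually sees the options line
mkinitrd -f /boot/initrd-`uname -r`.img `uname -r`    # rebuild the initrd so the boot-time load picks up the options (assumption)
rmmod qla2300 && modprobe qla2300    # or reload by hand, only if the tape/disk paths can be taken offline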
I am using the following driver: http://h18004.www1.hp.com/support/files/server/us/download/23175.html
on these cards:
05:0d.0 Fibre Channel: QLogic Corp. QLA2312 Fibre Channel Adapter (rev 02)
Subsystem: QLogic Corp.: Unknown device 0100
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 32
I/O ports at 6000 [size=256]
Memory at f7ef0000 (64-bit, non-prefetchable) [size=4K]
Expansion ROM at [disabled] [size=128K]
Capabilities: [44] Power Management version 2
Capabilities: [4c] PCI-X non-bridge device.
Capabilities: [54] Message Signalled Interrupts: 64bit+ Queue=0/3 Enable-
Capabilities: [64] #06 [0080]
06:0e.0 Fibre Channel: QLogic Corp. QLA2312 Fibre Channel Adapter (rev 02)
Subsystem: QLogic Corp.: Unknown device 0100
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 36
I/O ports at 7000 [size=256]
Memory at f7ff0000 (64-bit, non-prefetchable) [size=4K]
Expansion ROM at [disabled] [size=128K]
Capabilities: [44] Power Management version 2
Capabilities: [4c] PCI-X non-bridge device.
Capabilities: [54] Message Signalled Interrupts: 64bit+ Queue=0/3 Enable-
Capabilities: [64] #06 [0080]
TIA
3 REPLIES
11-14-2005 11:43 PM
Re: Kernel Panic on QLogic Fibre HBA
Do you have the QLogic GUI that ships with the card? It should have a screen for verifying that your settings are working.
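If the GUI is not to hand, the driver normally exposes its state under /proc as well, something along these lines (instance number 0 is just an example and may differ on your box):
cat /proc/scsi/qla2300/0    # per-HBA state; should show the queue depth the driver is actually using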
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
11-15-2005 12:02 AM
Re: Kernel Panic on QLogic Fibre HBA
I have used the HP SANsurfer utility and it all checks out okay.
11-16-2005 02:14 AM
Re: Kernel Panic on QLogic Fibre HBA
What kernel are you running? I have run into situations where the current HBA driver set isn't compatible with some older kernels...
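Might be worth posting the exact versions. Something like the following should show what you are running against (the rpm query assumes the HP driver kit was installed as a package, which may not be the case):
uname -r    # running kernel version
modinfo qla2300 | grep -i version    # driver version string, if the module reports one
rpm -qa | grep -i qla    # installed QLogic/HP driver packages (assumption)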