Operating System - HP-UX

[HELP] The AH401A driver's bug? How to optimize the FC I/O performance?
04-07-2017 02:16 AM
Solved! Go to Solution.
04-07-2017 02:22 AM
Re: [HELP] The AH401A driver's bug? How to optimize the FC I/O performance?
In HP-UX 11i v3, we used one AH401A card to test I/O performance.
With one port connected to the SAN, the bandwidth is about 785 MB/s:
bash-4.3# sar -H 1 100
HP-UX rx28 B.11.31 U ia64 04/07/17
16:57:45  ctlr  util  t-put  IO/s  r/s  w/s  read  write  avque  avwait  avserv
                %age  MB/s   num   num  num  MB/s  MB/s   num    msec    msec
16:57:46 ciss0 5 3.62 1438 0 1438 0.00 3.62 1 0 0
fcd1 100 785.15 785 785 0 785.15 0.00 1 0 117
16:57:47 ciss0 6 3.55 1419 0 1419 0.00 3.55 1 0 0
fcd1 100 788.00 788 788 0 788.00 0.00 1 0 117
But when two ports are connected to the SAN, the total bandwidth across both ports is only about 800 MB/s:
# sar -H 1 100
HP-UX rx28 B.11.31 U ia64 04/07/17
16:58:26  ctlr  util  t-put  IO/s  r/s  w/s  read  write  avque  avwait  avserv
                %age  MB/s   num   num  num  MB/s  MB/s   num    msec    msec
16:58:27 ciss0 5 3.55 1429 0 1429 0.00 3.55 1 0 0
fcd0 100 389.57 390 390 0 389.57 0.00 1 0 118
fcd1 99 399.50 400 400 0 399.50 0.00 1 0 115
16:58:28 ciss0 6 3.53 1418 0 1418 0.00 3.53 1 0 0
fcd0 100 395.00 395 395 0 395.00 0.00 1 0 116
fcd1 100 392.00 392 392 0 392.00 0.00 1 0 118
When fcd1 is disconnected, the fcd0 port bandwidth goes back to 787 MB/s:
# sar -H 1 100
HP-UX rx28 B.11.31 U ia64 04/07/17
17:03:03  ctlr  util  t-put  IO/s  r/s  w/s  read  write  avque  avwait  avserv
                %age  MB/s   num   num  num  MB/s  MB/s   num    msec    msec
17:03:04 ciss0 6 3.70 1461 0 1461 0.00 3.70 1 0 0
fcd0 100 791.01 791 791 0 791.01 0.00 1 0 59
17:03:05 ciss0 6 3.95 1549 0 1549 0.00 3.95 1 0 0
fcd0 100 787.00 787 787 0 787.00 0.00 1 0 60
Is something wrong here? Do the FC DMA or queue settings need modifying? We increased the disk queue depth to 32, but it made no difference.
Thanks for any reply.
bash-4.3# olrad -L
PCI-Express Slots Information
-----------------------------
Slot   Path          Link Spd  Max Link Spd  Max Link Width  Link Width  Pwr  Occu  Mode
0-3-4 0/0/0/3/0/0 2.5 5.0 x8 x8 Off No PCIe
0-5-5 0/0/0/5/0/0 2.5 5.0 x4 x4 Off No PCIe
0-6-6 0/0/0/6/0/0 2.5 5.0 x4 x4 Off No PCIe
0-7-3 0/0/0/7/0/0 2.5 5.0 x4 x4 Off No PCIe
0-8-2 0/0/0/8/0/0 2.5 5.0 x4 x4 Off No PCIe
0-9-1 0/0/0/9/0/0 2.5 5.0 x8 x8 On Yes PCIe
bash-4.3# ioscan -fnC fc
Class I H/W Path Driver S/W State H/W Type Description
=====================================================================
fc 0 0/0/0/9/0/0/0 fcd CLAIMED INTERFACE HP AH401A 8Gb Dual Port PCIe Fibre Channel Adapter (FC Port 1)
/dev/fcd0
fc 1 0/0/0/9/0/0/1 fcd CLAIMED INTERFACE HP AH401A 8Gb Dual Port PCIe Fibre Channel Adapter (FC Port 2)
/dev/fcd1
bash-4.3# fcmsutil /dev/fcd0
Vendor ID is = 0x1077
Device ID is = 0x2532
PCI Sub-system Vendor ID is = 0x103C
PCI Sub-system ID is = 0x3263
PCI Mode = PCI Express x8
ISP Code version = 5.6.7
ISP Chip version = 2
Topology = PTTOPT_FABRIC
Link Speed = 8Gb
Local N_Port_id is = 0x360100
Previous N_Port_id is = 0x360100
N_Port Node World Wide Name = 0x51402ec00179f189
N_Port Port World Wide Name = 0x51402ec00179f188
Switch Port World Wide Name = 0x200100c0dd24d124
Switch Node World Wide Name = 0x100000c0dd1f7477
N_Port Symbolic Port Name = rx28_fcd0
N_Port Symbolic Node Name = rx28_HP-UX_B.11.31
Driver state = ONLINE
Hardware Path is = 0/0/0/9/0/0/0
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
TYPE = PFC
NPIV Supported = YES
Driver Version = @(#) fcd B.11.31.1603 Dec 3 2015
bash-4.3# fcmsutil /dev/fcd1
Vendor ID is = 0x1077
Device ID is = 0x2532
PCI Sub-system Vendor ID is = 0x103C
PCI Sub-system ID is = 0x3263
PCI Mode = PCI Express x8
ISP Code version = 5.6.7
ISP Chip version = 2
Previous Topology = PTTOPT_FABRIC
Link Speed = 8Gb
Local N_Port_id is = 0x360400
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x51402ec00179f18b
N_Port Port World Wide Name = 0x51402ec00179f18a
Switch Port World Wide Name = 0x200400c0dd24d124
Switch Node World Wide Name = 0x100000c0dd1f7477
N_Port Symbolic Port Name = rx28_fcd1
N_Port Symbolic Node Name = rx28_HP-UX_B.11.31
Driver state = AWAITING_LINK_UP
Hardware Path is = 0/0/0/9/0/0/1
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
TYPE = PFC
NPIV Supported = YES
Driver Version = @(#) fcd B.11.31.1603 Dec 3 2015
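As a side note, the total FC throughput in a `sar -H` snapshot like the ones above can be summed with a small awk filter. This is only a sketch: it assumes the fcd lines keep the column layout shown, with t-put (MB/s) as the third field, and uses a heredoc of pasted sample lines where a live system would pipe in `sar -H 1 1` output.

```shell
#!/bin/sh
# Sum the per-port t-put (MB/s, 3rd field) of all fcd controller
# lines in a pasted sar -H sample. On a live system, pipe real
# sar output in instead of the heredoc.
awk '$1 ~ /^fcd/ { total += $3 }
     END { printf "total fcd t-put: %.2f MB/s\n", total }' <<'EOF'
16:58:27 ciss0 5 3.55 1429 0 1429 0.00 3.55 1 0 0
fcd0 100 389.57 390 390 0 389.57 0.00 1 0 118
fcd1 99 399.50 400 400 0 399.50 0.00 1 0 115
EOF
```

For the sample above this prints a total of 789.07 MB/s, which makes the "two ports but only ~800 MB/s total" symptom easy to spot at a glance.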
04-08-2017 05:27 AM
Re: [HELP] The AH401A driver's bug? How to optimize the FC I/O performance?
bash-4.3# fcmsutil /dev/fcd0 ... Driver state = ONLINE ... bash-4.3# fcmsutil /dev/fcd1 ... Driver state = AWAITING_LINK_UP ...
It looks like the /dev/fcd1 port has no link to the SAN yet.
Unless the driver state is ONLINE, the port cannot work.
Please verify that all the fibre connectors are fully plugged in, and that the cables you're using are not kinked or otherwise damaged. Make sure the card transmitter side is connected to the receiver side at the SAN port, and vice versa.
Also verify that the SAN port the fcd1 is connected to is enabled and properly configured.
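If you want to script that check, the "Driver state" field can be pulled out of the fcmsutil output with awk. A small sketch, where the heredoc stands in for live `fcmsutil /dev/fcd1` output in the format posted above (real output may pad the `=` with extra spaces, so adjust the field separator if needed):

```shell
#!/bin/sh
# Extract the "Driver state" value from fcmsutil-style output.
# The heredoc is sample text; on the system itself you would run:
#   fcmsutil /dev/fcd1 | awk -F' = ' '/Driver state/ { print $2 }'
state=$(awk -F' = ' '/Driver state/ { print $2 }' <<'EOF'
Driver state = AWAITING_LINK_UP
Hardware Path is = 0/0/0/9/0/0/1
EOF
)
echo "fcd1 driver state: $state"
[ "$state" = "ONLINE" ] || echo "WARNING: port is not ONLINE"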
04-09-2017 06:50 PM
Re: [HELP] The AH401A driver's bug? How to optimize the FC I/O performance?
04-09-2017 07:06 PM
Re: [HELP] The AH401A driver's bug? How to optimize the FC I/O performance?
When fcmsutil /dev/fcd1 was run, the /dev/fcd1 port had already been disconnected.
"When fcd1 is disconnected, the fcd0 port bandwidth goes back to 787 MB/s:
# sar -H 1 100
HP-UX rx28 B.11.31 U ia64 04/07/17
17:03:03  ctlr  util  t-put  IO/s  r/s  w/s  read  write  avque  avwait  avserv
                %age  MB/s   num   num  num  MB/s  MB/s   num    msec    msec
17:03:04 ciss0 6 3.70 1461 0 1461 0.00 3.70 1 0 0
fcd0 100 791.01 791 791 0 791.01 0.00 1 0 59"
In the test, we used 8 or 16 disks with 4, 8, or 16 threads per disk; the test command is dd if=/dev/rdisk/diskXX of=/dev/null bs=2048k.
When we use two AH401A cards, with one port on each card, the total bandwidth of the two ports is about 1500 MB/s.
But with only one AH401A card, the maximum total bandwidth of its two ports is only 800~1000 MB/s.
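The read workload described above can be scripted roughly like this. It is only a sketch: the script name and argument handling are made up for illustration, device paths are passed on the command line (e.g. the /dev/rdisk/diskXX raw devices from the original test), THREADS is the per-disk reader count, and count=100 bounds each reader so a trial run finishes.

```shell
#!/bin/sh
# Launch THREADS parallel sequential dd readers per device, with the
# same big block size as the original test (bs=2048k). Hypothetical
# usage:
#   THREADS=8 ./fcread.sh /dev/rdisk/disk10 /dev/rdisk/disk11
THREADS=${THREADS:-4}
for dev in "$@"; do
    i=0
    while [ "$i" -lt "$THREADS" ]; do
        dd if="$dev" of=/dev/null bs=2048k count=100 2>/dev/null &
        i=$((i + 1))
    done
done
wait
echo "all readers finished"
```

Watching `sar -H 1 100` in another session while this runs reproduces the per-port throughput figures quoted above.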
04-10-2017 03:31 AM
Solution
Looks like you might be maxing out the slot bandwidth somehow.
What's your server model?
Even if the available PCIe slots are all PCIe x8, they might not necessary be all equal: for example, in the rx3600 server, there are 4 PCIe x8 slots, but 2 of them are faster than the others. The PCIe slots #5 and #6 each have a dedicated quad-rope connection to the system chipset; each rope can deliver 0.5 GB/s. The slots #3 and #4 have only a dual-rope connection, which is shared between the two PCIe slots, so the maximum total bandwidth over these two slots together is 1.0 GB/s.
So if your dual-port card is in one of the slower PCIe slots, and the other slow PCIe slot is empty, it would get a maximum bandwidth of 1.0 GB/s, which is pretty close to what you're experiencing. If this is your problem, moving the card(s) to the faster slot(s) should provide a significant improvement.
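Under those rx3600 numbers the arithmetic works out as follows. This is a rough budget only: the 0.5 GB/s per-rope figure is from the description above, and ~800 MB/s of usable payload per 8Gb FC port is an approximation.

```shell
#!/bin/sh
# Back-of-the-envelope PCIe bandwidth budget for the rx3600 example:
# each rope delivers 0.5 GB/s; slots 5/6 have a dedicated quad-rope
# link, while slots 3/4 share a single dual-rope link.
awk 'BEGIN {
    rope      = 0.5          # GB/s per rope
    slow_slot = 2 * rope     # dual-rope, shared: 1.0 GB/s total
    fast_slot = 4 * rope     # quad-rope, dedicated: 2.0 GB/s
    fc_port   = 0.8          # ~800 MB/s usable per 8Gb FC port
    printf "dual-port card demand: %.1f GB/s\n", 2 * fc_port
    printf "slow slot ceiling:     %.1f GB/s\n", slow_slot
    printf "fast slot ceiling:     %.1f GB/s\n", fast_slot
}'
```

A dual-port 8Gb card needs roughly 1.6 GB/s, so it fits in a quad-rope slot but saturates a shared dual-rope slot at about 1.0 GB/s, matching the ~800-1000 MB/s observed.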
In the rx3600 server, there is the "PCIe MPS optimization" (MPS = Maximum Payload Size) setting available in the EFI boot prompt.
In the EFI boot prompt, run "info io" or "ioconfig" without parameters to see the current MPS optimization status. The default value is "disabled", but if your server firmware is up to date, you can enable MPS optimization to improve PCIe performance. This would not be as significant as having the cards in optimal slots, but might be worth checking if you need maximum performance.
To enable MPS optimization in rx3600, interrupt the boot sequence to get to the EFI boot prompt, and type:
ioconfig mps_optimize on
A reboot is required to have the new setting take effect.
If your server model is different, find and study the "I/O Subsystem Block Diagram" in the documentation of your specific server model.