08-02-2005 12:16 PM
strange VA7110 performance results
I have several servers dually attached to a VA7110.
There are 2 redundant fabrics: HBA0 in every server is attached to the switch (lillyBr1) to which VA controller 2 ("C2") is attached, and all HBA1s are attached to the switch (lillyBr2) to which VA C1 is attached.
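Roughly, the layout (switch names taken from the outputs further below) is:
servers (Lilly1, Lilly2, ...)
  HBA0 ----> switch lillyBr1 ----> VA7110 controller C2
  HBA1 ----> switch lillyBr2 ----> VA7110 controller C1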
I did some performance tests and got results opposite to what I would expect.
On each of 2 different servers, I simultaneously started a 'dd' input stream, each reading from a different LUN in the array (using raw physical device paths).
I did one test with all I/O via HBA0 (/dev/td0) and a second test with all I/O via HBA1.
results:
via HBA0 --> VA C2 :
.... lillyBr1:admin> portperfshow 3
.... 192m 96m 96m 0 0 0 0 0 392m
(about the average for a long trial)
via HBA1 --> VA C1 :
lillyBr2:admin> portperfshow 3
.... 157m 75m 81m 0 0 0 0 0 315m
(again, about the average for a long trial)
I got the same results on 2 other servers, as well.
Confusion:
1) I had been told in the past, and the manual seems to imply, that the performance hit for not going through the "performance path" is not very large. Here we see an 18% hit !!! (quick arithmetic on the quoted totals below)
2) The manual says that C1 is the owning controller in a VA7110 (and that makes sense). I would therefore expect throughput to be higher going through HBA1, which goes to VA C1. But we see the opposite !!
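For what it's worth, a quick check on the two aggregate totals quoted above puts the gap at roughly 20% (the 18% figure presumably reflects the longer trial averages):
$ echo "scale=3; (392 - 315) / 392 * 100" | bc
19.600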
Any thoughts??
bv
=======================
the following is "proof" that HBA0 goes to VA C2:
=======================
Lilly1 ## fcmsutil /dev/td0
... N_Port Port World Wide Name = 0x50060b00002525aa
... Hardware Path is = 0/2/1/0
Lilly1 ## armdsp -a lillyVA
... Controller At M/C2:
.... Front Port At M/C2.H1:
..... Port WWN: 50060b0000233b3c
lillyBr1:admin> switchshow
... port 0: id N2 Online F-Port 50:06:0b:00:00:23:3b:3c
... port 2: id N2 Online F-Port 50:06:0b:00:00:25:25:aa
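As an aside, a quick way to dump the WWN and hardware path of every HBA in one go is something like this (an untested sketch; it assumes both HBAs show up as td instances 0 and 1, as /dev/td0 does above):
# print port WWNs and hardware paths for both FC HBAs
for i in 0 1
do
    echo "== /dev/td$i =="
    fcmsutil /dev/td$i | egrep 'World Wide Name|Hardware Path'
done
Matching those WWNs against switchshow on each switch confirms which fabric each HBA lands on.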
Here's the test:
First, HBA0:
Lilly1 ## get_LUN_dev 103 lillyVA
... lillyVA0 26 0/2/1/0.11.0.0.0.12.7 sdisk CLAIMED DEVICE HP A6189B
..... /dev/dsk/c21t12d7 /dev/rdsk/c21t12d7
Lilly2 ## get_LUN_dev 104 lillyVA
... lillyVA0 36 0/2/1/0.11.0.0.0.13.0 sdisk CLAIMED DEVICE HP A6189B
..... /dev/dsk/c17t13d0 /dev/rdsk/c17t13d0
Lilly1 ## dd if=/dev/rdsk/c21t12d7 of=/dev/null bs=4096k
Lilly2 ## dd if=/dev/rdsk/c17t13d0 of=/dev/null bs=4096k
Pine1 ## telnet lillybr1
lillyBr1:admin> portperfshow 3
... 185m 88m 97m 0 0 0 0 0 370m
... 196m 98m 98m 0 0 0 0 0 392m
... 191m 96m 95m 0 0 0 0 0 383m
now, HBA1:
Lilly1 ## get_LUN_dev 103 lillyVA
... lillyVA1 25 0/5/2/0.12.0.0.0.12.7 sdisk CLAIMED DEVICE HP A6189B
..... /dev/dsk/c24t12d7 /dev/rdsk/c24t12d7
Lilly2 ## get_LUN_dev 104 lillyVA
... lillyVA1 35 0/5/2/0.12.0.0.0.13.0 sdisk CLAIMED DEVICE HP A6189B
..... /dev/dsk/c19t13d0 /dev/rdsk/c19t13d0
Lilly1 ## dd if=/dev/rdsk/c24t12d7 of=/dev/null bs=4096k
Lilly2 ## dd if=/dev/rdsk/c19t13d0 of=/dev/null bs=4096k
Pine1 ## telnet lillybr2
lillyBr2:admin> portperfshow 3
... 157m 75m 81m 0 0 0 0 0 315m
... 157m 77m 79m 0 0 0 0 0 315m
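For a host-side number to compare against the switch counters, timing a fixed-size read gives MB/s directly. A sketch (the count of 2500 is just an example; 2500 x 4 MB = 10,000 MB, so MB/s is roughly 10000 divided by the 'real' seconds timex reports):
timex dd if=/dev/rdsk/c21t12d7 of=/dev/null bs=4096k count=2500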
"The lyf so short, the craft so long to lerne." - Chaucer
1 REPLY
08-02-2005 02:33 PM
Re: strange VA7110 performance results
The controllers probably have a cheesy link on their backplane, or the processors are too slow at redirecting the traffic on top of the RAID calculations, instead of the design being reworked to keep up with the times.
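One thing worth checking is which redundancy group, and therefore which controller, actually owns the test LUNs. Assuming armdsp -a reports a Redundancy Group per LUN on the VA7110 as it does on other VA-series arrays, something like this would show it:
armdsp -a lillyVA | grep -i "redundancy group"
If the test LUNs turn out to live in the group owned by C2, the numbers above would make sense.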