Poor performance
11-20-2013 12:50 PM
Hi,
We have extremely poor performance on our 4-node 3PAR 7400. It is not in production yet, so I am pretty much alone on the system while testing.
I'm testing from a VM on vSphere ESXi 5.1 installed on an HP BL460c Gen8 blade server. I've zoned 4 paths from the host to one node pair and set multipathing to Round Robin in ESXi.
Write performance is extremely poor and unreliable. It "freezes up" under sustained writes, and ESXi records latency in seconds. Read performance is better, but not what I would expect. For example, my 48-disk HP EVA can manage around 80 MB/s at 8 KB reads, while the 96 FC-disk 3PAR only manages around 17 MB/s, and the EVA is at maximum load (the 3PAR is going to replace it).
I’ve attached ATTO benchmarks for both systems.
What could be the cause and where should I look?
/Søren Emig
Solved! Go to Solution.
11-20-2013 01:55 PM
Re: Poor performance
>What could be the cause and where should I look?
Have you used the statport and statpd commands to see if the array is busy?
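For reference, a minimal sampling run with those two InForm CLI commands might look like the sketch below (the -iter and -d options take an iteration count and a sample interval in seconds; port numbers and option availability can vary by 3PAR OS version, so check cli help statport on your array):

```
# Sample front-end port statistics: 5 iterations, 2 seconds apart
statport -iter 5 -d 2

# Sample physical-disk statistics the same way
statpd -iter 5 -d 2
```

If the ports and disks show low IOPS, short queue lengths, and sub-millisecond to single-digit service times while the host is reporting multi-second latency, the bottleneck is likely outside the array.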
11-20-2013 02:10 PM
Re: Poor performance
It doesn't look busy at all:
23:08:46 11/20/2013 r/w I/O per second KBytes per sec Svt ms IOSz KB
Port D/C Cur Avg Max Cur Avg Max Cur Avg Cur Avg Qlen
0:0:1 Data t 1 6 33 4 34 200 7.10 8.84 4.1 6.2 0
0:0:2 Data t 1 6 31 4 33 191 6.18 7.70 4.1 5.9 0
0:1:1 Data t 223 215 236 57319 55590 57470 0.92 0.93 256.9 258.6 0
0:1:2 Data t 217 214 224 56286 55556 57361 0.92 0.93 259.8 259.1 1
1:0:1 Data t 5 7 16 13 19 51 8.42 7.79 2.7 2.9 0
1:0:2 Data t 9 5 14 37 18 53 12.04 8.14 4.1 3.4 0
1:1:1 Data t 219 214 226 56712 55438 57352 0.92 0.92 259.4 259.0 0
1:1:2 Data t 219 214 229 57307 55586 57354 0.93 0.93 262.1 259.3 0
2:0:1 Data t 10 6 16 118 79 374 6.87 7.83 11.9 14.0 0
2:0:2 Data t 12 5 12 112 74 303 9.79 7.83 9.4 15.5 0
2:1:1 Data t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0
2:1:2 Data t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0
3:0:1 Data t 13 5 23 39 26 253 7.75 8.35 3.0 4.8 0
3:0:2 Data t 3 5 26 12 21 146 5.66 7.89 4.1 4.5 0
3:1:1 Data t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0
3:1:2 Data t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0
--------------------------------------------------------------------------------------
16 Data t 931 901 227963 222474 1.37 1.27 245.0 246.9 1
23:09:51 11/20/2013 r/w I/O per second KBytes per sec Svt ms IOSz KB Idle %
ID Port Cur Avg Max Cur Avg Max Cur Avg Cur Avg Qlen Cur Avg
0 1:0:1 t 0 1 3 0 5 12 0.00 8.14 0.0 3.6 0 100 99
1 0:0:1 t 0 1 2 0 3 7 0.00 8.50 0.0 3.4 0 100 99
2 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
3 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
4 1:0:1 t 0 2 4 0 4 14 0.00 8.20 0.0 2.2 0 100 99
5 0:0:1 t 1 1 3 78 18 78 7.12 8.38 53.2 14.8 0 99 99
6 1:0:1 t 0 0 3 0 1 5 0.00 7.95 0.0 1.7 0 100 100
7 0:0:1 t 0 0 1 0 1 4 0.00 6.75 0.0 4.1 0 100 100
8 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
9 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
10 1:0:1 t 0 1 3 0 4 12 0.00 8.54 0.0 4.0 0 100 99
11 0:0:1 t 0 1 3 0 9 56 0.00 7.32 0.0 8.3 0 100 99
12 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
13 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
14 1:0:1 t 2 2 3 8 7 9 9.20 8.36 4.1 3.0 0 99 99
15 0:0:1 t 1 1 3 72 16 72 4.34 8.69 73.7 19.2 0 100 99
16 1:0:1 t 0 0 1 0 1 4 0.00 7.19 0.0 3.6 0 100 100
17 0:0:1 t 0 1 2 0 2 8 0.00 7.42 0.0 3.2 0 100 100
18 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
19 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
20 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
21 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
22 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
23 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
24 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
25 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
26 1:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
27 0:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
28 1:0:2 t 0 2 3 0 5 12 0.00 6.71 0.0 3.1 0 100 99
29 0:0:2 t 0 1 2 0 9 48 0.00 7.23 0.0 7.9 0 100 99
30 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
31 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
32 1:0:2 t 2 2 5 8 6 17 8.23 9.24 4.1 3.3 0 99 99
33 0:0:2 t 0 1 2 6 3 8 6.10 7.79 12.3 4.1 0 100 100
34 1:0:2 t 0 0 2 0 1 5 0.00 8.91 0.0 2.3 0 100 100
35 0:0:2 t 0 0 1 0 0 1 0.00 9.40 0.0 1.0 0 100 100
36 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
37 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
38 1:0:2 t 0 1 3 0 2 11 0.00 10.97 0.0 2.7 0 100 100
39 0:0:2 t 0 1 4 0 3 12 0.00 8.82 0.0 2.4 0 100 99
40 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
41 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
42 1:0:2 t 0 2 4 0 4 13 0.00 7.24 0.0 2.7 0 100 99
43 0:0:2 t 0 0 1 0 1 4 0.00 8.03 0.0 3.6 0 100 100
44 1:0:2 t 0 0 2 0 1 5 0.00 8.59 0.0 2.6 0 100 100
45 0:0:2 t 0 1 2 0 2 4 0.00 5.39 0.0 2.5 0 100 100
46 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
47 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
48 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
49 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
50 1:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
51 0:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
52 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
53 2:0:1 t 0 0 1 0 1 4 0.00 8.68 0.0 2.6 0 100 100
54 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
55 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
56 3:0:1 t 0 0 2 0 1 8 0.00 7.51 0.0 3.1 0 100 100
57 2:0:1 t 2 1 2 13 16 64 9.30 7.63 6.4 20.2 1 99 99
58 3:0:1 t 0 1 3 0 2 12 0.00 8.42 0.0 3.4 0 100 100
59 2:0:1 t 0 1 2 0 2 13 0.00 6.58 0.0 3.6 0 100 100
60 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
61 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
62 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
63 2:0:1 t 1 1 2 4 11 70 4.89 6.72 4.1 17.7 0 100 100
64 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
65 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
66 3:0:1 t 0 0 1 0 1 4 0.00 3.82 0.0 4.1 0 100 100
67 2:0:1 t 0 0 2 0 1 8 0.00 9.59 0.0 3.1 0 100 100
68 3:0:1 t 0 0 2 0 1 8 0.00 9.50 0.0 4.1 0 100 100
69 2:0:1 t 1 0 2 1 0 2 6.02 6.56 1.0 0.9 0 99 100
70 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
71 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
72 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
73 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
74 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
75 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
76 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
77 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
78 3:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
79 2:0:1 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
80 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
81 2:0:2 t 3 1 3 17 28 96 4.41 5.95 5.6 19.7 0 99 99
82 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
83 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
84 3:0:2 t 0 0 1 0 1 4 0.00 7.50 0.0 2.6 0 100 100
85 2:0:2 t 0 0 2 2 1 8 5.19 8.73 4.1 4.1 0 100 100
86 3:0:2 t 0 0 1 0 1 4 0.00 6.91 0.0 2.3 0 100 100
87 2:0:2 t 2 1 2 57 10 57 9.56 7.46 28.9 17.7 0 98 100
88 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
89 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
90 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
91 2:0:2 t 0 0 1 0 1 4 0.00 4.93 0.0 4.1 0 100 100
92 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
93 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
94 3:0:2 t 0 0 1 0 1 4 0.00 6.30 0.0 4.1 0 100 100
95 2:0:2 t 0 1 3 0 25 101 0.00 5.08 0.0 44.4 0 100 100
96 3:0:2 t 0 0 2 0 1 4 0.00 12.11 0.0 1.7 0 100 100
97 2:0:2 t 1 0 1 56 8 56 4.46 4.46 56.8 56.8 0 100 100
98 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
99 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
100 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
101 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
102 3:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
103 2:0:2 t 0 0 0 0 0 0 0.00 0.00 0.0 0.0 0 100 100
---------------------------------------------------------------------------------------
104 t 17 32 321 220 6.96 7.84 18.7 6.9 1 100 100
11-20-2013 02:34 PM
Re: Poor performance
>It don't look busy at all
Then it is your host, networking, or switch.
11-21-2013 11:35 AM
Re: Poor performance
I've changed the ISLs between the C7000 (4 Gbit Brocade) and the main switches (8 Gbit Brocade) to 4 Gb fixed and cleared the port stats. No increase in errors, but performance was still poor.
I then decided to change the 3PAR ports (on the Brocade) to 4 Gb fixed just for the fun of it, and to my surprise this changed the game.
IOmeter for 32 KB writes, 0% random, is now:
IOPS: 10,600
MB per sec: 332
Average time ms: 0.936
Max time ms: 39.976
If I increase the block size to 5 MB, the transfer rate is about 1 GB per sec!
Maybe my understanding of how a mix of FC port speeds works is wrong? Our entire FC infrastructure was running at 4 Gb until we got our 3PAR, so I did not think twice about it.
We are about to install another C7000 with 8 Gbit FC, and it would be a pity to force it down to 4 Gb.
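For anyone repeating this test, fixing a port's speed on a Brocade switch is done per port from the Fabric OS CLI; the port number below is a placeholder and exact syntax can vary by FOS version:

```
portcfgshow 12        # confirm the port's current configuration
portcfgspeed 12 4     # lock port 12 at 4 Gb/s (0 restores auto-negotiation)
portstatsclear 12     # reset counters before re-running the benchmark
```

Clearing the counters first makes it easy to see whether any new errors accumulate during the test run.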
11-22-2013 11:19 PM
Re: Poor performance
I remember there is an additional setting (fill word) to be done on SAN switches to connect VC at 8G. I suggest you check the VC user guide.
These statistics show the storage is not under stress. I had a similar experience with BL685c G7 blade servers: under heavy I/O, the nearline LUNs from the 3PAR would freeze and get disconnected from the server. This was solved once we upgraded the 3PAR firmware to the latest 3.1.2. I have noticed that for older servers like the DL380 G6, rx2800, and rx8640, the same nearline LUNs work perfectly fine with both the old and the new firmware.
I have a feeling that the culprit is the Virtual Connect on the new blade enclosure. What about you?
- Did you try a performance test on a rack-mounted server?
- Check the output of porterrshow on your SAN switch.
- You can check portperfshow from the SAN switch as well.
- If nothing helps, as a best practice, I suggest upgrading the firmware on the 3PAR, SAN switches, blade enclosure, and blade servers.
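The two switch-side checks suggested above look roughly like this on a Brocade FOS CLI (the interval argument to portperfshow is in seconds; column sets vary by FOS version):

```
porterrshow           # per-port error counters (crc, enc_out, disc_c3, ...)
portperfshow 5        # per-port throughput, refreshed every 5 seconds
```

Rising crc or enc_out counters during a test run usually point at a link-level problem (cable, SFP, or speed negotiation) rather than the array.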
11-24-2013 10:56 AM
Re: Poor performance
Just to clarify, I don't have VC but ordinary 4 Gbit Brocade switches in the C7000 (we started out without the need for external switches).
I've read that best practice is to set the fill word to 3 on the 8 Gbit ports, and I did that. The fill word is 0 on all the 4 Gbit ports, but I also read somewhere that it's relevant only on 8 Gbit ports?
11-25-2013 01:47 PM
Re: Poor performance
Yes, you are correct; it is applicable only to 8G switches.
As you said, once you put the 3PAR port speed at 4G, it works perfectly.
Then I suggest dedicating two host ports to the blade enclosure and keeping them at 4G speed, while the other servers can be served through the 8G ports.
I have a 3PAR 10400, 8G SAN switches, a C7000, and a 4G SAN interconnect serving BL890c i4 blades, which come with 8G HBA cards.
Let me know whether sharing the configuration would help you.
11-29-2013 06:05 AM
Solution
It turns out that HP has their SFPs produced by at least two different companies (Avago and Finisar) and labels them with the same product number; they don't have the same finish on the casing, though.
The only way to spot the difference is to get the information from the CLI (supportshow).
(HP-A) SFPs require replacement (Avago)
(HP-F) are fine (Finisar)
I've created a case, and this is a known issue, but no information is public. They are going to fix it with a firmware update, but I got mine replaced.
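Besides digging through supportshow, the per-port sfpshow command on Brocade FOS also prints the transceiver's serial data, which may be a quicker way to identify the manufacturer (field names are illustrative and vary slightly by FOS version):

```
sfpshow 12             # prints serial data for the SFP in port 12,
                       # including a "Vendor Name:" line
                       # (AVAGO -> HP-A parts; FINISAR -> HP-F parts)
```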