07-23-2009 03:08 AM
MSA 2324fc G2 DC poor performance?
First of all: this is my first SAN experience and I'm not yet familiar with all the terminology, but I'm trying hard to learn as much as possible.
I have the following setup:
One MSA 2324fc G2 (dual controller) connected to two servers:
DB1: Win 2003 SP2 Enterprise Edition x86 on a DL580 G3, through an HP FC2243 4Gb PCI-X 2.0 DC HBA
DB2: Win 2003 SP2 Enterprise Edition x64 on a DL380 G5, through an HP FC2242SR PCIe DC HBA (A8003A)
like this:
A1 -> port 0 DB1
A2 -> port 0 DB2
B1 -> port 1 DB1
B2 -> port 1 DB2
I've set up a RAID 10 array of 14 disks, which is assigned to controller A and will be accessible from DB2. Chunk size is 64k and NTFS cluster size is 64k. The partition is aligned with a 128k offset. Queue depth is 32 (default). The mapping is in the attachment.
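As a quick sanity check on the alignment figures above: a partition is aligned to the array when its offset is an exact multiple of the chunk size. A minimal sketch (the 128 KiB offset and 64 KiB chunk are the values from this setup; the 31.5 KiB counterexample is the old Windows 2003 default offset, shown only for illustration):

```python
# Check that a partition offset lines up with the array chunk size.
KIB = 1024

def is_aligned(offset_bytes: int, chunk_bytes: int) -> bool:
    """An offset is aligned when it is an exact multiple of the chunk size."""
    return offset_bytes % chunk_bytes == 0

print(is_aligned(128 * KIB, 64 * KIB))   # this setup: 128 KiB offset, 64 KiB chunk -> True
print(is_aligned(32256, 64 * KIB))       # old 31.5 KiB default offset -> False (misaligned)
```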
My problem is the poor results from SQLIO random reads and writes. For example:
sqlio -kW -s120 -frandom -o32 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file e:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads writing for 120 secs to file e:\sqlio_test.dat
using 8KB random IOs
enabling multiple I/Os per thread with 32 outstanding
using specified size: 100 MB for file: e:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 10798.03
MBs/sec: 84.35
latency metrics:
Min_Latency(ms): 1
Avg_Latency(ms): 11
Max_Latency(ms): 102
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 0 0 0 0 0 0 0 22 53 19 3 1 0 0 0 0 0 0 0 0 0 1
-----------------------------------------------------------------------------------
sqlio -kR -s120 -frandom -o32 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file e:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads reading for 120 secs from file e:\sqlio_test.dat
using 8KB random IOs
enabling multiple I/Os per thread with 32 outstanding
using specified size: 100 MB for file: e:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 37452.77
MBs/sec: 292.59
latency metrics:
Min_Latency(ms): 1
Avg_Latency(ms): 3
Max_Latency(ms): 21
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 99 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
---------------------------------------------------------------------------------
and the same tests on the same server against a raid10 array of 6 internal drives:
----------------------------------------------------------------------------------
sqlio -kW -s120 -frandom -o32 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file d:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads writing for 120 secs to file d:\sqlio_test.dat
using 8KB random IOs
enabling multiple I/Os per thread with 32 outstanding
size of file d:\sqlio_test.dat needs to be: 104857600 bytes
current file size: 0 bytes
need to expand by: 104857600 bytes
expanding d:\sqlio_test.dat ... done.
using specified size: 100 MB for file: d:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 32788.44
MBs/sec: 256.15
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 3
Max_Latency(ms): 346
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 83 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
---------------------------------------------------------------------------------
sqlio -kR -s120 -frandom -o32 -b8 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file d:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads reading for 120 secs from file d:\sqlio_test.dat
using 8KB random IOs
enabling multiple I/Os per thread with 32 outstanding
using specified size: 100 MB for file: d:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 46077.23
MBs/sec: 359.97
latency metrics:
Min_Latency(ms): 1
Avg_Latency(ms): 2
Max_Latency(ms): 222
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 99 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
---------------------------------------------------------------------------------
As you can see, random write performance of the MSA is three times worse than that of the 6-drive RAID 10 array on a Smart Array P400 controller.
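For reference, the IOs/sec and MBs/sec figures SQLIO prints are internally consistent (MBs/sec = IOs/sec × block size), and the roughly 3x gap can be recomputed from the raw numbers. A small check using the figures from the runs above:

```python
# SQLIO reports both IOs/sec and MBs/sec; for a fixed block size the two
# must agree: MBs/sec = IOs/sec * block_KB / 1024.
def mbs_from_iops(iops: float, block_kb: int) -> float:
    return iops * block_kb / 1024

print(round(mbs_from_iops(10798.03, 8), 2))   # MSA 8 KB random write -> ~84.36 MB/s
print(round(mbs_from_iops(32788.44, 8), 2))   # internal 8 KB random write -> ~256.16 MB/s
print(round(32788.44 / 10798.03, 1))          # write IOPS ratio -> ~3.0x
```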
Do these numbers seem OK on either of the arrays?
What should I change to improve performance?
I'm open to any ideas and I'll test any suggestions. Also, I'll provide any info I may have missed in this post.
Thank you very much,
Alex
07-24-2009 01:10 AM
Re: MSA 2324fc G2 DC poor performance?
sqlio -kR -s60 -fsequential -o8 -b256 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file e:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads reading for 60 secs from file e:\sqlio_test.dat
using 256KB sequential IOs
enabling multiple I/Os per thread with 8 outstanding
using specified size: 3000 MB for file: e:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2815.08
MBs/sec: 703.77
latency metrics:
Min_Latency(ms): 2
Avg_Latency(ms): 10
Max_Latency(ms): 46
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 0 0 0 1 5 15 17 13 14 11 9 6 3 2 1 1 1 0 0 0 0 0
sqlio -kW -s60 -fsequential -o8 -b256 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, -1961967296 counts per second
parameter file used: param.txt
file e:\sqlio_test.dat with 4 threads (0-3) using mask 0x0 (0)
4 threads writing for 60 secs to file e:\sqlio_test.dat
using 256KB sequential IOs
enabling multiple I/Os per thread with 8 outstanding
using specified size: 3000 MB for file: e:\sqlio_test.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 1153.05
MBs/sec: 288.26
latency metrics:
Min_Latency(ms): 1
Avg_Latency(ms): 27
Max_Latency(ms): 72
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 6 9 5 77
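As a side note, the 256 KB sequential read result is close to the ceiling of the host's FC connectivity. A rough check, assuming I/O is spread across both 4 Gb FC paths and roughly 400 MB/s of usable payload per link after 8b/10b encoding (both are assumptions, not measured values):

```python
# Rough link-utilization estimate for the 256 KB sequential read run.
fc_payload_mbs = 400          # approx. usable MB/s per 4 Gb FC link (assumption)
paths = 2                     # two active paths per host in this setup
measured = 703.77             # MBs/sec from the sequential read run above

utilization = measured / (fc_payload_mbs * paths)
print(f"{utilization:.0%}")   # roughly 88% of the two-link ceiling
```

If that estimate is in the right ballpark, the sequential read number is limited more by the links than by the array.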
Do these look normal? If yes, how can I optimize for random reads and writes? I'll use the array for an OLTP SQL 2005 server (for Microsoft Dynamics AX 4).
Alex
07-26-2009 11:21 PM
Re: MSA 2324fc G2 DC poor performance?
Best results:
Queue depth:64
Queue target: 0 (per LUN)
random reads:
Threads:32; DurationSeconds:120;IOsOutstanding:128
IOs_Sec MBs_Sec LatencyMS_Avg
4031 31 1011
random writes:
Threads:2; DurationSeconds:120;IOsOutstanding:1
IOs_Sec MBs_Sec LatencyMS_Avg
2631 21 0
sequential reads:
Threads:4; DurationSeconds:120;IOsOutstanding:8
IOs_Sec MBs_Sec LatencyMS_Avg
4659 36 6
sequential writes:
Threads:64; DurationSeconds:120;IOsOutstanding:128
IOs_Sec MBs_Sec LatencyMS_Avg
9053 71 1011
I don't know why sequential writes are better than sequential reads; I'll redo the tests. Comparing these results with those of the internal drives (using the same file size and parameters for SQLIO), there is a gain of about 50%. Again, I'm a little confused. Maybe someone has an idea about the actual performance of the MSA2324fc G2 and can tell me whether the performance I'm seeing is poor or not.
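One note on the ~1011 ms average latencies above: with that many outstanding IOs they are expected rather than a fault, since latency, IOPS, and in-flight IO count are tied together by Little's law (in-flight = IOPS × latency). A quick check against the figures in this post:

```python
# Little's law: avg latency = in-flight IOs / IOPS.
def avg_latency_ms(threads: int, outstanding: int, iops: float) -> float:
    return threads * outstanding / iops * 1000

# random reads: 32 threads * 128 outstanding at 4031 IOs/sec
print(round(avg_latency_ms(32, 128, 4031)))   # ~1016 ms, matching the ~1011 ms reported
# sequential writes: 64 threads * 128 outstanding at 9053 IOs/sec
print(round(avg_latency_ms(64, 128, 9053)))   # ~905 ms, same ballpark as the reported figure
```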
Alex
04-07-2010 10:32 AM
Re: MSA 2324fc G2 DC poor performance?
Did you resolve this poor performance?
I have the same problem. My MSA 2324fc firmware is M110R28-02.
04-07-2010 12:27 PM
Re: MSA 2324fc G2 DC poor performance?
If I remember right, I concluded it was a test file size problem (the size of the test file greatly influenced the results). I'm on M110R21 firmware. If you like, we can run the same tests and compare results. I'll also update the firmware. The only problem is that my box is live in production, so I can only run tests during the night, and I can't change the arrays (I can test on a 14-disk RAID 10 array). In a week or two I'll have another box available (I'm waiting for the shipment) and could do additional testing, if you're willing to repeat these tests on your MSA so we can compare results.
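To illustrate the test-file-size point: a test file that fits entirely in controller cache benchmarks the cache, not the disks. A tiny sketch, assuming on the order of 1 GB of cache per controller (an assumption; check your model's spec sheet for the real figure):

```python
# Small test files can be served mostly from controller cache,
# inflating the measured IOPS well beyond what the spindles can do.
assumed_cache_mb = 1024  # assumed per-controller cache (check your spec sheet)

for test_file_mb in (100, 3000, 20000):
    fits = test_file_mb <= assumed_cache_mb
    print(test_file_mb, "MB:", "fits in cache" if fits else "exceeds cache")
```

This is why the 100 MB runs earlier in the thread may look much faster than runs against a multi-GB file.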
Have a nice day,
Alex
04-08-2010 02:16 PM
Re: MSA 2324fc G2 DC poor performance?
Another member of the ITRC forum helped me see the light.
I solved my problem.
The problem I had was the device driver of the HBA. I installed version 9.1.8.19; you will find it here:
http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareIndex.jsp?lang=en&cc=us&prodNameId=3662827&prodTypeId=12169&prodSeriesId=3662826&swLang=13&taskId=135&swEnvOID=4064
Props to MAPI!
04-08-2010 11:19 PM
Re: MSA 2324fc G2 DC poor performance?
I have another HBA: HP StorageWorks FC2242SR 4Gb PCIe DC Host Bus Adapter. I'll try to update drivers and redo the tests.
Thanks!
Alex
04-09-2010 12:45 AM
Re: MSA 2324fc G2 DC poor performance?
http://forums13.itrc.hp.com/service/forums/questionanswer.do?threadId=1387624
I hope you resolve your problem.
Thanks.
At the moment I use firmware 110R28; this firmware fixes a lot of problems.
Take a look: ftp://ftp.hp.com/pub/softlib2/software1/pubsw-linux/p1035565188/v60060/508849-006_msa-raid-relnotes.htm