MSA Storage
SOLVED
IT Department_8
New Member

MSA1000 active/active performance 367MB/s?

We currently have an MSA1000 with one controller in an Active/Passive configuration using firmware version 5.20. Using IOMeter we have been able to achieve 185MB/s throughput on VMware 3.5 using 6 VMs spread across two different VMware hosts. This is fairly close to the 2Gb/s theoretical maximum for the Fiber connection, so we are fairly happy given what the MSA1000 cost us.

However, I read in the MSA1000 Quickspec that the MSA1000 with two controllers in an Active/Active configuration can achieve up to 367MB/s! How is that possible? We are using a 2/8 fiber switch in the back of the MSA1000, with one port dedicated to the MSA1000 controller. How can the MSA1000 achieve over 200MB/s throughput over a 2Gb Fiber connection? Am I supposed to purchase another 2/8 Fiber switch to get over 200MB/s, or is the Quickspec only referring to some kind of theoretical internal throughput?
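
For my own sanity check, here is the rough arithmetic I'm working from (a back-of-the-envelope sketch assuming 2GFC signalling at 2.125Gbaud with 8b/10b encoding; the figures are approximations, not taken from the Quickspec):

# Rough per-link ceiling for a 2Gb Fibre Channel connection (approximate figures)
line_rate_bps = 2.125e9          # assumed 2GFC signalling rate, bits per second
payload_fraction = 8 / 10        # 8b/10b encoding overhead
payload_mb_per_s = line_rate_bps * payload_fraction / 8 / 1e6
print(f"~{payload_mb_per_s:.0f} MB/s payload per link")   # ~212 MB/s
# Frame and protocol overhead bring usable throughput down to roughly 200 MB/s,
# which is why our 185 MB/s looks close to saturating a single link.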

David Grant
TTr
Honored Contributor
Solution

Re: MSA1000 active/active performance 367MB/s?

> MSA1000 with two controllers in Active/Active
With a second I/O module you get twice the throughput, so to get that throughput out of the array a second 2/8 switch is needed.
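
A quick sketch of the arithmetic (assuming roughly 200MB/s of usable throughput per 2Gb port, which is an approximation rather than a spec figure):

# Aggregate ceiling with two active controller ports (approximate per-port figure)
per_port_mb_s = 200     # assumed usable throughput of one 2Gb FC port
ports = 2               # one port per controller in active/active
print(f"aggregate ceiling ~{per_port_mb_s * ports} MB/s")   # ~400 MB/s
# The quoted 367 MB/s fits under this ceiling only when I/O is actually spread
# across both controllers and both fabric paths.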

Refer to pages 26-28 in

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00800933/c00800933.pdf?jumpid=reg_R1002_USEN
Patrick Terlisten
Honored Contributor

Re: MSA1000 active/active performance 367MB/s?

Hello David,

If you use the A/A firmware, you can distribute the load between the two controllers. Each controller has its own 2Gb FC port.

Best regards,
Patrick
Eric K. Miller
Advisor

Re: MSA1000 active/active performance 367MB/s?

Hi David,

Out of curiosity, what RAID configuration (number of drives, number of enclosures, what type of enclosures, speed of drives, etc.) did you use?

Also, do you have a brief overview of what parameters you used in IOMeter (test type, number of simultaneous transactions, etc.)?

Thanks!

Eric
IT Department_8
New Member

Re: MSA1000 active/active performance 367MB/s?

Hi Eric,

The left side of the MSA1000 (bus 1 I suppose you would call it) contains 6 x 72GB 15K drives, and the right side of the MSA1000 (bus 2?) contains 7 x 36GB 15K drives. Both sets are in a RAID 5 configuration.

We tested using IOMeter on 6 Windows 2003 Server VMs and one Windows XP VM. 4 VMs were hosted on one DL380 G4 running VMware ESX 3.5 and 3 were on another DL380 G4 running ESX 3.5. The servers have HP-branded Emulex 2Gb Fiber cards (FCA2404) running the latest firmware, using the default driver and the default driver settings for VMware ESX 3.5. The MSA is running A/P firmware 5.20. The MSA1000 Fiber Switch 2/8 is running the latest firmware.

We used IOMeter 2006.07.27. The maximum disk size for the test file was 2048 sectors, which yields a 1MB test file under our configuration and ensures that the data will be served out of the MSA controller's cache, giving us the best chance of saturating the Fiber bus. I basically set up a test with a 128KB request size, 100% sequential, 0% random. I ramped up for 10 seconds and tested for 60 seconds. Everything else was pretty much left at the defaults. I attached the resulting CSV file for you, which contains all the relevant info. The last few tests are the ones to pay attention to.
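
For reference, the sizing arithmetic behind that setup (a quick sketch assuming 512-byte sectors; the request-rate figure is derived from our measured throughput, not something IOMeter reports directly):

# Sizing arithmetic for the IOMeter test above (assumes 512-byte sectors)
sectors = 2048
sector_bytes = 512
test_file_mb = sectors * sector_bytes / (1024 * 1024)
print(f"test file: {test_file_mb:.0f} MB")    # 1 MB, small enough to stay in controller cache

request_kb = 128
measured_mb_s = 185                           # what we saw against the MSA1000
print(f"~{measured_mb_s * 1024 / request_kb:.0f} requests/s at 128KB sequential")   # ~1480 IOs/s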

One of the last tests you'll see resulted in 206MB/sec. That was a test of 6 VMs using one DL380's internal SCSI bus (6 x 300GB 15K drives in a RAID 5 configuration). The internal SCSI bus got up to 206MB/sec vs. the MSA1000's 185MB/sec. Not bad for a cheap 2Gbit SAN. If I really wanted to compare apples to apples I would have spread the VMs out over two DL380s using each server's internal SCSI bus, and would have achieved about 400MB/sec compared to the MSA1000's 185MB/sec, but I digress...
IT Department_8
New Member

Re: MSA1000 active/active performance 367MB/s?

I might also note that I configured 1 LUN per RAID set (one LUN for the left side, one LUN for the right side), and that 3 VMs were using the first LUN and 3 VMs were using the second LUN.
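
To illustrate why that LUN split should matter once we move to the A/A firmware, here is a toy model (the ~200MB/s per-port ceiling is an assumption, and the per-LUN demand numbers are only illustrative):

# Toy model: aggregate throughput for our two LUNs under A/P vs A/A ownership
PORT_LIMIT_MB_S = 200                          # assumed usable throughput per controller port
lun_demand_mb_s = {"lun1": 185, "lun2": 185}   # illustrative offered load per LUN

# Active/Passive: every LUN is served through the single active controller's port
ap_total = min(sum(lun_demand_mb_s.values()), PORT_LIMIT_MB_S)

# Active/Active: each LUN is owned by a different controller, so each gets its own port
aa_total = sum(min(d, PORT_LIMIT_MB_S) for d in lun_demand_mb_s.values())

print(f"A/P ~{ap_total} MB/s, A/A ~{aa_total} MB/s")   # ~200 vs ~370 MB/s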