StoreVirtual Storage
ykretov
Occasional Visitor

Which RAID is best for SSD array (10x 512GB drives) in 3+ node VSA cluster?

Hello everybody,

 

I am thinking about building a 3+ node HP VSA cluster using Network RAID-10+1 protection (so two VSA nodes could be down at the same time). To minimize the waste of SSD capacity on each individual VSA node, I'd rather go with simple RAID-0.

 

That brings me to a question - please share your experience with HP VSA (we have not got it yet):

- does VSA even support RAID-0 across its 5 virtual drives, or are the choices limited to RAID-5, -6, or -10 (as they are for physical drives on HP LeftHand)?

- is it possible to make one huge RAID-0 on the h/w controller and pass that 5TB RAID-0 to VSA as a single virtual drive?

The alternative is to make 5 RAID-0 pairs of 512GB drives, so the h/w controller presents the hypervisor with 5 virtual drives of 1TB each, which VSA could (hopefully) combine into a RAID-0.

 

If everything works correctly, on a 3-node VSA cluster with 10 x 512GB SSDs per node we would have 5TB of RAID-0 per node, and the total cluster capacity would also be 5TB due to Network RAID-10+1 protection.
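For anyone checking the math, here is a quick capacity sketch (my own illustration in Python, assuming Network RAID-10 keeps 2 copies and Network RAID-10+1 keeps 3 copies of every block):

# Usable capacity = raw capacity / number of copies kept by the network RAID level
COPIES = {"Network RAID-10": 2, "Network RAID-10+1": 3}

def usable_tb(nodes, tb_per_node, level):
    return nodes * tb_per_node / COPIES[level]

print(usable_tb(3, 5.0, "Network RAID-10+1"))  # 3 nodes x 5TB RAID-0 each -> 5.0 TB usable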

 

Which is OK for starters; adding a similar fourth VSA node would add 5TB of capacity to the RAID-10+1 array while still allowing two nodes to be down at the same time.

 

==

What do you think: a) is it even possible to make such a config; b) would it be better than traditional RAID-5 on each VSA node with a single Network RAID-10 across 3 nodes?  My concern is that SSDs are not well suited to RAID-5 (excessive back-end IOPS due to parity updates).
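To put a rough number on that parity concern, here is a back-of-envelope sketch using the classic write-penalty rule of thumb (not VSA measurements): one small random write costs 1 back-end I/O on RAID-0, 2 on RAID-10, and 4 on RAID-5 (read old data, read old parity, write new data, write new parity).

# Back-end write I/Os generated per host write, classic rule-of-thumb penalties
PENALTY = {"RAID-0": 1, "RAID-10": 2, "RAID-5": 4}

def backend_write_iops(host_write_iops, level):
    return host_write_iops * PENALTY[level]

for level in ("RAID-0", "RAID-10", "RAID-5"):
    print(level, backend_write_iops(10000, level), "back-end writes/s for 10k host writes/s")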

 

Also, Network RAID-10+1 (protection against two failures) is more scalable - each added node contributes 100% of a single node's capacity, configured as best-performing RAID-0...

 

Thanks!

 

3 REPLIES
ykretov
Occasional Visitor

Re: Which RAID is best for SSD array (10x 512GB drives) in 3+ node VSA cluster?

UPDATE:

 

I just ran a first test: I built a single-node HP VSA cluster with 8 SSDs of 256GB each.

 

First, I made 4 RAID-0 pairs on the h/w controller, so ESXi 4.1 saw 4 drives of 512GB each, which I passed to the VSA guest VM as virtual disk files on standard ESXi local VMFS datastores (one file per datastore).

 

The next twist: I created a virtual switch on the ESXi box, enabled MTU 9000 jumbo frames, and added a couple of iSCSI VMkernel interfaces (with IPs) as well as guest VM port groups on the same iSCSI subnets.

 

Then I exported a VSA volume back to the same ESXi host where the VSA was running.

 

Then I installed two more guest VMs, CentOS 6.4 and Win2008R2, on the mounted iSCSI storage from the VSA VM.

In other words, the guest VMs were using the SSD array of the physical box, exposed via the iSCSI protocol by another guest VM.

 

The good thing about the ESX hypervisor was that the VMXNET3 adapter in the VSA ran in 10Gig mode along with jumbo frames.

The communication between the VSA and the ESX host (which mounts the iSCSI volume) happened inside the ESX virtual switch, where both parties emulated 10Gig adapters.
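For what it's worth, the per-byte gain from jumbo frames is modest (the bigger win is fewer frames and interrupts per byte moved); a rough payload-efficiency estimate, assuming standard Ethernet/IPv4/TCP header sizes:

# IPv4 (20B) + TCP (20B) headers fit inside the MTU; Ethernet header + FCS (18B) sit outside it
for mtu in (1500, 9000):
    efficiency = (mtu - 40) / (mtu + 18)
    print(mtu, "MTU ->", round(100 * efficiency, 1), "% payload")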

 

So far I have run partial performance tests, and they look very promising: performance in the guest VM was only about two times worse compared to the disk performance test I ran directly on the physical box, with the SSD array controlled directly by the OS.
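For a crude apples-to-apples comparison, a sequential-write check like this can be run both on the bare-metal box and inside a guest VM (a minimal Python sketch of the idea, not the actual test program I used; the mount point is hypothetical):

import os, time

PATH = "/mnt/testvol/throughput.bin"  # hypothetical mount point of the volume under test
BLOCK = 1024 * 1024                   # 1 MiB per write
COUNT = 1024                          # 1 GiB total

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
start = time.time()
for _ in range(COUNT):
    os.write(fd, buf)
os.fsync(fd)                          # flush so we time the array, not the page cache
os.close(fd)
elapsed = time.time() - start
print("%.1f MB/s" % (BLOCK * COUNT / elapsed / 1e6))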

 

I'll post another update after I run those tests across multiple platforms, to get a better understanding of what the test program actually measures.

 

==

Another detail about VSA - there was no option to make a local RAID-5 on a single VSA node. VSA sees the 4 virtual drives as "virtual RAIDs" coming from the underlying OS (ESX). The only option is to combine those 4 drives into a single volume (RAID-0), so the overall setup was: multiple RAID-0 sets made on the SAS controller and a final RAID-0 made in VSA.

 

That leaves me only one option for a redundant setup - enable Network RAID-10+1 (mirror to two other nodes) in a multi-node VSA cluster. That is probably the best option in terms of performance; a 3-node cluster is the minimum, but 4 or 5 nodes are preferred to avoid wasting too much SSD capacity... The fourth and fifth nodes added would contribute 100% of the capacity of their local RAID-0.

 

RonsDavis
Frequent Advisor

Re: Which RAID is best for SSD array (10x 512GB drives) in 3+ node VSA cluster?

Adding a fourth node won't add 5TB of usable space. With Network RAID-10+1 you need two extra copies of all data you put on the system, so for 10TB of usable space you need 30TB of raw space.

You are going to add the fourth node and end up with 6.67TB of usable space.

You can change the network RAID level on a per-volume basis, though, so you can do some volumes at RAID-10.
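To illustrate with this cluster's numbers (a quick sketch, assuming 3 copies for Network RAID-10+1 and 2 for Network RAID-10):

raw_tb = 4 * 5.0   # four nodes with 5TB of local RAID-0 each
print(raw_tb / 3)  # all volumes at Network RAID-10+1 -> ~6.67 TB usable
print(raw_tb / 2)  # all volumes at Network RAID-10   -> 10.0 TB usable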

When you say only twice as slow, can you be more specific? What metrics are you measuring?

 

oikjn
Honored Contributor

Re: Which RAID is best for SSD array (10x 512GB drives) in 3+ node VSA cluster?

Thanks for the update, and keep them coming :)

 

One limitation w/ the VSAs is that they only do RAID-0 on their drives, so any redundancy has to be done in hardware before you pass the disks to the VSA.  I suspect the reason for this is that any other RAID level would have to be done as software RAID, and given the restriction on vCPUs, I would imagine software RAID performance would be terrible!