
MarkGerrard
Occasional Contributor

HP VSA / 4130

Hi,

 

We are in the process of consolidating seven of our older DL380 G5 servers (running a mix of XenServer and 2008 R2 Hyper-V) onto three DL380G8P servers, as the old kit is ancient and under-utilised due to old, now-redundant PCI DSS scoping. We are planning to run the new servers under a VMware Essentials Plus licence. Would HP VSA work better under Hyper-V than under this limited VMware licence? We already have two Windows Datacenter licences with Software Assurance, and although we are migrating things to Linux we will still need Windows VMs for a number of years, so it would be a case of licensing one extra Datacenter server instead of VMware; but then we would have the cost of System Center, as the environment is currently not managed effectively.

 

Storage on the existing servers is a mix of DAS 10K SAS drives and an EqualLogic PS6000E (16 x 1TB SATA). The DAS storage is tied to each machine, so those VMs cannot be moved between hosts. The new servers will each have 2 x 8-core Xeons, 112GB RAM and 16 x 450GB SAS drives (across two controllers), and we are planning to use HP VSA to turn these into a shared storage pool for VMware.

 

I am thinking of running each server with a 7-drive (+1 hot spare) RAID 5 on one controller and an 8-drive RAID 10 on the other controller, giving 2.7TB of RAID 5 and 1.8TB of RAID 10 per node. I am assuming I would license these with two 3 x 4TB VSA licences, i.e. licensing 6 VSA nodes across the 3 servers. Is this correct?
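For anyone checking the maths, here is a quick Python sketch of the per-node figures (decimal drive sizing assumed; real usable space will be a bit lower after formatting and VSA overhead):

```python
DRIVE_GB = 450  # 450GB SAS drives, 16 per server across two controllers

def raid5_usable_gb(drives, hot_spares=0):
    # RAID 5 loses one drive's worth of capacity to parity;
    # hot spares hold no data at all.
    return (drives - hot_spares - 1) * DRIVE_GB

def raid10_usable_gb(drives):
    # RAID 10 mirrors every drive, so half the raw capacity is usable.
    return drives // 2 * DRIVE_GB

print(raid5_usable_gb(8, hot_spares=1))  # 2700 GB -> 2.7TB RAID 5 per node
print(raid10_usable_gb(8))               # 1800 GB -> 1.8TB RAID 10 per node
```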

 

From my calculations, using Network RAID 10 this will give me two storage pools, approx 4TB (RAID 5) and 2.7TB (RAID 10), with the ability to sustain the failure of one node. Is this correct? I am doing it this way so I can allocate different VMs to a storage pool type based on their requirements, or is there a better way?
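The pool maths I am working from, sketched out (Network RAID 10 keeps two copies of every block on different nodes, so a pool's usable capacity is half the summed per-node capacity):

```python
def nr10_pool_tb(per_node_tb, nodes=3):
    # Network RAID 10: every block is mirrored onto a second node,
    # so usable pool capacity is half of the combined raw capacity.
    return per_node_tb * nodes / 2

raid5_pool = nr10_pool_tb(2.7)   # ~4.05 TB -> the "approx 4TB" pool
raid10_pool = nr10_pool_tb(1.8)  # 2.7 TB pool
print(raid5_pool, raid10_pool)
```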

 

We are planning on taking the EqualLogic PS6000E across with us when we finish the migration, and we were planning to invest in a refurbished PS4100 or PS6100 to pair with it. That is, until I came across the HP StoreVirtual 4130: the pricing I can get for two new 8TB SATA units works out similar to one refurbished EqualLogic unit, and new kit is obviously a big plus factor for scaling up in future (the new solution I am putting in is hopefully going to last 3-5 years).

 

Given that I will have our EqualLogic kit running in the new setup anyway, will the HP appliances add any performance or other benefits, apart from new vs refurb, that would justify going all-HP?

 

Thanks,

 

Mark

 

 

6 REPLIES
MarkGerrard
Occasional Contributor

Re: HP VSA / 4130

Just realised that instead of 2 x 3-node 4TB licences, if using these with the HP StoreVirtual 4130 I could license each VSA node with a 10TB licence and then use adaptive storage (RAID 10 fast / RAID 5 slow)... This would give the ability to have more than 3 nodes in the cluster. Would the 4130 be of use as a target to replicate snapshots to?
a_o
Valued Contributor

Re: HP VSA / 4130

The StoreVirtual 4130 comes with 4 x 600GB SAS drives. With RAID 5, that's less than 1.8TB. Allowing for the StoreVirtual OS and metadata, I think you would be at about 1.4TB usable. To me, that's an entry-level unit. I'm not even sure that it comes with a 10TB license; it might only come with a 4TB license.
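Roughly, as arithmetic (the ~20% overhead for the StoreVirtual OS and metadata is my own assumption, chosen to match the ~1.4TB usable estimate; the real overhead varies):

```python
drives, drive_gb = 4, 600
raid5_gb = (drives - 1) * drive_gb  # 1800 GB after one drive of RAID 5 parity
usable_gb = raid5_gb * 0.8          # assumed ~20% OS/metadata overhead
print(usable_gb)                    # ~1440 GB, i.e. about 1.4TB usable
```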

I would go the VSA route, especially because you've got all of the EQ stuff already. You would still have your EQ storage direct-attached, but the LUNs on it would be available via iSCSI to all of the physical and virtual OSes (POS and VOS) you'll be running.

Are the DL380G8P servers multi-processor? If so, you should be able to run two VSAs per DL380, in addition to your other VOSs. The 2 x 3-node 4TB VSA solution would be very economical for the amount of storage you would be getting.

Lastly, WRT the licensing of ESXi Essentials vs. Windows DC, it would seem to me that the Windows solution might be better for you, because you can apply the 2 DC licenses to the DL380s and host an unlimited number of Windows VOS instances, plus any number of Linux VOSs, on those servers. If you go with ESXi, IIRC, the licenses could only be applied to two instances of Windows VOSs.

a_o
Valued Contributor

Re: HP VSA / 4130

I see that the DL380s are multi-processor. I initially read it as them being multi-core.

Another point: IMO you should bump up the memory on your servers. 10-12GB would have to be dedicated to the VSAs.

Also, with 3 nodes, you would not be optimized capacity-wise with NR10. NR5 might be better, depending on your needs.
Say you have 3 nodes, each with 2TB usable.
NR5 would give you 2 x 2TB LUNs, whereas NR10 would give you 3 x 1TB LUNs, i.e. 4TB total vs. 3TB total capacity.

oikjn
Honored Contributor

Re: HP VSA / 4130

a_o, I can't say I follow the logic of your NR5 suggestion. Given the write penalty on NR5 and the minimal space savings, I don't get why you would want to run NR5 over NR10 in any situation other than pure archive storage. As for the OS, Windows has a definite advantage over ESX in that it uses the HP DSM, which is more efficient than the MPIO solution for ESX... in the end it shouldn't matter much here, but if you already have the licenses for MS, I see no reason to spend money on ESX when it will give you no advantage (unless it helps you with management because the rest of your environment is ESX).
a_o
Valued Contributor

Re: HP VSA / 4130

Ahhh... I forgot about the lack of an LH DSM implementation on VMware. Yes, the lack of the HP DSM would make me give the edge to Windows.

WRT NR5 vs NR10, I actually prefer and only use NR10 for my LUNs. Yes, there is a performance penalty with NR5.
But by my calculations, when you have an odd number of NSMs, by creating NR5-only LUNs you can get more storage capacity at the cost of performance.
Am I wrong in this belief?

I remember doing this calculation as I researched Lefthand years ago.


Again, given 3 nodes, each with 2TB usable:
NR5 would allow you to create 2 x 2TB LUNs, i.e. each 2TB LUN being striped across 3 nodes (consuming 1TB on each node).
OTOH, using NR10 would give you 3 x 1TB LUNs, i.e. each 1TB LUN being mirrored across 2 nodes.
So, 4TB total vs. 3TB total capacity.
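My comparison above, as arithmetic (NR5 stripes with one node's worth of parity across the cluster; NR10 mirrors every block onto two nodes):

```python
nodes, per_node_tb = 3, 2.0
raw_tb = nodes * per_node_tb               # 6.0 TB across the cluster

nr5_total = raw_tb * (nodes - 1) / nodes   # 4.0 TB, e.g. 2 x 2TB LUNs
nr10_total = raw_tb / 2                    # 3.0 TB, e.g. 3 x 1TB LUNs
print(nr5_total, nr10_total)
```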

oikjn
Honored Contributor

Re: HP VSA / 4130

The number of nodes doesn't matter for NR10. It's inefficient either way ;) lol.

 

NR10 will just mirror the data onto any two nodes, so with three nodes your data will reside on two of the three. It's easy to figure out your net capacity, as it's simply 1/2 your raw capacity... if you have three nodes that each have 1TB of usable space, you will end up with (1TB * 0.5) * 3 = 1.5TB of usable NR10 space with three nodes. And for each additional 1TB node, you would get an additional 0.5TB of usable NR10 space.

 

I definitely would stick with NR10, as the cost/benefit for NR5 is just not worth it. But YMMV, and you can always test and migrate between NR5 and NR10 if you change your mind (assuming you have the raw SAN capacity to handle NR10).