StoreVirtual Storage

VSA Questions - Is It Suitable?

Paul Hutchings
Super Advisor

VSA Questions - Is It Suitable?

I'm hoping for some feedback and technical info on the VSA product please - thanks in advance for any replies.

Right now we have a primary SAN/ESX infrastructure (an EMC AX4 FC, though that's not especially relevant) which is due to be replaced in a year or so - any money spent on expanding it now is dead money IMO.

We have a project coming up imminently that requires up to 5tb of storage capacity. Because our primary SAN is due to be replaced, I'm looking at options that let us put in that 5tb of capacity now, either stand-alone just for this project, or as something that could in time scale and form part of whatever solution replaces our primary SAN.

The Lefthand VSA looks appealing as it appears I can go and buy a HP server, stuff it with disks, install ESXi, buy a VSA license and I have up to 10tb of smart storage (snapshots etc.) now, and in a year or so's time if we were to look at Lefthand for our primary SAN, we "simply" add more nodes.

What I'm not clear on (assuming I've got the general ideas right) is the fine detail:

Overhead on a single node - let's assume after HW RAID I have 5x2tb VMDKs allocated to the VSA - how much is left usable after the VSA takes any overhead it needs?

How it scales - each VSA license is for up to 10tb, as I add nodes I presume my total "cluster" can grow to 20, 30tb as required?

How "Network RAID" and replication work - let's say I start with a single node, and then add another node at the end of a network link: can I replicate one node to the other for DR rather than using network RAID/failover?

Performance - most vendors let you "tier" storage i.e. SAS for performance, slower near-line SAS or SATA for bulk, how does this work when you're dealing with a virtual SAN and where your VSA nodes may not all be identical hardware spec?

Snapshots - not too clear how these work with applications such as Exchange and SQL?

Roadmap - are HP committed to the VSA or is it likely to be pulled and only HP Lefthand appliances sold?

Licensing - AIUI you get everything included?

Sorry for so many questions, the HP Lefthand site seems to cover the general overview rather than the specifics.
Respected Contributor

Re: VSA Questions - Is It Suitable?

Let me try to take a stab at some of these for you:

Overhead - The overhead for a VSA is negligible: Here's one of my configurations:
1550GB * 3 = 4650GB (VMDK)
Raw Space (in Lefthand Console) = 4648GB
Usable Space = 4556GB
About 2% overhead between the raw VMDK and the space available in the cluster
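The arithmetic above can be sketched as a quick calculation (the figures are from the configuration quoted above; the ~2% factor is one observed data point, not a guarantee):

```python
# VSA capacity overhead, using the example figures from this post.
vmdk_gb = 1550 * 3   # three 1550GB VMDKs allocated to the VSA
raw_gb = 4648        # raw space reported in the LeftHand console
usable_gb = 4556     # usable space in the cluster

overhead = (vmdk_gb - usable_gb) / vmdk_gb
print(f"raw: {raw_gb} GB, usable: {usable_gb} GB, "
      f"overhead vs VMDK: {overhead:.1%}")
```

So for every VMDK you hand the VSA, expect to lose only a couple of percent before it shows up as usable cluster space.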

Scalability - Yes - you can add VSA nodes just like physical modules to build a cluster. Just remember as you add nodes the cluster is a multiple of the lowest common denominator (both capacity and I/O). Best to try to keep the number of disks and capacity close. Put similar nodes in clusters together.
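A rough sketch of that lowest-common-denominator rule (my own illustration with hypothetical node sizes, not an official sizing formula): with mismatched nodes, the cluster treats every node as if it had the capacity of the smallest one.

```python
# Cluster capacity follows the smallest node (node sizes in GB are hypothetical).
def cluster_raw_capacity(node_capacities_gb):
    """Each node contributes only as much as the smallest node in the cluster."""
    return min(node_capacities_gb) * len(node_capacities_gb)

mixed = [4500, 4500, 9000]       # the larger node's extra space is wasted
matched = [4500, 4500, 4500]
print(cluster_raw_capacity(mixed))    # 13500, not 18000
print(cluster_raw_capacity(matched))  # 13500 from less hardware
```

The same levelling applies to I/O, which is why keeping nodes in a cluster similar pays off.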

Network RAID = high availability and building a single cluster. All modules in a single cluster participate in Network RAID.
If you want to snapshot (remote IP Copy) for DR then you would create and use a separate cluster. A cluster can contain 1 or more modules.
I've used a cluster of 1 (like a SATA VSA) just to perform snapshots for DR and redundancy.

Performance - Just like physical nodes you should build your VSA clusters with "like performance and like capacity" modules. You would create tiering by having different clusters of VSAs. (for example: a 4 node VSA cluster of 12x450GB SAS, plus a 3 node cluster of 10x750GB SATA). Mixing different capacity and performance will drop a cluster to the slowest (and smallest) VSA node that is in the cluster, so this would be discouraged. On top of all this keep in mind that the VSA has an extra layer (virtualization) between SAN/iQ and the hardware, so you shouldn't really plan to use VSA clusters for moderate to high performance requirements. Designate any VSA, regardless of disk make-up, to (at best) your Tier 2 storage. I usually recommend customers use VSA clusters for DR/Testing/Sandbox/Staging/QA etc, use physical nodes for critical, and performance sensitive applications.

Snapshots - Best practice calls for using the VSS integration (you initiate the snapshot from the OS, calling a vshadow command from the Windows Solution Pack). This ensures the application (SQL/Exchange etc) quiesces the data/logs for consistent backups. There are several tactics with VSS and/or snapshots, so the best methodology will depend on your strategies and requirements.

Roadmap - I'm an HP Partner, not corporate, however the VSA is one of the very unique features that really complements the Lefthand solution vs. the competition. I would find it unlikely they would take away a large competitive advantage that they have with this value-add. IMHO the virtualization of storage is an area that eventually all storage vendors will have to commit to, or be left behind. It was recently revealed that they are porting the VSA to Xen and Hyper-V, so that leads me to believe they are continuing to build stock in the VSA product.

Licensing - the VSA license behaves just like the physical modules, in that all the SAN features are enabled once you purchase a license (clustering, replication, snapshots, thin provisioning etc). Note: the most cost-effective way to obtain VSA licenses today is to purchase the Virtualization SAN (2 - 12 disk physical modules), which includes 10 licenses (total of 100TB) for the VSA.

I'd be happy to answer any other questions you may have. Feel free to drop by the website below:

Paul Drangeid
TeleData Consulting, Inc.
Paul Hutchings
Super Advisor

Re: VSA Questions - Is It Suitable?

Thanks Paul, really do appreciate that reply as it's very detailed.

I've downloaded the VSA demo, haven't installed it yet, but I'm picking my way through the SANiQ manual to familiarize myself with how Lefthand do things.

It's quite "interesting" to say the least.

I'm just starting to look at storage options for our primary SAN, and this imminent need for "up to" 5tb of storage has of course presented all sorts of options - hell, it may be that once I know things like recovery objectives, a "dumb server" is sufficient.

What's piqued my interest here is that it seems there's potential for the VSA to serve the immediate need, whilst reserving the option to just add VSA nodes, or potentially physical Lefthand nodes, and then use the VSA as a DR target.

I guess there's no black and white answer, but assuming we were looking at new ProLiant servers (say DL180s) with H/W RAID with BBWC, dedicated to the VSA rather than also running production VMs, how much difference do you consider there to be between the VSA and the Lefthand hardware solution? I ask because, other than the ESX layer, I'm not quite clear what's "special" about the dedicated hardware offering?

What has slightly put me off is that it appears a VSA can "only" have a single gb NIC?

Also it appears that management has to happen from within the iSCSI network, there doesn't appear to be a way to have a VSA use the iSCSI network for iSCSI but be on a vSwitch connected to the regular LAN for management?

Sorry for all the questions.
Respected Contributor

Re: VSA Questions - Is It Suitable?

There are a few considerations when comparing the VSA vs physical nodes:

VSA vs Physical nodes will be pretty similar in total IOPs if you use comparable hardware (similar speed/number of drives, and similar RAID controller). There is a small bit of virtualization overhead in disk IO, but not much.

Latency can be a good deal higher with the VSA because the virtualization layer adds some latency while virtualizing the network stack. This would most notably present itself in high-load database/transactional systems, where latency more directly affects the user experience.

The NIC issue is a significant difference between the VSA and physical nodes.

Remember as you add modules to a cluster, each module only performs 1/nth of IO operations. So that can help alleviate the 1 iSCSI NIC limitation on the VSA by distributing the network load across several VSAs as you scale up your cluster.
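A back-of-the-envelope view of that 1/nth effect (my own illustration; it assumes IO is striped evenly across the cluster, and the workload figure is hypothetical):

```python
# Per-node share of cluster IO as nodes are added (even striping assumed).
total_iops = 6000  # hypothetical cluster-wide workload

for nodes in (1, 2, 4):
    per_node = total_iops / nodes
    print(f"{nodes} node(s): ~{per_node:.0f} IOPS through each VSA's single NIC")
```

So while any one VSA is stuck with one iSCSI NIC, the share of traffic that NIC must carry shrinks as the cluster grows.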

On your last point regarding the management: You CAN add a 2nd vNIC to the VSA, but you cannot configure them as load balanced (bonded). But you can use 1 NIC on your LAN for management, and 1 NIC on the iSCSI network for iSCSI traffic. The physical modules can make use of bonded ethernet, and 10GbE as well. Your VSA VM can be assigned to a vSwitch that has multiple NICs (which can be configured for failover).

However the largest point IMO is support and performance. Since the VSA is a "bring your own virtualized hardware" solution, you won't be able to have a soup-to-nuts support option by calling HP/Lefthand. There's always some additional latency overhead, and you don't have the full stack supported by the vendor. That's why I stress to customers that they be very strategic about the expectations they have for the VSA vs physical nodes. Make sure your expectations are realistic (and don't violate your own internal vendor/support requirements) when planning your deployment of VSA clusters.

One other note - you may be able to "virtualize" your existing end-of-life SAN into a VSA so you can turn that storage into another VSA module that can interact (replicate etc) with your future HP/Lefthand storage infrastructure.
Paul Hutchings
Super Advisor

Re: VSA Questions - Is It Suitable?

I'd be grateful for any input from the HP/Lefthand guys on where they see the differences?

I ask because, if I were thinking of building dedicated (i.e. just running the VSA) nodes using DL1xx or DL3xx servers filled with SAS disks in, say, RAID10, with 4-6gb of RAM, a Smart Array with BBWC, and several physical NICs in a vSwitch on a dedicated iSCSI LAN, what am I losing in performance terms?
Paul Hutchings
Super Advisor

Re: VSA Questions - Is It Suitable?

Anyone..? :-)
Trusted Contributor

Re: VSA Questions - Is It Suitable?

I've not quite read this whole thread yet... because it's a rather long read...

But to answer your VSA vs P4000 hardware question...

If you took a P4500 node running native SAN/iQ (the hardware SAN) and ran a set of performance tests against it, then, without touching the hardware config, wiped out SAN/iQ, installed ESX, and installed the VSA, this is what I'd predict you'd see (predict is really a silly word to use, as I've actually done this):

The management and feature set of the two is identical, with the exception that the VSA can't tell you anything about hardware or disk health. The VSA not being able to tell you when a disk has died and needs replacing should be considered a significant difference IMHO.

Performance-wise, the answer is workload specific. Any low-bandwidth, small-block, random IO type workload (think databases) would produce almost the exact same number of IOPS, but the VSA would show a millisecond or two of higher latency.
Any high-bandwidth, large-block, sequential IO workload (think file copies and cloning) would show significant performance degradation on the VSA - perhaps as bad as the VSA only being half as fast, though normally the VSA would be about 3/4 as fast.

The significant difference in high bandwidth workloads is just due to how well virtual networks do (or don't really) load balance. The physical platforms are much better at utilizing physical networks.

The short answer would be to say the VSA is about 70% as fast as an identical physical platform and that the biggest differences are in hardware monitoring and sequential performance.
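Putting the rough figures from this post into a sketch (the ratios are the ballpark estimates quoted above, not measured constants, and the 400 MB/s baseline is a made-up example):

```python
# Rough VSA throughput estimate from a physical-node baseline,
# using the ballpark ratios quoted in this post (assumed, not measured).
def vsa_estimate(physical_mb_s, workload):
    ratios = {
        "random": 0.95,             # near-identical IOPS, a bit more latency
        "sequential_typical": 0.75, # normally about 3/4 as fast
        "sequential_worst": 0.50,   # perhaps as bad as half as fast
    }
    return physical_mb_s * ratios[workload]

print(vsa_estimate(400, "sequential_typical"))  # 300.0
```

It's only a planning heuristic, but it matches the "about 70% as fast overall" rule of thumb above.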

It should go without saying too that the physical platform is way easier to install, because it comes as an appliance, whereas the VSA needs ESX installed first and then needs to be set up correctly itself.

Hope that helps.
Adam C, LeftHand Product Manager
Trusted Contributor

Re: VSA Questions - Is It Suitable?

I just realized, since your original question was whether it's suitable...
I'd say yes.
It is most suitable for production use in smaller environments, say < 25 or 30 VMs, and remote offices.

I've seen customers make VSA do things that were never expected though. I've seen some super fast ones using 10Gb and SSD even.

It's usually people with the time, expertise, and need for a specific strategic benefit that would go to that extent, though.
Adam C, LeftHand Product Manager