
StoreVirtual VSA performance sucks when AO is enabled

slymsoft
Occasional Visitor

StoreVirtual VSA performance sucks when AO is enabled

Hi,

 

I deployed a two-node StoreVirtual VSA cluster following every best practice I could find, but I still have some performance issues.

 

Here is my setup:

 

  • Two DL380p Gen8 running vSphere 5.5 U2
  • Two tiers: RAID 1 2 x 800 GB SATA SSD (Tier 0) + RAID 5 [7+1] 8 x 600 GB SAS 10K (Tier 1)
  • Two StoreVirtual VSA 2014, up to date, with Adaptive Optimization enabled
  • Two 10 Gigabit Ethernet ports per server
  • Two H3C 5120 switches (IRF stack, 2 x 10 Gbps)

I benchmarked the SSD RAID directly and got great results for both random and sequential I/O.

 

The same benchmark is very disappointing after installing the StoreVirtual VSA:

 

Random performance is one third of what I got directly on the SSDs, which is acceptable since the VSAs have to replicate data (around 10K IOPS at random 4K).
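As a sanity check on that number, here is a back-of-envelope sketch (a simplistic model, not StoreVirtual's actual internals): Network RAID-10 keeps two copies of every block, so each front-end write costs at least two back-end writes, which alone halves the raw write IOPS before any cache, AO, or network overhead.

```python
# Back-of-envelope estimate of Network RAID-10 write amplification.
# Illustrative only: the real system adds cache behavior, AO page
# migration and network round trips on top of this simple model.

def effective_write_iops(raw_iops: float, copies: int = 2) -> float:
    """Each front-end write generates `copies` back-end writes,
    so usable write IOPS is the raw figure divided by copies."""
    return raw_iops / copies

# e.g. an assumed 30K raw random-4K write IOPS on the SSD tier,
# mirrored across 2 nodes:
print(effective_write_iops(30_000, copies=2))  # → 15000.0
```

The further drop from one half to roughly one third is plausibly the synchronous network round trip and AO bookkeeping on top of this.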

 

However, sequential write performance just sucks: I get 60 MBps, compared to 500 MBps for sequential reads.

 

What is really weird is that I get better sequential write performance (150 MBps) when I disable Adaptive Optimization on a volume. I also tried NRAID0 and got the same number (150 MBps).

It looks like replication between the VSAs is slowing down write access.

 

I checked everything I could think of but can't see what is wrong, so here I am, asking for your help or any ideas.

 

Thank you in advance.

4 REPLIES
a_o
Valued Contributor

Re: StoreVirtual VSA performance sucks when AO is enabled

What you're seeing is in line with what I've seen from StoreVirtual in general. In fact, your results are slightly better than mine. I have a very similar setup to yours, down to the 5120s and the IRF configuration.

It would clear things up for you if you benchmarked a VSA with just SSDs or just the SAS 10K drives.

 

Note that VSAs have some overhead, so you can't compare their performance to something running natively on the hypervisor.

Also, it's well known that VSAs have artificial performance limitations placed on them so that they don't cannibalize HP's StoreVirtual hardware sales. For instance, CPU utilization is not maximized in VSAs, and they won't use extra resources (vCPUs, memory) if you allocate them to the VSA.

 

Lastly, I don't know exactly how you ran your benchmarks, but once configured for access using MPIO and the DSM, your VSA system as a whole should be very performant, especially since you're running on a 10Gig network. For example, reads are spread across the entire cluster, so that host A gets data from VSA X and host B gets its data from VSA Y. When coupled with MPIO/DSM, it should sing.


I'd also suggest running your benchmark again. This time, run multiple instances of Iometer, each with multiple worker threads, and take the aggregate performance of the system. That is as close to real-world performance as you will get with StoreVirtual.
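To be concrete about "take the aggregate", something like this (the per-worker numbers below are made up; substitute the IOPS and MBps columns from your own Iometer results files):

```python
# Aggregate per-worker benchmark results into a cluster-wide figure.
# Placeholder numbers: replace them with the per-worker IOPS / MBps
# values reported in your Iometer results files.

workers = [
    {"host": "hostA", "iops": 4200, "mbps": 130.0},
    {"host": "hostA", "iops": 4100, "mbps": 128.0},
    {"host": "hostB", "iops": 4350, "mbps": 134.0},
    {"host": "hostB", "iops": 4000, "mbps": 125.0},
]

# The cluster-wide figure is simply the sum across all workers on
# all hosts, since each worker drives I/O concurrently.
total_iops = sum(w["iops"] for w in workers)
total_mbps = sum(w["mbps"] for w in workers)

print(f"aggregate: {total_iops} IOPS, {total_mbps:.1f} MBps")
# → aggregate: 16650 IOPS, 517.0 MBps
```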

a_o
Valued Contributor

Re: StoreVirtual VSA performance sucks when AO is enabled

BTW, I should also have mentioned that replication should not have much of an effect unless you're restriping LUNs to a new node or running under a very heavy write load.

Sbrown
Valued Contributor

Re: StoreVirtual VSA performance sucks when AO is enabled

If you make a volume using only one VSA server, do you see the same impact on performance? It might be related to the Network RAID and cache-consistency overhead.

slymsoft
Occasional Visitor

Re: StoreVirtual VSA performance sucks when AO is enabled

I made some new benchmarks on another infrastructure and see the exact same impact on Hyper-V when AO is enabled.

 

All these tests are on Hyper-V except where mentioned otherwise. I used CrystalDiskMark (5 passes, 1 GB per profile unless mentioned otherwise).

 

Cache 80/20 (W/R) means the RAID card cache has been set to 80% write / 20% read.

 

The first tests are directly on the disks; the last tests are with the VSA on those same disks.