
HP LeftHand NAS Performance

martynjeff
Occasional Visitor

HP LeftHand NAS Performance

Hi Guys

 

I have recently become the administrator of an HP LeftHand P4500 G2 (10-node) split-site NAS.

 

As a general rule I like to run various tests to benchmark performance and make sure it is running as well as it should. I have run various tests (mostly Iometer) against our NAS and found the following results.

 

Windows Server 2008 host directly connected to the NAS:

 

Test name                       Latency (ms)   Avg IOPS   Avg MBps   CPU load
Max Throughput-100%Read         20.45          2827       88         17%
RealLife-60%Rand-65%Read        18.18          2247       17         9%
Max Throughput-50%Read          32.15          1637       51         23%
Random-8k-70%Read               14.33          2723       21         7%

 

I get very similar results when I run the same test inside a VM sitting on a datastore on the same NAS.

 

I was wondering if you could answer a couple of questions for me.

 

1) Since our physical hosts are using 1 Gb links (and our ESX hosts are using 1 Gb links bound to the VMkernel), I know our theoretical throughput isn't going to reach above 120 MBps per link. However, I am under the impression that if I tweak our MPIO (on the ESX hosts) so that it is a "one-to-many" relationship with the NAS, as opposed to the "one-to-one" relationship we have at the moment (please see the attached diagram of my environment), this will increase throughput, since more than one data stream to the NAS can be achieved. Is this the case?
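The arithmetic behind that 120 MBps figure, and how it would scale if a one-to-many setup kept multiple paths busy, can be sketched as a back-of-envelope calculation (the 0.95 efficiency factor below is an assumed allowance for TCP/IP and iSCSI overhead, not a measured value):

```python
# Back-of-envelope iSCSI throughput estimate for Gigabit links.
# 'efficiency' is an assumed fudge factor for protocol overhead.

def link_throughput_mbps(link_gbps=1.0, efficiency=0.95):
    """Usable MB/s of one link: Gb/s -> Mb/s, divide by 8 bits/byte, scale by overhead."""
    return link_gbps * 1000 / 8 * efficiency

def aggregate_throughput_mbps(active_paths, link_gbps=1.0, efficiency=0.95):
    """With a working multi-path setup, throughput can scale with the number of
    simultaneously active paths, up to whatever the array side can serve."""
    return active_paths * link_throughput_mbps(link_gbps, efficiency)

print(link_throughput_mbps())        # roughly 118.75 MB/s on one 1 Gb link
print(aggregate_throughput_mbps(2))  # roughly 237.5 MB/s with two active paths
```

So a single 1 Gb path tops out just under 120 MBps, which matches the ~88 MBps sequential result above once real-world losses are included; extra throughput has to come from additional simultaneously active paths.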

 

2) At the moment our NAS is set up as in the attached diagram. As you can see, only one virtual IP was set up when it was installed, so my question is: in order to create a "one-to-many" relationship between our host HBA and the NAS, am I able to add more virtual IPs to accommodate more data streams?

 

3) Even though I accept that with 1 Gb links we are close to the theoretical throughput limit, I think (from consulting Google etc.) that the latency in the tests mentioned above is pretty high.

 

I found a good source of info here, where the author gets virtually zero latency on a setup similar to mine.

 

http://www.jpaul.me/2011/10/tweaking-vmwares-round-robin-settings/

 

 

 

Please forgive any daft questions; this is the first time I have used HP LeftHand. Usually it is a full FC SAN I am looking after, so throughput and performance generally aren't limited by the physical infrastructure.

 

P.S. This thread has been moved from Network Attached Storage (NAS) (Enterprise) to HP StoreVirtual Storage / LeftHand. -HP Forum Moderator

 

 

4 REPLIES
martynjeff
Occasional Visitor

Re: HP LeftHand NAS Performance

no one? :(
Sbrown
Valued Contributor

Re: HP LeftHand NAS Performance

http://blogs.vmware.com/vsphere/2011/08/vsphere-50-storage-features-part-12-iscsi-multipathing-enhancements.html

 

1. You want to create an MPIO dual-vSwitch setup so the two NICs do round-robin. By default the path switches every 1,000 I/Os, which is critical to understand, since 1,000 is generally too high for anything but linear bandwidth! Lowering the switching threshold from 1,000 down to 1 will greatly reduce MPIO linear bandwidth but increase random-access I/O performance. You have to pick a number that works for your workload.

 

Here is a good example of that:

http://www.jpaul.me/2011/10/tweaking-vmwares-round-robin-settings/

 

Just keep in mind I am not recommending making changes to a production environment: you should thoroughly test this on a test segment to ensure no I/O timeouts or switch problems (congestion) occur!
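For reference, on ESXi 5.x the round-robin switching threshold described above is set per device with esxcli; a sketch (the `naa.` identifier below is a placeholder for your own volume's device ID):

```shell
# List devices to find your volume's naa identifier and current path policy
esxcli storage nmp device list

# Set the path selection policy to Round Robin if it is not already
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Switch paths every 1 I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1
```

Note the setting is per device, so it has to be applied to each volume (and on each host) you want it to affect.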

martynjeff
Occasional Visitor

Re: HP LeftHand NAS Performance

Hi Mate

 

Thanks for the reply.

 

The MPIO is set on the VMware side by two iSCSI port groups bound to two separate pNICs.
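That binding can be double-checked from the CLI; a sketch, where the adapter and VMkernel port names (vmhba33, vmk2) are typical examples and may differ on your hosts:

```shell
# List the VMkernel ports currently bound to the software iSCSI adapter
# (vmhba33 is a common name for the software iSCSI HBA; check yours first)
esxcli iscsi networkportal list --adapter vmhba33

# If one of the two ports is missing from the list, it can be bound like this:
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
```

Both bound ports should show up in the list for the two-path round-robin setup to actually use both pNICs.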

 

The only thing I haven't tried yet is the IOPS limit reduction.

 

Thanks again.

Sbrown
Valued Contributor

Re: HP LeftHand NAS Performance

Yes, try limiting the IOPS to 1, 4, 16, 32, 64, 256, 512 to benchmark performance (random versus linear throughput) and latency!

 

You can also set it to number of bytes, but I do not recall how to do this.

 

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2069356

 

I have found that setting iops=1 resulted in much higher load but much faster random I/O performance, so a happy medium has to be found, since IOPS=1 also caused a massive drop in linear throughput!
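One way to run that comparison is to sweep the suggested values and repeat the same Iometer profile at each step; a rough sketch, where the device ID is a placeholder and the benchmark step is left to you:

```shell
# Placeholder: substitute your LeftHand volume's actual naa identifier
DEVICE="naa.xxxxxxxxxxxxxxxx"

for IOPS in 1 4 16 32 64 256 512; do
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device "$DEVICE" --type iops --iops "$IOPS"
    echo "IOPS limit set to $IOPS - run the Iometer pass now and record latency/MBps"
    # (re-run the same Iometer profile here before moving to the next value)
done
```

Recording latency, random IOPS, and sequential MBps at each setting makes the trade-off above visible, so you can pick the threshold that suits your workload mix.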