11-21-2014 03:25 AM - last edited on 11-24-2014 04:29 AM by Lisa198503
HP LeftHand NAS Performance
Hi Guys
I have recently become the administrator of an HP LeftHand P4500 G2 (10-node) split-site NAS.
As a general rule I like to run various tests to benchmark performance and make sure it is running as well as it should. I have run various tests (iometer mostly) against our NAS and found the following results:
Windows 2008 host directly connected to the NAS:
| Test name                 | Latency (ms) | Avg IOPS | Avg MBps | CPU load |
|---------------------------|--------------|----------|----------|----------|
| Max Throughput-100%Read   | 20.45        | 2827     | 88       | 17%      |
| RealLife-60%Rand-65%Read  | 18.18        | 2247     | 17       | 9%       |
| Max Throughput-50%Read    | 32.15        | 1637     | 51       | 23%      |
| Random-8k-70%Read         | 14.33        | 2723     | 21       | 7%       |
I get very similar results when I run the same tests inside a VM sitting on a datastore on the same NAS.
I was wondering if you could answer a few questions for me.
1) Since our physical hosts are using 1Gb links (and our ESX hosts are using 1Gb links bound to the VMkernel), I know our theoretical throughput isn't going to reach above ~120MBps per link. However, I am under the impression that if I tweak our MPIO (on the ESX hosts) so that it is a "1 to many" relationship with the NAS, as opposed to the "1 to 1" relationship we have at the moment (please see the attached diagram of my environment), this will increase throughput, because more than one data stream to the NAS can be achieved. Is this the case? (There is a path-check sketch at the end of this post.)
2) At the moment our NAS is set up as in the attached diagram. As you can see, only one virtual IP was configured when it was installed. So my question is: in order to create a "1 to many" relationship between our host HBA and the NAS, am I able to add more virtual IPs to accommodate more data streams?
3) Even though I accept that with 1Gb links we are close to the theoretical throughput limit, I think (through consulting Google etc.) that the latency in the tests mentioned above is pretty high.
I found a good source of info here, where the author reports 0 latency on a setup similar to mine:
http://www.jpaul.me/2011/10/tweaking-vmwares-round-robin-settings/
Please forgive any daft questions; this is the first time I have used HP LeftHand. Usually it is a full FC SAN I am looking after, so throughput and performance generally aren't limited by the physical infrastructure.
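For reference, this is how the current path layout can be inspected from the ESXi shell. A minimal sketch, assuming ESXi 5.x esxcli syntax and the software iSCSI adapter; naa.xxx is a placeholder for an actual volume identifier:

```
# List all devices claimed by NMP, with their current path-selection policy
esxcli storage nmp device list

# Show every path to one LeftHand volume; with iSCSI port binding in place
# you should see one path per bound vmkernel NIC (naa.xxx is a placeholder)
esxcli storage core path list --device naa.xxx
```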
P.S. This thread has been moved from Network Attached Storage (NAS) (Enterprise) to HP StoreVirtual Storage / LeftHand. -HP Forum Moderator
11-27-2014 04:45 AM
Re: HP LeftHand NAS Performance
12-02-2014 08:02 AM
Re: HP LeftHand NAS Performance
1. You want to create an MPIO dual-vSwitch setup so the two NICs do round robin. By default the Round Robin policy switches paths every 1000 I/Os, which is generally too high for anything but sequential (linear) bandwidth. Lowering the switching threshold from 1000 down to 1 (switch on every I/O) will greatly reduce MPIO sequential bandwidth but increase random-access I/O performance, so you have to pick a number that works for your workload.
Here is a good example of that:
http://www.jpaul.me/2011/10/tweaking-vmwares-round-robin-settings/
Just keep in mind I am not recommending making changes to a production environment; you should thoroughly test this on a test segment to ensure no I/O timeouts or switch problems (congestion) occur!
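As a concrete illustration of the change being described, here is a minimal esxcli sketch (ESXi 5.x syntax; naa.xxx is a placeholder for the volume's device identifier, so adapt it before testing, and only on a test host as cautioned above):

```
# Claim the volume with the Round Robin path-selection policy
esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR

# Lower the path-switch threshold from the default 1000 I/Os per path to 1
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxx --type iops --iops 1

# Verify the device now reports the iops policy with a limit of 1
esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxx
```

Note the setting is per device, so it has to be applied to (or scripted across) every LeftHand volume presented to the host.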
12-03-2014 03:54 AM
Re: HP LeftHand NAS Performance
Hi mate,
Thanks for the reply.
The MPIO is set on the VMware side by two iSCSI port groups bound to two separate pNICs (see the binding sketch below).
The only thing I haven't tried yet is the IOPS limit reduction.
Thanks again.
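For completeness, the port binding described above can be listed (and, if needed, created) from the ESXi shell. A hedged sketch, assuming the software iSCSI adapter is vmhba33 and the bound vmkernel ports are vmk1/vmk2; those names are placeholders for the real ones:

```
# Show which vmkernel NICs are bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter vmhba33

# Binding is added one vmkernel port at a time, e.g.:
#   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
#   esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
```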
12-04-2014 06:49 AM
Re: HP LeftHand NAS Performance
Yes - try limiting the IOPS to 1, 4, 16, 32, 64, 256, and 512 to benchmark performance (random versus sequential throughput) and latency!
You can also set it to a number of bytes, but I do not recall the exact syntax (there is a sketch of both variants below).
I have found that setting iops=1 resulted in much higher load but much faster random I/O performance, so a happy medium has to be considered, since iops=1 also caused a massive drop in sequential throughput!
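A sketch of how such a sweep might be run from the ESXi shell; naa.xxx is a placeholder for the volume identifier, and the bytes value below is only an example, not a recommendation:

```
# Set one of the suggested limits (1, 4, 16, 32, 64, 256, 512), then re-run
# the iometer tests and record latency/IOPS/MBps before trying the next value
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxx --type iops --iops 64

# The bytes-based variant mentioned above: switch paths every N bytes instead
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxx --type bytes --bytes 1048576
```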