Community Home > Storage > Entry Storage Systems > MSA Storage > Re: P2000 MPIO Poor Read Performance but Good Writ...
05-12-2010 09:04 AM
P2000 MPIO Poor Read Performance but Good Write Performance
SAN Storage: 6 x 300GB 10K in RAID 10
Switch: 5406zl (tested with all combinations of Jumbo Frames & Flow Control, with matching settings on the SAN and NICs)
Server NIC: NC382T (tried with and without iSCSI acceleration enabled)
Server OS: Windows Server 2008 R2
MPIO: Round Robin with Subset (Round Robin is not available)
Single 1GbE:
Max Read: ~130MBps
Max Write: ~125MBps
Dual 1GbE:
Max Read: ~130MBps
Max Write: ~240MBps
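For reference, on Windows Server 2008 R2 the MPIO load-balance policy can be inspected and changed with the built-in `mpclaim` tool. A hedged sketch (the disk number and policy code here are assumptions; verify them with the listing command first), shown as echoed commands so it can be previewed without touching the server:

```shell
# Dry-run sketch: print the mpclaim commands instead of executing them.
# Drop the leading 'echo' on the actual server.
echo mpclaim -s -d        # list MPIO disks and their current load-balance policy
echo mpclaim -l -d 0 3    # assumed: set disk 0 to policy 3 (Round Robin with Subset)
```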
07-11-2011 05:27 PM - edited 07-11-2011 05:30 PM
Re: P2000 MPIO Poor Read Performance but Good Write Performance
Hi, I'll second this. I'm seeing similar performance on VMware ESXi 4.1 with dual active paths. I haven't measured writes, but reads seem limited to around 100-120MB/s. I am using an 8-disk RAID 10 vDisk.
I am also using NC382T cards (dual cards with dual ports), with active paths on different cards. The servers are DL380 G6s and DL385 G5s.
Any solution?
07-11-2011 11:51 PM
Re: P2000 MPIO Poor Read Performance but Good Write Performance
Hi,
This is how TCP connections work; it has nothing to do with IOmeter or the P2000.
Once you set up a connection, all traffic for that session will pass through one NIC.
If you want to test the maximum throughput, start multiple IOmeter sessions on multiple servers at the same time. Those separate sessions will be spread across the multiple interfaces of the P2000.
Kr,
Bart
If my post was useful, click on my KUDOS! "White Star"!
07-12-2011 02:56 AM
Re: P2000 MPIO Poor Read Performance but Good Write Performance
Hmm, I was told that at least ESXi is supposed to be able to spread the I/O across the interfaces. Whenever I watch the ESXi host's NIC traffic rates, they are spread across both interfaces (around 440Mb/s each, versus around 880Mb/s when only using a single interface).
Round robin is set to 1 IOPS, with 64k reads/writes, before switching to the other NIC.
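For what it's worth, the effect of that IOPS limit can be sketched with a toy loop (pure illustration, not VMware code): with a limit of 1 the path flips on every I/O, while the ESXi default of 1000 keeps a single stream on one path for 1000 I/Os at a stretch, which is why a lone session can look single-path.

```shell
# Toy illustration of the round-robin IOPS limit across two paths.
iops_limit=1   # ESXi default is 1000; the P2000 best practice sets 1
path=0
issued=0
for io in 1 2 3 4 5 6; do
  echo "io $io -> path $path"
  issued=$((issued + 1))
  if [ "$issued" -ge "$iops_limit" ]; then
    path=$(((path + 1) % 2))   # limit reached: switch to the other path
    issued=0
  fi
done
# with iops_limit=1 this prints io 1 on path 0, io 2 on path 1, and so on, alternating
```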
07-12-2011 03:06 AM
Re: P2000 MPIO Poor Read Performance but Good Write Performance
This is correct. ESXi spreads the traffic across multiple NICs.
However, if you have only one VM with one interface running a ping, that is one session, and it will stay on one NIC. Only with multiple VMs and multiple sessions to several clients will all the sessions be spread by means of round robin.
Round robin is not the same as link aggregation. For round robin you don't have to configure anything on the switch; for link aggregation you do. These are two different things.
If my post was useful, click on my KUDOS! "White Star"!
07-12-2011 03:24 AM
Re: P2000 MPIO Poor Read Performance but Good Write Performance
Yes, I understand that, but this is benchmarking against a datastore over iSCSI.
According to the best practices for the P2000 and ESXi, we are supposed to get increased performance when path switching on every I/O:
Best practice for changing the default PSP option: as a best practice, change the default PSP for VMW_SATP_ALUA to VMW_PSP_RR in P2000 G3 SAN environments. Secondly, for optimal default system performance with the P2000 G3, configure the round robin load balancing selection to IOPS with a value of 1. This has to be done for every LUN, using the command: esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device naa.xxxxxxxxx
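Since the setting has to be applied per LUN, a small loop saves retyping. A hedged sketch for ESX/ESXi 4.x (the naa. device IDs below are placeholders; list the real ones with `esxcli nmp device list`), echoed here as a dry run:

```shell
# Dry run: print one esxcli command per device. Remove the leading 'echo'
# on the host to actually apply iops=1 round robin to each LUN.
for dev in naa.placeholder1 naa.placeholder2; do
  echo esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device "$dev"
done
```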
Also, the DSS V6 has a howto for "fixing the iSCSI 1G limit":
http://www.open-e.com/solutions/open-e-dss-v6-mpio-vmware-esx-40/
There is also a thread on the VMware forum about this, where the solution was the same 1-IOPS round robin setting:
http://communities.vmware.com/message/1281730
I am mostly curious whether this is a speed/performance limit of the P2000 G3 box itself. I feel kind of fooled to have bought 4 x iSCSI SPs if this is the performance we should expect.