09-06-2017 08:22 AM - edited 09-07-2017 01:15 AM
3PAR Iometer Ninja Stars IOPS discrepancy
Hello,
The problem is that I am unable to get anywhere near the IOPS figures provided by the latest Ninja Stars tool. With Iometer (16 workers, 8 outstanding I/Os, full disk) I get roughly half the expected bandwidth and half the expected IOPS. Increasing or decreasing the manager or worker count doesn't change much; beyond a certain point, only latency increases.
For example, a full write with 64K blocks yields half the expected result.
If it helps, SSMC shows the host at 100% busy while Iometer is running.
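For context on why adding workers only inflates latency: Little's law says IOPS = in-flight I/Os / average latency, so once the path is saturated, extra queue depth just raises latency while throughput stays flat. A minimal sketch; the 5 ms latency below is an assumed illustrative figure, not a measurement from this array:

```python
# Little's law: IOPS = in-flight I/Os / average latency (seconds).
workers = 16
outstanding_per_worker = 8
queue_depth = workers * outstanding_per_worker  # 128 I/Os in flight

avg_latency_s = 0.005  # assumed 5 ms average latency at saturation
iops = queue_depth / avg_latency_s
print(f"queue depth {queue_depth} at {avg_latency_s * 1000:.0f} ms -> {iops:,.0f} IOPS")

# At saturation, doubling the worker count roughly doubles latency
# instead, so IOPS stays flat -- matching the behaviour described above.
```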
Do you have any advice on how to get better Iometer results?
Has anyone managed to replicate Ninja Stars' proposed results?
Could putting the FC card into an x16 PCIe slot double performance?
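On the x16 question, a back-of-envelope comparison using nominal PCIe 3.0 and 16GFC encoding figures (assumed standard values, not measurements from this setup) suggests the x8 slot is unlikely to be the limit:

```python
# Nominal usable bandwidth figures (encoding overhead included):
pcie3_per_lane_gbs = 0.985   # PCIe 3.0: 8 GT/s, 128b/130b encoding
fc16_port_gbs = 1.6          # 16Gb FC: 14.025 Gbaud, 64b/66b encoding

x8_gbs = 8 * pcie3_per_lane_gbs
x16_gbs = 16 * pcie3_per_lane_gbs
dual_port_gbs = 2 * fc16_port_gbs

print(f"PCIe 3.0 x8 : {x8_gbs:.1f} GB/s")   # ~7.9 GB/s
print(f"PCIe 3.0 x16: {x16_gbs:.1f} GB/s")  # ~15.8 GB/s
print(f"2x 16GFC    : {dual_port_gbs:.1f} GB/s")

# An x8 slot already carries ~2.5x the bandwidth of a dual-port
# 16GFC card, so moving to x16 should not double throughput by itself.
```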
THE SETUP:
3PAR 8400 storage array for testing, 400GB SSD disks in RAID 5 (3+1).
The host the 100GB volume is exported to is a DL380 Gen9 running Windows Server 2012 R2 (no hypervisor, to reduce overhead).
No switches, direct attach. The DL380 has an SN1000Q 16Gb FC card in an x8 riser slot. The 3PAR ports are the standard 16Gb 0:0:1 and 1:0:1 ports. Latest firmware and drivers for everything, round robin in MPIO, and FC cables in good condition.
- In addition, ESXi with round robin gets noticeably worse results.
Thank you!
TL;DR - HOW TO GET PROPER IOMETER RESULTS?
!!! UPDATE !!! - I added an additional SN1000Q 16Gb card and the results almost doubled. So my guess is that the bottleneck is definitely on the Windows host side, and that it has something to do with throughput per port. Any ideas, gents?
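The per-port guess is consistent with simple line-rate math. A rough sketch, assuming ~1.6 GB/s usable per 16Gb FC port (a nominal 16GFC figure, not a measurement):

```python
port_gbs = 1.6               # assumed usable GB/s per 16Gb FC port
block_bytes = 64 * 1024      # 64K blocks, as in the test above

iops_ceiling = port_gbs * 1e9 / block_bytes
print(f"~{iops_ceiling:,.0f} IOPS per 16GFC port at 64K")  # ~24,414

# Large-block IOPS are capped by line rate per port; adding a second
# port (another MPIO path) roughly doubles that ceiling, which fits
# the near-2x jump seen after adding the second card.
```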
09-12-2017 01:23 AM
Re: 3PAR Iometer Ninja Stars IOPS discrepancy
I also tested an 8400 AFA with Iometer.
We needed six Windows hosts (8Gb FC) to saturate the storage.
Each host ran ESX with two Windows Iometer VMs using raw disks.
After the seventh host we got no more IOPS out of the storage.
Each storage SFP has its own buffers and CPU core (with some changes in 3.3.1), so using additional SFPs gives more performance.
Are you using the second SAS ports (DP-1) by adding a shelf?
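For a rough sense of scale, here is the aggregate front-end bandwidth those six hosts imply, assuming one active 8Gb FC port per host (a simplification, not confirmed by the reply):

```python
hosts = 6
per_host_gbs = 0.8   # 8Gb FC: ~0.8 GB/s usable per port (8b/10b encoding)

aggregate_gbs = hosts * per_host_gbs
print(f"{hosts} hosts x {per_host_gbs} GB/s ~= {aggregate_gbs:.1f} GB/s aggregate")

# Once the array's front-end ports (each SFP with its own buffers and
# CPU core) are saturated at roughly this load, a seventh host adds
# latency but no additional IOPS.
```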
