MSA Storage

Understanding Performance

 
osamamansoor
Advisor

Understanding Performance

Hi Experts,

We have recently installed and configured an HPE MSA 2062 with the configuration below:

Two pools; each pool has two disk groups (a RAID 5 group plus a single SSD drive for read cache) and two volumes.

TWO POOLS ----> TWO DISK GROUPS ----> TWO VOLUMES

Most of our machines are on one volume called NOP-R5.

On one Linux machine I can see that the CPU iowait % is higher than on the older hardware: on the old hardware it was 0.1 to 1%, but on the MSA 2062 it is 4 to 5%.

 

Can someone look at the chart of my MSA below and tell me what the normal ranges should be for system IOPS, system latency, and system throughput?

What is your opinion of the performance shown in the chart below? Is my storage normal?

[Attachment: Capture.JPG]

 

 

7 REPLIES

Re: Understanding Performance

@osamamansoor 

Please provide more details:

>> How many total drives are present in this system?
>> What type of drives? Capacity, type, and speed?
>> How many drives was each virtual disk group (VDG) created with, and at what RAID level?
>> What type of application are you using?
>> What is the old hardware you are comparing against? Was it a linear array or a virtual array?
>> How are you measuring the CPU iowait %? Is it on the Linux machine?

 

Hope this helps!
Regards
Subhajit

I am an HPE employee

If you feel this was helpful please click the KUDOS! thumb below!

********************************************************************


I work for HPE
Accept or Kudo
osamamansoor
Advisor

Re: Understanding Performance

@SUBHAJIT KHANBARMAN_1 Thanks for the reply.

Here are the answers to your questions:

>> How many total drives are present in this system?

In total we have 12 × 10K SAS drives and 2 SSDs. Two SAS drives are spares; 1 SSD is assigned to pool A and 1 SSD to pool B as read cache; 6 SAS drives go to pool A and 4 SAS drives to pool B (both RAID 5).
>> What type of drives? Capacity, type, and speed?

10K SAS, 1.8 TB each; the SSDs are 1.9 TB.
>> How many drives was each virtual disk group (VDG) created with, and at what RAID level?

6 drives, RAID 5 (pool A); 4 drives, RAID 5 (pool B).
>> What type of application are you using?

It is an Oracle enterprise application server (Oracle EBS 12.1.3) with a database.
>> What is the old hardware you are comparing against? Was it a linear array or a virtual array?

It was an IBM BladeCenter HS23 chassis with the same RAID level (RAID 5), using internal storage connected through a serial interface.
>> How are you measuring the CPU iowait %? Is it on the Linux machine?

We are running the top command; it shows a constant 4 to 5%, whereas on the older hardware it was 0.1%.
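For reference, top derives iowait % from deltas of the CPU counters in /proc/stat. A minimal sketch of that arithmetic (the two counter snapshots below are invented example values, not real measurements):

```python
# Sketch: how tools like top compute CPU iowait % from two /proc/stat samples.
# The "cpu" line fields (jiffies) are:
#   user nice system idle iowait irq softirq steal
# The snapshots below are made-up example values for illustration only.

def iowait_percent(sample1, sample2):
    """iowait % between two /proc/stat 'cpu' snapshots (lists of jiffy counters)."""
    deltas = [b - a for a, b in zip(sample1, sample2)]
    total = sum(deltas)
    iowait = deltas[4]          # the 5th field is the iowait counter
    return 100.0 * iowait / total

# Hypothetical snapshots taken about one second apart.
t0 = [1000, 0, 300, 8000, 40, 10, 10, 0]
t1 = [1060, 0, 320, 8800, 85, 12, 13, 0]

print(round(iowait_percent(t0, t1), 1))   # ~4.8, in the 4-5% range reported above
```

Note that iowait only says the CPU was idle while I/O was outstanding; a higher value does not by itself prove the storage is slower, which is why latency on the array side is the more direct metric.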

Re: Understanding Performance

@osamamansoor 

Based on the performance graphs you have shared, I think you are getting good performance.

You have a total of 12 SAS 10K RPM drives, which means the system can sustain roughly (12 × 150 backend IOPS) = 1,800 IOPS. The graph shows it touching a maximum of about 1,800 IOPS, which is good.
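The back-of-the-envelope arithmetic above can be sketched as follows. The 150 IOPS per 10K drive figure is a common rule of thumb rather than a measured value, and the RAID 5 write penalty of 4 is the standard figure (read data, read parity, write data, write parity per random host write):

```python
# Rough backend IOPS estimate for the array (rule-of-thumb figures, not measured).
DRIVES = 12                 # total 10K SAS drives in the system
IOPS_PER_10K_DRIVE = 150    # common rule of thumb for a 10K RPM drive

backend_iops = DRIVES * IOPS_PER_10K_DRIVE
print(backend_iops)         # 1800, matching the ceiling seen on the graph

# RAID 5 costs 4 backend I/Os per random host write, so a pure random-write
# workload would see far fewer front-end IOPS than the backend total:
RAID5_WRITE_PENALTY = 4
worst_case_write_iops = backend_iops / RAID5_WRITE_PENALTY
print(worst_case_write_iops)   # 450.0
```

Real workloads mix reads and writes, and caching absorbs bursts, so observed numbers will fall between these two bounds.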

Looking at latency, we see a maximum of about 1,000 microseconds, i.e. 1 millisecond, which is again very good.

Please note that you are using the SSDs as read cache, which helps only read-intensive operations; the cache does not help writes. To accelerate both reads and writes, you would need to use the SSDs as a Performance tier.

As I understand it, the IBM BladeCenter HS23 is an old product, and I am sure it does not support the features the MSA 2062 supports. I would suggest going through the MSA 2062 QuickSpecs and the best-practices white paper for more details:

https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00094630enw

https://psnow.ext.hpe.com/doc/a00105260enw?jumpid=in_lit-psnow-red

Regarding the CPU iowait %, it is better to involve an HPE performance service specialist who can help you with this. You can refer to the advisory below for details:

https://assets.ext.hpe.com/is/content/hpedam/a00063850enw

 

Hope this helps!
Regards
Subhajit

osamamansoor
Advisor

Re: Understanding Performance

@SUBHAJIT KHANBARMAN_1 Again, many thanks for your reply.

I have some further performance questions related to the SSDs.

How can I see how my SSD (read cache) is performing? I am unable to find any performance graph specific to the SSD. Is there a separate graph or report that measures the SSD's performance impact?

Also, I just checked the screenshots below but am unable to decipher what they mean for performance.

Are the numbers below good or bad? Is performance degraded?

[Attachments: o1.jpg, 02.jpg, 03.jpg, 04.jpg]

 

osamamansoor
Advisor

Re: Understanding Performance

Can some experts please help me?

osamamansoor
Advisor

Re: Understanding Performance

@SUBHAJIT KHANBARMAN_1  Waiting for your valuable comments.

JonPaul
HPE Pro

Re: Understanding Performance

At the risk of going down the rabbit hole of performance....
The I/O Workload graph shows the locality of your data accesses. The totality of your storage is divided into 'buckets', and each access to a 'chunk' of data increments that bucket's counter. At the end of 24 hours the buckets are ranked. The 100% line on the I/O Workload graph shows the 'width' of all I/O accesses to your data; the 80% line shows the 'width' of the top 80% of the buckets, and so on.

This is helpful when you are trying to size how much SSD (really fast) storage you should have. If the tiering engine is working well and 80%+ of your I/Os fall within the SSD capacity, the chances are you are getting the fastest possible access to your data, because the most-accessed data migrates to the SSD tier.
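The bucket ranking described above amounts to sorting buckets by access count and asking how much capacity holds a given share of the I/Os. A toy sketch of that idea, with invented counters (not data from any real array):

```python
# Toy illustration of the I/O Workload 'width' idea: rank buckets by access
# count and measure what fraction of capacity serves the top N% of I/Os.
# The bucket hit counts below are invented for illustration.

def capacity_width_for_io_share(bucket_hits, share):
    """Fraction of buckets (i.e. capacity) holding `share` of total accesses."""
    ranked = sorted(bucket_hits, reverse=True)     # hottest buckets first
    target = share * sum(ranked)
    running = 0
    for i, hits in enumerate(ranked, start=1):
        running += hits
        if running >= target:
            return i / len(ranked)
    return 1.0

# 10 equal-size buckets; a few hot buckets dominate (a typically skewed workload).
hits = [500, 300, 100, 40, 20, 15, 10, 8, 5, 2]
print(capacity_width_for_io_share(hits, 0.80))     # 0.2
```

In this invented example, 80% of the I/Os land on 20% of the capacity, so an SSD tier covering roughly that 20% would capture most of the workload.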
Looking at it from a high level, performance work is an investigation of where your bottleneck is, followed by adjustments to remove it. In your case the bottleneck is the drives/spindles: in your pools, WRITE performance is gated by the 6 spindles or the 4 spindles.

The Best Practices guide referenced above recommends a single pool. At that point, since you have a 2062 and a Performance Tier license, you could use your 2 SSDs as a Performance tier (RAID 1) and get READ/WRITE performance from the SSDs. As your system has room for added drives, you could also investigate the MSA-DP+ RAID level, which allows incremental expansion. Unfortunately, any storage reconfiguration to alleviate a spindle-bound system is a major undertaking.

As pointed out earlier, the response time looks very good (1,000 microseconds == 1 ms), so your performance should be good. When response times exceed 30 ms you typically start seeing effects at the application layer.
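To illustrate why random-write performance is gated by the spindle count in each pool, here is a rough sketch using the same rule-of-thumb figures as earlier in the thread (150 IOPS per 10K drive, RAID 5 write penalty of 4); these are estimates, not measurements:

```python
# Rough random-write IOPS a RAID 5 disk group can sustain, gated by spindles.
# 150 IOPS/drive and the RAID 5 penalty of 4 are rules of thumb, not measured.

def raid5_random_write_iops(spindles, iops_per_drive=150, write_penalty=4):
    """Estimated host random-write IOPS for a RAID 5 group of `spindles` drives."""
    return spindles * iops_per_drive / write_penalty

print(raid5_random_write_iops(6))    # pool A (6 spindles): 225.0
print(raid5_random_write_iops(4))    # pool B (4 spindles): 150.0
print(raid5_random_write_iops(10))   # one combined 10-spindle pool: 375.0
```

This is the arithmetic behind the single-pool recommendation: combining the spindles raises the write ceiling of the pool that hosts the busy volume.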
