MSA Storage

Ehermouet
Occasional Advisor

MSA 2040 latency on raid 10

Hello

 

We have an MSA 2040.

We have two volumes: one on RAID 5 for data and low-priority VMs, and the other on RAID 10 (10 disks of 900 GB 10k) for 4 VM database servers.

We have some I/O latency. Veeam ONE reports latency every 15 minutes, and users complain about latency in some applications.

My Hyper-V hosts are connected to the MSA with HBA cards, 4 cables per HBA.

423 IOPS and 11 MB/s, and my server is showing latency...

I can send any other information if necessary.

Thanks
 
11 REPLIES
Ehermouet
Occasional Advisor

Re: MSA 2040 latency on raid 10

Here is the config file.

 

=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2018.03.29 15:22:56 =~=~=~=~=~=~=~=~=~=~=~=

System Information
------------------
System Name: SANSP
System Contact: informatique
System Location: NOZAY
System Information: SAN
Midplane Serial Number: 
Vendor Name: HP
Product ID: MSA 2040 SAS
Product Brand: MSA Storage
SCSI Vendor ID: HP
SCSI Product ID: MSA 2040 SAS
Enclosure Count: 1
Health: OK
Health Reason: 
Other MC Status: Operational
PFU Status: Idle
Supported Locales: English (English), Arabic (العربية), Portuguese (português), Spanish (español), French (français), German (Deutsch), Italian (italiano), Japanese (日本語), Korean (한국어), Dutch (Nederlands), Russian (русский), Chinese-Simplified (简体中文), Chinese-Traditional (繁體中文)


Controllers
-----------
Controller ID: A
Serial Number: Confidential Info Erased
Hardware Version: 5.2
CPLD Version: 55
MAC Address: 00:C0:FF:26:E0:DA
WWNN: 500C0FF26FB49000
IP Address: 192.168.1.15
IP Subnet Mask: 255.255.255.0
IP Gateway: 192.168.1.253
Disks: 18
Virtual Pools: 0
Disk Groups: 2
System Cache Memory (MB): 6144
Host Ports: 4
Disk Channels: 2
Disk Bus Type: SAS
Status: Operational
Failed Over to This Controller: No
Fail Over Reason: Not applicable
Health: OK
Health Reason: 
Health Recommendation: 
Position: Top
Phy Isolation: Enabled
Controller Redundancy Mode: Active-Active ULP
Controller Redundancy Status: Redundant

Controllers
-----------
Controller ID: B
Serial Number: Confidential Info Erased
Hardware Version: 5.2
CPLD Version: 55
MAC Address: 00:C0:FF:26:E0:A1
WWNN: 500C0FF26FB49000
IP Address: 192.168.1.16
IP Subnet Mask: 255.255.255.0
IP Gateway: 192.168.1.253
Disks: 18
Virtual Pools: 0
Disk Groups: 2
System Cache Memory (MB): 6144
Host Ports: 4
Disk Channels: 2
Disk Bus Type: SAS
Status: Operational
Failed Over to This Controller: No
Fail Over Reason: Not applicable
Health: OK
Health Reason: 
Health Recommendation: 
Position: Bottom
Phy Isolation: Enabled
Controller Redundancy Mode: Active-Active ULP
Controller Redundancy Status: Redundant

Controller A Versions
---------------------
Storage Controller CPU Type: Gladden 1300MHz
Bundle Version: GL220R005
Base Bundle Version: G22x
Build Date: Thu Jan  7 17:12:17 MST 2016
Storage Controller Code Version: GLS220R08-01
Storage Controller Code Baselevel: GLS220R08-01
Storage Controller Loader Code Version: 27.016
CAPI Version: 3.19
Management Controller Code Version: GLM220R009-01
Management Controller Loader Code Version: 6.18.22216
Expander Controller Code Version: 3203
CPLD Code Version: 55
PRM CPLD Code Version: 6
Hardware Version: 5.2
Host Interface Module Version: 3
Host Interface Module Model: 5
Backplane Type: 7
Host Interface Hardware (Chip) Version: 2
Disk Interface Hardware (Chip) Version: 3
SC Boot Memory Reference Code Version: 1.2.1.10
CTK Version: No CTK present

Controller B Versions
---------------------
Storage Controller CPU Type: Gladden 1300MHz
Bundle Version: GL220R005
Base Bundle Version: G22x
Build Date: Thu Jan  7 17:12:17 MST 2016
Storage Controller Code Version: GLS220R08-01
Storage Controller Code Baselevel: GLS220R08-01
Storage Controller Loader Code Version: 27.016
CAPI Version: 3.19
Management Controller Code Version: GLM220R009-01
Management Controller Loader Code Version: 6.18.22216
Expander Controller Code Version: 3203
CPLD Code Version: 55
PRM CPLD Code Version: 6
Hardware Version: 5.2
Host Interface Module Version: 3
Host Interface Module Model: 5
Backplane Type: 7
Host Interface Hardware (Chip) Version: 2
Disk Interface Hardware (Chip) Version: 3
SC Boot Memory Reference Code Version: 1.2.1.10
CTK Version: No CTK present

Ports Media    Target ID         Status        Speed(A) Health     
  Reason                                            
  Action                                                                                                                                                                                                                                                  
-------------------------------------------------------------------------------
A1    SAS      500c0ff26fb49000  Up            12Gb     OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

A2    SAS      500c0ff26fb49100  Up            12Gb     OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

A3    SAS      500c0ff26fb49200  Up            6Gb      OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

A4    SAS      500c0ff26fb49300  Disconnected  Auto     N/A        
  There is no active connection to this host port.  
  - If this host port is intentionally unused, no action is required.
  - Otherwise, use an appropriate interface cable to connect this host port to a switch or host.
  - If a cable is connected, check the cable and the switch or host for problems.  

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              0              0

-------------------------------------------------------------------------------
Ports Media    Target ID         Status        Speed(A) Health     
  Reason                                            
  Action                                                                                                                                                                                                                                                  
-------------------------------------------------------------------------------
B1    SAS      500c0ff26fb49400  Up            12Gb     OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

B2    SAS      500c0ff26fb49500  Up            12Gb     OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

B3    SAS      500c0ff26fb49600  Up            6Gb      OK         
                                                    
                                                                                                                                                                                                                                                          

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              4              0

B4    SAS      500c0ff26fb49700  Disconnected  Auto     N/A        
  There is no active connection to this host port.  
  - If this host port is intentionally unused, no action is required.
  - Otherwise, use an appropriate interface cable to connect this host port to a switch or host.
  - If a cable is connected, check the cable and the switch or host for problems.  

   Topo(C) Lanes Expected Active Lanes   Disabled Lanes 
   -----------------------------------------------------
   Direct  4              0              0

-------------------------------------------------------------------------------
Location   Serial Number         Vendor   Rev   Description Usage        Jobs 
  Speed (kr/min)  Size    Sec Fmt    Disk Group Pool    Tier Health     
------------------------------------------------------------------------------
1.1        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.2        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.3        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.4        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.5        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.6        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.7        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.8        HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       data       data    N/A  OK         
1.11       HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.12       HP       HPD3  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.13       HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.14       HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.15       HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.16       HP       HPD2  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.17       HP       HPD5  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.18       HP       HPD5  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.19       HP       HPD5  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
1.20       HP       HPD5  SAS         LINEAR POOL       
  10              900.1GB 512n       RAID10     RAID10  N/A  OK         
------------------------------------------------------------------------------
Status       Encl Slot Vendor  Model              Serial Number            
  Size     
---------------------------------------------------------------------------
Up           1    1    HP      EG0900JETKB        
  900.1GB
Up           1    2    HP      EG0900JETKB        
  900.1GB
Up           1    3    HP      EG0900JETKB        
  900.1GB
Up           1    4    HP      EG0900JETKB        
  900.1GB
Up           1    5    HP      EG0900JETKB        
  900.1GB
Up           1    6    HP      EG0900JETKB        
  900.1GB
Up           1    7    HP      EG0900JETKB        
  900.1GB
Up           1    8    HP      EG0900JETKB        
  900.1GB
Not Present  1    9    N/A     N/A                N/A                      
  N/A
Not Present  1    10   N/A     N/A                N/A                      
  N/A
Up           1    11   HP      EG0900JETKB        
  900.1GB
Up           1    12   HP      EG0900JEHMB        
  900.1GB
Up           1    13   HP      EG0900JETKB        
  900.1GB
Up           1    14   HP      EG0900JETKB        
  900.1GB
Up           1    15   HP      EG0900JETKB        
  900.1GB
Up           1    16   HP      EG0900JETKB        
  900.1GB
Up           1    17   HP      EG0900JEHMB        
  900.1GB
Up           1    18   HP      EG0900JEHMB        
  900.1GB
Up           1    19   HP      EG0900JEHMB        
  900.1GB
Up           1    20   HP      EG0900JEHMB        
  900.1GB
Not Present  1    21   N/A     N/A                N/A                      
  N/A
Not Present  1    22   N/A     N/A                N/A                      
  N/A
Not Present  1    23   N/A     N/A                N/A                      
  N/A
Not Present  1    24   N/A     N/A                N/A                      
  N/A
---------------------------------------------------------------------------
Name    Size     Free     Own Pref RAID   Class    Disks Spr Chk  Status Jobs 
  Job% Serial Number                    Spin Down SD Delay Sec Fmt   Health     
  Reason Action 
-------------------------------------------------------------------------------
RAID10  4496.3GB 847.2MB  B   B    RAID10 Linear   10    0   256k FTOL        
       Disabled  0        512n      OK         
                
data    5395.6GB 8388.6KB A   A    RAID50 Linear   8     0   1536kFTOL        
       Disabled  0        512n      OK         
                
-------------------------------------------------------------------------------
Name    Size     Free     Class    Pool    Tier % of Pool  Own Pref   RAID   
  Disks Spr Chk  Status Jobs      Job%      Serial Number                    
  Spin Down              SD Delay              Sec Fmt   Health     Reason 
  Action 
-----------------------------------------------------------------------------
RAID10  4496.3GB 847.2MB  Linear   RAID10  N/A  100        B   B      RAID10 
  10    0   256k FTOL                       
  Disabled               0                     512n      OK                
         
data    5395.6GB 8388.6KB Linear   data    N/A  100        A   A      RAID50 
  8     0   1536kFTOL                       
  Disabled               0                     512n      OK                
         
-----------------------------------------------------------------------------
Name    Serial Number                    Class    Total Size Avail    Snap Size 
  OverCommit  Disk Groups Volumes  Low Thresh  Mid Thresh  High Thresh  
  Sec Fmt   Health     Reason Action 
-------------------------------------------------------------------------------
RAID10  Linear   4496.3GB   847.2MB  0B        
  N/A         1           1        N/A         N/A         N/A          
  512n      OK                       
data    Linear   5395.6GB   8388.6KB 0B        
  N/A         1           1        N/A         N/A         N/A          
  512n      OK                       
-------------------------------------------------------------------------------
Encl Encl WWN         Name                  Location              Rack Pos  
  Vendor   Model            EMP A CH:ID Rev   EMP B CH:ID Rev   Midplane Type  
  Health     Reason Action 
-------------------------------------------------------------------------------
1    500C0FF026FB493C                                             0    0    
  HP       SPS-CHASSIS      01:063 3203       00:063 3203       2U24-6Gv2      
  OK                       
-------------------------------------------------------------------------------
SKU
---
Part Number: K2R84A
Serial Number: --
Revision: A2

FRU
---
Name: CHASSIS_MIDPLANE
Description: SPS-CHASSIS 2U24 6G SINGLE MIDPLANE
Part Number: 639410-001
Serial Number: Confidential Info Erased
Revision: L
Dash Level: 
FRU Shortname: Midplane/Chassis
Manufacturing Date: 2015-10-08 12:48:54
Manufacturing Location: Tianjin,TEDA,CN
Manufacturing Vendor ID: 0x017C
FRU Location: MID-PLANE SLOT
Configuration SN: Confidential Info Erased
FRU Status: OK
Enclosure ID: 1

FRU
---
Name: RAID_IOM
Description: HP MSA 2040 SAS Controller
Part Number: C8S53A
Serial Number: --
Revision: H2
Dash Level: 
FRU Shortname: RAID IOM
Manufacturing Date: 2015-09-24 14:22:47
Manufacturing Location: Tianjin,TEDA,CN
Manufacturing Vendor ID: 0x017C
FRU Location: UPPER IOM SLOT
Configuration SN: ---
FRU Status: OK
Enclosure ID: 1

FRU
---
Name: RAID_IOM
Description: HP MSA 2040 SAS Controller
Part Number: C8S53A
Serial Number: ----
Revision: H2
Dash Level: 
FRU Shortname: RAID IOM
Manufacturing Date: 2015-09-24 19:28:41
Manufacturing Location: Tianjin,TEDA,CN
Manufacturing Vendor ID: 0x017C
FRU Location: LOWER IOM SLOT
Configuration SN: ---
FRU Status: OK
Enclosure ID: 1

FRU
---
Name: POWER_SUPPLY
Description: FRU,Pwr Sply,595W,AC,2U,LC,HP ES
Part Number: 814665-001
Serial Number: ----
Revision: A
Dash Level: 
FRU Shortname: AC Power Supply
Manufacturing Date: 2015-09-11 16:48:54
Manufacturing Location: Zhongshan,Guangdong,CN
Manufacturing Vendor ID: 
FRU Location: LEFT PSU SLOT
Configuration SN: ----
FRU Status: OK
Original SN: ----
Original PN: 7001540-J000
Original Rev: AH
Enclosure ID: 1

FRU
---
Name: POWER_SUPPLY
Description: FRU,Pwr Sply,595W,AC,2U,LC,HP ES
Part Number: 814665-001
Serial Number: ----
Revision: A
Dash Level: 
FRU Shortname: AC Power Supply
Manufacturing Date: 2015-09-11 16:16:23
Manufacturing Location: Zhongshan,Guangdong,CN
Manufacturing Vendor ID: 
FRU Location: RIGHT PSU SLOT
Configuration SN: -----
FRU Status: OK
Original SN: ----
Original PN: 7001540-J000
Original Rev: AH
Enclosure ID: 1

FRU
---
Name: MEMORY CARD
Description: SPS Memory Card
Part Number: 768079-001
Serial Number: ----
Revision: 
Dash Level: 
FRU Shortname: Memory Card
Manufacturing Date: N/A
Manufacturing Location: 
Manufacturing Vendor ID: 
FRU Location: UPPER IOM MEMORY CARD SLOT
Configuration SN: -----
FRU Status: OK
Enclosure ID: 1

FRU
---
Name: MEMORY CARD
Description: SPS Memory Card
Part Number: 768079-001
Serial Number: ---
Revision: 
Dash Level: 
FRU Shortname: Memory Card
Manufacturing Date: N/A
Manufacturing Location: 
Manufacturing Vendor ID: 
FRU Location: LOWER IOM MEMORY CARD SLOT
Configuration SN: -----
FRU Status: OK
Enclosure ID: 1

Info: * Rates may vary. This is normal behavior. (2018-03-29 13:26:36)

Success: Command completed successfully. (2018-03-29 13:26:36)

# 

Re: MSA 2040 latency on raid 10

Troubleshooting a performance issue involves many factors and is not a straightforward task. Some best practices to follow: there should be no hardware issues, firmware needs to be up to date, and connected systems such as servers and SAN switches all need to be up to date with drivers/firmware as well.

You need to check the block size set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. The corollary, however, is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. Once an I/O gets above a certain size, latency also increases, because the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.

Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and of sequential and random I/O.
For example, a Microsoft SQL Server instance running an OLTP-type workload might see disk I/O that is 8k in size, 80 percent read, and 100 percent random.
A disk backup target, on the other hand, might see disk I/O that is 64k or 256k in size, with 90 percent writes and 100 percent sequential.

The type of workload will affect the results of the performance measurement.
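
To make that relationship concrete, here is a small illustrative sketch in Python. The IOPS figures for the two example workloads are made up purely for illustration; only the 423 IOPS / 11 MB/s pair comes from the original post, which works out to roughly a 27 KB average I/O size.

# Illustration only: throughput (MB/s) = IOPS x I/O size.
def throughput_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

workloads = [
    ("OLTP SQL Server example (8k, random)",    5000, 8),    # hypothetical IOPS figure
    ("Backup target example (256k, sequential)", 400, 256),  # hypothetical IOPS figure
    ("This MSA as reported",                     423, 27),   # ~27 KB average implied by 11 MB/s
]

for name, iops, io_kb in workloads:
    print(f"{name}: {iops} IOPS x {io_kb} KB = {throughput_mb_s(iops, io_kb):.1f} MB/s")

Small random I/Os give high IOPS but low MB/s, which is exactly the pattern in the numbers you posted.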

Check the Customer Advisory below and disable "In-band SES":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564

You can check the Customer Advisory below as well; in many situations this has helped to improve performance:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698

Check the ODX setting on the Windows systems. As per SPOCK, Microsoft Offloaded Data Transfers (ODX) is clearly not supported with the MSA 2040.

https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=91895&typeId=2

Download the PDF; on page 4 it is clearly mentioned that Microsoft Offloaded Data Transfers (ODX) is not supported.
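
If it helps, here is a minimal sketch (assuming a standard Windows host; run it on the Hyper-V server itself) that reads the registry value Microsoft documents for controlling ODX, where FilterSupportedFeaturesMode = 1 means ODX is disabled:

# Sketch: check whether ODX is disabled on a Windows host.
# FilterSupportedFeaturesMode: 0 = ODX enabled (default), 1 = ODX disabled.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "FilterSupportedFeaturesMode")
    except FileNotFoundError:
        value = 0  # value absent means the default, i.e. ODX enabled

print("ODX is", "disabled" if value == 1 else "enabled")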

If you still face the performance issue, then while it is happening capture the outputs below at least 10 to 15 times, along with the MSA logs, and log an HPE support case. They will help you.

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics

One more suggestion: when you log a case or describe your technical issue, please include more details, such as the server model, which operating system is installed, whether any switch is involved and what firmware it runs, and whether everything in the server is up to date, especially the HBA driver/firmware.

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!


I work for HPE
Accept or Kudo
Ehermouet
Occasional Advisor

Re: MSA 2040 latency on raid 10

Hello

 

First, thanks for the help.

 

In fact, ODX was enabled on my server and I have disabled it.

One of my HBAs didn't have the latest firmware; I updated that too.

Servers are ML350 Gen9 and ML350p Gen8. The OS is Windows 2012 R2, connected directly to the MSA 2040 with HBAs.

 

For the stats, see the attachment.

For the MSA and disk firmware, see the attachment too.

 

Thanks in advance

 

 

 

 

 

 

Ehermouet
Occasional Advisor

Re: MSA 2040 latency on raid 10

Stats attachment

Re: MSA 2040 latency on raid 10

From the MSA information I see the controller firmware needs to be updated to GL225R003. Please follow this link:

www.hpe.com/storage/MSAFirmware 

 

Drive firmware needs to be updated for drive model EG0900JETKB. Please follow this link:

https://support.hpe.com/hpsc/swd/public/detail?sp4ts.oid=null&swItemId=MTX_b0bfab641434402d9faca54455&swEnvOid=4184#tab1

I checked the statistics output, and it looks fine to me. There is no CPU load or high latency seen at the time the outputs were captured.

Anyway, they were only captured once. First update the MSA firmware for all components, and update all server component drivers/firmware. Then reboot both the MSA and the server. After that, check whether you still face any performance issue. At that point, capture the following outputs at least 10 to 15 times with a 2-minute gap between captures:

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
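
If it helps to automate that collection, here is a rough, untested sketch. It assumes the paramiko package, that the array accepts these commands over a plain SSH exec channel, and placeholder credentials; adjust to your environment.

# Sketch: capture the statistics commands 15 times with a 2-minute gap,
# so the output can be attached to an HPE support case.
import time
import paramiko

MSA_HOST = "192.168.1.15"   # controller A management IP from the config above
USERNAME = "manage"         # placeholder credentials
PASSWORD = "your-password"

COMMANDS = [
    "show controller-statistics",
    "show disk-statistics",
    "show host-port-statistics",
    "show vdisk-statistics",
    "show volume-statistics",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(MSA_HOST, username=USERNAME, password=PASSWORD)

with open("msa_stats.log", "a") as log:
    for sample in range(15):
        log.write(f"===== sample {sample + 1} at {time.ctime()} =====\n")
        for cmd in COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            log.write(f"# {cmd}\n{stdout.read().decode(errors='replace')}\n")
        time.sleep(120)   # 2-minute gap between samples

client.close()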

 

Note: Before you work on any performance issue, the rule of thumb is to make sure your hardware is error free and everything is up to date with drivers/firmware.

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!


I work for HPE
Accept or Kudo
Ehermouet
Occasional Advisor

Re: MSA 2040 latency on raid 10

hello

 

I upgraded all firmware on the servers and the MSA 2040.

 

The result is the same; sometimes we still encounter latency detected by Veeam ONE.

 

I tried to insert a 15k disk for read cache, but it won't accept it... I suppose I must have an SSD drive to do this?

 

Thanks for the help.

Re: MSA 2040 latency on raid 10

Yes you must use SSD to configure read cache and that will help you in improving read latency only

If you have configured your system as per best practices and firmware/drivers are all up to date on all systems in your setup, but you still face performance problems, then you need to involve a specialist to check your environment.

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!


I work for HPE
Accept or Kudo
Ehermouet
Occasional Advisor

Re: MSA 2040 latency on raid 10

Hello

 

Thanks for the reply.

 

I now have an SSD drive. I tried to follow the guide, but the read cache option is greyed out and I can't activate it.

 

Do you have any idea why?

 

Thanks

 

 

Re: MSA 2040 latency on raid 10

Can you please check whether any virtual disk group exists in your storage pool? Without a virtual disk group you cannot create a read-cache disk group.

The steps are straightforward:

  1. First create a disk group for the pool (you can't use read cache without back-end storage).

  2. Then in the SMU (GUI) select READ-CACHE and select the SSD(s).

You're done. 
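
For reference, the CLI equivalent should be along these lines (please check the CLI reference for your firmware level; the disk IDs below are only placeholders for your new SSDs and for a virtual disk group):

# add disk-group type virtual level raid6 disks 1.21-1.24 pool a
# add disk-group type read-cache disks 1.25 pool a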
 
I would start with just one SSD per pool (as per the best practices guide) and check your performance. 
 
 
Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!

I work for HPE
Accept or Kudo