03-29-2018 06:20 AM
MSA 2040 latency on raid 10
Hello,
We have an MSA 2040.
We have two volumes: one on RAID 5 for data and low-priority VMs, and another on RAID 10 (10 disks of 900 GB, 10k) for 4 VM database servers.
We have some I/O latency; Veeam ONE reports latency every 15 minutes, and users complain about latency in some applications.
My Hyper-V hosts are connected to the MSA with HBA cards, 4 cables per HBA.
03-29-2018 06:27 AM - last edited on 04-06-2018 06:43 AM by Parvez_Admin
Re: MSA 2040 latency on raid 10
Here is the config file:
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2018.03.29 15:22:56 =~=~=~=~=~=~=~=~=~=~=~= System Information ------------------ System Name: SANSP System Contact: informatique System Location: NOZAY System Information: SAN Midplane Serial Number: Vendor Name: HP Product ID: MSA 2040 SAS Product Brand: MSA Storage SCSI Vendor ID: HP SCSI Product ID: MSA 2040 SAS Enclosure Count: 1 Health: OK Health Reason: Other MC Status: Operational PFU Status: Idle Supported Locales: English (English), Arabic (العربية), Portuguese (português), Spanish (español), French (français), German (Deutsch), Italian (italiano), Japanese (日本語), Korean (한국어), Dutch (Nederlands), Russian (русский), Chinese-Simplified (简体中文), Chinese-Traditional (繁體中文) Controllers ----------- Controller ID: A Serial Number:Confidential Info ErasedHardware Version: 5.2 CPLD Version: 55 MAC Address: 00:C0:FF:26:E0:DA WWNN: 500C0FF26FB49000 IP Address: 192.168.1.15 IP Subnet Mask: 255.255.255.0 IP Gateway: 192.168.1.253 Disks: 18 Virtual Pools: 0 Disk Groups: 2 System Cache Memory (MB): 6144 Host Ports: 4 Disk Channels: 2 Disk Bus Type: SAS Status: Operational Failed Over to This Controller: No Fail Over Reason: Not applicable Health: OK Health Reason: Health Recommendation: Position: Top Phy Isolation: Enabled Controller Redundancy Mode: Active-Active ULP Controller Redundancy Status: Redundant Controllers ----------- Controller ID: B Serial Number:Confidential Info ErasedHardware Version: 5.2 CPLD Version: 55 MAC Address: 00:C0:FF:26:E0:A1 WWNN: 500C0FF26FB49000 IP Address: 192.168.1.16 IP Subnet Mask: 255.255.255.0 IP Gateway: 192.168.1.253 Disks: 18 Virtual Pools: 0 Disk Groups: 2 System Cache Memory (MB): 6144 Host Ports: 4 Disk Channels: 2 Disk Bus Type: SAS Status: Operational Failed Over to This Controller: No Fail Over Reason: Not applicable Health: OK Health Reason: Health Recommendation: Position: Bottom Phy Isolation: Enabled Controller Redundancy Mode: Active-Active ULP Controller Redundancy Status: Redundant 
Controller A Versions --------------------- Storage Controller CPU Type: Gladden 1300MHz Bundle Version: GL220R005 Base Bundle Version: G22x Build Date: Thu Jan 7 17:12:17 MST 2016 Storage Controller Code Version: GLS220R08-01 Storage Controller Code Baselevel: GLS220R08-01 Storage Controller Loader Code Version: 27.016 CAPI Version: 3.19 Management Controller Code Version: GLM220R009-01 Management Controller Loader Code Version: 6.18.22216 Expander Controller Code Version: 3203 CPLD Code Version: 55 PRM CPLD Code Version: 6 Hardware Version: 5.2 Host Interface Module Version: 3 Host Interface Module Model: 5 Backplane Type: 7 Host Interface Hardware (Chip) Version: 2 Disk Interface Hardware (Chip) Version: 3 SC Boot Memory Reference Code Version: 1.2.1.10 CTK Version: No CTK present Controller B Versions --------------------- Storage Controller CPU Type: Gladden 1300MHz Bundle Version: GL220R005 Base Bundle Version: G22x Build Date: Thu Jan 7 17:12:17 MST 2016 Storage Controller Code Version: GLS220R08-01 Storage Controller Code Baselevel: GLS220R08-01 Storage Controller Loader Code Version: 27.016 CAPI Version: 3.19 Management Controller Code Version: GLM220R009-01 Management Controller Loader Code Version: 6.18.22216 Expander Controller Code Version: 3203 CPLD Code Version: 55 PRM CPLD Code Version: 6 Hardware Version: 5.2 Host Interface Module Version: 3 Host Interface Module Model: 5 Backplane Type: 7 Host Interface Hardware (Chip) Version: 2 Disk Interface Hardware (Chip) Version: 3 SC Boot Memory Reference Code Version: 1.2.1.10 CTK Version: No CTK present Ports Media Target ID Status Speed(A) Health Reason Action ------------------------------------------------------------------------------- A1 SAS 500c0ff26fb49000 Up 12Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 4 0 A2 SAS 500c0ff26fb49100 Up 12Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes 
----------------------------------------------------- Direct 4 4 0 A3 SAS 500c0ff26fb49200 Up 6Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 4 0 A4 SAS 500c0ff26fb49300 Disconnected Auto N/A There is no active connection to this host port. - If this host port is intentionally unused, no action is required. - Otherwise, use an appropriate interface cable to connect this host port to a switch or host. - If a cable is connected, check the cable and the switch or host for problems. Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 0 0 ------------------------------------------------------------------------------- Ports Media Target ID Status Speed(A) Health Reason Action ------------------------------------------------------------------------------- B1 SAS 500c0ff26fb49400 Up 12Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 4 0 B2 SAS 500c0ff26fb49500 Up 12Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 4 0 B3 SAS 500c0ff26fb49600 Up 6Gb OK Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 4 0 B4 SAS 500c0ff26fb49700 Disconnected Auto N/A There is no active connection to this host port. - If this host port is intentionally unused, no action is required. - Otherwise, use an appropriate interface cable to connect this host port to a switch or host. - If a cable is connected, check the cable and the switch or host for problems. 
Topo(C) Lanes Expected Active Lanes Disabled Lanes ----------------------------------------------------- Direct 4 0 0 ------------------------------------------------------------------------------- Location Serial Number Vendor Rev Description Usage Jobs Speed (kr/min) Size Sec Fmt Disk Group Pool Tier Health ------------------------------------------------------------------------------ 1.1 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.2 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.3 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.4 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.5 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.6 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.7 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.8 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n data data N/A OK 1.11 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.12 HP HPD3 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.13 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.14 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.15 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.16 HP HPD2 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.17 HP HPD5 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.18 HP HPD5 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.19 HP HPD5 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK 1.20 HP HPD5 SAS LINEAR POOL 10 900.1GB 512n RAID10 RAID10 N/A OK ------------------------------------------------------------------------------ Status Encl Slot Vendor Model Serial Number Size --------------------------------------------------------------------------- Up 1 1 HP EG0900JETKB 900.1GB Up 1 2 HP EG0900JETKB 900.1GB Up 1 3 HP EG0900JETKB 900.1GB Up 1 4 HP EG0900JETKB 900.1GB Up 1 5 HP EG0900JETKB 900.1GB Up 1 6 HP EG0900JETKB 900.1GB Up 1 7 HP EG0900JETKB 900.1GB Up 1 8 HP EG0900JETKB 900.1GB Not Present 1 9 
N/A N/A N/A N/A Not Present 1 10 N/A N/A N/A N/A Up 1 11 HP EG0900JETKB 900.1GB Up 1 12 HP EG0900JEHMB 900.1GB Up 1 13 HP EG0900JETKB 900.1GB Up 1 14 HP EG0900JETKB 900.1GB Up 1 15 HP EG0900JETKB 900.1GB Up 1 16 HP EG0900JETKB 900.1GB Up 1 17 HP EG0900JEHMB 900.1GB Up 1 18 HP EG0900JEHMB 900.1GB Up 1 19 HP EG0900JEHMB 900.1GB Up 1 20 HP EG0900JEHMB 900.1GB Not Present 1 21 N/A N/A N/A N/A Not Present 1 22 N/A N/A N/A N/A Not Present 1 23 N/A N/A N/A N/A Not Present 1 24 N/A N/A N/A N/A --------------------------------------------------------------------------- Name Size Free Own Pref RAID Class Disks Spr Chk Status Jobs Job% Serial Number Spin Down SD Delay Sec Fmt Health Reason Action ------------------------------------------------------------------------------- RAID10 4496.3GB 847.2MB B B RAID10 Linear 10 0 256k FTOL Disabled 0 512n OK data 5395.6GB 8388.6KB A A RAID50 Linear 8 0 1536kFTOL Disabled 0 512n OK ------------------------------------------------------------------------------- Name Size Free Class Pool Tier % of Pool Own Pref RAID Disks Spr Chk Status Jobs Job% Serial Number Spin Down SD Delay Sec Fmt Health Reason Action ----------------------------------------------------------------------------- RAID10 4496.3GB 847.2MB Linear RAID10 N/A 100 B B RAID10 10 0 256k FTOL Disabled 0 512n OK data 5395.6GB 8388.6KB Linear data N/A 100 A A RAID50 8 0 1536kFTOL Disabled 0 512n OK ----------------------------------------------------------------------------- Name Serial Number Class Total Size Avail Snap Size OverCommit Disk Groups Volumes Low Thresh Mid Thresh High Thresh Sec Fmt Health Reason Action ------------------------------------------------------------------------------- RAID10 Linear 4496.3GB 847.2MB 0B N/A 1 1 N/A N/A N/A 512n OK data Linear 5395.6GB 8388.6KB 0B N/A 1 1 N/A N/A N/A 512n OK ------------------------------------------------------------------------------- Encl Encl WWN Name Location Rack Pos Vendor Model EMP A CH:ID Rev EMP B CH:ID Rev 
Midplane Type Health Reason Action ------------------------------------------------------------------------------- 1 500C0FF026FB493C 0 0 HP SPS-CHASSIS 01:063 3203 00:063 3203 2U24-6Gv2 OK ------------------------------------------------------------------------------- SKU --- Part Number: K2R84A Serial Number: -- Revision: A2 FRU --- Name: CHASSIS_MIDPLANE Description: SPS-CHASSIS 2U24 6G SINGLE MIDPLANE Part Number: 639410-001 Serial Number:Confidential Info ErasedRevision: L Dash Level: FRU Shortname: Midplane/Chassis Manufacturing Date: 2015-10-08 12:48:54 Manufacturing Location: Tianjin,TEDA,CN Manufacturing Vendor ID: 0x017C FRU Location: MID-PLANE SLOT Configuration SN:Confidential Info ErasedFRU Status: OK Enclosure ID: 1 FRU --- Name: RAID_IOM Description: HP MSA 2040 SAS Controller Part Number: C8S53A Serial Number: -- Revision: H2 Dash Level: FRU Shortname: RAID IOM Manufacturing Date: 2015-09-24 14:22:47 Manufacturing Location: Tianjin,TEDA,CN Manufacturing Vendor ID: 0x017C FRU Location: UPPER IOM SLOT Configuration SN: --- FRU Status: OK Enclosure ID: 1 FRU --- Name: RAID_IOM Description: HP MSA 2040 SAS Controller Part Number: C8S53A Serial Number: ---- Revision: H2 Dash Level: FRU Shortname: RAID IOM Manufacturing Date: 2015-09-24 19:28:41 Manufacturing Location: Tianjin,TEDA,CN Manufacturing Vendor ID: 0x017C FRU Location: LOWER IOM SLOT Configuration SN: --- FRU Status: OK Enclosure ID: 1 FRU --- Name: POWER_SUPPLY Description: FRU,Pwr Sply,595W,AC,2U,LC,HP ES Part Number: 814665-001 Serial Number: ---- Revision: A Dash Level: FRU Shortname: AC Power Supply Manufacturing Date: 2015-09-11 16:48:54 Manufacturing Location: Zhongshan,Guangdong,CN Manufacturing Vendor ID: FRU Location: LEFT PSU SLOT Configuration SN: ---- FRU Status: OK Original SN: ---- Original PN: 7001540-J000 Original Rev: AH Enclosure ID: 1 FRU --- Name: POWER_SUPPLY Description: FRU,Pwr Sply,595W,AC,2U,LC,HP ES Part Number: 814665-001 Serial Number: ---- Revision: A Dash Level: 
FRU Shortname: AC Power Supply Manufacturing Date: 2015-09-11 16:16:23 Manufacturing Location: Zhongshan,Guangdong,CN Manufacturing Vendor ID: FRU Location: RIGHT PSU SLOT Configuration SN: ----- FRU Status: OK Original SN: ---- Original PN: 7001540-J000 Original Rev: AH Enclosure ID: 1 FRU --- Name: MEMORY CARD Description: SPS Memory Card Part Number: 768079-001 Serial Number: ---- Revision: Dash Level: FRU Shortname: Memory Card Manufacturing Date: N/A Manufacturing Location: Manufacturing Vendor ID: FRU Location: UPPER IOM MEMORY CARD SLOT Configuration SN: ----- FRU Status: OK Enclosure ID: 1 FRU --- Name: MEMORY CARD Description: SPS Memory Card Part Number: 768079-001 Serial Number: --- Revision: Dash Level: FRU Shortname: Memory Card Manufacturing Date: N/A Manufacturing Location: Manufacturing Vendor ID: FRU Location: LOWER IOM MEMORY CARD SLOT Configuration SN: ----- FRU Status: OK Enclosure ID: 1 Info: * Rates may vary. This is normal behavior. (2018-03-29 13:26:36) Success: Command completed successfully. (2018-03-29 13:26:36) #
03-29-2018 12:18 PM - edited 04-18-2018 09:59 PM
Re: MSA 2040 latency on raid 10
Troubleshooting a performance issue involves many factors and is not a straightforward task. As a baseline, best practice is that no hardware issue should exist, the array firmware should be up to date, and connected systems such as servers and SAN switches should be up to date with drivers/firmware as well.
Next, check the block size set at the host; depending on that, decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. The corollary is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. Once an I/O exceeds a certain size, latency also increases, because the time required to transport each I/O grows to the point where the disk itself is no longer the main influence on latency.
Typically, workloads can be defined by four categories—I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and sequential and random.
For example, a Microsoft® SQL Server instance running an OLTP type workload might see disk IO that is 8k size, 80 percent read, and 100 percent random.
A disk backup target on the other hand might see disk IO that is 64k or 256K in size, with 90 percent writes and 100 percent sequential.
The type of workload will affect the results of the performance measurement.
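The IOPS/throughput relationship above can be illustrated with a quick calculation. This is a sketch only: the 5 ms service time and 150 MiB/s transfer rate are assumed figures for a 10k SAS spindle, not measurements from this array.

```python
# Illustrative model of the IOPS-vs-throughput trade-off: a fixed per-I/O
# service time (seek + rotation) plus a size-proportional transfer time.
# The 5 ms and 150 MiB/s figures are assumptions, not MSA measurements.
def iops_and_throughput(io_size_kib, seek_ms=5.0, transfer_mib_s=150.0):
    transfer_ms = io_size_kib / 1024 / transfer_mib_s * 1000
    latency_ms = seek_ms + transfer_ms
    iops = 1000 / latency_ms
    mib_s = iops * io_size_kib / 1024
    return iops, mib_s

for size_kib in (8, 64, 256, 1024):
    iops, mib_s = iops_and_throughput(size_kib)
    print(f"{size_kib:>5} KiB: {iops:7.1f} IOPS, {mib_s:6.1f} MiB/s")
```

Running it shows small I/Os maximizing IOPS while large I/Os maximize MiB/s, which is why the workload type matters when interpreting latency numbers.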
Check the Customer Advisory below and disable "In-band SES":
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564
You can check the Customer Advisory below as well; in many situations it has helped improve performance:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698
Check the ODX settings on the Windows system. SPOCK clearly states that Microsoft Offloaded Data Transfer (ODX) is not supported with the MSA 2040.
https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=91895&typeId=2
Download the PDF; on page 4 it is clearly mentioned that Microsoft Offloaded Data Transfer (ODX) is not supported.
If you still face the performance issue, then while it is happening capture the outputs below at least 10 to 15 times, along with the MSA log, and log an HPE support case. They will help you.
# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
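One way to automate those repeated captures is sketched below. The `manage` account name and the use of `ssh` to reach the MSA CLI are assumptions; adjust them to your environment (on Windows, `plink` can be substituted for `ssh`).

```python
import datetime
import subprocess

# Hypothetical capture sketch: collect the five MSA statistics commands
# repeatedly over SSH. The "manage" account and ssh transport are
# assumptions; substitute your own credentials and client.
STATS_COMMANDS = [
    "show controller-statistics",
    "show disk-statistics",
    "show host-port-statistics",
    "show vdisk-statistics",
    "show volume-statistics",
]

def capture(host, samples=15, interval=120, runner=None):
    """Run each statistics command `samples` times, `interval` seconds apart.

    `runner` maps a CLI command string to its output text; by default it
    shells out over SSH, but it is injectable so the loop can be tested
    without an array on hand.
    """
    import time
    if runner is None:
        runner = lambda cmd: subprocess.run(
            ["ssh", f"manage@{host}", cmd],
            capture_output=True, text=True,
        ).stdout
    log = []
    for i in range(samples):
        stamp = datetime.datetime.now().isoformat()
        for cmd in STATS_COMMANDS:
            log.append(f"=== {stamp} # {cmd} ===\n{runner(cmd)}")
        if i < samples - 1:
            time.sleep(interval)
    return log
```

Writing each sample to a timestamped file and attaching it to the support case keeps the captures comparable.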
One more suggestion: when you log a case or describe your technical issue, provide more details, such as the server model, the operating system installed, whether any switch is involved and what firmware it runs, and whether everything in the server is up to date, especially the HBA driver/firmware.
Hope this helps!
Regards
Subhajit
If you feel this was helpful please click the KUDOS! thumb below!
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

03-30-2018 01:40 AM - last edited on 04-06-2018 06:51 AM by Parvez_Admin
Re: MSA 2040 latency on raid 10
Hello
First, thanks for the help.
In fact ODX was enabled on my servers, and I disabled it.
One of my HBAs did not have the latest firmware, so I updated that too.
The servers are an ML350 Gen9 and an ML350p Gen8. The OS is Windows Server 2012 R2, directly connected to the MSA 2040 with HBAs.
For the stats, see the attachment.
For the MSA and disk firmware, see the attachment too.
Thanks in advance.
03-30-2018 01:40 AM - last edited on 04-06-2018 07:20 AM by Parvez_Admin
Re: MSA 2040 latency on raid 10
Stats attachment.
03-30-2018 02:10 AM - edited 04-18-2018 10:01 PM
Re: MSA 2040 latency on raid 10
From the MSA information, I see the controller firmware needs to be updated to GL225R003. Please follow this link:
www.hpe.com/storage/MSAFirmware
The drive firmware also needs to be updated for drive model EG0900JETKB; please follow the corresponding link.
I checked the statistics output, and it looks fine to me: no CPU load or high latency is seen at the time the outputs were captured.
However, they were only captured once. First update the MSA firmware for all components and update all server components' drivers/firmware. Then reboot both the MSA and the servers. After that, check whether you still face a performance issue. If so, you need to capture the following outputs at least 10 to 15 times, with a 2-minute gap between captures:
# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
Note: Before you work on any performance issue, the rule of thumb is to make sure your hardware is error free and everything is up to date with drivers/firmware.
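Once you have those captures, a quick way to scan them for spikes is sketched below. This is hypothetical post-processing: the regex assumes latencies appear in the output as values like `12ms`, so adjust it to the actual column layout of the statistics commands.

```python
import re

# Hypothetical sketch: flag captured samples whose reported latencies
# exceed a threshold. The regex assumes values like "12ms" appear in the
# statistics output; adapt it to the real column layout.
LATENCY_RE = re.compile(r"(\d+(?:\.\d+)?)\s*ms\b", re.IGNORECASE)

def flag_slow_samples(samples, threshold_ms=20.0):
    """Return (index, worst_latency_ms) for each sample above the threshold."""
    flagged = []
    for i, text in enumerate(samples):
        values = [float(m) for m in LATENCY_RE.findall(text)]
        if values and max(values) > threshold_ms:
            flagged.append((i, max(values)))
    return flagged
```

Correlating the flagged timestamps with the Veeam ONE alerts helps confirm whether the latency is seen at the array or only at the host.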
Hope this helps!
Regards
Subhajit
04-06-2018 03:44 AM
Re: MSA 2040 latency on raid 10
Hello,
I upgraded all firmware on the servers and the MSA 2040.
The result is the same: sometimes we still encounter latency detected by Veeam ONE.
I tried to insert 15k disks for read cache, but they are not accepted. I suppose I must have SSD drives to do this?
Thanks for the help.
04-06-2018 07:59 AM - edited 04-18-2018 10:04 PM
Re: MSA 2040 latency on raid 10
Yes, you must use SSDs to configure read cache, and that will help improve read latency only.
If you have configured your system per best practice and firmware/drivers are up to date across your setup but you still face a performance problem, then you need to involve a specialist to check your environment.
Hope this helps!
Regards
Subhajit
04-10-2018 12:59 AM
Re: MSA 2040 latency on raid 10
Hello,
Thanks for the reply.
I now have SSD drives. I tried to follow the guide, but my read-cache option is greyed out and I can't activate it.
Do you have any idea why?
Thanks
04-10-2018 04:38 AM - edited 04-18-2018 10:05 PM
Re: MSA 2040 latency on raid 10
Can you please check your storage pool for whether any virtual disk group exists? Without a virtual disk group you can't create a read-cache disk group.
The steps are straightforward:
First create a disk group for the pool (you can't use read cache without back-end storage).
Then in the SMU (GUI) select READ-CACHE and select the SSD(s).
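In CLI terms, the same two steps would look roughly like this. This is a sketch only: the disk IDs, RAID level, and pool letter are placeholders for your layout, so verify the exact syntax against the CLI reference for your firmware bundle.

```
# add disk-group type virtual level raid5 disks 1.1-1.4 pool a
# add disk-group type read-cache disks 1.21 pool a
```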
Regards
Subhajit
04-10-2018 06:37 AM - edited 04-10-2018 06:48 AM
Re: MSA 2040 latency on raid 10
Thanks for the reply.
Right now we have a linear volume. Maybe my problem is there?
Is it possible to migrate to virtual without losing data?
Thanks
04-10-2018 07:06 AM - edited 04-18-2018 10:05 PM
Re: MSA 2040 latency on raid 10
As mentioned earlier, the read-cache feature works only with virtual disk groups and virtual pools; it does not apply to linear vdisks.
As for your other question: to migrate data from a linear volume to a virtual volume, you need to use a host-based file copy. Please follow the best-practices white paper and refer to page 43 (Scenario 3):
https://h20195.www2.hpe.com/v2/GetPDF.aspx/4AA4-6892ENW.pdf
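Once the new virtual volume is created and mounted on the host, the migration itself is an ordinary file copy; below is a minimal sketch with a verification pass. The `E:/` and `F:/` paths are placeholders for the old and new mount points.

```python
import filecmp
import shutil
from pathlib import Path

# Hypothetical host-based migration sketch: plain file copy from the old
# linear volume to the new virtual volume, then a verification pass.
# "E:/" and "F:/" are placeholder mount points for the two volumes.
def migrate(src="E:/", dst="F:/"):
    src, dst = Path(src), Path(dst)
    shutil.copytree(src, dst, dirs_exist_ok=True)  # copy everything across
    # Verify: every source file must exist and compare equal on the target.
    mismatches = []
    for p in src.rglob("*"):
        if not p.is_file():
            continue
        target = dst / p.relative_to(src)
        if not (target.is_file() and filecmp.cmp(p, target, shallow=False)):
            mismatches.append(p)
    return mismatches  # an empty list means the copy verified clean
```

Quiescing the VMs before the final pass avoids copying files mid-write; repeating the copy until the function returns an empty list is one way to converge.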
Hope this helps!
Regards
Subhajit