Community Home > Storage > Entry Storage Systems > Disk Enclosures
05-20-2003 05:14 AM
armmgr -J HighPerformance
I am planning to set the resiliency level of our VA7110 to HighPerformance mode. Aside from the risk of filesystem inconsistency after a crash, what downsides should I expect from that mode?
Is there any negative effect on applications using the disk array?
I also have Kernel Async I/O (KAIO) enabled for the Informix database; will the two (HighPerformance on the disk array and KAIO in Informix) work well together?
3 REPLIES
05-20-2003 06:04 AM
Re: armmgr -J HighPerformance
This feature stops the array writing its internal database to the disks in the 12H. The database we're talking about is the LUN mapping, i.e. on which physical location on which disk each LUN lives.
Imagine AutoRAID rebalancing data because the volume of data is growing (i.e. data moves from RAID 1/0 into RAID 5 or vice versa): the LUNs' physical locations change. To you it means nothing, since you just have a device file for each LUN; you don't care where it physically sits on the 12 disks in the 12H. The 12H, however, obviously needs to know this and protect these maps (aka databases)!
The maps are managed by the controllers, which modify and store them in what the man pages call NVRAM. It isn't REALLY non-volatile, though; it's battery-backed. (Each controller has two batteries to preserve this "NVRAM" on power off.)
Lose battery power (which happens after around two weeks of being powered off) and you lose the battery-backed contents.
Usually this doesn't matter: on a proper shutdown (i.e. not pulling the power plugs or switching off the rack) this data is written to every disk in the 12H and its consistency is validated (and with the Normal armmgr -J setting it is also flushed every 5 seconds).
So it is okay to switch to high performance if you ALWAYS shut down properly AND/OR you don't leave it powered off for more than two weeks!
Later,
Bill
It works for me (tm)
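Bill's point can be sketched as a toy model: in Normal mode the LUN maps reach the disks almost immediately, while in HighPerformance mode they live only in battery-backed "NVRAM" until a clean shutdown. All class and attribute names here are hypothetical; the real behavior is inside the array firmware, not anything you script.

```python
# Toy model of the AutoRAID LUN-map flush trade-off described above.
# Simplified and hypothetical; illustrates the failure mode only.

class ArrayController:
    def __init__(self, resiliency="Normal"):
        self.resiliency = resiliency
        self.nvram_maps = {}      # battery-backed copy of the LUN maps
        self.on_disk_maps = {}    # last copy flushed to the disks

    def update_map(self, lun, location):
        self.nvram_maps[lun] = location
        if self.resiliency == "Normal":
            self.flush()          # Normal mode: maps hit disk within seconds

    def flush(self):
        self.on_disk_maps = dict(self.nvram_maps)

    def clean_shutdown(self):
        self.flush()              # both modes flush on a proper shutdown

    def battery_exhausted(self):
        self.nvram_maps = {}      # ~2 weeks powered off: NVRAM contents lost

normal = ArrayController("Normal")
hiperf = ArrayController("HighPerformance")
for c in (normal, hiperf):
    c.update_map("lun0", "disk3:raid1/0")

# Power pulled without a clean shutdown, then the batteries drain:
for c in (normal, hiperf):
    c.battery_exhausted()

print(normal.on_disk_maps)   # maps survive on disk
print(hiperf.on_disk_maps)   # empty: LUN maps (and hence the LUNs) are gone
```

The key line is the `clean_shutdown` path: in HighPerformance mode that flush is the only thing standing between the maps and the battery timer.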
05-20-2003 06:06 AM
Re: armmgr -J HighPerformance
My advice is to use Normal Resiliency.
A few firmware generations ago, High Performance Resiliency did provide better performance for many workloads, but no longer. In High Performance mode the write-cache algorithm is a pure LRU, meaning that a dirty write-cache page stays in the write cache until forced out (written to the disks) by demand for a free page. The write cache acts like a FIFO. Not good.
In Normal Resiliency mode, write-cache pages are written to the disks (but remain in the write cache for reads or re-writes) if they have not been changed in the last 4 seconds. This algorithm is more likely to have "ready" pages for new writes, but still keeps the data available for re-write and read hits. Much better.
The good news is that you can easily experiment with this setting; it can be changed online. Using the statistics gathered by the array and made available through the armperf Command View command, you can compare the cache-hit rate for both settings. Study the armperf ARRAY command. The statistic you want to measure is the ratio of total write commands to write commands that completed in less than 2.55 ms; a command that completes in under 2.55 ms can only have been a cache "hit". Ignore the statistic labeled "cache hits"; it is not a reliable method of determining cache hits, despite its name!
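The metric described above is just a ratio of two counters. A minimal sketch, assuming you have already pulled the total write-command count and the under-2.55 ms count out of your armperf output (the parameter names and the sample numbers here are made up for illustration):

```python
# Effective write-cache hit rate as suggested above: the fraction of write
# commands completing in under 2.55 ms, which can only happen on a cache hit.
# Counter names are hypothetical -- map them to the fields your armperf
# ARRAY output actually reports.

def write_cache_hit_rate(total_writes: int, writes_under_2_55ms: int) -> float:
    """Return the fraction of writes that must have been cache hits."""
    if total_writes == 0:
        return 0.0
    return writes_under_2_55ms / total_writes

# Compare one sample per resiliency setting (illustrative numbers):
normal_rate = write_cache_hit_rate(total_writes=120_000, writes_under_2_55ms=102_000)
hiperf_rate = write_cache_hit_rate(total_writes=118_000, writes_under_2_55ms=71_000)
print(f"Normal: {normal_rate:.1%}, HighPerformance: {hiperf_rate:.1%}")
# -> Normal: 85.0%, HighPerformance: 60.2%
```

Gather both samples under a comparable workload, since the ratio is only meaningful when the write mix is similar across the two runs.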
05-20-2003 06:07 AM
Re: armmgr -J HighPerformance
Oh yeah, the problem isn't really filesystem corruption unless the write cache is lost (it is also stored in NVRAM); the real problem is LUN-map loss, which translates to loss of the entire PV. In the worst case, if you are on recent firmware and software, you can get the 12H to recover those maps by scanning every single bit on every single disk, but on a fully loaded 12H that process can take well over 24 hours! At least, I never bothered waiting longer than that, but I have been told it works!
Later,
Bill
It works for me (tm)