02-14-2010 11:51 AM
Simultaneous SnapClone & Performance Issue
Hi,
I need to clarify a question about SnapClone creation and scheduling for my customer. Last week, during a SnapClone creation run with SSSU scripting, they experienced a loss of performance and, in some cases, disk access errors reported in the log files of HP-UX and Wintel systems.
The scenario:
- Two EVA 8000 arrays with VCS 6220 & CVE 9.1, replicated against each other in a CA configuration with two DWDM (2 Gb) links between sites.
- Each array has two disk groups, one of FC disks and the other of FATA disks (all with the latest FW bundle).
- No hardware issues occurred during the operation.
- No CA issues were registered during the operation.
- Unfortunately, no performance data was collected.
The customer's procedure:
They launched a very basic and simple SSSU script (copy-pasted into a DOS window on the SMA) creating 5 SnapClones at the following intervals, as reported in the EVA's log:
00:23:29 First clone inside FATA group (origin & destination); size in blocks: 1048576000; redundancy type: Vraid5
00:26:00 Second clone inside FATA group (origin & destination); size in blocks: 1048576000; redundancy type: Vraid5
00:32:15 Third clone inside FATA group (origin & destination); size in blocks: 1048576000; redundancy type: Vraid5
00:39:48 Fourth clone inside FATA group (origin & destination); size in blocks: 1048576000; redundancy type: Vraid5
00:43:00 Fifth clone inside FATA group (origin & destination); size in blocks: 1048576000; redundancy type: Vraid5
All of them were managed by the same controller (as reported by the EVA Navigator tool).
After that, the performance and disk access issues began on various systems, even though their data resided in the FC disk group. When the SnapClone operations finished (around 04:30), the incidents stopped.
The customer acknowledges that the procedure was not adequate, but wants HP to answer some questions:
- Could this procedure be the cause of the issues, and if so, is there a related public advisory?
- What are the recommended maximum number and size of concurrent SnapClones in an environment like this (FATA disk group)? The EVA StorageWorks Replication Solutions Manager Administrator Guide only says:
"Snapclone best practices:
Minimize the number of concurrent snapclone operations (use fewer virtual disks). Organize clone operations into consistency groups of virtual disks, and then create snapclones of the consistency groups sequentially."
- Is copy-pasting into a DOS window a supported procedure in this case?
- Does HP have SSSU script templates to use instead of a rudimentary copy-paste of basic creation and presentation commands?
- Is the use of RSM highly recommended in these cases?
Regards,
Juan Antonio Mas
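For reference, a sequential version of the run described above might look like the following SSSU sketch. All names (manager host, array, vdisk and disk-group paths, credentials) are placeholders, and the exact command spellings and options should be verified against the SSSU reference guide for the installed VCS version:

```
! Hypothetical SSSU script -- all object names below are placeholders.
SELECT MANAGER sma-host USERNAME=admin PASSWORD=password
SELECT SYSTEM EVA8000_SiteA

! Create each snapclone one at a time and wait for it to complete
! before starting the next, per the RSM guide's best practice of
! minimizing concurrent snapclone operations.
ADD COPY clone_vd01 VDISK="\Virtual Disks\vd01\ACTIVE" DISK_GROUP="\Disk Groups\FC Group" REDUNDANCY=VRAID5 WAIT_FOR_COMPLETION
ADD COPY clone_vd02 VDISK="\Virtual Disks\vd02\ACTIVE" DISK_GROUP="\Disk Groups\FC Group" REDUNDANCY=VRAID5 WAIT_FOR_COMPLETION
```

Targeting the FC disk group for the clone destinations (rather than cloning FATA-to-FATA) would also spread the copy load across spindles, at the cost of FC capacity.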
2 REPLIES
02-17-2010 12:36 PM
Re: Simultaneous SnapClone & Performance Issue
Hi Juan,
Personally, I don't think how the EVA gets the instruction (SSSU, RSM, etc.) is significant. The commands are working but causing a performance issue, which suggests a heavily loaded array. They might want to pick a better time of day for the snapclones, if possible. It sounds like the copy operations are just pushing the array's limits, so I would recommend running EVAPerf and going from there.
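A minimal EVAPerf capture from the management server might look like this; the flags shown are typical of EVAPerf but should be checked against `evaperf -?` on the actual SMA, and the output file names are placeholders:

```
rem Hypothetical EVAPerf capture: virtual-disk counters every 60 s
rem for one hour, written as CSV for later analysis.
evaperf vd -cont 60 -dur 3600 -csv > vd_stats.csv

rem Host port statistics over the same window, to see whether the
rem controllers themselves are saturating during the clone run.
evaperf hps -cont 60 -dur 3600 -csv > hps_stats.csv
```

Running a capture like this during the next scheduled SnapClone window would show whether the FATA disk group or the owning controller is the bottleneck.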
02-23-2010 08:15 AM
Re: Simultaneous SnapClone & Performance Issue
Without performance data it is hard to say where the problem might be, but I would recommend using a mirrorclone instead. You will still take an I/O hit when you initially set up the mirrorclones, but no additional I/O has to take place when you fracture them so they can be presented to the desired hosts; resyncing is fast, too, once you are done.
I would do this in an SSSU script, and consider using the multimirror or multisnap command if the disks are all part of a single application.
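The mirrorclone setup suggested above might be sketched in SSSU roughly as follows. This is a hypothetical outline only: all names are placeholders, and mirrorclone command spellings vary by SSSU/VCS version, so verify them against the SSSU reference guide before use:

```
! Hypothetical sketch -- placeholder names; verify exact mirrorclone
! syntax in the SSSU reference guide for your VCS version.
SELECT MANAGER sma-host USERNAME=admin PASSWORD=password
SELECT SYSTEM EVA8000_SiteA

! A mirrorclone copies into a pre-allocated container, so the bulk
! copy I/O is paid once at setup time, not on every clone cycle.
ADD MIRRORCLONE mc_vd01 VDISK="\Virtual Disks\vd01\ACTIVE" CONTAINER="\Containers\cont_vd01"
```

The fracture and resync steps (and the multimirror variant for fracturing a group of mirrorclones consistently) are deliberately omitted here, since their exact syntax should be taken from the SSSU reference guide.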
check out evamgt.wetpaint.com and evamgt google group