05-19-2008 06:05 AM
Ensuring consistency in DR group split
Hello everyone. I am very familiar with EMC and HDS arrays, and just recently (3 months ago) took a position supporting EVAs.
My situation is this...
I have a three-node Oracle RAC cluster with 37 LUNs presented from an EVA 5000 (VCS 3.028). I have been replicating asynchronously with CA to our remote site via FCIP (single 1 Gbit link).
With this many LUNs, and with multiple database instances whose tablespaces are spread across a large number of them, it is impossible to guarantee consistency when I suspend replication and split the DR groups. I realize that with async replication I am not going to be guaranteed consistency in a failure situation, but with DR groups I am at least able to guarantee that write ordering is preserved for the LUNs in the DR group.
My problem is this...
I essentially need to either extend the number of vdisks allowed in a DR group, or somehow guarantee consistency when I split multiple groups. The problem is a timing issue. It would help if I could do a concurrent split instead of a consecutive split of the DR groups.
Any help is greatly appreciated.
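One way to at least narrow the timing window between consecutive splits is to script the suspends so that every DR group is hit as close to the same instant as possible. Below is a minimal sketch of that idea, driving SSSU from parallel threads; the manager and system names, credentials, DR group names, and the exact SET DR_GROUP ... SUSPEND syntax are assumptions that need to be checked against the SSSU reference for your Command View version. Note that this does not create a single cross-group consistency point; it only shrinks the gap between the per-group split times.

```python
# Minimal sketch (not a supported recipe): suspend several DR groups as close
# together in time as possible by running one SSSU session per group in parallel.
# ASSUMPTIONS: the Command View host, credentials, storage system name, DR group
# names, the "\Data Replication\..." path, the SET DR_GROUP ... SUSPEND syntax
# and the 'FILE' invocation are all placeholders -- verify against the SSSU
# reference for your Command View EVA version before using anything like this.
import subprocess
import tempfile
from concurrent.futures import ThreadPoolExecutor

MANAGER = "cv-eva-host"            # hypothetical Command View server
SYSTEM = "EVA5000_PROD"            # hypothetical storage system name
DR_GROUPS = ["DRG_ORA_DATA1", "DRG_ORA_DATA2", "DRG_ORA_REDO"]  # placeholders

def suspend(group):
    """Write a one-group SSSU script to a temp file and run it."""
    commands = "\n".join([
        f"SELECT MANAGER {MANAGER} USERNAME=admin PASSWORD=secret",
        f"SELECT SYSTEM {SYSTEM}",
        f'SET DR_GROUP "\\Data Replication\\{group}" SUSPEND',   # syntax: assumption
        "EXIT",
    ])
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(commands)
        path = f.name
    return group, subprocess.run(["sssu", f"FILE {path}"]).returncode

# Fire all suspends at (nearly) the same moment; this narrows the window between
# splits but does NOT make the DR groups mutually consistent.
with ThreadPoolExecutor(max_workers=len(DR_GROUPS)) as pool:
    for group, rc in pool.map(suspend, DR_GROUPS):
        print(group, "suspend return code:", rc)
```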
3 REPLIES
05-20-2008 06:44 AM
Re: Ensuring consistency in DR group split
Mike,
In a database environment it's impossible to guarantee the consistency of a replicated volume (either sync or async) because of the open files in the database. So if you go with sync replication and cut it off at any point in time, the open files will not be usable on the remote storage.
To overcome this, you need to put the database into hot backup mode (i.e., quiesce the database), then split the CA and unquiesce the database. This doesn't affect the functionality of the database, but it makes the open files readable on the remote storage. After that, if you want to resume CA, one idea is to copy the remote CA volumes to another set of disks using BC.
Remember, whether you use sync or async, when the replication split occurs it copies over exactly the data that exists as of the time of the split.
Hope this helps.
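As a concrete illustration of that sequence (quiesce, split the CA, unquiesce), here is a minimal sketch. It assumes ALTER DATABASE BEGIN BACKUP / END BACKUP run through sqlplus as SYSDBA (available from Oracle 10g onwards; on older releases each tablespace is put into backup mode individually), and split_dr_groups() is only a hypothetical stand-in for whatever SSSU or Command View procedure actually suspends the DR groups.

```python
# Minimal sketch of the quiesce -> split -> unquiesce sequence described above.
# ASSUMPTIONS: ALTER DATABASE BEGIN/END BACKUP (Oracle 10g and later) run through
# sqlplus as SYSDBA on the database host; split_dr_groups() is a hypothetical
# stand-in for the SSSU / Command View procedure that actually suspends CA.
import subprocess

def run_sql(statement):
    """Run one SQL statement through sqlplus as SYSDBA (local OS authentication assumed)."""
    subprocess.run(
        ["sqlplus", "-S", "/ as sysdba"],
        input=f"{statement};\nEXIT;\n",
        text=True,
        check=True,
    )

def split_dr_groups():
    # Placeholder: suspend/split the CA DR groups here, e.g. with an SSSU script.
    pass

run_sql("ALTER DATABASE BEGIN BACKUP")    # quiesce: datafiles into hot backup mode
try:
    split_dr_groups()                     # split CA while the database is quiesced
finally:
    run_sql("ALTER DATABASE END BACKUP")  # unquiesce, even if the split fails
```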
05-20-2008 07:02 AM
Re: Ensuring consistency in DR group split
I definitely understand that the data will be completely inconsistent due to the way that the DB engine uses and writes to the DB files. I know that there will be a crash recovery process on the DR side.
You mentioned putting the database in hot backup mode, and that is an option we have used and it works well. We just do a "Begin backup" and "End Backup".
What I am looking for is a way to essentially preserve write ordering on more than 8 LUNs at a time. I guess my initial post didn't emphasize that. If I can preserve write ordering for a larger group of disks, the recovery process will be easier.
This also comes into play if I have an actual link failure. If write order is preserved and I have a failure, then in theory the inconsistency will not be as widespread if all of my LUNs are in the same DR group.
05-20-2008 11:59 AM
Re: Ensuring consistency in DR group split
Mike,
You have a problem here... why are you using so many vdisks? Given the way the EVA works, even allowing for OCR and voting disks for RAC, I can't understand why you would need so many separate LUNs (unless you've been forced into that by restrictions in your OS or volume manager).
What platform is this on?
The one strict rule you need to follow with Oracle is that the datafiles and online redo logs need to be in the same DR group - everything else can be in separate groups and you can still recover. However, I doubt that's going to get you down to just 8 LUNs, so I'd also suggest looking at upgrading the controller firmware of the EVAs.
Looking at the table in section 6.3 (p21) of this doc:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01432844/c01432844.pdf
You'll see that if you upgrade to 4.x you can get up to 32 vdisks in a DR group - combined with the point above, that might be enough.
I won't pretend that upgrading to 4.x is easy - it's a major upgrade, as it completely changes the way vdisks are presented, going from active/passive to an active/active ALUA-style configuration - but unless you can completely reconstruct your database on <=8 vdisks, I can't see what else you can do.
HTH
Duncan
I am an HPE Employee
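To apply the rule above (datafiles and online redo logs on vdisks in the same DR group), the files that have to be co-located can be listed straight from the data dictionary and then mapped to LUNs/vdisks with the host's volume manager tools. A minimal sketch, assuming local sqlplus access as SYSDBA; the query itself is just standard v$datafile / v$logfile:

```python
# Sketch: list the datafiles and online redo log members that, per the rule above,
# must sit on vdisks in the same DR group. ASSUMPTIONS: local sqlplus access as
# SYSDBA; mapping each file to its LUN/vdisk is then done with the host's volume
# manager tools (e.g. LVM or VxVM, depending on the platform).
import subprocess

QUERY = """SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
SELECT 'DATAFILE ' || name FROM v$datafile
UNION ALL
SELECT 'REDO     ' || member FROM v$logfile;
EXIT;
"""

result = subprocess.run(
    ["sqlplus", "-S", "/ as sysdba"],
    input=QUERY, text=True, capture_output=True, check=True,
)
print(result.stdout)
```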
