Service guard info

 
SOLVED
Randy Liggett
New Member

Service guard info

We are thinking of using Serviceguard both for a backup server and for disaster recovery.
My question: the backup server is 3 miles away. What is the maximum distance over which Serviceguard will work between the two servers? Where do I find the basic setup scenario?
Thanks
Randy
Steven E. Protter
Exalted Contributor

Re: Service guard info

Regular Serviceguard is limited by how long a SCSI cable can run to the shared SCSI disk.

With Fibre Channel the distance is substantially longer; measured, I believe, in hundreds of meters.

There is a configuration called a Continental Cluster which can be used to have servers in different data centers as part of Serviceguard clusters, or, probably more accurately, multiple clusters in different data centers working together to provide service.

So if you have the bucks, distance is not really a factor.
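As for the basic setup scenario, the flow looks roughly like this (a minimal sketch; the node names and file paths are placeholders, so check the Managing Serviceguard manual for your version):

# Probe both nodes and generate a cluster configuration template
cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2

# Edit cluster.ascii (cluster name, heartbeat LANs, cluster lock), then verify it
cmcheckconf -v -C /etc/cmcluster/cluster.ascii

# Distribute the binary configuration to all nodes
cmapplyconf -v -C /etc/cmcluster/cluster.ascii

# Start the cluster and check its status
cmruncl -v
cmviewcl -v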

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Rita C Workman
Honored Contributor
Solution

Re: Service guard info

Well, there are different clusters:
Campus goes up to 10 km
Metro up to 100 km
Continental - which is the one that goes all the way...

From your logistics, you're looking at either a Campus or a Metro cluster.

I've attached an old document from Bob Sauers (SG guru). It's a little old, but you may find it helpful. Anything by Bob Sauers is worth reviewing.

Rgrds,
Rita
Randy Liggett
New Member

Re: Service guard info

Thanks SEP,
We are looking at two completely different servers, not sharing any disk; just, if one server fails, it would jump to the other.
The database would be Oracle 9i, with, I guess, RAC. The things I was worried about were the virtual IP address and the hub for the heartbeat between the two servers.
Steven E. Protter
Exalted Contributor

Re: Service guard info

Right now, I just mess around with SG as a consultant doing something else. Rita does Continental Clusters.

Her document is going to be helpful.

I would think the Floating IP address issue would be addressed in the documentation.

On a campus cluster, it should not be an issue.

You need some kind of bandwidth between the two sites to make this work, which would probably include an IP address the two boxes/clusters can share.
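To make the floating IP concrete, here is a rough sketch of how a relocatable package IP is defined in a legacy package control script (the address, subnet, and paths here are placeholders; see the Managing Serviceguard manual for the exact template):

# Generate package configuration and control-script templates
cmmakepkg -p /etc/cmcluster/pkg1/pkg1.ascii
cmmakepkg -s /etc/cmcluster/pkg1/pkg1.cntl

# In pkg1.cntl, the relocatable (floating) IP that moves with the
# package between nodes looks like this:
#   IP[0]="192.168.1.50"       # placeholder address clients always connect to
#   SUBNET[0]="192.168.1.0"    # monitored subnet available on both nodes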

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Rita C Workman
Honored Contributor

Re: Service guard info

Hi Randy,

Based on your last comment, SG is not for you.
SG relies on shared disk. You package up your shared disk, and it is access to the disks that fails over between servers. The server O/S becomes (I hate to put it this way...) irrelevant.

Since the O/S resides on vg00, and that is "generally" disk that is physically inside your server, you couldn't fail that over.

Your best method would be to make recovery tapes. So I might question your statement about completely different servers. They should be similar/compatible or same-class servers. As long as you're in that ballpark, you could use your tapes. But to fail over an O/S... you'd have to put your O/S on shared disk, and there are many caveats with something like that.
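If you go the recovery-tape route, Ignite-UX is the usual tool. A minimal sketch (the tape device file is a placeholder for your no-rewind device):

# Build a bootable recovery tape containing all of vg00
make_tape_recovery -x inc_entire=vg00 -v -a /dev/rmt/0mn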

Rgrds,
Rita
Randy Liggett
New Member

Re: Service guard info

Hey Rita,
Thanks for your help.
We would have the same type of servers,
rp5470s, but one has a SAN and the other just external drives, no SAN.
They are worried about the high availability side more than just disaster recovery.
They need something in case the 1st server fails. We can and would set things up as needed if that's what Serviceguard calls for, unless HP has another type of product to help with these failover situations.
Thanks
Randy
Rita C Workman
Honored Contributor

Re: Service guard info

Hey Randy,

HA involves a little more than just failing over. It includes mirroring your data between sites: EMC offers SRDF and HP offers Continuous Access. Then you need the communication hardware to mirror your disks between sites.
It is a fairly 'big deal', and far too much to explain on this simple bulletin board.
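Just to give a feel for the SRDF side, driving it from EMC's SYMCLI looks roughly like this (a sketch only; "oradg" is a placeholder device group, and Continuous Access has its own, different tooling):

# Start/resume mirroring from the R1 (primary) side to the R2 (DR) side
symrdf -g oradg establish

# Check the synchronization state of the device pairs
symrdf -g oradg query

# In a disaster, make the R2 side writable so the DR host can run
symrdf -g oradg failover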

We set ours up with a SAN at our DR site and host-based attached disk at the primary site. Now both sites are SAN attached. So both sites don't have to be an 'exact' match... but they both have to be ABLE to replicate between arrays.

You might suggest to your bosses that they send you or someone to HPWorld in SF this year. There are a couple of folks doing sessions on DR, myself included, that will help you see where you are and how to get to where you want to go.
I'll be happy to go over things with you in more detail if you're able to make it there.
This might help before you bring someone in-house who will 'only sell their version of DR'.

Kindest Regards,
Rita
Steven E. Protter
Exalted Contributor

Re: Service guard info

As Rita notes, shared disk is required.

Depending on the distance between the machines, this can be done with a leased line or fiber.

Still, that does not guarantee high availability, because if the data is only in one place, that place can go away.

Therefore some planning is needed. A real DR plan. There are lots of people who can do that now that you have an idea of what is involved.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
melvyn burnard
Honored Contributor

Re: Service guard info

Randy

You may want to take a read of the Designing Disaster Tolerant Clusters manual at
http://docs.hp.com/en/B7660-90016/B7660-90016.pdf

There are various scenarios here, but if you are NOT going to have shared disc, or even use hardware data replication (EMC arrays with SRDF, XP arrays with CA, EVA arrays with CA), then I am not sure that these products (Extended Serviceguard, Metrocluster or Continental Clusters) are suitable for you.

One thing to consider might be having the 2nd node up and running an Oracle standby database, or some similar idea.
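If you pursue the standby-database idea, the rough shape of it on 9i is sketched below (paths are placeholders, the archived-log shipping mechanism is up to you, and the Oracle documentation covers the details):

# On the primary, create a control file for the standby (run as the oracle user)
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
EOF

# On the standby node, mount the database and keep applying the
# archived redo logs shipped over from the primary
sqlplus -s "/ as sysdba" <<EOF
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF

# If the primary is ever lost, activate the standby as the new primary
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE ACTIVATE STANDBY DATABASE;
EOF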
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!