01-22-2011 10:18 PM
Add new mount point in MetroCluster (4 Node Metro-Rac)
Need your help.
There is a requirement to add a new mount point in a Metro-RAC cluster (4 nodes) using Veritas.
Does this require downtime? Please confirm, and also confirm the steps to be performed.
Thanks, Ali.
01-23-2011 05:36 AM
Re: Add new mount point in MetroCluster (4 Node Metro-Rac)
What's the MetroCluster/Serviceguard version?
Are you using legacy or modular package configuration?
Is the new mount point going to use existing disk groups, or are you going to allocate new storage for it?
What type of storage are you using? (HP EVA, EMC Symmetrix, something else?) In a failover situation, MetroCluster needs to send commands to the storage systems to trigger the storage failover: the method to do this is specific to each storage manufacturer.
It might be possible to add the new mount point without downtime, but then you won't know for sure whether it will really work in a failover situation. And since your organization has taken the considerable effort and expense to build a MetroCluster, the reliability of this system is obviously very important to your organization. So I would strongly recommend scheduling downtime so you can configure *and test* the new configuration with no risk of accidentally crashing production in the process.
You say you have a 4-node MetroCluster with RAC. I guess this means you have 2 active and 2 failover nodes. In that case, if you can run using only one node for a while, you might be able to do this without _database_ downtime, even though each of your active nodes will be separately down for a while for reconfiguration and testing. But I cannot know this for sure without knowing *everything* about your current cluster configuration.
In essence, the preparations required to add a new mount point are the same as when setting up a new MetroCluster package, but instead of creating a new package configuration/control script, you modify the existing package to include the new mount point.
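For a modular package, for example, the change might boil down to appending filesystem parameters to the existing package configuration and re-applying it. This is only a minimal sketch: the package name, disk group, and paths below are made-up placeholders, and the Metrocluster-specific parameters of your configuration are not shown.

# Pull the current modular package configuration into a file
cmgetconf -p oradata_pkg oradata_pkg.conf

# Append the new filesystem entries (placeholder names), e.g.:
#   vxvm_dg        dg_new
#   fs_name        /dev/vx/dsk/dg_new/vol_new
#   fs_directory   /u02/oradata_new
#   fs_type        vxfs
#   fs_mount_opt   "-o rw"

# Validate and apply the updated configuration
cmcheckconf -P oradata_pkg.conf
cmapplyconf -P oradata_pkg.conf

For a legacy package, the equivalent edit would go into the package control script instead (the VXVM_DG/LV/FS arrays).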
MK
01-24-2011 04:48 AM
Re: Add new mount point in MetroCluster (4 Node Metro-Rac)
The "... using Metrocluster and Continentalclusters" manual helps identify the necessary modifications.
The latest version of the manual is at this link: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02581314/c02581314.pdf
MetroRAC is the most complex of HP cluster configurations, and the procedure to add a file system is likewise more involved than any other HP cluster.
See the discussion titled "Cluster File System and Cluster Volume Manager Sub-clusters" on page 355.
Start at page 365 for a description of how to create a file system (from scratch).
See also, "Configuring Complex Workload Packages to Use CFS" pg 368
Depending on whether you are also added a DG or not determines what needs to be done.
If you add a DG, you need to configure a dependency on the site safety latch. (see page 396), and also create the CFS file system at both sites.
Also update Oracle software with a link to the file system.
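As a rough illustration only (the latch and package names below are hypothetical; page 396 of the manual gives the exact syntax for your release), such a dependency in a modular package configuration uses the standard Serviceguard dependency parameters:

# Dependency stanza added to the new disk-group/mount-point package
# (SITE_SAFETY_LATCH_pkg is a placeholder for the actual latch package name)
dependency_name        sslatch_dep
dependency_condition   SITE_SAFETY_LATCH_pkg = up
dependency_location    same_node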
If the MetroRAC was integrated by HP Consulting, you might want to consider requesting HP Technical Services to perform the needed steps.
02-01-2011 07:37 AM
Re: Add new mount point in MetroCluster (4 Node Metro-Rac)
Hello Matti/Stephen...
Thanks for the reply, and sorry for the late response; I was on short leave.
The environment is a Metro-RAC environment with 4 nodes: 2 nodes active and 2 nodes passive.
I know the theory and understand it: create the disk group and mount point with local CFS on one side and create the pair on the storage on both sides. Later, split the pair, import the disk group, and create the CFS on the DR site.
My doubt is whether I need downtime, or whether I can configure the site controller in maintenance mode.
Please confirm whether the above plan will work, or whether there is another method to configure this.
Thanks, Ali.
02-01-2011 09:53 AM
Re: Add new mount point in MetroCluster (4 Node Metro-Rac)
Actually it takes time, but it is straightforward; follow these steps.
1. Make sure all nodes in the cluster see the SAN LUNs and that no data replication is occurring on those LUNs from the SAN side at the moment.
2. Stop horcm.
3. On the active site, go to the CFS master and create the new disk group and file system as needed using the cfsdgadm and cfsmntadm commands, just as in a regular cluster.
4. Once done, stop the packages, which should unmount the file system and deport the disk group.
5. Add the names of the new LUNs to your horcm.conf file in the manner described in the documentation and restart horcm. When running a pairdisplay, you should see the new disks listed as SMPL, not async, since they are not being replicated yet.
6. Go to the other site, stop horcm, and import the disk group.
7. Once the disk group is imported, you can create the CFS packages for the DG and file system, just like on the other site.
8. When done, halt the packages and add the LUNs to the horcm.conf file in the same manner described in the documentation.
9. Start horcm and verify with pairdisplay that the LUNs are there, listed as SMPL.
10. At this point, begin replication on the SAN side from the active site to the inactive site.
11. Make sure you apply any dependencies as needed.
12. Once the replication is done, you can verify with a pairdisplay; now the LUNs should be listed as sync.
That should be all that is needed. There should be no downtime involved in the process. However, you should schedule a failover to verify everything works properly. A rough sketch of the commands behind these steps follows below.
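Assuming XP/Hitachi-style RAID Manager (horcm) replication and Veritas CFS, and with placeholder disk group, volume, mount point, and instance names, check the exact option syntax against your SFCFS and RAID Manager documentation:

# Step 3 - on the CFS master at the active site
cfsdgadm add dg_new all=sw                         # register the shared disk group
cfsmntadm add dg_new vol_new /u02/newfs all=rw     # register the mount point
cfsmount /u02/newfs

# Steps 2/5 - stop RAID Manager, add the new LUNs to horcm.conf, restart it
horcmshutdown.sh 0
#   ...edit /etc/horcm0.conf (HORCM_DEV / HORCM_INST entries for the new LUNs)...
horcmstart.sh 0
pairdisplay -g dg_new                              # new LUNs should show as SMPL

# Steps 9/10/12 - after the remote site is prepared, start and verify replication
paircreate -g dg_new -vl -c 15                     # or via your array management tool
pairdisplay -g dg_new                              # repeat until the pairs are fully established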