V3 + Service Guard Upgrade
09-07-2009 12:38 AM
What needs to be taken care of (apart from the firmware upgrade) when upgrading from 11i v2 to 11i v3, with Serviceguard + Oracle ASM running?
--Muthu.
09-07-2009 01:48 AM
Re: V3 + Service Guard Upgrade.
A good first step, for example, is this webcast:
http://h20219.www2.hp.com/hpux11i/cache/602392-0-0-0-121.html
After that, look at the rolling-upgrade section in the Managing Serviceguard guide:
http://docs.hp.com/en/B3936-90135/B3936-90135.pdf
Hope it helps
09-07-2009 03:30 AM
Re: V3 + Service Guard Upgrade.
What precautions should I take when removing the node from the cluster for the installation and rejoining it to the cluster afterwards?
-- Muthu.
09-07-2009 05:52 AM
Solution
Some say you shouldn't upgrade because the upgrade is not reliable: do a cold install instead.
Others say that the HP upgrade utility has had many improvements and is quite trustworthy by now.
A cold install is a very predictable procedure, without too many hiccups, and if anything goes wrong you can always start over. The drawback is that you need to know very well how your server has been configured, since you will have to modify every single configuration file again from scratch.
An upgrade can be a little more tricky, and if it goes wrong there is no retry unless you do a complete restore of the original configuration.
The good thing is that, after the upgrade, your server is (almost) fully configured.
In the end, it is up to you to choose.
If you choose to upgrade, do it in different steps :
1) Upgrade Serviceguard on HP-UX 11i v2 to the same version you will run on HP-UX 11i v3 (11.18 or 11.19).
2) Read the upgrade README for important known issues and workarounds.
3) It may be a good idea to bring your HP-UX 11i v2 up to date with a recent Quality Pack, hardware enablement bundle, and drivers, but take care that your target HP-UX 11i v3 release is not older than the HP-UX 11i v2 release you start from.
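Steps 1 and 3 can be sanity-checked from the shell before you start; a minimal sketch (the grep patterns are assumptions, adjust them to your depot and bundle names):

```shell
# Confirm the OS release you are starting from (B.11.23 = 11i v2):
uname -r
# List the installed Serviceguard and patch-bundle versions (sketch;
# bundle names vary per release, so the patterns here are guesses):
swlist -l bundle | grep -i -e serviceguard -e qpk -e hwe
```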
If you choose to cold install, then you will have to cover these steps in general :
1) Stop the Oracle DB on the server you want to upgrade, and remove this server from the Oracle configuration. Normally you install the software on one server and Oracle pushes it to the other cluster members, so now you have to pull the software away from one server. The same has to be done for the clusterware. From this point on you have, in fact, a single-node database and clusterware environment on a 2-node Serviceguard cluster.
2) Remove the server from the ServiceGuard configuration. You now have a single-node ServiceGuard cluster.
You don't have to back up anything from the existing volume groups. After the cold install of HP-UX 11i v3, you can simply export these configurations from the other HP-UX 11i v2 server and import them on your HP-UX 11i v3 server.
Once your storage configuration has been imported, you can continue:
3) After installation, add your server back to the serviceguard cluster, then add your server to the Clusterware cluster and then to the database cluster.
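The export/import hand-off described above might look like this; the volume group name, map-file path, and minor number are made up for illustration:

```shell
# On the remaining HP-UX 11i v2 node: preview-export (-p) the shared VG,
# writing a map file (-m) with the LV names (-s records the VGID too).
vgexport -p -s -m /tmp/vgdata.map /dev/vgdata

# Copy /tmp/vgdata.map to the freshly installed 11i v3 node, then:
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000   # pick an unused minor number
vgimport -s -m /tmp/vgdata.map /dev/vgdata
vgcfgbackup /dev/vgdata
```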
09-07-2009 11:52 PM
Re: V3 + Service Guard Upgrade.
Let's take this scenario: nodeA and nodeB in a two-node cluster; we would like to upgrade these nodes from v2 to v3.
Serviceguard 11.17, SGeRAC, Oracle 10gR2 patchset 4, ASM, and SLVM for OCR and voting.
A) Backup on both nodes:
1) Take an Ignite backup
Take a note of the LUN & WWID IDs
back up /etc/fstab, cmcluster,
lvmtab, lvmpvg, lvmconf, group,
passwd, hosts, crontabs, ipfilter config
2) ioscan, vgdisplay, pvdisplay, lvdisplay full verbose output.
3) vgexport -p -s -m of all volume groups
4) take a list of all the disks belonging to ASM and the permissions of those disks
5) note down the LUN IDs (WWN numbers) (autopath display)
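Checklist A can be scripted so that both nodes produce an identical evidence bundle; a rough sketch (the output directory and the file list are assumptions):

```shell
#!/usr/bin/sh
# Sketch: collect pre-upgrade evidence into one directory per node.
D=/var/tmp/preupgrade.$(hostname)
mkdir -p "$D"
ioscan -fn        > "$D/ioscan.out"
vgdisplay -v      > "$D/vgdisplay.out"     # includes LV and PV detail
ls -l /dev/rdsk   > "$D/asm_disk_perms.out"
cp -rp /etc/cmcluster "$D/cmcluster"
for f in /etc/fstab /etc/lvmtab /etc/lvmpvg /etc/group /etc/passwd /etc/hosts
do
    cp -p "$f" "$D/" 2>/dev/null
done
# One map file per volume group (preview export, nothing is changed):
for vg in /dev/vg[0-9a-z]*
do
    vgexport -p -s -m "$D/$(basename "$vg").map" "$vg"
done
```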
B) set AUTOSTART_CMCLD=0 in /etc/rc.config.d/cmcluster
1) bring node B down
cmhaltnode -f nodeB
2) remove the nodeB info from the cluster
On node A:
cmdeleteconf -n nodeB --> (this option is not available, so use the commands below)
cmquerycl -v -C /etc/cmcluster/cluster.config -n nodeA
cmcheckconf -v -C /etc/cmcluster/cluster.config
cmapplyconf -v -C /etc/cmcluster/cluster.config
run the cluster on a single node (nodeA)
C) Install OS v3 & Serviceguard on node B (cold install)
D) Now halt the old cluster on nodeA & remove it from the network
E) Bring up the new cluster on nodeB:
1) import the shared VG with the -s option
vgimport -s -m /tmp/vgvote.map /dev/vgvote  (the map file taken earlier)
import all other VGs if required
vgcfgbackup /dev/vgvote
2) ASM disk permission check
3) change the ASM parameters to reflect the new persistent DSFs
4) Change the permissions of the OCR & voting volumes: /dev/vgvote/*
5) Now set up your package configurations, check and apply the config
restore the pkg folder from the backup
cmquerycl -v -C /etc/cmcluster/newcluster.config -n nodeB
6) change the cluster lock disk path in the cluster.config file.
cmcheckconf -v -C /etc/cmcluster/cluster.config
cmcheckconf -v -C /etc/cmcluster/cluster.config -P /etc/cmcluster/pkg/crspkgu02.conf
cmapplyconf -v -C /etc/cmcluster/cluster.config -P /etc/cmcluster/pkg/crspkgu02.conf
7) start the cluster on nodeB
cmruncl -v
tail -f /var/adm/syslog/syslog.log
8) check that the database is running
crs_stat -t
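Steps 2–4 of section E (re-granting ASM access on the new v3 device files) might look like this; the oracle:dba owner and group are assumptions, use whatever your Oracle installation actually uses:

```shell
# 11i v3 introduces persistent DSFs (/dev/disk, /dev/rdisk); map the old
# legacy device files to them first:
ioscan -m dsf
# Re-grant ASM/CRS access on the raw OCR and voting logical volumes:
chown oracle:dba /dev/vgvote/r*
chmod 660 /dev/vgvote/r*
ls -l /dev/vgvote        # verify ownership before starting CRS
```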
F) Install OS v3 & Serviceguard on node A (cold install)
verify that all the items below are backed up.
1) Take an Ignite backup
Take a note of the LUN & WWID IDs
back up /etc/fstab, cmcluster,
lvmtab, lvmpvg, lvmconf, group,
passwd, hosts, crontabs, ipfilter config
2) ioscan, vgdisplay, pvdisplay, lvdisplay full verbose output.
3) vgexport -p -s -m of all volume groups
4) take a list of all the disks belonging to ASM and the permissions of those disks
5) note down the LUN IDs (WWN numbers) (autopath display)
G) Add to the Cluster:
run vgexport -s -p -m /tmp/vgnew.map /dev/vgvote on nodeB
1) import the shared VG with the -s option
vgimport -s -m /tmp/vgnew.map /dev/vgvote
import all other VGs if required
vgcfgbackup /dev/vgvote
2) ASM disk permission check
3) change the ASM parameters to reflect the new persistent DSFs
4) Change the permissions of the OCR & voting volumes: /dev/vgvote/*
5) Create the cluster config file and set up the package configurations, check and apply the config
restore the pkg folder from backup
cmquerycl -v -C /etc/cmcluster/newcluster.config -n nodeA -n nodeB
6) change the cluster lock disk path in the cluster.config file.
cmcheckconf -v -C /etc/cmcluster/newcluster.conf
on node A: cmcheckconf -v -C /etc/cmcluster/newcluster.conf -P /etc/cmcluster/pkg/crspkgu01.conf
on node B: cmcheckconf -v -C /etc/cmcluster/newcluster.conf -P /etc/cmcluster/pkg/crspkgu02.conf
on node A: cmapplyconf -v -C /etc/cmcluster/newcluster.conf -P /etc/cmcluster/pkg/crspkgu02.conf
on node B: cmapplyconf -v -C /etc/cmcluster/newcluster.conf -P /etc/cmcluster/pkg/crspkgu02.conf
7) start the cluster on nodeB
cmruncl -v
tail -f /var/adm/syslog/syslog.log
8) check that the database is running
crs_stat -t
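After steps 7 and 8 it is worth verifying the state from both nodes; a short check sketch, with the package name taken from the listing above:

```shell
cmviewcl -v                      # both nodes and the cluster should be "up"
cmviewcl -v -p crspkgu02         # per-package state
crs_stat -t                      # CRS resources should be ONLINE
tail -n 20 /var/adm/syslog/syslog.log   # watch for cluster reformation messages
```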
---Muthu
09-16-2009 10:33 PM
Re: V3 + Service Guard Upgrade.
I don't know if it will work, particularly the part where node B has been reinstalled. I don't know whether it is possible to create a single-node Serviceguard cluster from scratch, but it could work fine.
I also don't know whether it is possible to add/remove a node online in Serviceguard.
Maybe it is also possible to shut down node B and NOT remove it from the cluster configuration on node A.
Then, after the install, copy the cluster configuration files (/etc/cmcluster/...) from node A to node B and (with a bit of luck) start the cluster on node B and join it with node A.
If you have a good backup of your Oracle software (/etc/oratab, /var/opt/oracle and your Oracle software installation directory), you can probably just restore your Oracle software and start running.
This way you have no effective downtime, as you can fail over from node A to node B when you start upgrading node A.
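The copy-the-config alternative sketched in words above could look like this; untested, and it assumes the reinstalled node keeps its old hostname and network identity:

```shell
# On node A (still carrying the original cluster configuration):
rcp -r /etc/cmcluster nodeB:/etc/        # or scp -r if remsh is disabled
# On the reinstalled node B, try to join the running cluster:
cmrunnode -v nodeB
cmviewcl -v                              # confirm both nodes are up
```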
09-17-2009 12:38 AM
Re: V3 + Service Guard Upgrade.
We are in the process.
We did a cold install, not an upgrade. Of course we also took a backup of the /var/opt/oracle folder. We did not restore the old cluster files; we created a fresh cluster and imported the Oracle binary volume. The cluster was running fine, but the package would not start: we struggled to start CRS. CRS checks for the .ssh key of the root user (CRS was installed by the root user); once we ran "ssh-keygen -t rsa", CRS started working. The new cluster is now formed on nodeA with v3, and next we are going to upgrade nodeB. We did this on a test & development setup, so we were able to run on a single node for a long time; in a production environment we can't afford that much downtime.
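The ssh-keygen fix described above, sketched as commands (the key path and the local authorized_keys append are assumptions about what CRS needed in this setup):

```shell
# Generate a passwordless RSA key for root and authorize it locally;
# repeat/exchange the public keys between the nodes as CRS requires.
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```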
I will share the full step-by-step procedure we followed once both nodes have been upgraded.
Thanks.
--Muthu.