Operating System - HP-UX

Installing Oracle RAC with no ServiceGuard (on 11.23)

 
SOLVED
Timothy Nibbe
Advisor


This question was originally posted in the Server | 9000 area; this seems like the better place for it.

I am a Unix sysadmin working with an Oracle DBA to add a second volume group to an Oracle RAC installation on a two-node cluster.

We have a working two-node Oracle 10.2.0.3 RAC cluster with ASM using a "raw" volume group that is managed with LVM.

We are using two 9000 series servers running 11i v2 and do not have OnlineJFS licenses.

We are using an IBM SAN and NOT using ServiceGuard.

The cluster is working fine with one volume group, although it is running out of space. New LUNs were built in shared host groups and presented to both servers. Following an Oracle document provided by the DBA, I added a second volume group by creating a new VG on the first node, using vgexport to generate a map file for it, and importing the new VG on the second node (sketched below).
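
For anyone following along, the procedure was roughly this (a sketch only; the disk device names and map-file path are illustrative, not the ones actually used):

# On the first node: create the VG and export a map file
mkdir /dev/vgsan2
mknod /dev/vgsan2/group c 64 0x030000
pvcreate /dev/rdsk/c10t0d1
vgcreate /dev/vgsan2 /dev/dsk/c10t0d1
vgexport -p -s -m /tmp/vgsan2.map /dev/vgsan2   # -p previews only; -s records the VGID in the map

# On the second node: recreate the group file with the same minor number, then import
mkdir /dev/vgsan2
mknod /dev/vgsan2/group c 64 0x030000
vgimport -s -m /tmp/vgsan2.map /dev/vgsan2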

The LVM portion of the Oracle document ended after importing the new VG on the second node. There was one more OS-level step in the document, which called for editing /etc/lvmrc to turn off AUTO_VG_ACTIVATE and to modify the custom_vg_activation section. From looking at various .orig and .bak versions of the file, I found that it had been modified at least a few times when the servers were built, and that the servers had been operating (and rebooting) properly with AUTO_VG_ACTIVATE turned on, so I left it as-is.
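
For reference, the change the document called for would look something like this in /etc/lvmrc (a sketch; activating only vg00 at boot is my reading of the document, not a quote from it):

AUTO_VG_ACTIVATE=0

custom_vg_activation()
{
        # Activate only the local boot VG here; the shared SAN VGs
        # are then activated after boot, outside of this script.
        parallel_vg_sync "/dev/vg00"
        return 0
}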

My understanding is that the new VG could be activated on both nodes either by using vgchange or simply by rebooting the servers. I could not find any documentation or other information on how to activate the VG without ServiceGuard.

After a reboot, both of the SAN VGs are active on both servers. The problem is that the database is VERY slow when using just one node and totally unusable when trying to use both nodes. The database does start, but it takes a very long time. Even after exporting the new VG from the second node, the first node runs very slowly.

I have tried several things to get it to work, including editing /etc/lvmrc as the Oracle document specified and having only vg00 activated during boot. Oracle did not activate either of the database volume groups, so I tried vgchange with several switches that I found in various online documentation; the only invocation that would activate the VGs was 'vgchange -a y -s /dev/vgsan'. Even though this command activated the VGs, the database still ran dog slow. The activation variants are sketched below.
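
For the record, the activation forms look like this (a sketch; the shared form is shown only for contrast, since it needs SLVM):

# Standard single-host activation:
vgchange -a y /dev/vgsan2
# The only form that activated the VGs here (-s skips stale-extent synchronization):
vgchange -a y -s /dev/vgsan2
# Shared (multi-host) activation; requires SLVM, i.e. Serviceguard/SGeRAC:
vgchange -a s /dev/vgsan2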

Does anybody have any insight about what I might be doing wrong or have any more information about installing RAC on HP-UX without ServiceGuard?


Some outputs. /dev/vgsan is the original vg, /dev/vgsan2 is the new vg, and both servers look the same:

abdwddb2:/dev/vgsan # ls -l
total 0
crw-r----- 1 root sys 64 0x020000 Jun 2 11:32 group
brw-r----- 1 root sys 64 0x020001 Jun 2 11:47 lun1
brw-r----- 1 root sys 64 0x020002 Jun 2 11:47 lun2
brw-r----- 1 root sys 64 0x020003 Jun 2 11:47 lun3
brw-r----- 1 root sys 64 0x020004 Jun 2 11:47 lun4
crw-r----- 1 oracle oinstall 64 0x020001 Jun 2 11:47 rlun1
crw-r----- 1 oracle oinstall 64 0x020002 Jun 2 11:47 rlun2
crw-r----- 1 oracle oinstall 64 0x020003 Jun 2 11:47 rlun3
crw-r----- 1 oracle oinstall 64 0x020004 Jun 2 11:47 rlun4
crw-rw---- 1 oracle oinstall 64 0x020005 Jun 2 11:47 rvolcrs
crw-rw---- 1 oracle oinstall 64 0x020006 Jun 2 11:47 rvolvd
brw-r----- 1 root sys 64 0x020005 Jun 2 11:47 volcrs
brw-r----- 1 root sys 64 0x020006 Jun 2 11:47 volvd


abdwddb2:/dev/vgsan2 # ls -l
total 0
crw-r--r-- 1 root sys 64 0x030000 Oct 29 13:20 group
brw-r----- 1 root sys 64 0x030006 Oct 29 13:21 lun10
brw-r----- 1 root sys 64 0x030001 Oct 29 13:21 lun5
brw-r----- 1 root sys 64 0x030002 Oct 29 13:21 lun6
brw-r----- 1 root sys 64 0x030003 Oct 29 13:21 lun7
brw-r----- 1 root sys 64 0x030004 Oct 29 13:21 lun8
brw-r----- 1 root sys 64 0x030005 Oct 29 13:21 lun9
crw-r----- 1 oracle oinstall 64 0x030006 Oct 29 13:21 rlun10
crw-r----- 1 oracle oinstall 64 0x030001 Oct 29 13:21 rlun5
crw-r----- 1 oracle oinstall 64 0x030002 Oct 29 13:21 rlun6
crw-r----- 1 oracle oinstall 64 0x030003 Oct 29 13:21 rlun7
crw-r----- 1 oracle oinstall 64 0x030004 Oct 29 13:21 rlun8
crw-r----- 1 oracle oinstall 64 0x030005 Oct 29 13:21 rlun9


abdwddb2:/dev/vgsan2 # vgdisplay vgsan
--- Volume groups ---
VG Name /dev/vgsan
VG Write Access read/write
VG Status available
...


abdwddb2:/dev/vgsan2 # vgdisplay vgsan2
--- Volume groups ---
VG Name /dev/vgsan2
VG Write Access read/write
VG Status available
...
2 REPLIES
Solution

Re: Installing Oracle RAC with no ServiceGuard (on 11.23)


-----------------
We have an operating two node Oracle 10.2.0.3 RAC cluster with ASM using a "raw" volume group that is managed with LVM.

We are using two 9000 series servers running 11i v2 and do not have Online JFSV licenses.

We are using an IBM SAN and NOT using ServiceGuard.
-----------------

Tim,

I stopped reading the rest of the post after reading that part... the configuration you describe is not supported. Concurrent read/write access to an LVM volume from multiple hosts is *only* supported if you are using "Shared LVM" (SLVM), and SLVM is *only* supported with Serviceguard Extension for RAC (SGeRAC), which you have stated you are not using.

If your cluster is built using just Oracle Clusterware without Serviceguard Extension for RAC, then your storage choices are:

- raw LUNs (i.e. direct access to the devices under /dev/dsk and /dev/rdsk)
- ASM over raw LUNs (i.e. ASM using the raw disk devices directly)

ASM over LVM is only an option if you are using SGeRAC.
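
(For contrast, under SGeRAC a shared VG is handled roughly like this; illustrative commands only, not something you can run without a configured cluster:)

# Mark the VG cluster-aware (requires a configured Serviceguard cluster):
vgchange -c y /dev/vgsan
# Activate it in shared mode on each node:
vgchange -a s /dev/vgsan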

So at this point your choices are:

i) Implement SGeRAC and you can use ASM over SLVM
ii) Just use ASM over raw disks (a rough sketch of the device setup follows)
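
For option ii), the device setup is roughly this (a sketch; the device name is an example, not one from your system):

# Give the oracle user access to the raw disk device:
chown oracle:oinstall /dev/rdsk/c10t0d1
chmod 660 /dev/rdsk/c10t0d1
# Then point ASM at the raw devices, e.g. via the
# asm_diskstring parameter ('/dev/rdsk/*') in the ASM instance.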

HTH

Duncan

I am an HPE Employee
Timothy Nibbe
Advisor

Re: Installing Oracle RAC with no ServiceGuard (on 11.23)

Thank you; this persuaded the DBA to go with raw disks, which work like a champ. This will also clear up some related problems on some other servers.