05-26-2010 10:32 PM
Adding new Volume Groups in Serviceguard configuration
After we upgrade our EVA and present the new disk group, how can we create new VGs and integrate them into Serviceguard? Please help.
05-26-2010 10:54 PM
Re: Adding new Volume Groups in Serviceguard configuration
Scan the new disks
Create the new VGs and LVs
Export each VG to a map file
Import the VG on all failover nodes
Deactivate the VG (vgchange -a n vgname)
Make the VG cluster aware (vgchange -c y vgname)
Activate the VG exclusively (vgchange -a e vgname)
Mount the new LVs manually with the mount command
Take a copy of the /etc/cmcluster/pkg/pkg.cntl file
Edit /etc/cmcluster/pkg/pkg.cntl and add the new VG and LV details
Copy the package control file to all failover nodes (a command sketch of the whole sequence follows below)
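A minimal shell sketch of that sequence, assuming a hypothetical volume group /dev/vgnew on disk c10t0d1, a logical volume lvol1 mounted at /newfs, a failover node node2 reachable with remsh/rcp, and the package control file /etc/cmcluster/pkg/pkg.cntl; all names and the minor number are examples only, not your actual configuration:
# scan for the new LUNs and create their device files
ioscan -fnC disk
insf
# create the VG, an LV and a VxFS filesystem
pvcreate /dev/rdsk/c10t0d1
mkdir /dev/vgnew
mknod /dev/vgnew/group c 64 0x420000       # pick a minor number unused on both nodes
vgcreate /dev/vgnew /dev/dsk/c10t0d1
lvcreate -L 10240 -n lvol1 /dev/vgnew
newfs -F vxfs -o largefiles /dev/vgnew/rlvol1
# export the VG map and import it on the failover node
vgexport -p -s -m /tmp/vgnew.map /dev/vgnew
rcp /tmp/vgnew.map node2:/tmp/vgnew.map
remsh node2 "mkdir /dev/vgnew; mknod /dev/vgnew/group c 64 0x420000; vgimport -s -m /tmp/vgnew.map /dev/vgnew"
# make the VG cluster aware and activate it exclusively on this node
vgchange -a n /dev/vgnew
vgchange -c y /dev/vgnew
vgchange -a e /dev/vgnew
mkdir /newfs
mount /dev/vgnew/lvol1 /newfs
# back up, edit and distribute the package control file
cp /etc/cmcluster/pkg/pkg.cntl /etc/cmcluster/pkg/pkg.cntl.bak
vi /etc/cmcluster/pkg/pkg.cntl             # add the new VG, LV and mount point entries
rcp /etc/cmcluster/pkg/pkg.cntl node2:/etc/cmcluster/pkg/pkg.cntl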
Gudluck
Prasanth
05-26-2010 10:55 PM
Re: Adding new Volume Groups in Serviceguard configuration
1.
[root@server1]/ # datapath query adapter
Active Adapters :2
Adapter# Name State Mode Select Error Path Active
0 1/0/8/1/0 NORMAL ACTIVE 4208692968 0 147 60
1 1/0/2/1/0/4/0 NORMAL ACTIVE 4238442731 0 147 60
[root@server1]/ #
2.
[root@server2]/ # ioscan -fnC disk
3.
[root@server2]/ # insf
insf: Installing special files for sdisk instance 540 address 1/0/1/1/0/4/0.203.8.0.67.12.2
insf: Installing special files for sdisk instance 538 address 1/0/1/1/0/4/0.203.8.0.97.14.1
insf: Installing special files for sdisk instance 539 address 1/0/10/1/0.103.8.0.67.12.2
insf: Installing special files for sdisk instance 537 address 1/0/10/1/0.103.8.0.97.14.1
[root@server2]/ #
4.
[root@server2]/ # cfgvpath -r
Running Dynamic reconfiguration
Add disk: vpath70
Add disk: vpath71
Add disk: vpath268
Add disk: vpath269
5.
[root@server2]/ # cd /tmp/
[root@server2]/tmp # cd vgs
[root@server2]/tmp/vgs # mkdir 17-DEC-2009
[root@server2]/tmp/vgs # cd 17-DEC-2009
[root@server2]/tmp/vgs # vi luns
6.
[root@server2]/tmp/vgs/17-DEC-2009 # ls -l /dev/*/group | sort -k 6
crw-r----- 1 root sys 64 0x000000 Feb 10 2009 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x220000 Mar 19 2009 /dev/new_bscsvg00/group
crw-r--r-- 1 root sys 64 0x230000 Mar 19 2009 /dev/new_bscsvg01/group
crw-r--r-- 1 root sys 64 0x240000 Mar 19 2009 /dev/new_bscsvg02/group
crw-r--r-- 1 root sys 64 0x250000 Mar 18 2009 /dev/lockdisk/group
crw-r--r-- 1 root sys 64 0x260000 Mar 19 2009 /dev/new_bscsvg04/group
crw-r--r-- 1 root sys 64 0x270000 Mar 19 2009 /dev/new_bscsvg05/group
crw-r--r-- 1 root sys 64 0x280000 Mar 19 2009 /dev/new_bscsvg06/group
crw-r--r-- 1 root sys 64 0x290000 Mar 19 2009 /dev/new_bscsvg07/group
crw-r--r-- 1 root sys 64 0x2a0000 Mar 19 2009 /dev/new_bscsvg08/group
crw-r--r-- 1 root sys 64 0x2b0000 Mar 19 2009 /dev/new_bscsvg09/group
crw-r--r-- 1 root sys 64 0x2c0000 Mar 19 2009 /dev/new_bscsvg10/group
crw-r--r-- 1 root sys 64 0x2d0000 Mar 19 2009 /dev/new_udrvg00/group
crw-r--r-- 1 root sys 64 0x2e0000 Mar 19 2009 /dev/new_udrvg01/group
crw-r--r-- 1 root sys 64 0x2f0000 Mar 19 2009 /dev/new_udrvg02/group
crw-r--r-- 1 root sys 64 0x300000 Mar 19 2009 /dev/new_udrvg03/group
crw-r--r-- 1 root sys 64 0x310000 Mar 19 2009 /dev/new_udrvg04/group
crw-r--r-- 1 root sys 64 0x320000 Mar 19 2009 /dev/new_udrvg05/group
crw-r--r-- 1 root sys 64 0x330000 Mar 19 2009 /dev/new_udrvg06/group
crw-r--r-- 1 root sys 64 0x340000 Mar 19 2009 /dev/new_udrvg07/group
crw-r--r-- 1 root sys 64 0x350000 Mar 19 2009 /dev/new_udrvg08/group
crw-r--r-- 1 root sys 64 0x360000 Mar 19 2009 /dev/new_udrvg09/group
crw-r--r-- 1 root sys 64 0x370000 Mar 19 2009 /dev/new_udrvg10/group
crw-r--r-- 1 root sys 64 0x380000 Mar 19 2009 /dev/new_udrvg11/group
crw-r--r-- 1 root sys 64 0x390000 Mar 19 2009 /dev/new_udrvg12/group
crw-r--r-- 1 root sys 64 0x3a0000 Mar 19 2009 /dev/new_udrvg13/group
crw-r--r-- 1 root sys 64 0x3b0000 Mar 19 2009 /dev/new_udrvg14/group
crw-r--r-- 1 root sys 64 0x3c0000 Mar 22 2009 /dev/new_bscsvg03/group
crw-r--r-- 1 root sys 64 0x3d0000 Mar 22 2009 /dev/udrvg06/group
crw-r--r-- 1 root sys 64 0x3e0000 Jun 29 15:40 /dev/new_udrvg16/group
crw-r--r-- 1 root sys 64 0x3f0000 Aug 12 11:04 /dev/new_bscsvg11/group
crw-r--r-- 1 root sys 64 0x400000 Nov 27 16:38 /dev/new_udrvg17/group
crw-r--r-- 1 root sys 64 0x4b0000 Mar 30 2009 /dev/udrvg15/group
And on the other node:
# rlogin server1
[root@server1]/ # ls -l /dev/*/group | sort -k 6
crw-r----- 1 root sys 64 0x000000 Feb 10 2009 /dev/vg00/group
crw-rw-rw- 1 root sys 64 0x010000 Dec 9 2004 /dev/bscsvg00/group
crw-rw-rw- 1 root sys 64 0x020000 Dec 9 2004 /dev/bscsvg01/group
crw-rw-rw- 1 root sys 64 0x030000 Dec 9 2004 /dev/bscsvg02/group
crw-rw-rw- 1 root sys 64 0x040000 Dec 9 2004 /dev/bscsvg03/group
crw-r--r-- 1 root sys 64 0x050000 Mar 6 2009 /dev/bscsvg04/group
crw-rw-rw- 1 root sys 64 0x060000 Dec 9 2004 /dev/bscsvg05/group
crw-r--r-- 1 root sys 64 0x070000 Feb 23 2007 /dev/bscsvg06/group
crw-r--r-- 1 root sys 64 0x080000 Nov 8 2007 /dev/bscsvg07/group
crw-r--r-- 1 root sys 64 0x140000 Mar 27 2008 /dev/bscsvg08/group
crw-r--r-- 1 root sys 64 0x170000 Aug 4 2008 /dev/bscsvg09/group
crw-r--r-- 1 root sys 64 0x200000 Mar 6 2009 /dev/bscsvg10/group
crw-r--r-- 1 root sys 64 0x220000 Mar 19 2009 /dev/new_bscsvg00/group
crw-r--r-- 1 root sys 64 0x230000 Mar 19 2009 /dev/new_bscsvg01/group
crw-r--r-- 1 root sys 64 0x240000 Mar 19 2009 /dev/new_bscsvg02/group
crw-r--r-- 1 root sys 64 0x250000 Mar 18 2009 /dev/lockdisk/group
crw-r--r-- 1 root sys 64 0x260000 Mar 19 2009 /dev/new_bscsvg04/group
crw-r--r-- 1 root sys 64 0x270000 Mar 19 2009 /dev/new_bscsvg05/group
crw-r--r-- 1 root sys 64 0x280000 Mar 19 2009 /dev/new_bscsvg06/group
crw-r--r-- 1 root sys 64 0x290000 Mar 19 2009 /dev/new_bscsvg07/group
crw-r--r-- 1 root sys 64 0x2a0000 Mar 19 2009 /dev/new_bscsvg08/group
crw-r--r-- 1 root sys 64 0x2b0000 Mar 19 2009 /dev/new_bscsvg09/group
crw-r--r-- 1 root sys 64 0x2c0000 Mar 19 2009 /dev/new_bscsvg10/group
crw-r--r-- 1 root sys 64 0x2d0000 Mar 19 2009 /dev/new_udrvg00/group
crw-r--r-- 1 root sys 64 0x2e0000 Mar 19 2009 /dev/new_udrvg01/group
crw-r--r-- 1 root sys 64 0x2f0000 Mar 19 2009 /dev/new_udrvg02/group
crw-r--r-- 1 root sys 64 0x300000 Mar 19 2009 /dev/new_udrvg03/group
crw-r--r-- 1 root sys 64 0x310000 Mar 19 2009 /dev/new_udrvg04/group
crw-r--r-- 1 root sys 64 0x320000 Mar 19 2009 /dev/new_udrvg05/group
crw-r--r-- 1 root sys 64 0x330000 Mar 19 2009 /dev/new_udrvg06/group
crw-r--r-- 1 root sys 64 0x340000 Mar 19 2009 /dev/new_udrvg07/group
crw-r--r-- 1 root sys 64 0x350000 Mar 19 2009 /dev/new_udrvg08/group
crw-r--r-- 1 root sys 64 0x360000 Mar 19 2009 /dev/new_udrvg09/group
crw-r--r-- 1 root sys 64 0x370000 Mar 19 2009 /dev/new_udrvg10/group
crw-r--r-- 1 root sys 64 0x380000 Mar 19 2009 /dev/new_udrvg11/group
crw-r--r-- 1 root sys 64 0x390000 Mar 19 2009 /dev/new_udrvg12/group
crw-r--r-- 1 root sys 64 0x3a0000 Mar 19 2009 /dev/new_udrvg13/group
crw-r--r-- 1 root sys 64 0x3b0000 Mar 19 2009 /dev/new_udrvg14/group
crw-r--r-- 1 root sys 64 0x3c0000 Mar 22 2009 /dev/new_bscsvg03/group
crw-r--r-- 1 root sys 64 0x3d0000 Mar 22 2009 /dev/udrvg06/group
crw-r--r-- 1 root sys 64 0x3e0000 Jun 29 15:40 /dev/new_udrvg16/group
crw-r--r-- 1 root sys 64 0x3f0000 Aug 12 11:04 /dev/new_bscsvg11/group
crw-r--r-- 1 root sys 64 0x400000 Nov 27 16:38 /dev/new_udrvg17/group
crw-r--r-- 1 root sys 64 0x4b0000 Mar 30 2009 /dev/udrvg15/group
7.
[root@server2]/tmp/vgs/17-DEC-2009 # rbdf | sort -n
/dev/new_udrvg00/uu01 62521344 62045352 472896 99% /uu01
/dev/new_udrvg00/work 83984384 72886944 11062064 87% /uu00/work
/dev/new_udrvg01/uu02 56885248 56510792 371584 99% /uu02
/dev/new_udrvg01/uu03 56885248 56815056 69696 100% /uu03
/dev/new_udrvg02/uu04 56885248 56835328 49584 100% /uu04
/dev/new_udrvg02/uu05 56885248 56543144 339488 99% /uu05
/dev/new_udrvg03/uu06 56885248 56537632 344960 99% /uu06
/dev/new_udrvg03/uu07 56885248 56672680 210968 100% /uu07
/dev/new_udrvg04/uu08 56885248 56552264 330424 99% /uu08
/dev/new_udrvg04/uu09 56885248 56731200 152896 100% /uu09
/dev/new_udrvg05/uu10 56885248 56593200 289824 99% /uu10
/dev/new_udrvg05/uu11 56885248 56615784 267416 100% /uu11
/dev/new_udrvg07/udrarch 113770496 36404008 76762592 32% /udrarch
/dev/new_udrvg08/uu13 56885248 56744808 139400 100% /uu13
/dev/new_udrvg08/uu14 56885248 56644112 239312 100% /uu14
/dev/new_udrvg09/uu15 56885248 56747432 136800 100% /uu15
/dev/new_udrvg09/uu16 56885248 56189664 690200 99% /uu16
/dev/new_udrvg10/uu17 56885248 56358504 522696 99% /uu17
/dev/new_udrvg10/uu18 56885248 56256960 623440 99% /uu18
/dev/new_udrvg11/uu19 56885248 56850480 34560 100% /uu19
/dev/new_udrvg11/uu20 56885248 56344672 536416 99% /uu20
/dev/new_udrvg12/uu21 56885248 56175824 703944 99% /uu21
/dev/new_udrvg12/uu22 56885248 56437104 444704 99% /uu22
/dev/new_udrvg13/uu23 56885248 56607888 275256 100% /uu23
/dev/new_udrvg13/uu24 56885248 56144784 734744 99% /uu24
/dev/new_udrvg14/uu25 113770496 113732880 37376 100% /uu25
/dev/new_udrvg16/uu28 104857600 103901704 926080 99% /uu28
/dev/new_udrvg16/uu29 104882176 104746152 135024 100% /uu29
/dev/new_udrvg17/uu30 61440000 60718528 715840 99% /uu30
/dev/new_udrvg17/uu31 61440000 61395296 44360 100% /uu31
/dev/udrvg06/home 10420224 4250620 5976832 42% /uu00/home
/dev/udrvg06/oracle 5308416 1719737 3371729 34% /uu00/oracle
/dev/udrvg06/shipment 13369344 3784 12947894 0% /uu12/shipment
/dev/udrvg06/udrbkp 307298304 134076384 171868824 44% /udrbkp
/dev/udrvg15/uu26 67092480 66870128 220680 100% /uu26
/dev/udrvg15/uu27 62914560 62908299 5993 100% /uu27
/dev/vg00/logs 8192000 399798 7305207 5% /logs
/dev/vg00/lvol1 409200 149280 219000 41% /stand
/dev/vg00/lvol3 1048576 369272 674024 35% /
/dev/vg00/lvol4 6144000 4404224 1726464 72% /opt
/dev/vg00/lvol5 5537792 1619760 3891624 29% /tmp
/dev/vg00/lvol6 4096000 1553280 2522912 38% /usr
/dev/vg00/lvol7 10256384 7526984 2708456 74% /var
/dev/vg00/lvol8 2064384 1416370 607526 70% /home
/dev/vg00/oemlv 10485760 3666054 6606740 36% /OEMagent
/dev/vg00/opt_itmlv 4194304 2128 3930172 0% /opt/ITM
Filesystem kbytes used avail %used Mounted
[root@server2]/tmp/vgs/17-DEC-2009 #
8.
[root@server2]/tmp/vgs/17-DEC-2009 # mkdir /dev/new_udrvg18
[root@server2]/dev/new_udrvg18 # mknod /dev/new_udrvg18/group c 64 0x410000
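Steps 6 and 8 go together: the ls -l of /dev/*/group on both nodes shows which LVM group minor numbers are already in use, and the mknod then uses one that is free on both nodes (0x410000 here, since 0x400000 is taken and the next used value is 0x4b0000). A quick sketch to list only the minors in use; in this ls output the minor number is field 6:
ls -l /dev/*/group | awk '{ print $6 }' | sort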
9.
for i in ` cat /tmp/vgs/17-DEC-2009/lunids `
>do
>datapath query essmap | grep -i $i
>done
vpath70 c16t10d1 1/0/10/1/0 NONE 75DM01127D1 IBM 2107900 32.0GB 27 209 0000 2c Y R1-B3-H3-ZB 231 RAID5
vpath70 c17t10d1 1/0/1/1/0/4/0 NONE 75DM01127D1 IBM 2107900 32.0GB 27 209 0000 2c Y R1-B4-H1-ZB 301 RAID5
vpath71 c16t10d0 1/0/10/1/0 NONE 75DM01127D0 IBM 2107900 32.0GB 27 208 0000 2c Y R1-B3-H3-ZB 231 RAID5
vpath71 c17t10d0 1/0/1/1/0/4/0 NONE 75DM01127D0 IBM 2107900 32.0GB 27 208 0000 2c Y R1-B4-H1-ZB 301 RAID5
vpath268 c66t14d1 1/0/10/1/0 NONE 75DM01130F1 IBM 2107900 32.0GB 30 241 0000 11 Y R1-B3-H3-ZB 231 RAID5
vpath268 c67t14d1 1/0/1/1/0/4/0 NONE 75DM01130F1 IBM 2107900 32.0GB 30 241 0000 11 Y R1-B4-H1-ZB 301 RAID5
vpath269 c12t12d2 1/0/10/1/0 NONE 75DM01121E2 IBM 2107900 32.0GB 21 226 0000 20 Y R1-B3-H3-ZB 231 RAID5
vpath269 c13t12d2 1/0/1/1/0/4/0 NONE 75DM01121E2 IBM 2107900 32.0GB 21 226 0000 20 Y R1-B4-H1-ZB 301 RAID5
10.
[root@server2]/tmp/vgs/17-DEC-2009 # strings /etc/lvmtab >> etclvmtab.before
11.
[root@server2]/tmp/vgs/17-DEC-2009 # for i in ` cat /tmp/vgs/17-DEC-2009/luns `
server2 > do
server2 > pvcreate /dev/rdsk/$i
server2 > done
12.
[root@server2]/tmp/vgs/17-DEC-2009 # vgcreate /dev/new_udrvg18 /dev/dsk/vpath70 /dev/dsk/vpath71 /dev/dsk/vpath268 /dev/dsk/vpath269
Increased the number of physical extents per physical volume to 8191.
Volume group "/dev/new_udrvg18" has been successfully created.
Volume Group configuration for /dev/new_udrvg18 has been saved in /etc/lvmconf/new_udrvg18.conf
13.
[root@server2]/tmp/vgs/17-DEC-2009 # lvcreate -L 55555 -n uu32 -i 4 -I 4096 /dev/new_udrvg18
Warning: rounding up logical volume size to extent boundary at size "55556" MB.
Warning: rounding up logical volume size to extent boundary at size "55568" MB for striping.
Logical volume "/dev/new_udrvg18/uu32" has been successfully created with
character device "/dev/new_udrvg18/ruu32".
Logical volume "/dev/new_udrvg18/uu32" has been successfully extended.
Volume Group configuration for /dev/new_udrvg18 has been saved in /etc/lvmconf/new_udrvg18.conf
14.
[root@server2]/tmp/vgs/17-DEC-2009 # newfs -o largefiles /dev/new_udrvg18/ruu32
newfs: /etc/default/fs is used for determining the file system type
version 4 layout
56901632 sectors, 7112704 blocks of size 8192, log size 256 blocks
unlimited inodes, largefiles supported
7112704 data blocks, 7112152 free data blocks
218 allocation units of 32768 blocks, 32768 data blocks
last allocation unit has 2048 data blocks
15.
[root@server2]/tmp/vgs/17-DEC-2009 # vgchange -a n /dev/new_udrvg18
[root@server2]/tmp/vgs/17-DEC-2009 # vgchange -c y /dev/new_udrvg18
[root@server2]/tmp/vgs/17-DEC-2009 # vgchange -a e /dev/new_udrvg18
16.
mount /dev/new_udrvg18/uu32 /uu32
mount -o remount,largefiles,delaylog,nodatainlog,mincache=direct,convosync=direct /dev/new_udrvg18/uu32 /uu32
17.
check "mount" output.
[root@server2]/home/ib054033 # mount | grep uu32
/uu32 on /dev/new_udrvg18/uu32 delaylog,nodatainlog,largefiles,mincache=direct,convosync=direct on Thu Dec 17 16:53:37 2009
18.
[root@server2]/tmp/vgs/17-DEC-2009 # cd /uu32
[root@server2]/uu32 # ls -l
total 0
drwxr-xr-x 2 root root 96 Dec 17 16:40 lost+found
[root@server2]/uu32 # mkdir oracle
[root@server2]/uu32 # chown oracle:oinstall /uu32
19.
cd /etc/cmcluster/rate
[root@server2]/etc/cmcluster/rate # cp rate.cntl rate.cntl.17-DEC-2009
20.
vi rate.cntl and add the new VG and filesystem entries.
NOTE: Take extreme care when editing control files; any small mistake in this file will cause package failure,
as cluster package startup depends entirely on this control file.
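As an illustration only (not the actual contents of rate.cntl), the VG and filesystem entries in a legacy-style package control script generally look like the following; the index shown is hypothetical and should be the next unused one in your file:
# VOLUME GROUPS
VG[0]="/dev/new_udrvg18"
# FILESYSTEMS
LV[0]="/dev/new_udrvg18/uu32"
FS[0]="/uu32"
FS_MOUNT_OPT[0]="-o largefiles,delaylog,nodatainlog,mincache=direct,convosync=direct"
FS_UMOUNT_OPT[0]=""
FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"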
21.
[root@server2]rcp rate.cntl server1:/etc/cmcluster/rate/rate.cntl
22.
[root@server2]/tmp/vgs/17-DEC-2009 # rbdf > /tmp/vgs/17-DEC-2009/ardf
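One step from the summary earlier in this thread that the transcript above does not show is exporting the new VG to a map file and importing it on the failover node; a minimal sketch, assuming server1 is the adoptive node, the map file path is arbitrary, and minor number 0x410000 is also free on server1:
# on server2: write the VG map (including the VGID) without removing the VG
vgexport -p -s -m /tmp/vgs/17-DEC-2009/new_udrvg18.map /dev/new_udrvg18
rcp /tmp/vgs/17-DEC-2009/new_udrvg18.map server1:/tmp/new_udrvg18.map
# on server1: create the group file and import the VG by scanning for its VGID
mkdir /dev/new_udrvg18
mknod /dev/new_udrvg18/group c 64 0x410000
vgimport -s -m /tmp/new_udrvg18.map /dev/new_udrvg18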
Gudluck
Prasanth
05-27-2010 12:12 AM
Re: Adding new Volume Groups in Serviceguard configuration
05-27-2010 12:16 AM
Re: Adding new Volume Groups in Serviceguard configuration
I am not sure about this...
Do we need to add the new VG details to the cluster ASCII file?
I have never done that on my servers... correct me if I am wrong.
Gudluck
Prasanth
05-27-2010 01:30 AM
Re: Adding new Volume Groups in Serviceguard configuration
You mean you never configure the cluster VGs in /etc/cmcluster/cluster.ascii? I have always done this when configuring a cluster's VGs. In /etc/cmcluster/cluster.ascii it reads:
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
and in /etc/cmcluster/ there is the corresponding section:
# VOLUME GROUPS
I also recall that cmapplyconf should be run after any VG modification (add or remove).
So this is how I handle the cluster configuration, and the clusters have always operated well.
So, now that you mention it, maybe we both need someone to confirm the right way. Haha!
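If you do add the VG to the cluster ASCII file, the general flow is sketched below; the file name follows the default mentioned above and the VG name is taken from the earlier transcript, so adjust both to your cluster:
# add a line such as:  VOLUME_GROUP /dev/new_udrvg18
vi /etc/cmcluster/cluster.ascii
# verify, then distribute the updated binary configuration to all nodes
cmcheckconf -v -C /etc/cmcluster/cluster.ascii
cmapplyconf -v -C /etc/cmcluster/cluster.ascii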
05-27-2010 01:38 AM
Re: Adding new Volume Groups in Serviceguard configuration
I don't know; I have never added new VGs to /etc/cmcluster/cluster.ascii. I only add them to the package control file.
We have 30+ cluster nodes at my site, and all of them are running well.
I don't know whether that is the right way or not.
Gudluck
Prasanth
05-27-2010 02:15 AM
Re: Adding new Volume Groups in Serviceguard configuration
Maybe neither of us is doing it wrong. As I see it, the VGs just need to be activated or deactivated for application use by the cluster package control script. I think configuring the VGs into the cluster (in /etc/cmcluster/cluster.ascii) serves to mark them as cluster VGs, so they are not activated by /etc/lvmrc when the OS boots; otherwise, how would the OS know which VGs to activate at boot?
This is just my opinion.
05-27-2010 04:41 AM
Re: Adding new Volume Groups in Serviceguard configuration
The /etc/lvmrc file handles this by setting AUTO_VG_ACTIVATE=0,
and local volume groups should then be activated via the custom_vg_activation() function in that script.
So I also think it is not required to include the VGs in the cluster.ascii file.
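For reference, a minimal sketch of the relevant part of /etc/lvmrc on a Serviceguard node; the exact wording of the stock comments and the list of local VGs differ per system, and /dev/vg_local is a hypothetical example:
# Do not activate all VGs automatically at boot; the Serviceguard
# package control script activates its own VGs with vgchange -a e.
AUTO_VG_ACTIVATE=0
# Activate only node-local (non-cluster) volume groups here.
custom_vg_activation()
{
        /sbin/vgchange -a y /dev/vg_local
        return 0
}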