
cheesytime
Regular Advisor

Extend FS on an HP-UX Cluster - need help

Hello,

The Storage Administrator just told me that he has provided us with a new LUN for our cluster (a two-node cluster).

These are the special files I see:

/dev/[r]dsk/c8t2d5
/dev/[r]dsk/c11t2d5
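
For reference, a quick way to double-check that a node actually sees the new LUN, and its size, before touching LVM (a sketch; the hardware paths are the ones from this thread):

ioscan -funC disk            # list all disk devices with their device files
diskinfo /dev/rdsk/c8t2d5    # should report roughly 13893 MB (about 14226432 KB)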


This is the xpinfo output:

node1:
============================
Device File : /dev/rdsk/c8t2d5 Model : XP10000
Port : CL1A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : b2 BC1 (MU#1) : SMPL
Loop Id : 20 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : 2D+2D
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49200
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---


============================
Device File : /dev/rdsk/c11t2d5 Model : XP10000
Port : CL2A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : 98 BC1 (MU#1) : SMPL
Loop Id : 30 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : 2D+2D
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49210
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---

============================


node2:
=============================
Device File : /dev/rdsk/c8t2d5 Model : XP10000
Port : CL5A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : ae BC1 (MU#1) : SMPL
Loop Id : 22 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : ---
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49240
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---

===========================
Device File : /dev/rdsk/c11t2d5 Model : XP10000
Port : CL6A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : 90 BC1 (MU#1) : SMPL
Loop Id : 32 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : ---
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49250
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---
=============================
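
Note that on each node the two device files are two paths to the same LDEV: the Serial # (00042130) and CU:LDev (00:03) match, and only the array port (CL1A vs CL2A) and Port WWN differ. With HP-UX LVM you would typically add the first path as the primary link and the second as the alternate (PV-link), roughly like this (a sketch using this thread's device files):

pvcreate /dev/rdsk/c8t2d5                              # initialize through the primary path only
vgextend /dev/vgdp /dev/dsk/c8t2d5 /dev/dsk/c11t2d5    # the second path is registered as the alternate link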

I would like to extend the size of the existing FS (see below):

/dev/vgdp/lvtest 14221312 12946708 1239830 91% /test

JFS is installed, in case it's needed:

JFS B.11.11 The Base VxFS File System
OnlineJFS B.11.11.03.03 Online features of the VxFS File System
PHKL_24026 1.0 JFS Filesystem swap corruption
PHKL_28512 1.0 Fix for POSIX_AIO in JFS3.3
PHKL_29115 1.0 JFS Direct I/O cumulative patch
PHKL_30366 1.0 JFS3.3;ACL patch
PHKL_34805 1.0 JFS3.3 patch; mmap

I would like to extend it by 13000 MB.

Thanks; a step-by-step guide would be truly appreciated.
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

I forgot to add the following:

Filesystem kbytes used avail %used Mounted on
/dev/vgdp/lvtest 14221312 12947552 1239012 91% /test

root@host:/etc/cmcluster/TEST # vgdisplay vgdp |egrep -i "Free|PE"
Open LV 1
Max PE per PV 3473
PE Size (Mbytes) 4
Total PE 3472
Alloc PE 3472
Free PE 0

Maybe this helps you help me!
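
Those numbers already pin down the target size. A rough check of the arithmetic (plain POSIX shell; values taken from the vgdisplay and xpinfo output above):

echo $((3472 * 4))       # current VG size: 13888 MB, fully allocated (Free PE = 0)
echo $((13888 + 13000))  # target LV size after adding 13000 MB: 26888 MB
echo $((26888 / 4))      # 6722 extents in total; 26888 divides evenly by the 4 MB PE size
echo $((13893 / 4))      # the new 13893 MB LUN yields 3473 extents (13892 MB usable), enough for the extra 3250

Free PE = 0 also means the new LUN must be vgextend-ed into vgdp before any lvextend can succeed.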
Tingli
Esteemed Contributor
Solution

Re: Extend FS on an HP-UX Cluster - need help

If you search this site, there are quite a few threads about extending cluster file systems. I remember there was one only a few days ago.
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

I would appreciate help with this particular case. Searching, though.
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

Does this look good to go?

1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp
4) vgexport -v -p -s -m /tmp/vgdp-org.map /dev/vgdp
5) lvextend -L 26888 /dev/vgdp/lvtest <-- That would increase its size to 26888 MB, right? (see the note after this list)

6) vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
7) rcp /tmp/vgdp-new.map girdb:/tmp/

8) rlogin node2
9) on node2: vgexport /dev/vgdp
10) mkdir /dev/vgdp

11) mknod /dev/vgdp/group c 64 0x110000 <-- How do I check this number? I mean, which one should I use?

12) vgchange -a n /dev/vgdp
13) vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp

14) start the package on the first node:
cmrunpkg -v pkgname
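
On the inline question at step 5: with HP-UX lvextend, -L takes the new total size in MB rather than the amount to add, so -L 26888 would indeed grow lvtest to 26888 MB (a sketch; the equivalent in extents assumes the 4 MB PE size shown earlier):

lvextend -L 26888 /dev/vgdp/lvtest   # new TOTAL size: 26888 MB
lvextend -l 6722 /dev/vgdp/lvtest    # equivalent form: 26888 MB / 4 MB per PE = 6722 logical extents

Two caveats with the list as written: the VG must be active for lvextend to run (step 3 deactivates it), and with Free PE = 0 the new disk has to be pvcreate'd and vgextend-ed in first.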
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

I don't see any free PEs when I run vgdisplay -v /dev/vgdp, but like I said, they provided me with a new LUN whose special files are /dev/[r]dsk/c8t2d5 and /dev/[r]dsk/c11t2d5.

Help on this would be greatly appreciated.
Mel Burslan
Honored Contributor

Re: Extend FS on an HP-UX Cluster - need help

pvcreate /dev/rdsk/c8t2d5
vgextend /dev/vgdp /dev/dsk/c8t2d5
lvextend -L whatyoudesire /dev/vgdp/lvolWHATEVER

This takes care of the node that currently has the filesystem mounted, hence keeping the VG active.

Sometime, preferably before a catastrophe hits, you need to take the cluster offline to update the other node (not necessarily today if you are under management pressure):

vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
rcp /tmp/vgdp-new.map girdb:/tmp/

From this point on I am copying your commands, so if there are any typos, they are yours.

on node2
vgexport /dev/vgdp
mkdir /dev/vgdp

mknod /dev/vgdp/group c 64 0x110000

...<-- How do I check this number? I mean, which one should I use?

Before performing the vgexport:

cd /dev/vgdp
ls -l group

The minor number will be displayed clearly, right between the major number 64 and the timestamp. Use the same number on the other node to keep it consistent.
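
For illustration, the listing looks something like this (hypothetical permissions, minor number, and date; only the 64 and the 0x value matter here):

# ls -l /dev/vgdp/group
crw-r--r--   1 root   sys   64 0x010000 Jan 10 09:30 group

64 is the LVM major number; 0x010000 is the minor number to reuse in the mknod on the other node.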

vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp

At this point you are done, but it is always a good idea to start the cluster and fail the package over to the second node, to verify that it can fail over successfully when there is a need.

After that you can fail the package back to the primary node and resume normal operation.
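
A sketch of that failover test with this thread's names (assumes the package is down after the earlier halt; node and package names are placeholders):

cmrunpkg -v -n node2 pkgname   # start the package on the adoptive node
cmviewcl -v                    # confirm the package is up on node2
cmhaltpkg -v pkgname           # halt it again...
cmrunpkg -v -n node1 pkgname   # ...and bring it back up on the primary node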

HTH
________________________________
UNIX because I majored in cryptology...
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

Mel:

1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp

pvcreate /dev/rdsk/c8t2d5
vgextend /dev/vgdp /dev/dsk/c8t2d5
lvextend -L 26888 /dev/vgdp/lvtest

This takes care of the node that currently has the filesystem mounted, hence keeping the VG active.

Sometime, preferably before a catastrophe hits, you need to take the cluster offline (not necessarily today if you are under management pressure):

vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
rcp /tmp/vgdp-new.map girdb:/tmp/


on node2:


vgexport /dev/vgdp
mkdir /dev/vgdp

mknod /dev/vgdp/group c 64 0x110000

vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp

cmrunpkg -v pkg

Does it look good now?

You forgot to add the package stop in your instructions :-)
cheesytime
Regular Advisor

Re: Extend FS on an HP-UX Cluster - need help

The final procedure looks like this:

1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp
4) pvcreate /dev/rdsk/c8t2d5
5) vgextend /dev/vgdp /dev/dsk/c8t2d5
6) lvextend -L 26888 /dev/vgdp/lvtest <-- This makes the FS 26888 MB, right? (see the note at the end of this post)
7) vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
8) rcp /tmp/vgdp-new.map girdb:/tmp/


on node2

vgexport /dev/vgdp
mkdir /dev/vgdp
mknod /dev/vgdp/group c 64 0x110000
vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp
cmrunpkg -v pkg

Did I forget anything?
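
One step still missing from the list: lvextend grows only the logical volume; the filesystem inside it stays at its old size until it is grown as well. Since OnlineJFS is installed, that can be done while /test stays mounted, roughly like this (a sketch; fsadm takes the new size in 1 KB sectors):

lvextend -L 26888 /dev/vgdp/lvtest
fsadm -F vxfs -b 27533312 /test   # 26888 MB * 1024 = 27533312 one-KB sectors

Without OnlineJFS the fallback would be to umount /test and run extendfs, which needs the filesystem offline.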
Rita C Workman
Honored Contributor

Re: Extend FS on an HP-UX Cluster - need help

If you're just adding a disk to the primary node where the package is already running (hence the VG is active/exclusive), you do NOT need to halt anything.

Just add the disk to the existing volume group. Run your vgexport to create the map file and copy that to the other node(s).

On the OTHER nodes, just run the standard vgexport and then:
mkdir /dev/vgname
mknod /dev/vgname/group c 64 0x--0000
vgimport -vs -m /etc/lvmconf/vgname.map /dev/vgname

It's that simple.

Rgrds,
Rita
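
Putting Rita's point together with the earlier commands, the fully online variant would look roughly like this on the node where the package is running (a sketch with this thread's names; no package halt, and the filesystem stays mounted thanks to OnlineJFS):

pvcreate /dev/rdsk/c8t2d5
vgextend /dev/vgdp /dev/dsk/c8t2d5 /dev/dsk/c11t2d5   # second path added as the alternate link
lvextend -L 26888 /dev/vgdp/lvtest
fsadm -F vxfs -b 27533312 /test                       # grow the mounted FS to match the LV
vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp      # -p previews only, the VG stays active
rcp /tmp/vgdp-new.map node2:/tmp/

followed on node2 by the vgexport / mkdir / mknod / vgimport steps shown above.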