Extend FS on a HP-UX Cluster need help
08-18-2009 10:14 AM
The Storage Administrator just told me that he has provided us with a new LUN for our cluster (a 2-node cluster).
These are the special files I see:
/dev/[r]dsk/c8t2d5
/dev/[r]dsk/c11t2d5
This is the xpinfo output:
node1:
============================
Device File : /dev/rdsk/c8t2d5 Model : XP10000
Port : CL1A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : b2 BC1 (MU#1) : SMPL
Loop Id : 20 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : 2D+2D
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49200
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---
============================
Device File : /dev/rdsk/c11t2d5 Model : XP10000
Port : CL2A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : 98 BC1 (MU#1) : SMPL
Loop Id : 30 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : 2D+2D
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49210
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---
============================
node2:
=============================
Device File : /dev/rdsk/c8t2d5 Model : XP10000
Port : CL5A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : ae BC1 (MU#1) : SMPL
Loop Id : 22 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : ---
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49240
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---
===========================
Device File : /dev/rdsk/c11t2d5 Model : XP10000
Port : CL6A Serial # : 00042130
Host Target : 02 Code Rev : 5001
Array LUN : 15 Subsystem : 0004
CU:LDev : 00:03 CT Group : ---
Type : OPEN-E CA Volume : SMPL
Size : 13893 MB BC0 (MU#0) : SMPL
ALPA : 90 BC1 (MU#1) : SMPL
Loop Id : 32 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : RAID1 RAID Type : ---
RAID Group : 1-1 ACP Pair : 1
Disk Mechs : HDD0000 HDD0100 HDD0200 HDD0300
FC-LUN : 0000a49200000003 Port WWN : 50060e8004a49250
HBA Node WWN: --- HBA Port WWN: ---
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
CLPR : ---
=============================
I would like to extend the size of the existing FS (see below)
/dev/vgdp/lvtest 14221312 12946708 1239830 91% /test
JFS/OnlineJFS is installed, in case it is needed:
JFS B.11.11 The Base VxFS File System
OnlineJFS B.11.11.03.03 Online features of the VxFS File System
PHKL_24026 1.0 JFS Filesystem swap corruption
PHKL_28512 1.0 Fix for POSIX_AIO in JFS3.3
PHKL_29115 1.0 JFS Direct I/O cumulative patch
PHKL_30366 1.0 JFS3.3;ACL patch
PHKL_34805 1.0 JFS3.3 patch; mmap
I would like to extend it by 13000 MB.
Thanks; a step-by-step guide would be truly appreciated.
08-18-2009 10:23 AM
Re: Extend FS on a HP-UX Cluster need help
Filesystem kbytes used avail %used Mounted on
/dev/vgdp/lvtest 14221312 12947552 1239012 91% /test
root@host:/etc/cmcluster/TEST # vgdisplay vgdp |egrep -i "Free|PE"
Open LV 1
Max PE per PV 3473
PE Size (Mbytes) 4
Total PE 3472
Alloc PE 3472
Free PE 0
Maybe this helps you help me!
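If I am reading that vgdisplay output right, the volume group is already completely full:
3472 PE x 4 MB/PE = 13888 MB allocated, Free PE 0
so the new 13893 MB LUN really does have to be added to vgdp (pvcreate + vgextend) before any lvextend can succeed.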
08-18-2009 10:38 AM
Re: Extend FS on a HP-UX Cluster need help
1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp
4) vgexport -v -p -s -m /tmp/vgdp-org.map /dev/vgdp
5) lvextend -L 26888 /dev/vgdp/lvtest <-- That would increase its size to 26888 MB, right?
6) vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
7) rcp /tmp/vgdp-new.map girdb:/tmp/
8) rlogin node2
9) on node2: vgexport /dev/vgdp
10) mkdir /dev/vgdp
11) mknod /dev/vgdp/group c 64 0x110000 <-- How do I check this number, i.e. which one to use?
12) vgchange -a n /dev/vgdp
13) vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp
14) start the package on the first node:
cmrunpkg -v pkgname
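(On the lvextend question in step 5: if I remember the HP-UX LVM syntax correctly, -L takes the new total size of the logical volume in MB, not the increment, so -L 26888 would take lvtest from its current 3472 x 4 = 13888 MB up to 26888 MB, i.e. the extra 13000 MB being asked for.)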
08-18-2009 10:57 AM
Re: Extend FS on a HP-UX Cluster need help
/dev/[r]dsk/c11t2d5
Help on this will be greatly appreciated.
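(For what it is worth, if the question is what to do with that second path: on 11.11-style LVM, assuming no separate multipathing product is handling the device, the usual approach is to extend the volume group with both device files so the second one becomes an alternate PV link, something like:
vgextend /dev/vgdp /dev/dsk/c8t2d5 /dev/dsk/c11t2d5
after the pvcreate has been run once against one of the raw paths.)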
08-18-2009 11:13 AM
Re: Extend FS on a HP-UX Cluster need help
vgextend /dev/vgdp /dev/dsk/c8t2d5
lvextend -L whatyoudesire /dev/vgdp/lvolWHATEVER
This takes care of the node that currently has the filesystem mounted, with the VG kept active the whole time.
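(Once the LV has been grown, the VxFS filesystem inside it still has to be resized as well. With OnlineJFS installed, as listed above, that can normally be done with the filesystem still mounted, along the lines of
fsadm -F vxfs -b <new size in 1 KB sectors> /test
e.g. 26888 x 1024 = 27533312 for a 26888 MB filesystem; without OnlineJFS it would mean unmounting and running extendfs instead.)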
Sometime, preferably before a catastrophe hits, you will need to take the cluster offline (not necessarily today if it is under management pressure):
vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
rcp /tmp/vgdp-new.map girdb:/tmp/
From this point on I am copying your commands, so if there are any typos, they are yours.
on node2
vgexport /dev/vgdp
mkdir /dev/vgdp
mknod /dev/vgdp/group c 64 0x110000
<-- How do I check this number, i.e. which one to use?
Before performing the vgexport on node2:
cd /dev/vgdp
ls -l group
The minor number is displayed right between the major number 64 (shown where the file size would normally appear) and the timestamp. Use the same number in the mknod to keep it consistent.
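For illustration only (not output from your box), the listing looks something like:
crw-r--r--   1 root       sys         64 0x110000 Aug 18 10:00 group
where 64 is the major number and 0x110000 is the minor number to reuse in the mknod on the other node.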
vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp
At this point you are done, but it is always a good idea to start the cluster and fail the package over to the second node, to confirm that it can fail over successfully when there is a real need.
After that you can fail the package back to the primary node and resume normal operation.
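A minimal way to run that test, with your actual package and node names filled in, would be something like:
cmrunpkg -n node2 pkgname   # start the package on the adoptive node
cmhaltpkg pkgname           # halt it once the check is done
cmrunpkg -n node1 pkgname   # bring it back on the primary
and possibly cmmodpkg -e pkgname afterwards to re-enable automatic switching.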
HTH
UNIX because I majored in cryptology...
08-18-2009 11:18 AM
Re: Extend FS on a HP-UX Cluster need help
1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp
pvcreate /dev/rdsk/c8t2d5
vgextend /dev/vgdp /dev/dsk/c8t2d5
lvextend -L 26888 /dev/vgdp/lvtest
vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
rcp /tmp/vgdp-new.map girdb:/tmp/
on node2:
vgexport /dev/vgdp
mkdir /dev/vgdp
mknod /dev/vgdp/group c 64 0x110000
vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp
cmrunpkg -v pkg
Does it look good now?
You forgot to add the stop-pkg step in your instructions :-)
08-18-2009 11:31 AM
Re: Extend FS on a HP-UX Cluster need help
1) cmhaltpkg -v pkgname
2) vgchange -c n /dev/vgdp
3) vgchange -a n /dev/vgdp
4) pvcreate /dev/rdsk/c8t2d5
5) vgextend /dev/vgdp /dev/dsk/c8t2d5
6) lvextend -L 26888 /dev/vgdp/lvtest <-- This makes the FS 26888 MB, right?
7) vgexport -v -p -s -m /tmp/vgdp-new.map /dev/vgdp
8) rcp /tmp/vgdp-new.map girdb:/tmp/
on node2:
vgexport /dev/vgdp
mkdir /dev/vgdp
mknod /dev/vgdp/group c 64 0x110000
vgimport -v -s -m /tmp/vgdp-new.map /dev/vgdp
cmrunpkg -v pkg
Did I forget anything?
08-18-2009 11:31 AM
Re: Extend FS on a HP-UX Cluster need help
Just add the disk to the existing volume group. Run your vgexport to create the mapfile and copy that to the other node(s).
On the OTHER nodes, just run the standard vgexport and then:
mkdir /dev/vgname
mknod /dev/vgname/group c 64 0x--0000
vgimport -vs -m /etc/lvmconf/vgname.map /dev/vgname
It's that simple.
Rgrds,
Rita