Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > dbci Package down on hpux 11.31
11-24-2012 06:14 PM
dbci Package down on hpux 11.31
Hi,
I have a two-node cluster running a single package. The cluster itself is currently running fine, but the package shows as down.
I created two new file systems, and after a reboot the cluster package shows down. The log messages are below...
/dev/vg22/rlvol1:file system is clean - log replay is not required
/dev/vg23/rlvol1:file system is clean - log replay is not required
fsck: /etc/default/fs is used for determining the file system type
file system is clean - log replay is not required
/dev/vg26/r112_64:file system is clean - log replay is not required
Nov 25 06:57:22 AM - Node "ruprddb": Mounting /dev/vg01/lvol1 at /oracle/RRP
Nov 25 06:57:22 AM - Node "ruprddb": Mounting /dev/vg21/lvol1 at /export/sapmnt/RRP
Nov 25 06:57:22 AM - Node "ruprddb": Mounting /dev/vg22/lvol1 at /export/usr/sap/RRP
Nov 25 06:57:22 AM - Node "ruprddb": Mounting /dev/vg23/lvol1 at /export/usr/sap/trans
Nov 25 06:57:22 AM - Node "ruprddb": Mounting /dev/vg25/stage11202 at
mount: /dev/vg25/stage11202 was either ignored or not found in /etc/fstab
ERROR: Function check_and_mount
ERROR: Failed to mount /dev/vg25/stage11202
Nov 25 06:57:22 AM - Node "ruprddb": Unmounting filesystem on /dev/vg23/lvol1
Nov 25 06:57:23 AM - Node "ruprddb": Unmounting filesystem on /dev/vg22/lvol1
Nov 25 06:57:23 AM - Node "ruprddb": Unmounting filesystem on /dev/vg21/lvol1
Nov 25 06:57:23 AM - Node "ruprddb": Unmounting filesystem on /dev/vg01/lvol1
Nov 25 06:57:23 AM - Node "ruprddb": Deactivating volume group vg01
Deactivated volume group in Exclusive Mode.
Volume group "vg01" has been successfully changed.
Nov 25 06:57:23 AM - Node "ruprddb": Deactivating volume group vg02
cmrunpkg shows the following output:
[ruprddb]# cmrunpkg dbciRRP
Running package dbciRRP on node ruprddb
The package script for dbciRRP failed with no restart. dbciRRP should not be restarted
Unable to run package dbciRRP on node ruprddb
Check the syslog and pkg log files for more detailed information
cmrunpkg: Unable to start some package or package instances.
Please advise...
- Prashant
11-25-2012 03:50 AM
Re: dbci Package down on hpux 11.31
Looks like your package configuration does not define a valid mount point for /dev/vg25/stage11202.
Ideally, the package filesystems should not be mentioned in /etc/fstab at all: they should be fully under Serviceguard control. If you need to add them to /etc/fstab for some reason (e.g. you frequently have the cluster down and need to be able to manually mount the filesystem easily for application maintenance), then be sure to add the "noauto" option for the package filesystems in /etc/fstab.
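As a sketch, a "noauto" entry for one of the package filesystems seen in your log might look like this (the mount options shown are illustrative, not taken from your system):

```
# /etc/fstab -- "noauto" keeps the boot-time mountall away from
# Serviceguard-managed filesystems (options here are illustrative):
/dev/vg22/lvol1  /export/usr/sap/RRP  vxfs  delaylog,noauto  0  2
```

With noauto in place, the entry only serves as a convenience for manual "mount /export/usr/sap/RRP" while the cluster is down; Serviceguard still does the mounting during normal package startup.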
Check that your package configuration lists a valid mount point for /dev/vg25/stage11202, and also check that the mount point directory exists and is usable as a mount point on both nodes (i.e. does not have any other non-package filesystems mounted on it).
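If you are using a legacy package control script, the filesystem entries for the missing mount might look roughly like this (the array index, mount point and options below are assumptions for illustration, not values from your actual dbciRRP script, which may differ, especially if it is an SGeSAP customer script):

```
# Legacy control-script style (illustrative -- index, mount point
# and options are assumptions, not from the real dbciRRP script):
LV[4]="/dev/vg25/stage11202"
FS[4]="/export/usr/sap/stage"     # hypothetical mount point
FS_MOUNT_OPT[4]="-o rw"
FS_FSCK_OPT[4]=""
FS_TYPE[4]="vxfs"
```

Also confirm on both nodes that the mount point directory itself exists (mkdir it on each node if needed) and that nothing else is mounted on it.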
By the way, I wonder why you have so many VGs for a single package. This can make your disk space management less flexible than it could be: if you have multiple LVs in a single VG, you can use unallocated space in a VG to extend any LV without restrictions. The only way to change disk space allocation between multiple VGs is to make at least one PV within a VG empty (e.g. using pvmove), reduce the empty PV out of the VG that has excess capacity, and add it to a VG that needs more capacity.
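The pvmove/vgreduce/vgextend sequence described above might look like this on 11.31 (device file names, VG names and sizes are illustrative only):

```
# Move all extents off one PV in the VG that has spare capacity
# (the extents land on the remaining PVs of the same VG):
pvmove /dev/disk/disk22
# Drop the now-empty PV from that VG:
vgreduce vg22 /dev/disk/disk22
# Add it to the VG that needs the space:
vgextend vg23 /dev/disk/disk22
# Grow the LV, then grow the VxFS filesystem online with fsadm
# (lvextend -L takes MB; fsadm -b takes 1 KB sectors):
lvextend -L 20480 /dev/vg23/lvol1
fsadm -F vxfs -b 20971520 /export/usr/sap/trans
```

In a cluster, remember that after changing PV membership you need to re-export the VG map (vgexport -p -s -m mapfile) and re-import it on the other node so both nodes agree on the VG layout.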
A VG can have more than one PV, and more than one LV. Within a VG, LVs can be any size: they are not restricted by PV size at all. A single PV can contain many LVs, or a single LV can extend over many PVs.
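For example, building one VG from two LUNs and carving LVs of arbitrary size out of it might look like this (VG/LV names and sizes are made up; the group file minor number just has to be unique across your VGs):

```
# Prepare both LUNs as physical volumes (agile DSFs; names illustrative):
pvcreate /dev/rdisk/disk10
pvcreate /dev/rdisk/disk11
# Create the VG group device file, then the VG spanning both PVs:
mkdir /dev/vgdemo
mknod /dev/vgdemo/group c 64 0x060000   # minor must be unique per VG
vgcreate vgdemo /dev/disk/disk10 /dev/disk/disk11
# LVs can be any size up to the VG's free space; a large LV
# simply spans both PVs:
lvcreate -L 4096  -n lvsmall vgdemo
lvcreate -L 30720 -n lvbig   vgdemo
```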
If your LUNs are not all equal (e.g. different RAID types and/or performance levels), it might make sense to have your package contain one VG for each LUN type. But your package looks like it might contain 25 VGs???