Operating System - HP-UX
10-29-1999 11:14 AM
mirrored root disks
I am ordering an L1000 with mirrored root disks. Obviously I have to have two
disks for the vg00 volume group and have to pay for the Mirror Disk software.
I feel that this is my best choice because my user wants his Oracle database up
as much as possible.
Other options:
- Use Ignite-UX and buy a DDS3 tape drive for make_recovery. However, the tape drive costs more than an extra disk drive, and it would be used only to make a recovery tape and to recover the operating system if the root drive died. Recovery would have to wait for HP to replace the disk drive, which would take at least a day.
- Use Ignite-UX and buy an extra disk drive on the Ignite server to hold the archive, then use make_net_recovery. This would be a little cheaper than the tape option, but again recovery would be delayed by a day.
- Keep a duplicate root disk drive without the Mirror Disk software. Even though I am familiar with logical volume management, I think this is a labour-intensive, error-prone task that I don't want to do every week.
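For reference, the two Ignite-UX options above boil down to commands roughly like the following (a sketch only; the tape device file and the "igniteserver" hostname are placeholders, not details from this post):
Create a recovery tape of the entire root volume group on a DDS drive:
# /opt/ignite/bin/make_recovery -A -d /dev/rmt/0mn
Write a recovery archive of vg00 across the network to an Ignite-UX server:
# /opt/ignite/bin/make_net_recovery -s igniteserver -x inc_entire=vg00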
comments?
2 REPLIES
11-01-1999 02:09 AM
Re: mirrored root disks
I agree with most of your assessment. Assuming you do not want the cost associated with moving to a highly available environment, a mirrored root disk is your best option to provide availability and speed of recovery from hardware (disk) failure. Actually, you should mirror root even within a Service Guard environment. My only suggestion would be that you also build an Ignite golden image and keep it updated after every system change has been deemed stable. Mirroring protects from hardware failure, but not from data corruption. The golden image (or make_recovery tape) can be used to restore the last known stable configuration in the event that data corruption makes your root VG unbootable.
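If you prefer a scheduled refresh over a purely manual one, a root crontab entry along these lines can regenerate the network archive periodically (a sketch only; "igniteserver" and the weekly Sunday 02:00 schedule are assumptions, so adapt both to your environment):
0 2 * * 0 /opt/ignite/bin/make_net_recovery -s igniteserver -x inc_entire=vg00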
11-03-1999 02:53 AM
Re: mirrored root disks
While all of your options and concerns seem to be very valid, I wanted to point
out a few of the limitations of the Mirrored Root VG that often go unnoticed.
* The mirroring does provide extra copies of essential root filesystem data but
does not necessarily provide an alternate boot disk and does not provide
"failover".
* If one disk is unavailable, the system can use the alternate instead; however, the crash of a vg00 disk could still panic and reboot the system. If the system is not configured properly, it will not come back up as "automatically" as desired.
Here are some things to check and some best practices:
1) Make sure both disks are indeed "boot disks". The lvols can be mirrored onto a non-bootable disk. In that scenario, if you lose your boot disk and go down, you won't be back up until you replace the bad disk and either reimage via Ignite or cold-install. This is not generally desired. To check that both disks are truly bootable, use lvlnboot. To be truly bootable, both disks must show as "Boot Disk" and both disks must be mentioned for "Boot", "Root", and "Swap". If they are not, then only one of the disks is truly bootable.
Here is an example of a mirrored root vg with only one bootable disk:
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c1t6d0 (56/52.6.0) -- Boot Disk
/dev/dsk/c1t4d0 (56/52.4.0)
Boot: lvol1 on: /dev/dsk/c1t6d0
Root: lvol3 on: /dev/dsk/c1t6d0
Swap: lvol2 on: /dev/dsk/c1t6d0
Dump: lvol2 on: /dev/dsk/c1t6d0, 0
Here is the same configuration with two bootable disks:
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/dsk/c1t6d0 (56/52.6.0) -- Boot Disk
/dev/dsk/c1t4d0 (56/52.4.0) -- Boot Disk
Boot: lvol1 on: /dev/dsk/c1t6d0, c1t4d0
Root: lvol3 on: /dev/dsk/c1t6d0, c1t4d0
Swap: lvol2 on: /dev/dsk/c1t6d0, c1t4d0
Dump: lvol2 on: /dev/dsk/c1t6d0, 0
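If the mirror is missing the "Boot Disk" flag, the usual remedy is to set the disk up as an LVM boot disk and mirror the boot-related lvols onto it. A rough sketch, assuming c1t4d0 is a blank disk being added as the mirror (a disk that already holds mirror copies would first need those mirrors reduced off it, since pvcreate rewrites the disk):
# pvcreate -B /dev/rdsk/c1t4d0
# vgextend /dev/vg00 /dev/dsk/c1t4d0
# mkboot /dev/rdsk/c1t4d0
# lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t4d0
# lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c1t4d0
# lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c1t4d0
(repeat lvextend -m 1 for the remaining vg00 logical volumes)
# lvlnboot -R /dev/vg00
pvcreate -B reserves the boot area, mkboot installs the LIF boot utilities, and lvlnboot -R refreshes the boot data reserved area so that both disks show up as in the second listing above. The autoboot string itself is covered in point 3 below.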
2) Make sure that your primary boot path points to the primary boot disk and the alternate boot path points to your bootable mirror disk. Make sure autosearch is on. The setboot command can check and change these settings. For the above example, the following is correct:
# setboot
Primary bootpath : 56/52.6.0
Alternate bootpath : 56/52.4.0
Autoboot is ON (enabled)
Autosearch is ON (enabled)
If this is not set correctly, then your system will not look to boot from the
mirror if the primary is not available.
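If setboot shows something else, the paths and the autosearch flag can be changed from the running system; with the hardware paths from the example above, that would look something like:
# setboot -p 56/52.6.0
# setboot -a 56/52.4.0
# setboot -s on
These values are written to stable storage and take effect on the next boot attempt.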
3) Check to make sure that the autoboot string disables quorum checking. In order to activate a volume group, LVM requires that MORE than HALF of its disks be available. In the case of a two-disk vg00, that means BOTH disks need to be available to activate vg00, so if EITHER your primary disk or your mirror crashes, the system will not be able to activate vg00 at boot time and will not come up. You would have to interrupt the boot sequence and override quorum checking by hand to bring the system up.
To get around this, you can use mkboot with the -a option to set the autoboot string to disable quorum checking by default. You must have this set up properly on both disks. For the above example, this would be the correct format:
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c1t6d0
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c1t4d0
NOTE: It is the -lq parameter passed to the hpux kernel loader that disables
the quorum checking.
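If you want to double-check what is actually stored in each disk's LIF AUTO file, something like lifcp can print it to standard output (adjust the device files to your own configuration):
# lifcp /dev/rdsk/c1t6d0:AUTO -
# lifcp /dev/rdsk/c1t4d0:AUTO -
Both disks should report the "hpux -lq (;0)/stand/vmunix" string set above.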