Operating System - HP-UX
03-30-2004 07:38 PM
VPAR collapse / removal
Due to S/W constraints (the new VPAR software is not available for the RP8420 we are upgrading to from the RP8400), I'm working on a method to "collapse" or de-configure a pair of VPARs back to a normal server.
The s/w, apps, and disks on the redundant VPAR will need to be made visible on the remaining VPAR, which will then become a "normal" unpartitioned server.
Here are my steps:
- check kernel params and drivers on redundant VPAR and apply appropriate changes to target VPAR
- check any patch differences and apply on target VPAR where applicable
- check installed S/W base (swlist, other) on redundant VPAR and pre-install on target VPAR where possible
- Identify FSTAB entries on redundant VPAR and build new FSTAB on target VPAR, saved as FSTAB.NEW
- create new FS mount-points for new FSTAB on target VPAR
- do full backups at OS and DB levels
- Perform VGEXPORTS on redundant VPAR, FTP mapfiles to target VPAR
- do ioscan on redundant VPAR, FTP result to target VPAR for I/O path checks later
- check users on redundant VPAR and create where possible: create USERADD file for execution later for other cases
- reboot server in normal mode (/stand/vmunix)
- run IOSCAN and check against saved file above
- VGIMPORT using map files created above
- activate new VGs
- check LVOLs all syncd and available on new VGs
- move /etc/fstab to fstab.old and move pre-created fstab.new into place
- mount filesystems
- start DB and app
- perform functional tests
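For the ioscan cross-check in the steps above, one hypothetical way to flag hardware paths that were visible on the redundant VPAR but are missing after the reboot (the file names and the helper are my own illustration, not part of the plan):

```shell
#!/bin/sh
# Print lines present in a saved ioscan listing but absent from a fresh one.
# Usage: missing_paths /tmp/ioscan.redundant /tmp/ioscan.target
missing_paths() {
    sort "$1" > /tmp/ioscan_old.$$
    sort "$2" > /tmp/ioscan_new.$$
    # comm -23: lines unique to the first (old) listing, i.e. paths now missing
    comm -23 /tmp/ioscan_old.$$ /tmp/ioscan_new.$$
    rm -f /tmp/ioscan_old.$$ /tmp/ioscan_new.$$
}
```

The listings themselves would come from something like `ioscan -fnC disk > /tmp/ioscan.redundant` on the redundant VPAR before shutdown, and the same command on the target after the reboot; empty output from the function would mean every old disk path is still visible.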
Can the forum just check my logic?
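For what it's worth, here is a rough dry-run sketch of the vgexport/vgimport and fstab portion of the plan. The VG name, map-file path, and minor number are illustrative assumptions, not values from the post, and with the default DRYRUN=1 the script only prints the commands for review:

```shell
#!/bin/sh
# Dry-run sketch of moving a VG from the redundant VPAR to the target.
# Nothing is executed unless DRYRUN is explicitly set to 0.
DRYRUN=${DRYRUN:-1}
PLAN=""
run() {
    PLAN="$PLAN$*
"
    if [ "$DRYRUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# --- on the redundant VPAR: preview, then export with a map file ---
run vgchange -a n vg_app
run vgexport -p -v -s -m /tmp/vg_app.map vg_app   # -p = preview only
run vgexport -v -s -m /tmp/vg_app.map vg_app
# (transfer /tmp/vg_app.map to the target VPAR, e.g. by FTP as planned)

# --- on the target, after booting /stand/vmunix ---
run mkdir /dev/vg_app
run mknod /dev/vg_app/group c 64 0x020000         # minor number must be unused
run vgimport -v -s -m /tmp/vg_app.map vg_app
run vgchange -a y vg_app
run vgdisplay -v vg_app                           # confirm LVs available/syncd

# --- fstab swap and mounts, per the plan ---
run mv /etc/fstab /etc/fstab.old
run mv /etc/fstab.new /etc/fstab
run mount -a
```

Reviewing the printed plan first, then re-running with DRYRUN=0 on each VPAR in turn, keeps the destructive steps deliberate.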
I'm running MC/SG packages on the target VPAR: is there likely to be any impact (the server, when normalised, will have the same hostname as the target VPAR)? The redundant VPAR has been checked, and apart from Omniback/DP5.1 and Oracle RMAN, we predict no problems from the hostname change (I hope!).
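On the hostname point, one additional hedged sanity check is a sweep of the config trees for files that still mention the redundant VPAR's old hostname (the hostname and directory below are placeholders; GNU-style `grep -r` is assumed, so on a stock HP-UX box you may need a `find | xargs grep -l` equivalent):

```shell
#!/bin/sh
# List files under a directory that still reference a given hostname.
# Usage: find_host_refs <old-hostname> <directory>
find_host_refs() {
    grep -rl "$1" "$2" 2>/dev/null
}
# Example with placeholder names: find_host_refs rp8400b /etc
```

Anything this turns up in, say, the Omniback/DP or Oracle config areas would be a candidate for the hostname-change checklist.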
Trying is the first step to failure - Homer Simpson
2 REPLIES
03-30-2004 07:49 PM
Re: VPAR collapse / removal
I think your steps are correct overall, but it depends on how many, and which, DBs and apps you were running before the upgrade. Sometimes it is simpler not to "collapse" but to reinstall everything; that is cleaner, I think.
Regards, Stan
03-30-2004 07:52 PM
Re: VPAR collapse / removal
Hi Stan,
Yes, you are right, a nice clean reinstall and re-import or recreation of all data would have been nice. Sadly, there's no chance of that: this server is "live" and critical to our revenue.
Trying is the first step to failure - Homer Simpson