More questions! RAID, volume groups, etc.
06-11-2001 02:08 PM
I have two HPs named 'Data' and 'Lore' and one LSI RAID tower with eight drives in it. It is split up so that four of the drives make up 100GB (on controller B), which I have mounted on Data; three of the drives make up 67GB (on controller A), which I have mounted on Lore; and one drive is a hot spare. I had my questions answered on how to create volume groups to go along with these -- my most recent problem being the 67GB volume group.
Right now I need to test failovers in case one of the machines goes down. The way I think I understand this is I switch both RAID volume groups so that they are talking to controller B, then get on Data and create a new volume group that is assigned to the smaller array. I currently have /dev/vgraid5 on physical volume /dev/dsk/c4t2d0. I would make a volume group /dev/vglore on physical volume /dev/dsk/c4t2d1. If I set it up this way, and Lore dies and the RAID fails over so that Data sees both controllers, would that allow Data to see the information that was on Lore?
I could very possibly be way off base here; I'm just kind of guessing. How do you think I should go about setting up these systems so that, in the event a computer goes down and the controller fails over, the remaining machine will pick up where the other left off?
I don't have much longer to work on this today, but I'll be in bright and early tomorrow to try out any suggestions you all may have.
Thanks in advance again,
Melissa
06-12-2001 03:54 AM
Re: More questions! RAID, volume groups, etc.
You need to consider implementing MC/ServiceGuard high-availability clustering.
High-availability clusters allow services (applications) to resume running after a software or hardware failure. MC/ServiceGuard coordinates the takeover of the downed service by an adoptive node (server) in the cluster (group). While a processing interruption does occur, the actual downtime is designed to be minimal. This process is called a "failover".
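For reference, the broad strokes of standing up a two-node cluster look something like this (a hedged sketch of the usual command sequence; the configuration file name is a placeholder, and the full procedure is in the manuals linked below):

    # Generate a cluster configuration template by querying both nodes:
    cmquerycl -v -C /etc/cmcluster/cluster.ascii -n data -n lore
    # Edit cluster.ascii, then verify and distribute the binary configuration:
    cmcheckconf -v -C /etc/cmcluster/cluster.ascii
    cmapplyconf -v -C /etc/cmcluster/cluster.ascii
    # Start the cluster on all configured nodes:
    cmruncl -v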
A good overview can be obtained starting here:
http://docs.hp.com/hpux/onlinedocs/B3936-90026/B3936-90026.html
A plethora of documents on high-availability appear here:
http://docs.hp.com/hpux/ha/index.html
Regards!
...JRF...
06-12-2001 04:03 AM
Re: More questions! RAID, volume groups, etc.
and it doesn't get any easier either!
The problem without ServiceGuard is that you can remove every SPOF (single point of failure) from your configuration except the host itself.
(MC = Multi-Computer)
The problem you'll run into is this:
If hostA panics and doesn't deactivate the VG cleanly, then hostB, even though it can see the disks, cannot activate the VG without recreating the disk headers. This gets complicated and is too much bother to try without ServiceGuard.
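To illustrate, a hedged sketch of what the surviving host runs into (using your vg67; the behavior summarized in the comments is the general LVM rule, not verbatim output):

    # On hostB after hostA panics: a plain activation fails if the VG
    # was never imported into hostB's /etc/lvmtab.
    vgchange -a y /dev/vg67

    # Even once imported, a VG that was left active elsewhere may fail
    # its quorum check; -q n overrides the check (use with great care --
    # activating a VG on two hosts at once will corrupt it).
    vgchange -a y -q n /dev/vg67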
Later,
Bill
06-13-2001 10:49 AM
Re: More questions! RAID, volume groups, etc.
There is no data on either one of these machines that is imperative -- they are not being used by anyone but me. My task is to get our LSI RAID box set up between these two HPs at RAID level 5, so when I get that finished (the end of the summer??) they could actually use it. They will not purchase software that expensive for me; this is not really that big of a deal to them right now.
I know it's more difficult, and being a mostly inexperienced unix user (though learning at a reasonable pace), it's probably not worth my effort (or I'm not going to get it worked out), but could you maybe help me out a little bit with my task of making this RAID system work manually?
I think I understand that if I have a volume group "vg67" (for the 67GB array; no longer vgraid) on Lore, I will need to unmount it and deactivate it, then go to Data and create a vg67 there as well, correct? And if I have vg100 on Data, I'll need to do the same and create a vg100 on Lore. Firstly, if vg67 on Lore has the group minor number 0x010000, can I still make the vg67 on Data with minor number 0x030000 (because 0x010000 is already taken by a local disk)?
If I 'ioscan -fnC disk' on each machine, I know that my 100GB array is /dev/dsk/c4t2d0 on Data and /dev/dsk/c4t0d0 on Lore, and that the 67GB array is /dev/dsk/c4t2d1 on Data and /dev/dsk/c4t0d1 on Lore. Will I run into problems with this? If I have a volume group on Lore that uses c4t0d1, I obviously cannot keep that the same when I move it to Data, but will have to change it to c4t2d1. Is this okay?
Right now I have unmounted and deactivated vg67 on Lore and created one (using c4t2d1) on Data. I have gone through everything that I have been assisted with in previous posts, and everything looks great until I go to mount it: the disk is busy. Even though I unmounted and deactivated it on Lore, is that computer still using the disk? Will I have to turn Lore off to be able to mount this on Data? Can I even do this?
If you have any suggestions as to where I could go from here, I would be greatly appreciative. Thank you so very much.
Struggling but keeping at it,
Melissa
06-14-2001 03:56 PM
Solution
To address the manual method you are attempting, you are correct in your assumption of unmounting the logical volumes (LVs) first.
And yes, next you will have to create a volume group (VG) framework on the other system (either Data or Lore, or both). Using the group minor number 0x030000, or any other unused minor for that matter, should not be a problem. Often we use conventions like vg03 is 0x030000 and vg05 is 0x050000, just to keep things manageable for users who are not as familiar with the system as we (or you) are.
Next, I think you want to look at the commands 'vgexport' and 'vgimport' to prepare volume groups to be moved from one system to another. Although you are not actually moving any physical disks, vgexport will properly remove the VG from the LVM structure on Lore, and vgimport will add the VG to Data, and vice versa.
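A minimal sketch of that flow, using your device paths (the mount point and logical volume name below are assumptions; check 'll /dev/*/group' first to see which minor numbers are in use):

    # On Lore: unmount, deactivate, and export the VG structure to a map file.
    umount /mnt/vg67                        # mount point assumed
    vgchange -a n /dev/vg67
    vgexport -m /tmp/vg67.map /dev/vg67     # removes vg67 from Lore's /etc/lvmtab

    # Copy /tmp/vg67.map over to Data (rcp, ftp, ...), then on Data:
    mkdir /dev/vg67
    mknod /dev/vg67/group c 64 0x030000     # 64 is the LVM group character major
    vgimport -m /tmp/vg67.map /dev/vg67 /dev/dsk/c4t2d1
    vgchange -a y /dev/vg67
    mount /dev/vg67/lvol1 /mnt/vg67         # lvol1 assumed; the map file lists the real LV names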
However, James's and Bill's suggestions are definitely worth a bundle. The manual method you are contemplating will only work if you can switch these VGs before the system crashes. After a crash, there will likely be no way to get the VGs mounted on the other system in a usable or timely fashion. That's one reason why MC/ServiceGuard was created. It will be important that you communicate the caveats of the manual method to the people you will be turning the system over to before you return to school.
Lastly, I applaud your courageous efforts. Tackling an HP-UX system is not a minor task.
Good Luck,
Curt
P.S. Data and Lore. Pretty funny ;)
06-15-2001 08:16 AM
Re: More questions! RAID, volume groups, etc.
I built and tested such a system three years ago between two D230s (HP-UX 11) using an HDS 5750 RAID 5 subsystem; one was for production, the other for development... a solution for a department without money...
I sold the solution as "can work with one hour of downtime".
My conclusion after extensive testing is that downtime can be brought down to less than half an hour (my best score: 20 minutes), but it is not easy to avoid forgetting something.
Is your rootvg on the subsystem?
In my case it wasn't, so here are some remarks that may or may not apply, depending on your config:
With the two D230s I had internal hot-swap disks mirrored for vg00, and the other vg00X volume groups on the HDS subsystem.
In order to get the second system (let's call it box2) up to replace the first, I had to:
1) Keep an updated copy of box1's LVM structure => vgexport to a map file, copy all the important files (/etc/netconf, /etc/passwd, /etc/fstab, ...) along with the map file to box2, and maintain them regularly.
When I'd simulate a crash, I'd have to change the hostname, save all of box2's important files and vgexport to a map file, replace them with box1's files, and then vgimport...
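A rough sketch of that kind of regular maintenance job (names and paths are illustrative; vgexport -p previews without actually removing the VG, and -s records the VGID in the map file):

    # Run periodically on box1: snapshot the LVM layout without disturbing it.
    vgexport -p -s -m /tmp/vg67.map /dev/vg67
    rcp /tmp/vg67.map box2:/var/adm/failover/vg67.map

    # Keep copies of the key configuration files on box2 as well
    # (on HP-UX 11, netconf lives under /etc/rc.config.d):
    for f in /etc/fstab /etc/passwd /etc/rc.config.d/netconf
    do
        rcp $f box2:/var/adm/failover/
    done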
But at the price of MC/SG now (here in Europe I think it's around $6000 for an L-class), is it really worth it?
All the best
Victor