Community Home > Storage > Entry Storage Systems > Disk Enclosures
12-05-2000 07:31 AM
Help with HP Nike Disk Arrays...
Good morning everyone,
Due to a failed Oracle upgrade on our Production Server last weekend, there is now a big rush to get the C1 & C2 initiative underway. As you will see from the attached document, from my standpoint that involves some major changes to the Test and Development Servers. We have already ordered the internal disks for the Test Server (2x18G and 2x36G). The 2x2G internal disks from the Test Server will then be moved to the Development Server for internal mirroring. The approx. 80G HP Nike Disk Array then needs to move from the Test Server to the Development Server, joining its current 80G HP Nike Disk Array. This will give Development approx. 160G of HP Nike Disk Array space. Development will then become strictly a test/development box for everyone else, and the original Test Server will be mine as a Management Server hosting applications such as OmniBack. The Management Server can then be upgraded right away to HP-UX 11.0.
If anyone can take a look at the information below and provide any feedback, it would be greatly appreciated. My knowledge of the HP Disk Arrays we currently have is very limited.
Please find attached a copy of the File system Layout for both C1 & C2 outlining both the current configuration with the proposed configuration for each system.
In the current layout for the Test System, the following three logical volumes are configured inside the Nike Array, along with many others:
/dev/vg_nike_00/lvol1 5,009,167 4,014,682 944,393 80.15% /data/dev/cml
/dev/vg_nike_00/lvol3 3,125,737 2,388,160 706,319 76.40% /data/dev/ccdb
/dev/vg_nike_00/lvol4 17,552,283 1,475,993 15,900,767 8.41% /data/prd/platdb
When this Nike Array is detached from the Test System (as shown in its proposed layout) and connected to the Development Server (as described in the Development proposed layout), will there not be a conflict with the logical volume names? As you will see in the Development current layout, the only logical volumes currently configured in the entire Nike Array volume group are as follows:
/dev/vg_nike_00/lvol1 512,499 215,043 292,331 41.96% /home/work
/dev/vg_nike_00/lvol3 3,847,033 3,743,799 64,763 97.32% /home/oracle/app
/dev/vg_nike_00/lvol4 75,895,687 64,890,070 10,246,660 85.50% /data
This consumes the entire 80G of space. My concern, or question, is: when you combine the two Nike Arrays as shown in the Development proposed layout, what problems will this create in terms of duplicate logical volume names?
How can we get around this problem?
What is the "Best Practice" for setting up disks in a RAID 5 logical group? What is the maximum number of disks you can place into one group? I know that if you have 5 disks in one group you lose 1 entire disk. Is it not possible to have 10, 15, or 20 disks in a single group where you are still only losing one?
Any comments or feedback would be greatly appreciated.
Thanks,
S Aldrich
3 REPLIES
12-06-2000 04:09 AM
Re: Help with HP Nike Disk Arrays...
Hi Shaun,
Golden rules for the Nike are as follows.
1. Access the LVM VG primary path through the default LUN-owning SP; otherwise you get a HUGE performance decrease.
1a. To find the owning SP (either A or B), go into the Nike gridmgr main menu (RS232, 9600 baud, vt100): Change Parameters -> Change Logical Unit -> Change Default Ownership. Select the LUN number and it will tell you either "This SP (the one you've RS232'd to) is NOT the default owner" (i.e. it's the other SP), or "This SP is the default owner" (i.e. put the LVM primary path through this SP's HW address/PV).
Note also: DON'T create all your LUNs on one SP; change the ownership of some to the other SP and access them through it. You'll end up bottlenecking your busses otherwise.
For example, say I created LUN 1 on SPA, where SPA has SCSI ID 5 connected to 8/0 and SPB has SCSI ID 3 connected to 8/4:
8/0.5.1 = LUN 1 through SPA (device file c0t5d1 from ioscan, for example)
8/4.3.1 = LUN 1 alternate through SPB; not the default owner, so much slower access, but good for HA to configure it anyway. Make sure AutoTrespass is on (Change Storage Options, package to type 2).
So: vgcreate vgname /dev/dsk/c0t5d1 /dev/dsk/c1t3d1 (primary first, then the alternate, which is used if the primary fails), then lvcreate and newfs as usual, and vgdisplay to verify all is correct.
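The hardware-path-to-device-file naming in the example above can be sketched with a tiny helper; the controller instance numbers 0 and 1 are assumptions here, since on a real system `ioscan -fn` reports the actual instances:

```shell
# Map the pieces of an HP-UX hardware path (controller instance, SCSI
# target ID, LUN) to the conventional cXtYdZ device-file name.
# Instance numbers are illustrative; ioscan -fn reports the real ones.
hw_to_dev() {
  # $1 = controller instance, $2 = SCSI target ID, $3 = LUN
  echo "c${1}t${2}d${3}"
}

hw_to_dev 0 5 1   # LUN 1 via SPA (SCSI ID 5) -> c0t5d1
hw_to_dev 1 3 1   # LUN 1 via SPB (SCSI ID 3) -> c1t3d1
```

Both paths point at the same LUN; only the primary (owning-SP) path gives full performance, which is why the vgcreate order above matters.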
2. Creating your LUNs.
There are 5 internal SE busses on the model 20 and 30; left to right they are:
0: A B C D E A B C D E
1: A B C D E A B C D E
That's a Nike 20. Create a 5-disk RAID 5 on 0 A B C D E, and another on 1 A B C D E; yes, one disk will effectively be parity redundancy.
If you create a 10-disk RAID 5, you put at least 2 disks on the same internal bus; not a great idea, but if you want, why not. Choose 0 A B C D E A B C D E; yes, one disk is still parity and 9 are usable for data. (I didn't think Oracle supported RAID 5?)
But keep it to 5 disks, because a 10-disk group is slower than a 5-disk group on parity calculation as well as rebuild reads and writes.
If you've got a large image database (i.e. large files), choose RAID 3 rather than RAID 5; RAID 5 is better for smaller text files.
Anyway, BE MOST careful about your primary paths.
Note that for some of the config changes it's a good idea to reboot both SPs.
If you've got the newer controllers, use 5/3/5 on the menu. The newer controllers don't have an enable/disable switch and are 2 times faster than the older ones.
Send a Tools -> Info -> Run from MSTM if you have the online diagnostics installed.
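The group-size trade-off above can be sketched numerically: a RAID 5 group always spends exactly one disk's worth of capacity on parity, however many disks are in the group, so larger groups waste proportionally less space (at the cost of slower parity calculation and rebuilds). The 18 GB disk size below is assumed purely for illustration:

```shell
# RAID 5 usable capacity: one disk of parity overhead per group,
# regardless of group size. 18 GB disks assumed for illustration.
disk_gb=18
raid5_usable() {
  # usable GB for an N-disk RAID 5 group of ${disk_gb} GB disks
  echo $(( ($1 - 1) * disk_gb ))
}

for n in 5 10 20; do
  echo "${n} disks: $(raid5_usable $n) GB usable, ${disk_gb} GB parity overhead"
done
```

So the answer to the original question is yes: a 10-disk group still only "loses" one disk, but per Bill's advice the 5-disk groups are preferable for performance.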
Later,
Bill
(PS: I haven't read your attachments; I'll have a look later, but I don't know much about the Oracle config, so I can't help you there.)
It works for me (tm)
12-06-2000 04:41 AM
Re: Help with HP Nike Disk Arrays...
Note: only insert/replace the disks online. Do not power off the Nike. The first 3 disks, plus another on the second row, contain copies of the Nike boot code, i.e. 0 A B C plus 1 A from the earlier diagram.
If you remove disks, label them beforehand so you can rescue the array. You only need one copy of the boot code to boot the array; leave the one in the top left of the array, then add your 18G disks, but leave the power on. Remember: leave the power on.
Relating to your VGs and lvols: you can vgexport them both, and vgimport the PVs by device file one by one! But just back up and recreate a better config; it'll be worth it in the long run.
Try to keep VGs within the same disk array, unless you mirror. But you're protected by RAID and redundant controllers (with AutoTrespass and LVM) anyway, so mirroring might be a waste of space.
By the way: back up!!
And leave the power on on the Nike!!!
Don't forget to turn on memory caching!!
mstm -> tools -> info -> run
Later,
Bill
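The vgexport/vgimport step Bill mentions also addresses the duplicate-lvol-name concern from the original post: lvol names only need to be unique within a volume group, so importing the second array's VG under a new name keeps /dev/vg_nike_01/lvol1 distinct from /dev/vg_nike_00/lvol1. A sketch of the HP-UX procedure, where the VG name vg_nike_01, the map-file path, the minor number, and the device files are all illustrative assumptions:

```shell
# On the old (Test) host: export the VG, saving its lvol layout to a map file.
# (The map-file path and all device files below are illustrative.)
vgexport -m /tmp/vg_nike_00.map /dev/vg_nike_00

# On the Development host: create a group file under a NEW VG name so the
# imported lvols land in /dev/vg_nike_01/ and cannot collide with the
# existing /dev/vg_nike_00/lvol1, lvol3, and lvol4.
mkdir /dev/vg_nike_01
mknod /dev/vg_nike_01/group c 64 0x010000   # minor number must be unique per VG

# Import the PVs by device file (primary path first, then alternate),
# reusing the saved map, then activate the VG.
vgimport -m /tmp/vg_nike_00.map /dev/vg_nike_01 /dev/dsk/c2t5d1 /dev/dsk/c3t3d1
vgchange -a y /dev/vg_nike_01
```

As Bill says, though, backing up and recreating a cleaner layout may be the better long-term option.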
It works for me (tm)
03-25-2002 06:42 AM
Re: Help with HP Nike Disk Arrays...
Thanks for helping me hunt down the problem. I discovered that even though I couldn't see the RAID 1/0 from the SPA, it did show up on the SPB. I tried taking ownership of it from the SPB and that errored out, so I tried the old standby: rebooting.
When it came up, it showed the 1st drive as "rebuild" while the other 3 looked OK. I was then able to unbind and rebind without error.
tks again
Cathy Squires
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP