disk upgrades
06-29-2001 09:16 AM
Here are some additional questions I am asking myself:
Should I use the same setup with the HVD10 as with the Jamaica, i.e., using a separate unit for mirroring? Or should I replace our 4 HASS units with one HVD10 populated with 36 GB drives, and use half of the unit on a separate controller for mirroring? Or will this decrease I/O performance too much, since only 5 disks would be in use, compared to 16 in my current setup?
Thanks for any input; I'd like to hear from anyone who has been in a similar situation. BTW, I always assign points.
06-29-2001 10:35 AM
Solution
Intuitively, it would seem that if you used higher-RPM disks, put fewer of them per SCSI channel, and striped the data across them, then I/O performance should improve. However, to be absolutely sure, you will have to do the experiment. With all those things going for you, I think the performance will improve (probably because not all the filesystems get the same amount of I/O, and the ones that do get heavy I/O will improve dramatically with striping).
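If you want numbers for that experiment, one way (just a suggestion, using standard HP-UX tools rather than anything specific to your setup) is to sample the disks before and after the change:
sar -d 5 12
That reports each disk's %busy, average queue length, and average service time every 5 seconds for a minute, so you can see which spindles are saturated under your normal workload.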
Also, if you can swing it, I would go with multiple HVD10s for redundancy purposes. Although almost everything in the HVD10 is hot-swappable, the midplane board (located in the center of the HVD10, where all the disks and controllers plug in) requires the whole unit to be down for replacement.
Good Luck,
Curt
06-29-2001 11:38 AM
Re: disk upgrades
Do the disks in any of your arrays support alternate/dual paths? If so, you should make use of these when configuring the volume group that a particular file system will reside in, along with striping a particular lvol across multiple disks.
For example, if a given disk is accessible through c0t0d0 and through c1t0d0, make sure that when you add the disk to the VG you add both paths. Also, if you have multiple disks with two paths, make sure that you alternate which path is the "primary" path. For example, if you were adding 4 disks to a given volume group:
Path 1 Path 2
disk 1: c0t0d0 c1t0d0
disk 2: c0t0d1 c1t0d1
disk 3: c0t0d2 c1t0d2
disk 4: c0t0d3 c1t0d3
You would want to add them as below:
vgextend /dev/vg_whatever /dev/dsk/c0t0d0 /dev/dsk/c1t0d0 /dev/dsk/c1t0d1 /dev/dsk/c0t0d1 /dev/dsk/c0t0d2 /dev/dsk/c1t0d2 /dev/dsk/c1t0d3 /dev/dsk/c0t0d3
Notice that I alternate which path comes first as I go down the list; the path listed first becomes the primary path in the volume group.
Now, after you've done this, if you stripe an lvol over all 4 of the disks above, you've gotten about as much performance as you're going to get out of these 4 disks (if they are dual-path as explained above).
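Just as a sketch (the lvol name and sizes are made up; the VG name matches the example above), a striped lvol over those 4 disks could be created like this:
lvcreate -i 4 -I 64 -L 4096 -n lvol_data /dev/vg_whatever
Here -i 4 stripes across the 4 disks, -I 64 uses a 64 KB stripe size, and -L 4096 gives a 4 GB logical volume.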
Hope this helps.
06-30-2001 02:43 AM
Re: disk upgrades
I take the following things as assumptions:
01. You are not worried about filesystem capacity.
02. You are more worried about performance.
To achieve that, you do not need to replace anything. Just keep the hardware as it is.
You said "one filesystem per disk", which affects performance very badly.
I would suggest the following practice:
- Take a full system backup (everything).
- Make a single VG for all the disks.
- Make two PVGs (physical volume groups): one PVG containing all the disks in HASS 1 & 2, the other containing the disks in HASS 3 & 4.
- lvcreate -L XXXX /dev/vgXX (create all logical volumes like this)
- lvchange -s g -D y /dev/vgXX/lvolXX (turn on PVG-strict and distributed allocation)
This setup will stripe your logical volumes across a number of disks, which will improve the I/O.
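As a rough sketch of the PVG part (the VG name, PVG names, device files, and size below are placeholders only; substitute your own, and setting -s g and -D y at lvcreate time should be equivalent to the separate lvchange step above as long as it happens before extents are allocated):
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate -g PVG_HASS12 /dev/vg01 /dev/dsk/c0t0d0 /dev/dsk/c0t1d0
vgextend -g PVG_HASS34 /dev/vg01 /dev/dsk/c1t0d0 /dev/dsk/c1t1d0
lvcreate -s g -D y -L 4096 -n lvol_data /dev/vg01
The -g option puts the listed disks into a named PVG; distributed allocation (-D y) spreads the lvol's extents across the disks of a PVG instead of filling one disk at a time, and -s g keeps any mirror copies on separate PVGs.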
If you are worried about space, go for the 36 GB modules, but increase the number of disks so that drive access time does not become a bottleneck.
At the same time, try to go for separate SCSI channels.
Get in touch for further input.
Wish you all the best,
Kaps
07-02-2001 07:37 AM
Re: disk upgrades
The thing to bear in mind is that by reducing the number of disks you are moving the "bottleneck" away from the SCSI bus to the disks themselves. I suspect your problem at the moment is due to a bottleneck on the SCSI bus rather than the disks. You have not mentioned how many controllers you are running with, but 6-7 disks per F/W controller is really the maximum recommended for good performance.
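If you want to check how your disks are spread across controllers today, a quick look (a standard HP-UX command, nothing specific to your hardware assumed) is:
ioscan -funC disk
The hardware path column groups the disks by the controller they sit behind, so you can count how many are sharing each bus.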
You have not mentioned whether you have to work to a tight budget, but if you do move to the HVD10 solution, I would install 2 units (1 + 1 mirror) running with 20 disks but set up in split-bus mode, so with 4 controllers on your server you have a maximum of 5 disks per controller.
I would definitely stripe across all 10 disks per HVD10 and replicate the same setup on the mirror.