Disks in LVM

I think the idea is to strike a balance as to how many disks should be added to a VG.

I had a D-class with a Model 20 disk array and a disk station. We had 16 disks in a single VG; later on it was broken down into two VGs. I feel that after the split-up there was an improvement in disk performance, so there is some relation between the number of disks and performance,
but we cannot keep too few disks either.
When one door closes, God opens another one. But we stare at the closed one so long that we miss the open door
5 REPLIES
Thierry Poels_1
Honored Contributor

Re: Disks in LVM

Hi George,

yesterday there was another thread about the same topic, check it out:
http://forums.itrc.hp.com/cm/QuestionAnswer/1,11866,0xb1f2c6af36b7d5118ff10090279cd0f9,00.html

Tuning the Nike 20 is of course not the same as tuning a bunch of separate SCSI disks, but anyway ...

regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Tim D Fulford
Honored Contributor

Re: Disks in LVM

I'm pretty neutral about this, but here is a cautionary tale...

A few years ago I was working with a T500; it had one VG, vg00, which contained some 40 or so SCSI disks. These disks were NOT hot-swappable but were mirrored. The machine broke, or rather, one of the disks did.

It would not boot even with hpux -lq. We removed all but the boot disks and got a skeleton vg00 up, but the other disks would not join in, and we could not get above single-user mode! We tried to recover the system with the install/recovery disk, but still no joy: too many disks, not enough i-nodes. In the end it was back to a fresh install and getting the backup tapes out. At that point we found out exactly which disk had died, and that the FW was well out of date (hence the mirroring did not work).

Lesson 1: the more disks in a VG, the more likely it is that one disk will fail.

Lesson 2: make sure your machine's FW/HW/SW are fully compatible. Disaster recovery rehearsals and regular maintenance slots help.

Lesson 3: vg00 is crucial. Put as few disks in it as possible (say 2, i.e. one mirrored pair), and test that it can boot off EITHER of those disks ALONE (DR rehearsals; see the sketch after Lesson 4).

Lesson 4: SCSI disks are OK, but they are at the bottom of the disk wish list.
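
To show what I mean by Lesson 3, here is a minimal sketch of a two-disk mirrored vg00 on PA-RISC HP-UX. The second boot disk (c2t6d0) and the lvol names are hypothetical, and it assumes MirrorDisk/UX is installed, so adapt it to your own box:

    pvcreate -B /dev/rdsk/c2t6d0            # mark the new disk as a bootable PV
    mkboot /dev/rdsk/c2t6d0                 # write the boot area (LIF) onto it
    mkboot -a "hpux -lq" /dev/rdsk/c2t6d0   # allow booting without quorum
    vgextend /dev/vg00 /dev/dsk/c2t6d0
    lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c2t6d0   # mirror boot, then root, swap, ...
    lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c2t6d0
    lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c2t6d0
    lvlnboot -R /dev/vg00                   # refresh boot/root/swap/dump pointers
    lvlnboot -v                             # verify
    setboot                                 # check primary/alternate boot paths

Then pull one disk (or boot from the alternate path) during a DR rehearsal and prove the box comes up on either disk alone.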

You can put loads of (SCSI) disks in one VG and you can spread the load over many controllers to get the performance, but think about how it could all go wrong.

Tim
-
Michael Tully
Honored Contributor

Re: Disks in LVM

Hi George,

It doesn't matter how many disks are in a volume group as long as you have your applications and data segregated from your operating system. The general rule of thumb for systems with internal disks is to use those disks for the operating system (including its mirror) and to put external disks in subsequent volume groups. You can have up to 255 disks in a single volume group, but this must be stated when using the 'vgcreate' command to create your volume group, or you will end up with the default, which is 16.
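
As an illustration only (the VG name, minor number and disk device file below are made up), raising that limit at creation time looks roughly like this:

    pvcreate /dev/rdsk/c4t0d0
    mkdir /dev/vg01
    mknod /dev/vg01/group c 64 0x010000         # unique minor number for each VG
    vgcreate -p 255 /dev/vg01 /dev/dsk/c4t0d0   # -p raises Max PV above the default of 16
    vgdisplay -v /dev/vg01                      # Max PV should now show 255
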
In answer to your question: unless there was some type of bottleneck on one disk, where you may have moved some data etc. to another, the relationship is purely coincidental. I read the link that Thierry presented and I agree with the statements made there. The key is to spread your potential hardest hits across more spindles, not fewer. We have a number of SANs, and most of the disk activity occurs in the caching and not on the disks themselves.

My 2 cents worth
-Michael
Anyone for a Mutiny ?
Sridhar Bhaskarla
Honored Contributor

Re: Disks in LVM

If we talk about *performance*, I do not believe that a volume group with many disks will slow down performance. Moreover, more disks give me the flexibility of moving data around, which makes things more efficient. Your case could be different: since you had one volume group, you might have had some frequently accessed data residing on a heavily used disk, and when you re-arranged it you might have gotten rid of that bottleneck.
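
By "moving data around" I mean something as simple as a pvmove inside the VG; a hedged sketch with made-up device and LV names:

    pvdisplay -v /dev/dsk/c1t2d0   # see which LVs live on the busy disk
    # move one LV's extents from the busy disk to a quieter one
    pvmove -n /dev/vg01/lvol4 /dev/dsk/c1t2d0 /dev/dsk/c3t4d0
    vgdisplay -v /dev/vg01         # sanity check afterwards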

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
A. Clay Stephenson
Acclaimed Contributor

Re: Disks in LVM

From the perspective of performance, the number of disks is not nearly as important as the number of controller paths, especially where disk arrays are concerned. Bear in mind that when dealing with arrays, Glance, sar, etc. have no idea that this is not a simple disk. I have seen many SAs divide their arrays into many LUNs so that it APPEARS that there is no bottleneck, but since all the I/O was going through the same array controller, the actual I/O rates were not improved. In dealing with arrays (Nikes, AutoRAIDs, XPs, EMCs, ...) the crucial thing is to have as many active paths into the array as possible. This is the reason to create more LUNs. If your disk array has three physical paths, then the best way to configure is to create 3 LUNs of equal size for each volume group. The primary path for LUN0 should be through array channel A (alternate B); the primary path for LUN1 should be through array channel B (alternate C, e.g.); and the primary path for LUN2 should be through array channel C (alternate A). This concept can be applied to any number of array channels that the disk array has. One LUN for each external SCSI path - that's all you need for each VG.
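
As a rough sketch only (the device files and VG name below are invented; your primary and alternate paths will differ), building such a VG with LVM alternate links might look like:

    pvcreate /dev/rdsk/c4t0d0      # LUN0 seen through channel A
    pvcreate /dev/rdsk/c5t0d1      # LUN1 seen through channel B
    pvcreate /dev/rdsk/c6t0d2      # LUN2 seen through channel C
    mkdir /dev/vgdata
    mknod /dev/vgdata/group c 64 0x020000
    # the first device file listed for each LUN becomes its primary link
    vgcreate /dev/vgdata /dev/dsk/c4t0d0 /dev/dsk/c5t0d1 /dev/dsk/c6t0d2
    # add the same LUNs through the other channels as alternate links
    vgextend /dev/vgdata /dev/dsk/c5t0d0   # LUN0 alternate via channel B
    vgextend /dev/vgdata /dev/dsk/c6t0d1   # LUN1 alternate via channel C
    vgextend /dev/vgdata /dev/dsk/c4t0d2   # LUN2 alternate via channel A
    vgdisplay -v /dev/vgdata               # each PV should show an Alternate Link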

For each logical volume within the volume group, you then stripe across all three LUNs (in the above example). That's about as good as it gets.
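
A minimal example of that striping (the LV name, size and stripe size are arbitrary, and vgdata is the hypothetical VG from the sketch above):

    # -i = number of stripes (one per LUN), -I = stripe size in KB, -L = size in MB
    lvcreate -i 3 -I 64 -L 3072 -n lvstripe /dev/vgdata

That way the I/O for the LV is spread across all three channels rather than queuing behind one.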

Regards, Clay
If it ain't broke, I can fix that.