
Number of Volume Groups

 
Simon R Wootton
Regular Advisor

Number of Volume Groups

We have an N-class server running HP-UX 11.00 with 450 GB of disk, triple mirrored to give us 150 GB usable. Currently we have 20 volume groups spread across these disks. Would we be better off with fewer volume groups? If so, why? What are the pros and cons?
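For reference, this is roughly how I'm surveying the current layout (a sketch; it assumes our VGs follow the usual /dev/vg* naming convention):

    # Count the configured volume groups.
    ls -d /dev/vg* | wc -l

    # Per-VG summary: name, LV count, PV count, free extents.
    for vg in /dev/vg*
    do
        /usr/sbin/vgdisplay $vg | egrep 'VG Name|Cur LV|Cur PV|Free PE'
    done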
6 REPLIES
Ron Irving
Trusted Contributor

Re: Number of Volume Groups

I'm no guru, but just looking at your setup, performance would be better with fewer volume groups, yes? Less overhead and all. It might make your life a little easier, too.

ron
Should have been an astronaut.
Bill McNAMARA_1
Honored Contributor

Re: Number of Volume Groups

The memory LVM needs to manage the VGs is more of a strain than a performance issue.
Really it just comes down to ease of management.
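One concrete thing to check (a sketch, from memory -- verify on your box): the kernel tunable maxvgs caps how many VGs can be configured, and each activated VG keeps its own management structures resident in kernel memory. I believe the 11.00 default is 10, so with 20 VGs yours has presumably been raised already.

    # Query the current setting on 11.x.
    /usr/sbin/kmtune -q maxvgs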

Bill
It works for me (tm)
Mark Mitchell
Trusted Contributor

Re: Number of Volume Groups

The overhead of three mirrors is going to have more impact than any number of VGs. Personally, I haven't had any issues with large numbers of VGs.
Shahul
Esteemed Contributor

Re: Number of Volume Groups


Hi,

Since you have 3-way mirroring, I think fewer VGs would be the way to go. This is just a suggestion.
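If you do consolidate, bear in mind there is no in-place merge of VGs in LVM; the data has to be backed up and restored. A rough sketch with hypothetical names (vg15 retired into vg01, filesystem /data15, tape drive /dev/rmt/0m):

    # 1. Back up the filesystem(s) in the VG being retired.
    fbackup -f /dev/rmt/0m -i /data15

    # 2. Tear down the old VG: LVs first, then deactivate and export.
    umount /data15
    lvremove /dev/vg15/lvol1
    vgchange -a n /dev/vg15
    vgexport /dev/vg15

    # 3. Reinitialise the freed disk and add it to the surviving VG.
    pvcreate -f /dev/rdsk/c2t3d0
    vgextend /dev/vg01 /dev/dsk/c2t3d0

    # 4. Recreate the LV and filesystem there, then restore the data.
    lvcreate -L 4096 -n lvdata /dev/vg01    # size in MB
    newfs -F vxfs /dev/vg01/rlvdata
    mount /dev/vg01/lvdata /data15
    frecover -f /dev/rmt/0m -x -i /data15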

Shahul
Manju Kampli
Trusted Contributor

Re: Number of Volume Groups

Having more VGs creates an additional set of data structures in the kernel for each VG, which consumes kernel storage space and a bit more memory. I have never read of a system performance impact from the number of VGs, though. If your system is short on memory, reducing the number of VGs may help; otherwise the impact is almost nil.
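(You can see those per-VG kernel hooks directly: each VG has a group device file whose minor number is how the kernel identifies it. A quick sketch:)

    # One group file per VG; 20 VGs means 20 of these.
    ll /dev/vg*/group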

Never stop "LEARNING"
Thierry Poels_1
Honored Contributor

Re: Number of Volume Groups

Hi,
I don't think a small number of VGs (<50) will have much impact on system performance. We're not talking about hundreds of VGs on a 96 MB RAM server.
The more VGs you have, the easier it is to spread your data physically over your disks.
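For example, placing a logical volume on a particular disk looks like this (a sketch; the VG, LV, and device names are hypothetical):

    # Create an empty LV, then extend it onto a specific disk so
    # its extents land exactly on the PV you choose.
    lvcreate -n lvapp /dev/vg01
    lvextend -L 1024 /dev/vg01/lvapp /dev/dsk/c1t2d0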
regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.