Disk Arrays

Config opinions please?

Tim Medford
Valued Contributor

Config opinions please?

I'm setting up a new AutoRAID 12H, to be used exclusively by an Oracle database, mostly OLTP with some batch. The array has dual controllers with 96MB cache each, 9 x 36GB drives, and 2 SCSI channels coming out of the server.

Here's what I did:
1) Took all defaults on config (hot spare is active, etc...)

2) Created 4 LUNs of 36GB each.

3) Created 4 standard physical volumes.

4) Created 2 volume groups, striped across controllers, using 2 LUNs for each VG:
vgcreate /dev/vg01 /dev/dsk/c4t0d0 /dev/dsk/c6t1d1
vgcreate /dev/vg02 /dev/dsk/c6t1d2 /dev/dsk/c4t0d3

5) Ran vgextend on each VG to add the alternate links:
vgextend /dev/vg01 /dev/dsk/c6t1d0
vgextend /dev/vg01 /dev/dsk/c4t0d1
vgextend /dev/vg02 /dev/dsk/c4t0d2
vgextend /dev/vg02 /dev/dsk/c6t1d3

6) Created a number of logical volumes, spread evenly over the VGs based on expected I/O volume. For example:
lvcreate -i2 -I8 -L 2048 -n ora_work1 /dev/vg02

7) Created filesystems (basic VxFS settings; the only special setting was an 8K block size).
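One sanity check worth running after a setup like the above (a sketch only; these are HP-UX commands using the device names from this post, and the exact output layout varies by OS release): vgdisplay -v lists each physical volume together with its alternate link, which confirms that every LUN really has one path through each controller.

```shell
# Hypothetical verification on the HP-UX host (names from the post above).
# vgdisplay -v shows each PV with its "Alternate Link", so you can confirm
# that every LUN has one path via c4 and one via c6:
vgdisplay -v /dev/vg01
vgdisplay -v /dev/vg02

# ioscan shows which hardware path (and hence which SCSI channel and
# controller) each disk device file maps to:
ioscan -fnC disk
```

These commands only inspect state, so they are safe to run on a live system.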

Does all this seem reasonable? Would you have done anything differently? Specifically, I'm wondering about the number of LUNs, the number of VGs, and the 8K stripe size on the LVs. I set the stripe size to 8K to match the filesystem block size, which in turn matches the Oracle block size.

I would really appreciate some experienced words of wisdom! I'm not very familiar with the AutoRAID technology.


Insu Kim
Honored Contributor

Re: Config opinions please?

It looks good.
This is what I'm doing to handle load balancing:

vgcreate /dev/vg01 /dev/dsk/c4t0d0 /dev/dsk/c4t0d1
vgcreate /dev/vg02 /dev/dsk/c6t1d2 /dev/dsk/c6t1d3

Then extended each VG with the alternate links:
vgextend /dev/vg01 /dev/dsk/c6t1d0
vgextend /dev/vg01 /dev/dsk/c6t1d1
vgextend /dev/vg02 /dev/dsk/c4t0d2
vgextend /dev/vg02 /dev/dsk/c4t0d3

Traffic to vg01 will be routed to controller A (assuming controller A is at SCSI ID 0) and traffic to vg02 will go to controller B on the AutoRAID.

Never say "no" first.
Bob Inglis
Trusted Contributor

Re: Config opinions please?

You have both missed one thing. With a volume group of more than one physical volume, the subsequent PVs don't get used until the previous PV is full. If the second PV is on a second controller, it won't get used until the first PV is full. Both configurations end up using one controller for one VG and the other controller for the other VG. This may be OK if both VGs are used equally.
The only way to ensure load balancing between the controllers is to use LVM striping.
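The difference Bob describes shows up in the two forms of lvcreate (a sketch; the VG name and sizes are taken from this thread and are illustrative):

```shell
# Without -i, extents are allocated from the first PV until it fills up,
# so all I/O initially lands on one LUN and therefore one controller:
lvcreate -L 2048 -n ora_concat /dev/vg01

# With -i/-I, extents alternate between the two PVs in 8K stripes,
# so both controllers see I/O from the start:
lvcreate -i 2 -I 8 -L 2048 -n ora_striped /dev/vg01
```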
Plan for the future and tomorrow will take care of itself.
Tim Medford
Valued Contributor

Re: Config opinions please?

Bob - Thanks for the reply. I want to make sure I understand you correctly though.

When I created the logical volumes, I specified the "-i2" option, which told LVM to stripe across 2 drives. In my case the 2 "drives" are 2 separate LUNs, and they should be accessed through different controllers because of the way I set up the VGs (I thought).

When I created the VGs, I used 2 device files, one through controller X and one through controller Y.

Is there more to the LVM striping than this? I'm confused.

Thanks, I really appreciate the help.
Dave Wherry
Esteemed Contributor

Re: Config opinions please?

I try to stay away from LVM striping because (I believe this is correct) if you want to lvextend a logical volume later, you may not be able to. I think the stripe has to be contiguous. Rather than LVM striping, try extent-based striping.
Add your 2 disks, 1 from each controller, to your volume group. Also put them in a physical volume group. This can be done with vgcreate, in SAM, or by editing the file /etc/lvmpvg. The entry would look like this:
VG /dev/vg01
PVG PVG1
/dev/dsk/c4t0d0
/dev/dsk/c6t1d1

Then when you create your logical volumes, use the distributed option, -D:
lvcreate -D y -s g -n lvol1 /dev/vg01
lvextend -L 1024 /dev/vg01/lvol1 PVG1

This creates a 1GB logical volume whose extents are distributed across the 2 disks. Works great. If you want to extend this logical volume later, you can. You can also mirror the logical volume if you want to. If you use LVM striping, you cannot mirror it as well.
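If mirroring is wanted later, the extent-distributed layout lets it be layered on afterwards (a sketch; this assumes the MirrorDisk/UX product is installed and that additional PVs are available in the VG to hold the mirror copies):

```shell
# Add one mirror copy of the existing LV; with strict allocation,
# LVM places the mirror extents on PVs other than the originals:
lvextend -m 1 /dev/vg01/lvol1
```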