1 LUN vs n LUN @ EVA 8K

 
Jack Fan
Regular Advisor

1 LUN vs n LUN @ EVA 8K

All,
I have some questions that still confuse me, and I need your help explaining them.

Here is my hardware spec:
1. Superdome
2. EVA 8K + 73GB*48

Now, based on an I** consultant's suggestion, only one LUN was created on the EVA 8K and mounted on my Superdome server for the SAP ERP system.

But disk utilization on the Superdome always hits 100%, with no disk queue.

My question is this: I want to tell my boss that "creating multiple LUNs on the EVA 8K is better than a single LUN from the OS point of view".

1) Is there any major difference between multiple LUNs and a single LUN on the EVA 8K from the OS point of view?

2) Is there any difference between 100% disk utilization across multiple LUNs (PVs) and 100% disk utilization on a single LUN (PV)? How is disk utilization calculated?

3) Is there any performance enhancement from using multiple LUNs?

Thanks,
Jack Fan
13 REPLIES
sajeer_2
Regular Advisor

Re: 1 LUN vs n LUN @ EVA 8K


Hi Jack,

Have a look into this post:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1055954
Check the attached document from Peter.

sajeer
Alzhy
Honored Contributor
Solution

Re: 1 LUN vs n LUN @ EVA 8K

Jack,

Based on the architecture of the EVAs, I will wholeheartedly agree with the "consultant". Hey, they're consultants because they're supposed to know better, right?!

Seriously though, he is correct. With the EVA, you cannot go wrong with a single large LUN (whether used raw or cooked). All the striping and optimization is done inside the EVA...

I will, however, advise you to increase the SCSI queue depth of your large LUNs to something greater than the default. The EVA ports can handle up to 2048, so I would adjust it according to the number of hosts connected to the same EVA port. Most of my clients who use the EVA the same way you do have opted for queue depths anywhere between 64 and 256.
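For reference, a minimal sketch of doing that with scsictl on HP-UX (the /dev/rdsk path is a made-up example; substitute your actual EVA LUN device files):

   # Show the current queue depth for the LUN
   scsictl -m queue_depth /dev/rdsk/c8t0d1

   # Raise it to 128 for a large, busy data LUN
   scsictl -m queue_depth=128 /dev/rdsk/c8t0d1

Note that scsictl settings do not persist across reboots, so re-apply them from a startup script, or raise the scsi_max_qdepth kernel tunable if you want a higher global default.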

Also, do not be alarmed by Glance always showing 100% utilization. This is "normal": the default MWA/Glance setting alarms at 100% when disks report very high I/O rates. Be worried if you start seeing queuing.
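If you want to check for actual queuing, plain sar is enough (the interval and count here are arbitrary):

   # 12 samples, 5 seconds apart; watch avwait and avserv per device
   sar -d 5 12

If avwait stays near zero while %busy reads 100, the LUN is merely busy, not backed up.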


Hope this helps...

Hakuna Matata.
Ivan Ferreira
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

The last time I had an interview with an HP consultant, he recommended creating multiple LUNs even for an EVA. Why? Because for every LUN, an OS buffer/cache is assigned to it. So with more LUNs there is more cache, and performance should increase.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
A. Clay Stephenson
Acclaimed Contributor

Re: 1 LUN vs n LUN @ EVA 8K

You are now in an area where I wouldn't trust anyone if I wanted to know the "correct" answer for my environment. The ONLY reliable method is to test and measure for yourself.

Don't necessarily be concerned if Glance or any other host-based performance tool shows 100% utilization on an array LUN. All it knows (or can know) is that a tremendous amount of I/O is going through what it sees as one physical disk. (It has no way of knowing that what it sees as one "disk" is actually made up of n disks.) I have definitely seen cases where dividing the I/O into many LUNs made things APPEAR better to Glance, but the actual I/O rate was hardly affected.
If it ain't broke, I can fix that.
Alzhy
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

Jack,

For your reading pleasure:

http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf

http://www.oracle.com/technology/products/database/asm/pdf/HP-UX%20-ASM-StgWorks-MP%2002-06.pdf#search=%22EVA%20Striping%22

It appears the best practice of yesteryear for EVAs is still the same: using smaller LUNs and "striping" them under LVM/VxVM to create larger storage units does not offer much performance advantage.

It also appears that a single disk group (inside your EVA) still offers the best performance.

So in your case, I would probably instruct my EVA admin to simply configure all 48 of the EVA8K's 73GB disks as just one disk group. From this disk group, carve out your redo log LUNs (VRAID1) and one single large LUN for your SAP DB instance's mainstream files... And of course, don't forget to have the queue depth on these EVA LUNs increased to at least 4 via scsictl.

Hope this helps..


Hakuna Matata.
Dave Hutton
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

A (maybe small) reason why you may want to present two vdisks is load balancing across the EVA controllers.

If you present one vdisk and create the VG and lvols, it will only use one path (A, or whatever path you chose).

If you were to present two vdisks, create the VG and lvols, and alternate the primary paths between A and B, both controllers would be used.
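A rough sketch of that with LVM alternate links (all device files invented for illustration; assume c4 paths go through controller A and c6 paths through controller B):

   # vdisk 1: primary path on controller A, alternate on B
   mkdir /dev/vgsap
   mknod /dev/vgsap/group c 64 0x010000
   vgcreate /dev/vgsap /dev/dsk/c4t0d1
   vgextend /dev/vgsap /dev/dsk/c6t0d1    # PVLink (alternate path)

   # vdisk 2: primary path on controller B, alternate on A
   vgextend /dev/vgsap /dev/dsk/c6t0d2
   vgextend /dev/vgsap /dev/dsk/c4t0d2    # PVLink (alternate path)

The first path given for each PV becomes its primary, so I/O for vdisk 1 goes through A and for vdisk 2 through B until a path fails.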

Alzhy
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

But EVA 8Ks (as well as 4K/6K and upgraded EVA5K/3K) are now "active-active" arrays...

Hakuna Matata.
Dave Hutton
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

Nelson is right; I was still thinking of the 3K and 5K. I guess I was thinking it was PVLinks only, so the alternate path would only take over once the primary path failed.

Alzhy
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

Correction:

I said:
"And of course, don't forget to have the queue depth on these EVA LUNs increased to at least 4 via scsictl."

Obviously a queue depth of 4 is too small. I meant "at least 64 via scsictl".

If you have fewer servers (say 4) connected to your EVA, you can even increase the queue depth of your large LUN to 128! The redo log LUNs can stay at 8 or 16.

Hakuna Matata.
Hein van den Heuvel
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

>> for SAP ERP system

So follow the SAP best-practices documents first and foremost, followed closely by any DB best-practices guidelines.
The OS and EVA guidelines are only a 'hint', because they do not know how the system might be used.


>> I want to tell my boss that "creating multiple LUNs on the EVA 8K is better than a single LUN from the OS point of view"


Why do you 'want' that?
What are your reasons?
"If it ain't broke, then don't fix it"
100% busy on a monitor does not indicate a broken storage setup, just a broken monitor.

That said, I find a single big device a bit much.

One trivial reason... iostat over multiple devices tells you a lot about the system's performance characteristics.

The already-mentioned cache effects are another reason. You can tweak cache usage per LUN based on expected usage patterns.

I also used to like recommending that the REDO and ARCH files be separated into their own group, to acknowledge the specifically sequential, write-only nature of that data (OK, redo is read when it moves to arch). There is not much general-purpose evidence to back this up, though; it seems only specific (unnatural?) benchmarks suggest it.

Don't go too small either. Too many LUNs get confusing, and you'll have a hard time distributing data based on space requirements.

What are you going to do with the big LUN?
Use LVM to carve it up into small pieces anyway?
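If carving is the plan, it is only a handful of commands anyway (names and sizes invented for illustration):

   # One big LUN as a single PV, sliced into lvols under LVM
   pvcreate /dev/rdsk/c8t0d1
   mkdir /dev/vgsap
   mknod /dev/vgsap/group c 64 0x020000
   vgcreate /dev/vgsap /dev/dsk/c8t0d1
   lvcreate -L 10240 -n lvsapdata1 /dev/vgsap    # -L is in MB, so 10 GB
   lvcreate -L 4096 -n lvsaplog /dev/vgsap

...which puts you right back to managing many pieces, just at the LVM level instead of the EVA level.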

Be sure to look around for similar topics, here in this forum, in the storage forum, and elsewhere.

hth,
Hein.
HvdH Performance Consulting


Alzhy
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

I think the consultant wants something like:

/dev/eva/sapredo - 10G (VRAID1)
/dev/eva/saparch - 200G (VRAID1)
/dev/eva/sapdb1 - 2048G (VRAID1 or VRAID5)
/dev/eva/sapdb2 - 2048G (for growth)...

All VRAIDs would come from just one "disk group" inside the EVA, encompassing ALL 48 of the 73-giggers, for performance... This is what I would actually recommend myself...

There is no online LUN expansion support (yet) in 11.11 (or even 11.23). Once that becomes possible, then in the above example we would simply enlarge the LUN for, say, /dev/eva/sapdb1 whenever the need arises...


Pretty slick and easy on them EVAs, eh?


Hakuna Matata.
Jack Fan
Regular Advisor

Re: 1 LUN vs n LUN @ EVA 8K

Based on the description in this link/doc provided by Nelson: http://www.oracle.com/technology/products/database/asm/pdf/HP-UX%20-ASM-StgWorks-MP%2002-06.pdf#search=%22EVA%20Striping%22

This document says...

'To leverage I/O distribution across as many resources as possible, it is best to present more than one LUN to a disk group (allowing ASM to do the striping).', page 11.

Is that right?

Jack Fan
Alzhy
Honored Contributor

Re: 1 LUN vs n LUN @ EVA 8K

Jack,
I think that bullet specifically applies to StorageWorks XP arrays (a.k.a. Hitachi Lightning, TagmaStore, or USP), where the only way to get the best performance is to stripe across as many "array groups" and "array control processors" as one can.

On the EVA, the controllers do that for you. ALL the disks in a created "disk group" inside your EVA are "engaged", so the more disks you have in your disk group, the higher the performance across however many LUNs you create, regardless of their sizes. The white paper referenced after that section actually proves the point that there is indeed no "penalty" in double striping (i.e. striping under LVM or ASM what is already striped inside the EVA). So there really is no realizable performance advantage to, say, a striped volume of 8 x 250GB EVA LUNs versus a single 2TB EVA LUN. Fewer LUNs, less management overhead.

And with HP-UX soon able to honour EVA online expansion (if it cannot already), storage management on your host becomes practically obsolete. Filesystems/volumes can sit on single LUNs, whose EVA-side expansion also filters down automatically to the host level.

You can actually try a benchmark... carve an LVM striped volume from, say, 8 x 250GB EVA LUNs and an LVM volume from just a single 2TB LUN (make them all VRAID1 or VRAID5)... Benchmark your app, or use any storage benchmarking tool: you'll be surprised that there is basically no difference in performance.
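A very rough sketch of such a test (LUN counts, sizes, and device names all illustrative; if I recall correctly, LVM stripe size tops out at 64 KB on these releases):

   # Striped LV across 8 EVA LUNs vs. a plain LV on one big LUN
   lvcreate -i 8 -I 64 -L 16384 -n lvstriped /dev/vgtest1
   lvcreate -L 16384 -n lvsingle /dev/vgtest2

   # Crude sequential-read timing against the raw LV devices
   timex dd if=/dev/vgtest1/rlvstriped of=/dev/null bs=1024k count=4096
   timex dd if=/dev/vgtest2/rlvsingle of=/dev/null bs=1024k count=4096

Run your real application's I/O pattern too if you can; sequential dd only tells part of the story.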


If, however, you have several EVAs hooked up to your server, the rules for the XP should apply: you can get better performance by striping across LUNs from each EVA array.

Hope this helps...
Hakuna Matata.