Operating System - HP-UX

Logical Volume setup on EMC

Hi All,

I wanted to get some opinions from fellow sysadmins out there. We have a current setup where the LVs for an Oracle database are software striped on top of the EMC array (which is already H/W striped).

Would anyone know what the advantage of this is? I always thought that the EMC's 9 GB cache would more than handle I/O performance to the LUNs/disks?

Any info please?

Regards,
Achille
5 REPLIES
Jeff Schussele
Honored Contributor

Re: Logical Volume setup on EMC

Hi Achille,

I generally do not like to SW stripe a LUN that's *already* HW striped. Then again, sometimes a HW RAID 1+0 (no HW stripe) with a SW stripe on top is better than a HW stripe with a BIG HW cache. It all depends on how the data is being accessed and how much you want to gamble. With a big HW cache an OS crash probably won't kill you, because the writes are already in the HW cache - where they should be with a DB volume... unless you're running a big Oracle SGA and take the hit at a particularly bad time, before the SGA has flushed. Mount options make *all* the difference.
So I always go for performance and don't lose sleep over just *when* the crash hits - hell, you'll probably still have to pick up some pieces anyway, so go for the perf and keep the DBAs off your back....

My 2 cents,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Hein van den Heuvel
Honored Contributor

Re: Logical Volume setup on EMC

Disclaimer: I know nothing specific about EMC; I'm just using general controller knowledge.

For starters, if you want to present LUNs larger than one disk, then you must combine them somehow, so you might as well stripe.

The large cache is good for significant bursts, but not good enough for copying GBs of data. The cache is shared by all users/LUNs of the EMC, no? If you pump 100 MB/sec to your LUN, then after 10 seconds or so you have put 1 GB there and your portion of the cache is full. In the background the system has already been draining to the target disk (at 30 MB/sec?), but you are putting data in faster than it is being taken out. By striping across more disks you can configure things so that the back end keeps up, or so that the amount you fall behind per second can be covered by the cache. In a similar example: if you push in 100 MB/sec and drain at 3 x 30 MB/sec, the cache is filling up at a rate of 10 MB/sec. So if you have 2 GB to copy, you'll peak at 200 MB in the cache. Once you reach your quota in the cache, the transfer speed drops to the back-end/disk speed and striping may be needed.
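
A quick back-of-the-envelope sketch of that arithmetic (the 100 MB/sec inbound rate, 30 MB/sec per-disk drain rate and 2 GB copy size are just the assumed figures from the example above, not EMC specifics):

# assumed figures only - adjust for your own array
IN=100                           # host writes to the LUN, MB/sec
DISKS=3                          # back-end disks behind the LUN
PER_DISK=30                      # drain rate per back-end disk, MB/sec
COPY_MB=2048                     # total data to copy (~2 GB)
FILL=$((IN - DISKS * PER_DISK))  # net cache growth: 10 MB/sec
COPY_SECS=$((COPY_MB / IN))      # time to push the data in: ~20 sec
echo "peak cache use: $((FILL * COPY_SECS)) MB"   # ~200 MB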

fwiw,
Hein.
Michael Tully
Honored Contributor

Re: Logical Volume setup on EMC

There is always the 'SAME' process.

Stripe And Mirror Everything

I have an Itanium server (and an EMC 8530) which is giving us pretty ordinary disk I/O stats. I had just straight volumes set up and mounted as filesystems. Using glance and sar I was able to determine that certain LUNs are getting hammered. What I've done is stripe all volumes across 32 LUNs at a 64K stripe size. I am currently testing this now. When the tests are complete, I would be happy to post them back here. Even RAID 1/0 will give you some trouble (that's what I have). The Symm I have has a 16 GB cache.
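
For reference, a stripe like that is set up with the -i/-I options of lvcreate in HP-UX LVM. The volume group, lvol name and size below are placeholders only (not Michael's actual layout), and the VG must already contain the 32 LUNs as PVs:

# hypothetical names/size - 32-way stripe, 64 KB stripe size
lvcreate -i 32 -I 64 -L 65536 -n lvol_data /dev/vgora
# -i = number of PVs (LUNs) to stripe across
# -I = stripe size in KB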
Anyone for a Mutiny ?
Geoff Wild
Honored Contributor

Re: Logical Volume setup on EMC

We have EMC (DMX 1000s). The DMX is set up with striping and mirroring - the key is to stripe across as many disks as possible. We use 32 GB metas for data files and 8 GB metas for redo logs. I group 4 metas into 1 lvol for data. We also have 4 paths to each meta through a 2 Gb SAN. Our SAP system (on Oracle) is on 146 GB drives and our transaction system (Oracle again) is on 72 GB drives.

We don't use software striping at all - no need.
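
For what it's worth, grouping 4 metas into 1 lvol without any software striping is just a plain concatenated volume. The device files, VG name and sizes below are placeholders only, not Geoff's actual configuration:

# hypothetical device files - four 32 GB metas presented to the host
for d in c10t0d1 c10t0d2 c10t0d3 c10t0d4
do
    pvcreate -f /dev/rdsk/$d
done
mkdir /dev/vgora_data
mknod /dev/vgora_data/group c 64 0x010000   # minor number must be unique per VG
vgcreate /dev/vgora_data /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3 /dev/dsk/c10t0d4
# one ~128 GB lvol concatenated over the four metas - no -i/-I, so no SW striping
lvcreate -L 131072 -n lvol_data /dev/vgora_data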

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Hein van den Heuvel
Honored Contributor

Re: Logical Volume setup on EMC

Yes, there is a lot of power in SAME. It is not the be-all and end-all, but it will give near optimal performance with minimal (brain) effort.

Michael, I am suspicious about the 64K stripe recommendation (and even the 4K in a recent reply). My belief is that in general a 1 MB or 4 MB stripe will spread the I/O just as nicely and will not needlessly break up larger I/Os. I have a test in the pipeline for Itanium + EVA to confirm or deny this. I'd be interested in an EMC storage angle.
So if you are 'just testing' and have a chance to stick in a large chunk size, please do so. Be sure to include the DB load/copy and/or table scans in the test (those would do the large I/Os).
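
If the LVM release in use accepts a stripe size that large (worth checking against lvcreate(1M) first), the only change from a 64K setup would be the -I value; names and size here are placeholders again:

# hypothetical - same 32-way stripe, but a 1 MB stripe size
lvcreate -i 32 -I 1024 -L 65536 -n lvol_data /dev/vgora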

Cheers,
Hein.