
Don Bentz
Regular Advisor

n4000 configuration

I posted this once before and received only 1 response. Please give me some advice if you can.
We are acquiring an N4000 server to be used for Oracle Financials applications. This will replace an existing K260 that has a Model 12 with (12) 4.3 GB drives and a Model 12H with (8) 18 GB drives and (4) 36 GB drives. I won't go into detail about the configuration, except that our big problem is a consistent disk bottleneck.
For the N4000, we are considering an FC60 with 3 SC10 enclosures, each with (8) 36 GB drives, plus an FC10 with (4) 9 GB drives and (4) 73 GB drives. We will probably mirror the entire capacity controlled by the FC60; the FC10 will be used to contain the archived redo logs and the "test" database. I realize that the 9 GB drives may seem a waste of money, but we didn't want to dedicate more expensive disks to a single use (i.e. the archived redo logs). Putting the archived redo logs on the FC10 with the "test" Oracle system should reduce channel contention when the redo log switches occur (i.e. the archived redo and active redo being read/written through the same Fibre Channel). Granted, Fibre Channel is fast, but my opinion is that "wire" bottlenecks are like litter: "Every litter bit hurts".
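For reference, the relevant init.ora settings would be something like this (a sketch; the mount point and file name format here are made-up examples, not our actual values):

# init.ora fragment (illustrative only)
log_archive_start  = true              # start the ARCH background process
log_archive_dest   = /u10/oraarch/     # a filesystem on the FC10, away from the active redo
log_archive_format = arch_%s.arc       # %s = redo log sequence number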
On the server, we are configuring 4 GB of memory and (4) 550 MHz processors, and we are considering putting each 1024 MB on a separate memory carrier. Some of the discussions we have looked at imply there is a significant performance benefit from this.
Does anybody have any comments/recommendations? Also, can anybody tell me anything about the FC60/SC10 reliability? My understanding is that with the FC60 you can use RAID 0, RAID 0/1 or RAID 5.
Insecurity is our friend. It keeps you dependent.
Carlos Fernandez Riera
Honored Contributor

Re: n4000 configuration

A fast response:

The FC60 is Fibre Channel + cache + SC10 disks (Ultra fast/wide SCSI).

The FC10 is FC Arbitrated Loop.

I have experience with the FC10, and they are really fast. The speed for FC Arbitrated Loop is 10 Mb/s.

I have no experience with the FC60; Ultra fast/wide SCSI is 80 MB/s.
Try to configure the FC60 with more disk enclosures and more cache. Maybe the same cost and more performance, plus disk protection (RAID 5).

An N4000 with (4) PA-8500s + 4 GB = powerful machine.
The N4000 has two I/O buses of 1.4 Gb/s, no comparison with the K-class.

I mean that, compared to the K, you should be thinking about your disk protection, not your I/O capacity.


unsupported
Solution

Re: n4000 configuration

Don,

I was involved with an FC60 implementation about 10 months ago, so I'm a little bit fuzzy on the details, but...

As I remember, you are correct in that the FC60 will do RAID 0, RAID 0/1 and RAID 5.

I'm not going to comment on performance, as this was a new system for us rather than an upgrade, and it's just running SAS rather than Oracle. Also, I've never used the Model 12H disk. However, a couple of things to bear in mind:

1. We've had no reliability issues at all with our FC60, or the FC10s we've had on site for the last 12 months.

2. The FC60 Fibre Channel is generally 5 times faster than F/W SCSI. I seem to remember the FC60 has two FC-AL connections to its host (I'm not sure if it can have more than that, or whether it now supports switched fabric rather than just FC-AL), and you see all your LUNs down both channels. Unfortunately, LVM doesn't load balance over the two channels, so for improved performance remember to alternate which channel is the primary and which is the standby when adding disks into your volume group. e.g. if you have two LUNs, 0 and 1, and you see them as:
/dev/dsk/c2t0d0, /dev/dsk/c3t0d0 LUN 0
/dev/dsk/c2t0d1, /dev/dsk/c3t0d1 LUN 1
Then if you are adding these to a volume group (assuming they've already been pvcreate'd), you would do something like this:
vgextend vg01 /dev/dsk/c2t0d0   # LUN 0 primary path, channel c2
vgextend vg01 /dev/dsk/c3t0d0   # LUN 0 standby (alternate link), channel c3
vgextend vg01 /dev/dsk/c3t0d1   # LUN 1 primary path, channel c3
vgextend vg01 /dev/dsk/c2t0d1   # LUN 1 standby (alternate link), channel c2
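As a quick sanity check (assuming the vg01 name above), vgdisplay lists each LUN twice, once per path, with the standby flagged as an alternate link, so you can confirm the primaries really do alternate between c2 and c3:

vgdisplay -v vg01 | grep "PV Name"   # each LUN should appear on both c2 and c3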

This means that both channels will be utilised, and you increase your throughput down your 'wire'.

3. There's a popular saying: 'Spindles Win Prizes'. That is, it doesn't matter how much on-board cache your disk array has or how fast your I/O channels are; the more physical spindles you can get, over the most channels, into your system, the faster things will go. Look very carefully at the ratio of
GB storage : spindles : I/O channels
You might find that rather than using the FC60, buying more FC10s and more Fibre Channel cards provides better throughput for you. Of course, then you're dependent on software mirroring/striping, so it's all relative!
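As a back-of-the-envelope example (the FC10 configuration here is made up purely for illustration, and mirroring overhead is ignored):

FC60 route: 3 SC10s x (8) 36 GB = 864 GB over 24 spindles and 2 channels, i.e. 12 spindles and ~430 GB per channel.
FC10 route: (4) FC10s, each with (8) 36 GB drives on its own card, = 1152 GB over 32 spindles and 4 channels, i.e. 8 spindles and ~290 GB per channel.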

4. Spreading the memory load over several carriers will enhance memory performance, and in fact the N class will tell you during its boot if memory isn't configured optimally over the memory carriers available. However, if disk is your bottleneck, this might not actually improve things for you (particularly if your database is on raw disk, as all database I/O will then bypass the buffer cache).

5. I was slightly concerned that you dismissed the archived redo to your FC10. I hope you still intend to mirror it on the FC10, or write a copy off to tape very quickly after you get it! Writes to the redo log are the only writes that Oracle absolutely guarantees will happen, and they are therefore very important; after all, I can get my datafiles back with redo, but I can't get my redo back from my datafiles. I protect my redo at least as well as the rest of the database, if not better.
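If the archive area on the FC10 is plain LVM, adding a mirror is a single MirrorDisk/UX operation. A sketch, where the volume group, LV and disk names are hypothetical:

pvcreate /dev/rdsk/c4t0d0           # prepare a second FC10 disk for the mirror
vgextend vgarch /dev/dsk/c4t0d0     # add it to the archive volume group
lvextend -m 1 /dev/vgarch/lvarch    # add one mirror copy of the archive LV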

Hope this helps

/dre

I am an HPE Employee
Steven Sim Kok Leong
Honored Contributor

Re: n4000 configuration

Hi,

Just a point: redo logs are written sequentially. Thus it is best to stripe your data filesystems (either RAID 0 or RAID 0+1) but leave your redo filesystems at RAID 1.
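If you were doing the striping in software rather than on the FC60, a sketch would look like this (the sizes and names are illustrative; note that classic HP-UX LVM won't mirror a striped LV, which is another reason to keep redo on a plain RAID 1 volume):

lvcreate -i 4 -I 64 -L 4096 -n lvdata vg01   # data: stripe across 4 LUNs, 64 KB stripe size
lvcreate -L 512 -n lvredo vg01               # redo: unstriped; mirror it (RAID 1) instead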

Hope this helps. Regards.

Steven Sim Kok Leong
Brainbench MVP for Unix Admin
http://www.brainbench.com
Printaporn_1
Esteemed Contributor

Re: n4000 configuration

Hi,

Just some more info.
The speed of each Fibre Channel on the FC60 is around 170 MB/s. There are two controllers, which do not load balance but back each other up. So plan how you bind your LUNs, and arrange the datafiles of your high-access tablespaces so the load is balanced across LUNs.
If you find that the load is not balanced across the controllers owning the LUNs, you can reassign LUN ownership later, after binding the disks.

And yes, the archive log should be mirrored rather than mirrored+striped.
That means if you want plain mirroring, you have to bind just 2 disks in the LUN; otherwise the FC60 will automatically configure it as RAID 0/1, which is striping+mirroring.

Regards,
Printaporn

enjoy any little thing in my life