Disk Arrays

VA7410 performance question

Tim D Fulford
Honored Contributor


Hi all VA storage wiz kids...

I have a conundrum. We have been running our database on a single VA7400 with:
- 1GB cache per controller
- 30x18GB 15krpm disks.
- 12 even LUNs
- RAID1+0 mode
We are (in my opinion) IO bound, doing about 2,000-2,500 IO/s at 2.5 kB/IO. Each LUN gets about 3.2 ms service time, which I thought was pretty good.

That said, we knew that one VA would not be enough, so I thought: if we had two VAs and spread the IO evenly over them, we'd get twice the performance...

However, we installed 2xVA7410 with
- 1GB cache per controller
- 30x36GB 15krpm disk per VA7410 (total 60 disks).
- RAID1+0 mode
- 12 LUNs (6 per VA7410)
The load over the two arrays is even, and the service times are even across the LUNs at 2.1 ms. Now I expected (OK, designed for and calculated) LUN service times of 1.6 ms...
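For what it's worth, a back-of-envelope sketch of why doubling the arrays might not halve the service time: if each IO has a fixed device component plus a queueing component, only the queueing part shrinks when the same load is spread over twice the disks. The 1.0 ms fixed time below is purely an assumed, illustrative figure, not a measurement:

```shell
# Illustrative only: split the measured 3.2 ms into an assumed fixed part
# and a load-dependent wait, then halve only the wait.
awk 'BEGIN {
  fixed_ms = 1.0                  # assumed per-IO device time (not measured)
  wait_ms  = 3.2 - fixed_ms       # implied wait on the single VA7400
  printf "one VA:  %.1f ms\n", fixed_ms + wait_ms
  printf "two VAs: %.1f ms\n", fixed_ms + wait_ms / 2
}'
```

With that (assumed) split, the predicted two-array figure lands near the observed 2.1 ms rather than the naive 1.6 ms.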

o Any suggestions as to why the performance of 2xVA7410 is NOT twice VA7400
o Any suggestions about how we could "tweak" the performance up?

All answers/opinions/suggestions, as always, gratefully received and amply rewarded.

Regards

Tim
-
3 REPLIES
Patrick Wallek
Honored Contributor

Re: VA7410 performance question

How are you accessing the VA7410s? I would probably do something like 1 FC connection per controller per VA, for a total of 4 FC connections.

Next, how are your LVs set up? Are they striped across the RGs in each VA? What about striped across the VA7410s?

How are the LUNs in the VA set up? I was told with my VA7400 that odd-numbered LUNs should be in RG1 and even-numbered LUNs should be in RG2. Then, when setting up the VGs / LVs, the odd-numbered LUNs' primary access should be through controller 1 and the even-numbered LUNs should be accessed through controller 2. I don't know if that applies to the VA7410, but I would suspect so.
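As a sketch, that odd/even rule of thumb (as I was given it for the VA7400; not verified against VA7410 documentation) maps out like this:

```shell
# Rule of thumb: odd LUN -> RG1 / controller 1, even LUN -> RG2 / controller 2
for lun in 0 1 2 3 4 5; do
  if [ $(( lun % 2 )) -eq 1 ]; then
    echo "LUN${lun}: RG1, primary path via controller 1"
  else
    echo "LUN${lun}: RG2, primary path via controller 2"
  fi
done
```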
Tim D Fulford
Honored Contributor

Re: VA7410 performance question

Patrick

1 - The computers have 2 FC cards going to 2 FC switches (FCS). FCS1 goes to the VAs' controller 1s and FCS2 goes to the VAs' controller 2s. I think this is fine, as the bandwidth is less than 7 MB/s (2,500 IO/s @ 2.5 kB). There are 3 computers in a ServiceGuard cluster, but only ONE (the database server) accesses the VAs.
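Just to show the arithmetic behind that bandwidth figure (numbers taken from the earlier post):

```shell
# ~2500 IO/s at ~2.5 kB per IO, expressed in MB/s (1 MB = 1024 kB)
awk 'BEGIN {
  iops = 2500; kb_per_io = 2.5
  printf "%.1f MB/s\n", iops * kb_per_io / 1024
}'
```

About 6.1 MB/s, comfortably under the quoted 7 MB/s and only a small fraction of what a single FC link can carry, so the links themselves shouldn't be the bottleneck.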

2 - The LUNs are set up over the RGs as you say: LUN0, 2 & 4 in RG1 and LUN1, 3 & 5 in RG2. But it is configurable; e.g. I could have created LUN0, 1 & 2 in RG1 and LUN3, 4 & 5 in RG2 (but I did not).

3 - I've been especially careful to make sure the device files map to the native controller, e.g. /dev/dsk/c6t0d0 goes DIRECTLY to the relevant controller on the VA (c1 in this case). I used armtopology, armdsp -a and ioscan -fknCdisk to verify this. I know that if I reference, say, LUN1 via controller 1 it will need to go back and forth over the VA backplane/N-Way bus.

My thoughts really concern disk queues. In effect I have quadrupled the amount of storage, but in reality I've only doubled the LUN size. As such, I was wondering if the SCSI disk queue depth kernel setting may be to blame??
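To put a rough number on the queue question, Little's law (average outstanding IOs = arrival rate x service time) per LUN, using the figures from this thread, is a quick sanity check. Whether the relevant HP-UX tunable (scsi_max_qdepth on the systems I've seen, with a default of 8) actually caps anything is an assumption to verify, not a conclusion:

```shell
# Little's law sketch: avg outstanding IOs per LUN = (IO/s per LUN) * service time
awk 'BEGIN {
  iops_total = 2500; luns = 12; svc_s = 0.0021   # figures from the thread
  per_lun = iops_total / luns
  printf "%.1f IO/s per LUN, ~%.2f outstanding IOs per LUN\n", per_lun, per_lun * svc_s
}'
```

Under half an outstanding IO per LUN on average, so a per-device queue depth limit looks unlikely to be the cap at these rates, unless the IOs arrive in sharp bursts.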

Regards

Tim
-
Tim D Fulford
Honored Contributor

Re: VA7410 performance question

oops

I have also striped the data EVENLY across the VAs & LUNs, so:

I have a volume group vgdata containing ALL 12 LUNs from the two VAs, and created my logical volumes with:
lvcreate -i 12 -I 4 -L <size_MB> -n <lvname> vgdata
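For illustration, a plain-awk sketch (assuming -I is the stripe size in kB, as in HP-UX LVM) of how a 12-way, 4 kB stripe round-robins consecutive chunks across the stripe columns:

```shell
# Map a few logical offsets to stripe columns: column = floor(offset / stripe) mod n
awk 'BEGIN {
  stripe_kb = 4; nluns = 12
  for (off_kb = 0; off_kb <= 56; off_kb += 8)
    printf "offset %2d kB -> LUN column %d\n", off_kb, int(off_kb / stripe_kb) % nluns
}'
```

So with 2.5 kB IOs each IO touches a single 4 kB chunk, and sequential chunks rotate across all 12 LUNs (and hence, if the PVs alternate between the two arrays, across both VAs).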

Tim
-