
SureStore 12H Disk Array performance question

 
Dan Ryan
Advisor

SureStore 12H Disk Array performance question

I have a 12H with six 18 GB drives. arraydsp -6 recommends adding another disk, as I/Os are stacking up in the command queues for the disks.

I've added 3 more disks, but this hasn't changed the performance recommendation.

The working set is still 0.100.

How do I spread the I/O across the new disks?
Do I have to create a new LUN and migrate (pvmove) data to it?

Thanks for your time and consideration
Don't have time to do it right the first time, but always time to do it twice
9 REPLIES
Pete Randall
Outstanding Contributor

Re: SureStore 12H Disk Array performance question

Dan,

In order to reduce the queue length, you're going to need to redistribute your data, spreading it across more drives. So yes, create more LUNs and move some of the data into them.

HTH,
Pete
A. Clay Stephenson
Acclaimed Contributor

Re: SureStore 12H Disk Array performance question

No, the 12H will balance the I/O and gradually move RAID 5 data to RAID 1/0. What you don't want to do is create more LUNs. The AutoRAID performs best when two criteria are met: 1) no more than about 50% of the capacity is allocated as LUNs; 2) the traffic is split so that the I/O is balanced across both external SCSI paths. Ideally, each VG comprises 2 identical LUNs, with LUN 0's primary path through controller X (alternate Y) and LUN 1's primary path through controller Y (alternate X). Each LVOL is then striped across both LUNs, typically in 64K chunks.
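As a minimal HP-UX LVM sketch of that layout (the device files, VG name, and LVOL size are hypothetical; substitute the LUN paths from your own ioscan output):

    # LUN 0 (primary path via controller X) and LUN 1 (primary via controller Y)
    pvcreate /dev/rdsk/c4t0d0
    pvcreate /dev/rdsk/c5t0d1

    # Build the volume group from the two LUNs' primary paths
    mkdir /dev/vgdata
    mknod /dev/vgdata/group c 64 0x010000
    vgcreate /dev/vgdata /dev/dsk/c4t0d0 /dev/dsk/c5t0d1

    # Add each LUN's alternate path through the other controller
    vgextend /dev/vgdata /dev/dsk/c5t0d0 /dev/dsk/c4t0d1

    # Stripe the LVOL across both LUNs in 64 KB chunks (4 GB LVOL here)
    lvcreate -i 2 -I 64 -L 4096 -n lvol1 /dev/vgdata

The -i 2 / -I 64 pair is what produces the 64K striping across the two LUNs; the alternate links give you the controller failover.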
If it ain't broke, I can fix that.
Pete Randall
Outstanding Contributor

Re: SureStore 12H Disk Array performance question

Thanks, Clay.

Pete
S.K. Chan
Honored Contributor

Re: SureStore 12H Disk Array performance question

You wouldn't have to do that. The whole architectural design of the 12H "AutoRAID" is supposed to take care of the "load balancing" (if you want to call it that) for you, because all the disks in a 12H are accessed over the same internal bus anyway (if you have 2 controllers, then 2 internal buses). All you have to do to keep it running well is make sure you maintain enough RAID 0/1 space, and by adding more disk modules you've already done that.
Dan Ryan
Advisor

Re: SureStore 12H Disk Array performance question

Thanks, Pete.

Here's my plan:

I've got one LVOL spread across four 8 GB LUNs. The LUNs were created while we had 4 disks in the array.

So I created a 32 GB LUN, which should be spread across all the disks in the array.

I added the LUN to the volume group and will be doing a pvmove from each of the four 8 GB LUNs to the 32 GB LUN.
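For the record, the move itself is just one pvmove per source LUN; the device files and VG name below are hypothetical stand-ins for my actual paths:

    # Drain each old 8 GB LUN onto the new 32 GB LUN
    pvmove /dev/dsk/c4t0d0 /dev/dsk/c4t4d0
    pvmove /dev/dsk/c4t1d0 /dev/dsk/c4t4d0
    pvmove /dev/dsk/c5t0d0 /dev/dsk/c4t4d0
    pvmove /dev/dsk/c5t1d0 /dev/dsk/c4t4d0

    # Once they're empty, drop the old LUNs from the volume group
    vgreduce /dev/vgdata /dev/dsk/c4t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t0d0 /dev/dsk/c5t1d0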

I'm not sure what going from 4 LUNs to 1 LUN will do, because I'm picking up more spindles even though it looks like fewer disks.

I'll let you know how it turns out.
Don't have time to do it right the first time, but always time to do it twice
Dan Ryan
Advisor

Re: SureStore 12H Disk Array performance question

Thanks everyone,
I've noticed some minimal activity on the new disks. Overall, the queue lengths being reported by glance are greatly reduced, even though the LUNs are reported at 100% utilization.

In any case, I'm going to let the 12H percolate overnight and see how it looks in the morning.
Don't have time to do it right the first time, but always time to do it twice
A. Clay Stephenson
Acclaimed Contributor

Re: SureStore 12H Disk Array performance question

Dan,

AutoRAIDs don't work like that at all. Regardless of the number of disks that were in the array when a LUN was created, the array automatically balances data across all the available disks. You absolutely, positively cannot say that LUN 3, for example, is spread across disks 1A, 3B, 5A, and 5B. You have no control whatsoever over the physical layout of the data.

If it ain't broke, I can fix that.
Pete Randall
Outstanding Contributor

Re: SureStore 12H Disk Array performance question

Dan,

After doing some research so I don't end up sticking my foot in my mouth again, I can offer this excerpt from the AutoRAID User's Guide in support of what Clay has been saying:

Increasing the amount of RAID 0/1 space available

If the write working set is exceeding the amount of available RAID 0/1 space, you can restore performance by increasing the amount of RAID 0/1 space. You can do this in one of the following ways:

- Include a disk and leave its capacity unallocated. This is an effective way of permanently increasing the amount of RAID 0/1 space available for the write working set.

- Delete an unneeded existing logical drive (LUN) and leave its capacity unallocated. This too will permanently increase the amount of RAID 0/1 space available for the write working set.

- Add a disk and create a new logical drive (LUN) with its capacity. This is a temporary way of increasing the amount of available RAID 0/1 space. As the new logical drive begins to fill up with data, it will be converted to RAID 5 space and you may again find that the available 10 percent RAID 0/1 minimum is too small to accommodate the write working set.

Clay's suggestion (the first bullet) is obviously the best idea in the long run. Mine (the third bullet) *MIGHT* offer temporary relief only.
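If you want to see how much capacity is left unallocated before and after, arraydsp should show it. I believe the invocations below are right, but the flags vary by utility version, so check the man page; the array ID is a placeholder:

    # List the array IDs visible to the host
    arraydsp -i

    # Show capacity, allocation, and status for one array
    arraydsp -a <array-id>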

Good luck,
Pete
Dan Ryan
Advisor

Re: SureStore 12H Disk Array performance question

Well, it looks like adding the disks to the 12H has increased the disk performance. The primary indicator is that the end users and developers have told me they are doing more transactions, faster. Pretty smart of me to recommend the additional disks!

The only issue I have is that when I look at the 12H, the original six disks are flashing like crazy, while the three new disks flash only occasionally, once or twice every three seconds.

I didn't change the original configuration: four 8 GB LUNs (split across two controllers) assigned to a single LVOL, and a 20 GB LUN assigned to a second LVOL. Adding three disks appears to have greatly reduced the disk queueing (req queue) reported by glance/PerfView. It has done nothing to reduce the disk queueing reported by the 12H.

Adding more disks to the 12H shouldn't have impacted the 12H's RAID 1/RAID 5 performance, as the "working set size" was 0.1 and "relocate blocks" was 0 before the disk add.

The idea was to reduce the 12H's internal queueing, making the array service the I/O faster. According to the 12H stats, the SCSI queueing is about the same; the disk queueing reported by the OS is reduced.
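For comparing the two views over time, I'm watching the OS side with plain sar (the interval and count here are arbitrary):

    # 12 samples at 5-second intervals; avque is the average request
    # queue depth, avwait/avserv are in milliseconds
    sar -d 5 12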

At the end of the day, the only remaining concern is mine: why aren't these new disks doing more work?

Thanks again for your consideration.
Don't have time to do it right the first time, but always time to do it twice