
Re: VA7410 and LVM Striping Performance

 
Amy Bach_1
Advisor

VA7410 and LVM Striping Performance

Hi All,

We are having a little dilemma in our data center. We have two VA7410 disk arrays with one DS2405 enclosure each and a total of 20 drives on each VA/DS. One VA is connected to an rp8400 with 2 FC HBAs, and the other VA is connected to an rp7410, also with 2 FC HBAs.

On each system we created two large LUNs (one per RG), taking up all the space available for LUNs. We then created a volume group using these two LUNs. After creating the volume group, we added the alternate paths to the LUNs. We then created several logical volumes on the volume group without using LVM striping. Since the VA already mirrors and stripes, I have always been told that double striping is a big no-no.
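
For reference, a rough sketch of the layout described above (the device files and sizes here are made up, not taken from our systems):

  # One LUN per redundancy group (hypothetical device files)
  pvcreate /dev/rdsk/c4t0d1
  pvcreate /dev/rdsk/c4t0d2

  # Volume group built on the primary paths
  mkdir /dev/vgva01
  mknod /dev/vgva01/group c 64 0x010000
  vgcreate /dev/vgva01 /dev/dsk/c4t0d1 /dev/dsk/c4t0d2

  # Alternate paths through the second HBA added afterwards
  vgextend /dev/vgva01 /dev/dsk/c6t0d1 /dev/dsk/c6t0d2

  # Logical volumes created with no -i/-I options, i.e. no LVM striping
  lvcreate -L 10240 -n lvol_data /dev/vgva01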

One of our administrators strongly disagrees. He says that in order to fully utilize the bandwidth from the 2 FC HBAs on each system, the logical volumes should have been created using striping (i.e., lvcreate -L ## -n Name -i 2 -I Stripe_Size VG_NAME).

This "issue" has now been brought up by management and they want us to determine whether or not our current VA/LVM configuration is hindering performance by not having striped at the LVM level.

If anyone can give me suggestions/feedback on the subject, it would be much appreciated.

Thanks for your help!

-Amy

9 REPLIES
Bharat Katkar
Honored Contributor

Re: VA7410 and LVM Striping Performance

Amy,
Have a look at the similar discussion below. But I personally think what you have already done is okay.

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=607203

With regards,

You need to know a lot to actually know how little you know
Brian King
Advisor

Re: VA7410 and LVM Striping Performance

Amy,

Using whatever performance monitoring tools you have at your disposal, you should be able to prove that your FC interfaces are probably nowhere close to causing an I/O throughput bottleneck.

I went through this same exercise, and the idea of using LVM striping on top of the hardware striping already taking place proved to be a waste of time. It sounds like, in your configuration, you are already striping across sufficient spindles, and adding LVM striping will effectively buy you nothing.

Today's technology is far better than in days past. It used to be that LVM striping on top of some older arrays yielded positive results. In addition, if you do apply LVM striping on top of hardware striping, you need to be careful (and aware) of the hardware and LVM stripe sizes, block size, etc. so as not to waste space. It can be a time-consuming task with very little (if any) benefit.

Hope this helps,

Brian
Yew Lee
Advisor

Re: VA7410 and LVM Striping Performance

While striping allows IO to occur on both FC paths, it can create complexity when you have to add more disks to some logical volumes in the future. So do take that into consideration.

I would suggest you do application load testing on a plain LVM setup first. Try to balance the IO load by moving logical volumes around. Make LVM striping your last choice. I have seen a huge increase in performance doing LVM striping across NIKE 20 LUNs, but I have not done LVM striping on the newer storage arrays.
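
For example, roughly (the device files and LV name here are made up):

  # Move the extents of a busy logical volume from the heavily used LUN
  # to the lightly used one; pvmove can run while the volume group is active
  pvmove -n /dev/vgva01/lvol_data /dev/dsk/c4t0d1 /dev/dsk/c4t0d2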
On the move....
Roger Buckthal_2
Frequent Advisor

Re: VA7410 and LVM Striping Performance

There is no performance value gained by striping LUNs within the same RG. The VA already does this. However, striping between LUNs on different RGs can provide load balancing for the disks, the controllers and the host FC links.

The gain you might receive will be based on how out-of-balance your current system is. The more balanced your system is, the less gain... obviously. The VA keeps statistics on balance, but they are hard to examine without the right tools. sar might also help.

However, you have another potential problem with this configuration: queue depth. By default, HP-UX provides a host queue depth of 8 per LUN. You have one LUN per RG of 10 disks, so that is a queue depth of 8 for 10 disks, which is potentially not enough. My rule is that you need at least 2x the number of disks, i.e., a total queue depth of 20 per RG.

You have three options to fix this: 1) create more LUNs per RG, 2) change the kernel default queue depth per LUN, or 3) change the individual queue depth on each LUN.

The easiest, and my recommendation, is to change the kernel default. (Next would be to change the individual queue depths with the scsictl command.)
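
Roughly, for options 2 and 3 (the device file and the value 20 are examples only; the exact tunable interface depends on the HP-UX release):

  # Option 3: per-LUN queue depth, set and checked with scsictl
  scsictl -m queue_depth=20 /dev/rdsk/c4t0d1
  scsictl -a /dev/rdsk/c4t0d1

  # Option 2: raise the system-wide default, scsi_max_qdepth
  # (kctune on 11i v2/v3; kmtune on earlier releases)
  kctune scsi_max_qdepth=20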
Tim D Fulford
Honored Contributor

Re: VA7410 and LVM Striping Performance

Hi

I seem to be at odds with most of the above... The VA7410 has two RGs. You should have at LEAST two LUNs, one from each redundancy group. To make the best use of the FC links and controllers, stripe across these two LUNs. Alternatively, you could easily use extent-based striping (lvcreate -D y -s g ...).
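
As a sketch of the extent-based approach (the PVG names and device files are made up):

  # /etc/lvmpvg: one physical volume group per redundancy group
  VG /dev/vgva01
  PVG RG1
  /dev/dsk/c4t0d1
  PVG RG2
  /dev/dsk/c4t0d2

  # Extent-distributed, PVG-strict logical volume across the two LUNs
  lvcreate -D y -s g -L 10240 -n lvol_data /dev/vgva01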

If you only create one LUN in one RG then half the disks will be idle and one controller will be idle.

Personally I have found that more LUNs give better performance than fewer LUNs (see my questions on VA7410). That said, if you do use two LUNs, make sure that scsi_max_qdepth is larger than the default of 8, say 64.

Regards

Tim
-
Steve Lewis
Honored Contributor

Re: VA7410 and LVM Striping Performance

A different point to note is that your VA will not give you peak performance when you have allocated all the available space - EVEN IN RAID 0/1 MODE. Always leave some free and make sure that you have allocated at least one hot spare.

We have been told by HP that our VA7410 performance is degraded because we allocated nearly all available space, and that it moves blocks around (scrubbing) even in RAID 0/1, not just in AutoRAID mode.

To give you an example, when the array was new our backup shifted a constant 70,000 blocks per second on each LUN to the local tape drive. Now that we have less than 70 GB free on each RG out of 1.2 terabytes, the backup goes at 35,000-40,000 blocks per second.

Our array is hard-configured RAID 0/1, which we thought would give us more performance, but the reality is not so clear-cut.

I suspect that it has de-sequenced some blocks in our production system because they contain historical (unchanging) data. I didn't think it would do this in RAID 0/1 mode.

For your striping issue, my advice is to monitor how much each controller is actually being used. If there is an imbalance of load, then it may be worthwhile moving stuff around or even trying a stripe. One word of caution: make sure you are going down the primary path and not across the (slow) internal bus in the VA to get to your RG LUN.
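
Even plain sar will show the skew (the interval and count below are arbitrary):

  # 60 samples at 5-second intervals; compare %busy and avque for the
  # device files behind each controller/RG
  sar -d 5 60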

Roger Buckthal_2
Frequent Advisor

Re: VA7410 and LVM Striping Performance

Steve, the VA does not move data "around" when the whole array is set to "RAID 1+0 only" mode. For normal operation, you do not need to have additional reserved capacity (in RAID 1+0 mode). However, during a disk failure, the array can potentially convert some of the data to RAID 5DP in order to have sufficient space for the rebuild to complete. Additional unused capacity can mitigate this conversion, thus minimizing the performance effects of rebuilding the missing disks. (When the failed disk is replaced, the controller will convert any RAID 5DP data back to RAID 1+0.)

I'd point out that the VA implements a "virtual spare disk", not a physical spare disk as in traditional arrays.

The performance difference you experienced must be explained by something else.
Amy Bach_1
Advisor

Re: VA7410 and LVM Striping Performance

Thank you all for your suggestions. Here is some additional info that is causing us some concern. As stated before, one LUN was created in RG1 and the other in RG2, and both of them make up one volume group. When we look at the statistics, one of the LUNs is 98% full and the other one is only about 13% full. This clearly shows that, since we didn't LVM stripe, the LUNs are not being used evenly; whether this is affecting performance or not, we don't know. Users are not complaining, so we are happy, but we are worried that it may present a performance problem down the road.

One thing that concerns us more than the uneven use of the LUNs is the fact that the array appears to be in a constant "disk scrubbing" mode or "Optimizing array" mode. The "Optimizing array" message comes and goes, but the "disk scrubbing" appears to be almost always going on. Is this normal? Does it affect performance in any way, and what could I do to fix the problem (if any)?
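
For reference, per-LUN usage like this can be checked straight from LVM, roughly as follows (the device files are made up):

  # Free vs. allocated extents per physical volume in the volume group
  vgdisplay -v /dev/vgva01
  pvdisplay /dev/dsk/c4t0d1
  pvdisplay /dev/dsk/c4t0d2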

Again, thank you all for your help and quick responses!

Thanks!
-Amy
Peter Mattei
Honored Contributor

Re: VA7410 and LVM Striping Performance

Amy

If I were you, I would build 2 LUNs per server (one per RG/controller), increase the queue depth for those LUNs, and put the two LUNs in one VG to balance the load between the controllers.

For reference see the attached advisory that can also be found here:
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?locale=en_US&taskId=120&prodSeriesId=89018&prodTypeId=12169&objectID=lpg35039

Cheers
Peter
I love storage