12-19-2003 05:43 AM
VA7400 Vs 2x VA7410 performance
I have a conundrum. We have been running our database on a single VA7400 with:
- 1GB cache per controller
- 30x18GB 15krpm disks.
- 12 even LUNs
- RAID1+0 mode
We are (in my opinion) IO bound, doing about 2,000-2,500 IO/s at 2.5kB/IO. Each LUN gets about 3.2 ms service time, which I thought was pretty good.
That said, we knew that one VA would not be enough, so I thought: if we had two VAs and spread the IO evenly over the two, we'd get twice the performance...
However, we installed 2xVA7410 with
- 1GB cache per controller
- 30x36GB 15krpm disks per VA7410 (total 60 disks).
- RAID1+0 mode
- 12 LUNs (6 per VA7410)
The load over the two arrays is even, and the service times are even over the LUNs at 2.1 ms. Now, I had expected (OK, designed and did calculations for) LUN service times of 1.6 ms...
o Any suggestions as to why the performance of 2x VA7410 is NOT twice that of the VA7400?
o Any suggestions about how we could "tweak" the performance up?
All answers/opinions/suggestions, as always, gratefully received and amply rewarded.
Regards
Tim
12-21-2003 08:31 PM
Re: VA7400 Vs 2x VA7410 performance
What you are expecting is impossible!
You are already getting OUTSTANDING response times!
Think about it!
For reads: your host reads have to be served by the backend disks. In open systems, "cache hits" are not very common!
The physical limits of 15krpm disks are:
- 2ms average latency
- 3.6/3.9ms seek time
check it on http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah15k.3.pdf
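Those two figures put a floor under random-read service time. A minimal sketch of the arithmetic (using the datasheet numbers above; pure mechanics, ignoring transfer and queueing time):

```python
# Physical floor on random-read service time for a 15krpm disk.
# Average rotational latency is half a revolution; seek time is the
# datasheet's average read seek.
rpm = 15_000
avg_rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution = 2.0 ms
avg_seek_ms = 3.6                               # datasheet average seek

min_read_service_ms = avg_rotational_latency_ms + avg_seek_ms
print(min_read_service_ms)  # 5.6 ms, before any transfer or queueing time
```

No amount of striping lowers this floor for a read that actually has to touch a platter.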
For writes: you can achieve response times lower than 3 ms for writes only, and only while the cache is not full, i.e. while it does not need to destage data to the backend disks!
So, to reach your 2.1 ms you must already have a high write ratio with no backend saturation.
With your new configuration (2x VA7410) you will be able to satisfy many more IOs at low latency than with a single VA7400, but the limits of physics will remain!
Cheers
12-21-2003 09:22 PM
Re: VA7400 Vs 2x VA7410 performance
I'm afraid I disagree with the cache hit statement (I could not read the formatting in the reply, but I think I got the gist).
We get 100% write cache hits, and only about 20% read cache hits... there is 1GB of cache in each controller.
Caching aside, why do I get better performance (per $) from one VA? Why is it non-linear?
I did a quick calc..
1xVA7400 with 30 15krpm disks...
1 mirrored pair of disks ~ 6 ms service time, so 15 pairs (30 disks total) split into 12 LUNs should give 4.8 ms per LUN (e.g. 12 * 1000/4.8 == 15 * 1000/6.0). So with no caching I would expect the VA7400 to give 4.8 ms service time; the observed 3.2 ms means a 33% improvement due to caching.
2xVA7410 with 60 15krpm disks ...
1 mirrored pair of disks ~ 6 ms service time, so 30 pairs (60 disks total) split into 12 LUNs should give 2.4 ms; the observed 2.1 ms is only a 12.5% improvement from caching.
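The quick calc above can be written out as a couple of lines of arithmetic (a sketch of this naive no-cache model only, not of the array's actual behaviour):

```python
def expected_lun_service_ms(mirrored_pairs, luns, pair_service_ms=6.0):
    """Naive no-cache model: each mirrored pair sustains
    1000 / pair_service_ms IO/s, and the aggregate throughput is
    shared evenly across the LUNs."""
    total_iops = mirrored_pairs * 1000.0 / pair_service_ms
    return luns * 1000.0 / total_iops

print(expected_lun_service_ms(15, 12))  # 1x VA7400: 4.8 ms
print(expected_lun_service_ms(30, 12))  # 2x VA7410: 2.4 ms
```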
Anyway, I need to muddle off and think about this some more, and get some armperf stats to see exactly what the 2x VA7410 caching profile is.
Many thanks for the food for thought.
Merry Xmas
Tim
12-21-2003 11:09 PM
Re: VA7400 Vs 2x VA7410 performance
AFAIK we often see a (from the initial viewpoint, unexpected) DROP in performance when we add hardware.
For example, when you ADD memory to many PCs, servers, etc., the performance as seen by the end user (err... well, as measured by some clever piece of software) gets worse FOR A SMALL PIECE OF SOFTWARE, because the hardware has to chase through more circuits before working out where to put/find something.
I suspect you are seeing the same sort of phenomenon here: as there are more bits of hardware (VAs, paths, disks, etc.), the software (mainly, in this case) and hardware (I dunno enough about VAs myself) have to spend a LITTLE more time hunting around working out where to put/find the data, even when not very much is going on (i.e. everything is performing like a good 'un).
I would strongly suggest you will see the advantage of spending all those dollars as the workload builds up and the paths/disks/caches get busy...
In summary, advice to the wise:
a) Things ain't always simple & obvious (if they were we wouldn't have these jobs)
b) Salesmen don't always have a full grasp of reality.
Regards & seasons greetings
Eric
;-)
12-21-2003 11:59 PM
Re: VA7400 Vs 2x VA7410 performance
The 2x VA7410 gets a total of 55% caching (100% write + 25% read). The VA7400 gets 60% caching (100% write + 30% read).
So it does seem that 2x VA does not give the same caching levels (oh bug**r)! But it is not way off!
The other thing I noticed is the RAID1+0 Allocation figure. On the VA7400 it is 0.022GB; on BOTH VA7410s it is 15.22[1|3]. What is this? I have a 'orrible feeling it is redistributing data internally, and I don't know why.
Regards
Tim
12-22-2003 12:13 AM
Re: VA7400 Vs 2x VA7410 performance
The thing about cache:
If you write to a disk array and the write cache is not full, your IO will be serviced at electronic speed, 0.2 ms or so.
If you read from a disk array and your data does not reside in the array cache, you need to access the real backend, thus the disks!
Minimum disk latency and seek time stay the same no matter how many disks or LUNs you have. Therefore you will never achieve a service time lower than that of the fastest disk.
By adding more disks you do not lower the minimum service time, but by spreading the IOs you allow many more of them to occur before you reach the limits (saturate).
See the attached PDF where I created graphs for a VA7410 pure reads, no cache hits, once with 15 disks and once with 60 disks.
You will notice that both graphs start at a response time of 6ms.
The difference is that the response time with 60 disks at 8000 IOPS is still only 10 ms, while with 15 disks you reach that response time at 2500 IOPS, and above 3500 IOPS your response time hits the ceiling!
The reason is that a single 15k disk can do around 150 IOPS with good response time: 15 x 150 = 2250 IOPS, and 60 x 150 = 9000 IOPS.
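The shape described here (flat at low load, then a knee, then a wall) is what even the simplest open-queue model predicts. A rough M/M/1-style sketch, treating each disk as an independent server with a 6 ms service time (an illustration of the queueing effect, not of the actual VA firmware):

```python
def response_time_ms(offered_iops, disks, per_disk_service_ms=6.0):
    """Approximate per-IO response time when offered_iops is spread
    evenly over `disks` independent servers (M/M/1: R = S / (1 - rho))."""
    capacity_iops = 1000.0 / per_disk_service_ms   # ~167 IO/s per disk
    utilisation = (offered_iops / disks) / capacity_iops
    if utilisation >= 1.0:
        return float("inf")  # saturated: the queue grows without bound
    return per_disk_service_ms / (1.0 - utilisation)

print(response_time_ms(500, 15))    # lightly loaded: 7.5 ms
print(response_time_ms(2400, 15))   # 15 disks near saturation: ~150 ms
print(response_time_ms(2400, 60))   # same load over 60 disks: ~7.9 ms
```

The knee moves right in proportion to the number of disks, which is exactly why both graphs start near 6 ms but diverge as the load climbs.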
I recommend you also have a look at the performance whitepapers here:
http://www.hp.com/products1/storage/products/disk_arrays/midrange/va7410/infolibrary/index.html
Cheers
Pet
12-22-2003 01:41 AM
Re: VA7400 Vs 2x VA7410 performance
Many thanks for the graphs...
I think you are trying to show that:
o Things are not linear
o The more disks the better
I'm afraid I do not entirely understand the graphs. It seems that for a given number of disks (no caching), as the IO load goes up so do the service times, so at low IO rates you get the "ideal" service times (~6 ms for a 15krpm disk). But then the graph for more disks shows a much higher loading point.
Now, the 1st graph is 15 disks and shows a max IO load of ~3600 over 15 disks. This means each mirrored pair does 480 IO/s, which is about 2.1 ms service time per disk pair... BUT the graph shows nearly 30 ms.
The second graph reaches about 9000 IO/s (@ 12 ms). 9000 IO/s over 30 mirrored pairs is 300 IO/s per disk pair, which is 3.33 ms... again the graph says 12 ms.
My only conclusion is that disk queues must be forming, such that the first graph has a queue depth of about 14-15 and the second a depth of 3-4.
Failing that, my only other conclusion is that I've misunderstood the graphs... I don't understand what is slowing things down. To my simple mind the 1st graph should only get to 1250 IO/s and the second should get to 5,000 IO/s...
The only thing I did see clearly was that 4 times more disks can handle more IO with less degradation in performance.
Regards
Tim
12-22-2003 02:09 AM
Re: VA7400 Vs 2x VA7410 performance
You are looking at it as if these were individual drives mounted in a server, which is not true!
The VA74x0 arrays group disks into 2 redundancy groups (RGs). RG1 consists of all odd drives and RG2 of all even drives.
You then carve LUNs out of the RGs, striped over all disks in an RG! A LUN is then seen by the OS as a disk.
A LUN is divided into 256KB blocks which are striped over the RG.
Therefore, if you randomly access a single LUN, you hit all the disks in one RG. The response time will be that of the disk(s) your data blocks reside on.
If you access multiple LUNs in an RG, you still hit the very same disks. As long as you do not saturate those disks, the response time for all the LUNs will stay the same as for the single LUN!
Read more about the VA architecture starting on page 39 in the following VA Manual
http://h200002.www2.hp.com/bc/docs/support/SupportManual/lpg60187/lpg60187.pdf
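As a toy illustration of why one LUN touches every disk in its RG (hypothetical round-robin placement; the real controller uses its own AutoRAID mapping, described in the manual above):

```python
import random

STRIPE_BYTES = 256 * 1024  # 256KB blocks, per the VA description above

def disk_for_offset(lun_offset_bytes, disks_in_rg):
    """Hypothetical round-robin mapping of a LUN offset to a disk in its RG."""
    stripe_index = lun_offset_bytes // STRIPE_BYTES
    return stripe_index % disks_in_rg

# 1000 random IOs to one LUN land on every disk of a 15-disk redundancy group:
hits = {disk_for_offset(random.randrange(10 * 2**30), 15) for _ in range(1000)}
print(sorted(hits))  # all 15 disks show up
```

So adding LUNs within the same RG adds no spindles; the same physical disks serve them all.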
Take care
Peter
12-22-2003 02:32 AM
Re: VA7400 Vs 2x VA7410 performance
I'm aware of this part of the VA74X0 architecture.
I think I have a different definition of service time from you.
svc tm ~ (disk util % * 10) / (dsk IO rate)
so if the disk is doing 100 IO/s and disk util is 70%, you get a service time of 7 ms.
Ignoring mirroring, if you stripe across 5 of the above disks (in one LUN, say), then each disk should still be able to do 100 IO/s at 70% util, i.e. you get 70% disk util and 500 IO/s, so 1.4 ms service time for the LUN. BUT each disk is still showing a 7 ms service time at 100 IO/s and 70% util.
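That definition can be made concrete in a few lines (a sketch of the accounting identity being used here, not a measurement tool):

```python
def service_time_ms(util_percent, io_per_sec):
    """Service time from utilisation: busy milliseconds per second of
    work, divided by the IOs that work served."""
    busy_ms_per_sec = util_percent * 10.0  # 70% busy = 700 ms of work per s
    return busy_ms_per_sec / io_per_sec

print(service_time_ms(70, 100))      # one disk: 7.0 ms
print(service_time_ms(70, 5 * 100))  # LUN striped over 5 such disks: 1.4 ms
```

By this accounting the LUN's "service time" shrinks with every disk added, even though each individual disk still takes 7 ms per IO, which is one reason the two definitions diverge.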
This is why I do not quite get the graphs. From your previous replies I assume you are measuring service time for the LUN, i.e. per IO. I would expect this to be much smaller than a single disk's service time (with the possible exception of low IO rates, where one disk services all the IO).
Sorry for being thick today
Tim
03-15-2004 02:13 AM
Re: VA7400 Vs 2x VA7410 performance
The answer to the above is: double the number of LUNs. Because I kept the number of LUNs on the 2x VA7410 solution the same as on the original VA7400, for whatever reason the IO throughput was similar to that of 1x VA7410.
What I've done now is re-create the whole system on 24 LUNs and stripe evenly over these. This actually gave me the same service times!! Thus I have doubled the performance / peak IO rate...
There seems to be some confusion (probably on my part) about service times and disk utilisation, and hence peak throughput [IO].
disk util = average % time spent doing IO to LUN/disk
service time = disk util% * 10 / IO rate.
This means a disk with a 2 ms service time can do 500 IO/s at 100% utilisation. This is where Peter and I seem to diverge!! I say: OK, 1 disk can do 500 IO/s, so 12 disks (LUNs) can do 6000 IO/s.
One thing I did not see mentioned was "max_scsi_qdepth". I've heard that if you choose a few large LUNs as opposed to lots of small LUNs, "max_scsi_qdepth" should be increased appropriately. Unfortunately I have no understanding of how or why this works... so I've left it at its default of 8 throughout.
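The queue-depth question can be reasoned about with Little's Law: IOs in flight = throughput x response time. A back-of-envelope sketch using the numbers from this thread (illustrative only; max_scsi_qdepth is the HP-UX per-LUN queue-depth tunable mentioned above):

```python
def outstanding_ios(iops, response_time_ms):
    """Little's Law: average IOs in flight = arrival rate x residence time."""
    return iops * response_time_ms / 1000.0

# Concurrency needed to sustain 5000 IO/s at 2.1 ms across the whole system:
print(outstanding_ios(5000, 2.1))  # 10.5 IOs in flight on average

# The per-LUN queue depth caps IOs in flight per LUN, so more LUNs raise
# the total ceiling even with the default depth of 8:
for luns in (12, 24):
    print(luns, luns * 8)  # 12 LUNs -> 96 outstanding max, 24 -> 192
```

Which squares with the observation above: with enough LUNs the default depth of 8 is not the bottleneck, but a few large LUNs can hit the per-LUN cap long before the disks saturate.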
Regards
Tim