hi_5
Frequent Advisor

performance issue

Currently I'm using bonnie as the I/O driver for EMC storage. I want to compare the I/O rate between the EMC array and the other storage array, and also use fbackup and frecover to time the process. Any suggestions on this topic?
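
For reference, a bonnie run against each array would look something like this (the mount points and labels are just placeholders for wherever each array is mounted; -s should be well above the server's RAM so the buffer cache doesn't mask the disks):

    bonnie -d /mnt/emc -s 2048 -m emc_run
    bonnie -d /mnt/other -s 2048 -m other_run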
Plan:
Create 2 VGs: one on the EMC and one on the other array.
Create 1 VG containing both the EMC and the other array, then use pvmove to move the physical extents, or use fbackup and frecover (rough command sketch below).
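
A rough sketch of the single-VG approach (the device files, VG name, and sizes are only placeholders; substitute your own):

    pvcreate /dev/rdsk/c4t0d0                  # EMC LUN
    pvcreate /dev/rdsk/c6t0d0                  # other array's LUN
    mkdir /dev/vgtest
    mknod /dev/vgtest/group c 64 0x040000
    vgcreate /dev/vgtest /dev/dsk/c4t0d0 /dev/dsk/c6t0d0
    lvcreate -L 4096 -n lvdata /dev/vgtest     # 4 GB test LV

    # time moving its extents from the EMC PV to the other PV
    timex pvmove -n /dev/vgtest/lvdata /dev/dsk/c4t0d0 /dev/dsk/c6t0d0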
Steven E. Protter
Exalted Contributor

Re: performance issue

fbackup and frecover are generally used to write to and read from tape.

pvmove, or using lvextend -m 1 to mirror the logical volume and then breaking the mirror, is going to be faster.
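
Roughly, assuming MirrorDisk/UX is installed and using placeholder VG/LV/device names:

    # mirror the LV onto the target array's PV, timing the sync
    timex lvextend -m 1 /dev/vgtest/lvdata /dev/dsk/c6t0d0

    # then break the mirror, dropping the copy on the source PV
    lvreduce -m 0 /dev/vgtest/lvdata /dev/dsk/c4t0d0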

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
John Meissner
Esteemed Contributor

Re: performance issue

If you are looking for disk I/O performance, the tools I've used several times are PerfView and MeasureWare. They are licensed products from HP - I'm not entirely sure of the cost.

You can use them to create graphs or text readouts of a ton of performance metrics - anything from CPU and memory to disk performance.

When looking at disk performance you can drill down to the individual disk to observe performance. I highly recommend this tool.
All paths lead to destiny
Leif Halvarsson_2
Honored Contributor

Re: performance issue

Hi,

Perhaps you should look at the Postmark filesystem benchmark.
http://www.netapp.com/tech_library/3022.html
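
For what it's worth, a minimal PostMark run looks roughly like this (from memory - check the README for the exact command set; the location is a placeholder):

    postmark
    pm> set location /mnt/emc
    pm> set number 20000
    pm> set transactions 50000
    pm> run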
Stuart Abramson_2
Honored Contributor

Re: performance issue

1. What is "bonnie"?

2. Why don't you just do a big "cp"? Why use "fbackup"/"frecover"? They probably introduce backup overhead, so your measurement isn't just disk I/O, it's program performance.
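
If you do go the plain cp route, timing it is trivial (the paths are placeholders):

    timex cp -R /mnt/emc/testdata /mnt/other/testdata

then divide the amount of data copied by the real time timex reports.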

3. BTW, you can get BIG performance improvements from fbackup by tuning the "-c configfile" parameter: use large block sizes, reduce the retry count (which won't matter on EMC disk), and so on. But why bother - it's not standard. We used to have a config file standard, but I have lost it.
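
For illustration only (the keywords are from fbackup(1M), but the values here are just examples, not our old standard), a tuned config file looks something like:

    blocksperrecord 256
    records 32
    checkpointfreq 1024
    readerprocesses 6
    maxretries 0
    retrylimit 5000000
    maxvoluses 100
    filesperfsm 2000

used as: fbackup -c /etc/fbackup.cfg -f /dev/rmt/0m -i /mnt/emc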

Be careful measuring to tape. Tape is often the limiting factor in a disk-to-tape transfer.
Brian M Rawlings
Honored Contributor

Re: performance issue

Some thoughts:

1> If you are looking for the 'real world' performance from your EMC, through your server(s), to your tape storage, you'll see it by doing things as you suggest. Performance will be affected by the setup of many things, which you can tweak and retry.

2> The suggestions above are a good start on improving performance, but if you tweak fbackup, say, as noted above, you have to figure out how to implement the same change in your real backup application before it will do you any 'real world' good.

3> If you are looking to test individual throughput for the EMC, or your server, network, or library, you really don't want to test them all together. If one is a bottleneck, you'll just see the overall speed, not the causative element.

4> To test individual components, you can try things like raw reads and writes, using /dev/null or /dev/zero as the destination or source device. These are "infinitely fast" (server speed, anyway) devices, so if you read from the EMC and write to /dev/null, you really get to see what the EMC (and your server) can do. Likewise, if you read from /dev/zero and write to a tape drive, you can test throughput there, without the I/O and wait times associated with disk seeks and filesystem overhead. Random data won't compress at all, but the infinitely repeating zeros from /dev/zero might compress enormously - I've never tried it. Hmmmm....
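
Concretely, something along these lines (the device files are placeholders; make the sizes big enough to swamp any caching):

    # raw read from an EMC LUN, data discarded
    timex dd if=/dev/rdsk/c4t0d0 of=/dev/null bs=1024k count=2048

    # stream zeros straight to the tape drive
    timex dd if=/dev/zero of=/dev/rmt/0m bs=256k count=8192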

One other interesting trick, once you have a read or write baseline for a disk device, is to read from and then write to that same device. Performance you get out of this mix is much closer to "real" performance, since most apps have a R/W mix (not 50/50, generally, but some common pattern).
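
One way to script that mix safely is to confine the write pass to a scratch logical volume (placeholder names again - never point the write at a device holding real data):

    # read pass over the raw scratch LV
    timex dd if=/dev/vgtest/rlvdata of=/dev/null bs=1024k count=1024
    # write pass over the same LV (destroys its contents)
    timex dd if=/dev/zero of=/dev/vgtest/rlvdata bs=1024k count=1024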

Regards, let us know what you find out, please.

--bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)