05-16-2003 09:12 AM
performance issue
Plan:
Create 2 VGs: one with EMC and one with Storage.
Create 1 VG with both EMC and Storage: use pvmove to move the physical extents (sketched below), or use fbackup and frecover...
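For reference, a rough sketch of the single-VG pvmove variant; the volume group and device files (vg01, c1t0d0 for the EMC disk, c2t0d0 for the new disk) are placeholders, not taken from the thread:

# add the new disk to the VG that currently holds the EMC disk
pvcreate /dev/rdsk/c2t0d0
vgextend /dev/vg01 /dev/dsk/c2t0d0
# move all physical extents off the EMC disk onto the new disk
pvmove /dev/dsk/c1t0d0 /dev/dsk/c2t0d0
# once the EMC disk is empty, drop it from the VG
vgreduce /dev/vg01 /dev/dsk/c1t0d0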
05-16-2003 09:50 AM
Re: performance issue
pvmove, or using lvextend -m 1 to mirror and then breaking the mirror, is going to be faster.
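A rough sketch of the mirror-and-split variant (requires MirrorDisk/UX; the logical volume and device names below are placeholders):

# mirror the logical volume onto the new disk
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c2t0d0
# once the mirror is in sync, drop the copy on the old EMC disk
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c1t0d0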
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
05-16-2003 09:52 AM
Re: performance issue
You can use this tool to create graphs or text readouts of a ton of performance checks, anywhere from CPU and memory to disk performance.
When looking at disk performance you can drill down to the individual disk to observe its behavior. I highly recommend this tool.
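(The tool itself isn't named in the post. As a plain stand-in, the bundled sar and iostat utilities give a similar per-disk drill-down on HP-UX; the interval and count below are just example values.)

sar -d 5 12      # per-device disk activity, 12 samples at 5-second intervals
iostat 5 12      # per-device throughput at the same sampling rate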
05-16-2003 10:02 AM
Re: performance issue
Perhaps you should look at the Postmark filesystem benchmark.
http://www.netapp.com/tech_library/3022.html
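If you do try PostMark, it is driven by a short command script. A minimal run might look like the sketch below; the target directory, file counts, sizes, and transaction count are illustrative values, not recommendations from the paper:

# feed PostMark a small script on stdin
postmark <<'EOF'
set location /fsbench
set number 5000
set size 1024 16384
set transactions 20000
run
quit
EOF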
05-16-2003 10:22 AM
Re: performance issue
2. Why don't you just do a big "cp"? Why use "fbackup"/"frecover"? They probably introduce backup overhead, so your measure isn't just disk I/O, it's program performance.
3. BTW, you can get BIG performance improvements from fbackup by messing with the "-c configfile" parameter: making big block sizes, reducing the retry count (which won't matter on EMC disk), and other stuff (see the example below). But why bother - it's not standard. We used to have a config file standard, but I have lost it.
Be careful measuring to tape. Tape is often the limiting factor on a disk-to-tape transfer.
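For illustration only, a tuned fbackup run could look roughly like this; the config keywords and values are from memory and the paths are placeholders, so check fbackup(1M) before relying on them:

# a sample config file, e.g. /var/adm/fbackup.cfg, might contain:
#   blocksperrecord  256
#   records          32
#   readerprocesses  6
#   maxretries       0
fbackup -c /var/adm/fbackup.cfg -f /dev/rmt/0m -i /data -v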
05-16-2003 09:39 PM
Re: performance issue
1> If you are looking for the 'real world' performance from your EMC, through your server(s), to your tape storage, you'll see it doing things as you suggest. Performance will be affected by setup of many things, which you can tweak and retry.
2> The suggestions above are a good start on improving performance, but if you tweak fbackup, say, as noted above, you have to figure out how to implement the same change on your real backup app before it will do you 'real world' good.
3> If you are looking to test individual throughput for EMC, or your server, network, or library, you really don't want to test them all together. If one is a bottleneck, you'll just see the overall speed, not the causative element.
4> To test individual components, you can try things like raw reads and writes, using /dev/null or /dev/zero as the source or destination device (see the dd examples below). These are "infinitely fast" (server speed, anyway) devices, so if you read from EMC and write to /dev/null, you really get to see what the EMC (and your server) can do. Likewise, if you read from /dev/zero and write to a tape drive, you can test throughput there, without the I/O and wait times associated with disk seeks and filesystem overhead. You won't get any compression out of random data; I've never tried it with the infinitely repeating zeros or ones out of /dev/zero or /dev/one, but you might get vast compression from those. Hmmmm....
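A couple of dd examples of what point 4 describes; the device files stand in for your EMC LUN and tape drive, and the block size and count are just sample values:

# raw sequential read from the EMC device, output discarded
dd if=/dev/rdsk/c1t0d0 of=/dev/null bs=256k count=4096
# stream zeros to the tape drive to test the tape path on its own
dd if=/dev/zero of=/dev/rmt/0m bs=256k count=4096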
One other interesting trick, once you have a read or write baseline for a disk device, is to read from and then write to that same device. Performance you get out of this mix is much closer to "real" performance, since most apps have a R/W mix (not 50/50, generally, but some common pattern).
Regards, let us know what you find out, please.
--bmr