Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > Netbackup performance issues
06-12-2008 10:47 AM
Netbackup performance issues
06-12-2008 11:47 AM
Re: Netbackup performance issues
So you are looking at 5.7 MB/second.
Which direction is the data going?
How many MB/sec can the source deliver?
How many MB/sec can the target accept?
What is the network distance (km) / latency (ms)? (trace route)
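The latency question matters more than it may look: over a WAN, a single TCP stream can never move data faster than its window size divided by the round-trip time, regardless of link speed. A minimal sketch of that back-of-envelope check (the window and RTT figures below are hypothetical, purely illustrative):

```python
# Bandwidth-delay sanity check: one TCP stream is capped at
# window / round-trip-time, no matter how fast the wire is.
def max_tcp_throughput_mb_s(window_bytes, rtt_ms):
    """Upper bound on a single TCP stream's throughput, in MB/s."""
    return window_bytes / (rtt_ms / 1000.0) / 1e6

# A default 64 KB window over a 10 ms round trip caps out near
# the 5.7 MB/s figure discussed in this thread:
print(round(max_tcp_throughput_mb_s(64 * 1024, 10.0), 2))  # -> 6.55
```

If the measured throughput sits right at this ceiling, the fix is a larger TCP window (or more parallel streams), not faster disks.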
Find out more on the OpenVMS side...
$ MONITOR MODES, DISK
$SHOW RMS
$ANAL/SYS
SDA> SET PROC job-doing-io
SDA> SHOW PROC /CHAN
SDA> SHOW PROC/RMS=(RAB,BDBSUM) ! A few times
SDA> PROCIO ! Google: procio volker
Now put on your thinking cap, and ask for further help with lots more details if still needed.
Cheers!
Hope this helps some,
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting
06-12-2008 11:58 AM
Re: Netbackup performance issues
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1240390
I can easily see this as a network issue, or as a problem on the local box, or on the backup server box.
As a general rule, network people seldom volunteer information about failures or sluggishness. This can be due to the tools they are (or are not) using, or otherwise. Operating system people and application people can have similar sorts of blindnesses, too.
As I mentioned over in the other thread, do benchmark the network path. There is no substitute for benchmarking the path. Seldom is there a magical answer to questions of performance -- well, other than the waggish buy all-new-faster gear from end-to-end, and I'm guessing you don't have the budget for that. :-)
And do learn about and do turn on jumbo frames. They can help network performance, assuming your network infrastructure supports that capability.
Current ECOs, too. If you escalate this to most any support organization, they're going to ask for those, so get there first; that can reduce the back-and-forth and the call restarts.
And do verify your quotas are adequate for your application; at least as much as the documentation indicates is required. If the documentation doesn't indicate quota requirements, contact the support folks for formal product-specific recommendations and (in the interim) start with the V8.2 quota recommendations for BACKUP here:
http://64.223.189.234/node/49
And again, do benchmark this. If you have a gonzo-speed connection and your network and LAN benchmarks show gonzo-speed, then you know this bottleneck is netbackup, or the host, or something local to the host -- you've eliminated the network. If you see rotten network performance, you now have the needed statistics to toss at the networking folks.
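The "blast data at the path" benchmark above can be sketched with plain sockets; this is a hypothetical illustration, not a replacement for a proper tool, and the host/port are placeholders for a discard-style sink you set up on the far end (e.g. `nc -l <port> > /dev/null`):

```python
import socket
import time

CHUNK = 1024 * 1024  # push 1 MiB of zeros per send

def blast(host, port, seconds=10):
    """Stream zero-filled buffers at a listening sink and report MB/s.

    This measures the raw TCP path only -- no file system, no backup
    software -- which is exactly what isolates a network bottleneck.
    """
    buf = b"\0" * CHUNK
    sent = 0
    with socket.create_connection((host, port)) as s:
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            s.sendall(buf)
            sent += CHUNK
        elapsed = time.monotonic() - start
    return sent / elapsed / 1e6  # MB/s
```

Compare the figure this reports against what netbackup achieves over the same path: if they match, the network is the ceiling; if the socket blast is much faster, look at the hosts and the application.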
Or call somebody in for a look.
Stephen Hoffman
HoffmanLabs LLC
06-12-2008 12:28 PM
Re: Netbackup performance issues
06-12-2008 12:34 PM
Re: Netbackup performance issues
I will concur with Hoff, and amplify that the OR should be read as "inclusive or", to wit, it may be any individual or combination of the categories.
A few years ago, a client called me in to assist with the configuration of their new GS/ES cluster, purchased to "improve performance". In the process, I discovered that the major component of the "performance" problem was the remote shadowing of temporary files to a remote site over the WAN. Indeed, the entire new equipment purchase could likely have been avoided if we had investigated the "performance" issue before purchasing hardware.
- Bob Gezelter, http://www.rlgsc.com
06-12-2008 01:03 PM
Re: Netbackup performance issues
$ show rms_default
$ help set rms_default
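A short DCL fragment showing how those two commands fit into an experiment at process scope (the qualifier values here are illustrative starting points, not recommendations):

```
$ SHOW RMS_DEFAULT                                   ! current defaults
$ SET RMS_DEFAULT /BUFFER_COUNT=8 /BLOCK_COUNT=124   ! process scope only
$ ! ... run the transfer, record the throughput ...
$ SET RMS_DEFAULT /BUFFER_COUNT=2 /BLOCK_COUNT=32    ! try another point
```

Because SET RMS_DEFAULT without /SYSTEM affects only the current process, such experiments are safe to run without touching system-wide behavior.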
06-12-2008 01:05 PM
Re: Netbackup performance issues
http://64.223.189.234/node/181
If you measure, do use a tool that blasts data. (Do check with your network folks, too, as you probably don't want to blast GbE speeds during production.)
As for MBC and MBF settings, the settings are typical of iterative tuning. Try a few tests. For instance, run a test to gather baseline data, and record the results. Double or triple MBC. Try again. Set MBC back, and double or triple MBF. Repeat. If you keep at this for a while, you'll find a knee.
Rinse, lather, repeat.
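The double-and-compare procedure above can be sketched as a simple sweep. In this hypothetical illustration, `measure` is a stand-in for a real timed transfer (here it's a made-up saturating curve, just to exercise the knee-finding loop):

```python
# Sketch of the iterative MBC/MBF sweep: double one setting while the
# other is fixed, and stop when doubling stops paying for itself.
def measure(mbc, mbf):
    """Toy throughput model with diminishing returns (MB/s)."""
    total = mbc * mbf
    return 60.0 * total / (total + 256)

def find_knee(param_values, fixed, vary="mbc", gain_threshold=1.25):
    """Walk doubled settings; stop when the gain drops below 25%."""
    prev = None
    for v in param_values:
        rate = measure(v, fixed) if vary == "mbc" else measure(fixed, v)
        if prev is not None and rate < prev * gain_threshold:
            return v // 2, prev   # the knee: last setting worth keeping
        prev = rate
    return param_values[-1], prev

setting, rate = find_knee([16, 32, 64, 128, 256], fixed=4)
print(setting, round(rate, 1))  # -> 128 40.0
```

The real loop substitutes an actual timed transfer for `measure`, and the "knee" is where further buffering buys little or nothing.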
http://h71000.www7.hp.com/doc/732final/6631/6631pro_contents.html
For determining and graphing performance data around various OpenVMS subsystems, T4 can be quite useful.
Or fire up something akin to, say, Cricket and watch the show when your netbackup lights up the LAN...
http://cricket.sourceforge.net/
Your networking folks may have something similar. Various managed switches offer this or similar tools.
For grins, you could load up a big RAM disk with zeros and run that over the wire using netbackup. That bypasses (most of) the local file system.
This could be pretty much anything from the source of the data to the process of writing the bits onto the destination storage medium, or anything in between.
06-12-2008 01:09 PM
Re: Netbackup performance issues
A cautionary note regarding setting the RMS parameters: they are defaultable both at the system level (in the system parameter file, manipulable using SYSGEN, SYSMAN, or AUTOGEN [via MODPARAMS]) and at the per-process level. It is strongly recommended to experiment at the per-process level. However, note that when a sub-process is created, the RMS parameters are NOT copied; they are set from the system defaults.
- Bob Gezelter, http://www.rlgsc.com
06-12-2008 07:26 PM
Re: Netbackup performance issues
AFAIK, the PROCIO widget is available from JFP, and not from Volker.
http://www.pi-net.dyndns.org/jfp/english/ProcIO.html
06-12-2008 10:17 PM
Re: Netbackup performance issues
Ah yes, it appears to be originally by Jean-François back in 2001. Volker made a version available through champs and Encompasserve, but the original attribution was not present. Thanks for clarifying.
Burleson>> I don't think jumbo frames is going to be allowed by our network team.
... Typical. Because they did not invent it?
... Is their mission in life to help or to hinder? To serve or to rule?
Still I would not worry about that just yet. Not until you hit 40+ MB/sec or so.
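There is arithmetic behind not worrying yet: jumbo frames mostly reduce per-packet CPU cost; the raw payload efficiency gain is only a few percent. A rough model (ignoring the Ethernet preamble and interframe gap):

```python
# Fraction of each Ethernet frame that is TCP payload: 40 bytes of
# IPv4+TCP headers sit inside the MTU, 18 bytes of Ethernet header
# plus FCS sit outside it.
def payload_efficiency(mtu):
    return (mtu - 40) / (mtu + 18)

print(round(payload_efficiency(1500), 3))  # standard frames -> 0.962
print(round(payload_efficiency(9000), 3))  # jumbo frames    -> 0.994
```

At 5.7 MB/s the few-percent difference is noise; it only starts to matter once the link, and the hosts' packet rates, are actually being pushed.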
Burleson>> I do need clarification from Hein regarding the RMS settings that he used for his FTP
I did not make that setting myself; I was just observing an odd, FTP-provided, explicit setting. I tried for a moment to 'see' with the debugger where / how sys$system:tcpip$ftp_child.exe sets up the RAB, but that looked like it would be too tedious.
IF that was a typical image, THEN you could do a SET RMS in sys$system:TCPIP$FTP_SERVER.COM, just after the "run:" label. But that does not appear to work.
Still I would not worry about that just yet. Why speculate if we have not even learned whether FTP plays a role?
>> How did you set the MBC and MBF
30 years of RMS experience and 3 years of RMS Engineering. :-).
10 buffers of 50 blocks may have been right for some specific system at some point in time, but it is not right, right now.
Too many, too small... IMHO of course.
http://h30266.www3.hp.com/odl/axpos/network/tcpip56/6526/6526pro_041.html
I now realize why I got in trouble (a little).
My file was RFM=VAR,LRL=0,MRS=0.
With that combination FTP opted to ignore my BIN request, or more precisely, it did not do what I expected BIN to mean: raw disk block bytes. It used $GET to get raw record bytes... of which there were none.
Switching to RFM=FIX,LRL=8192,MRS=8192, it switched to block I/O $READ calls, with a buffer size of 60 blocks (30720 bytes), which is just about the 32-bit RAB max.
Maybe I should have defined TCPIP$FTP_RAW_BINARY to TRUE, or used PUT/RAW.
http://h30266.www3.hp.com/odl/axpos/network/tcpip56/6526/6526pro_041.html#ftp_logicals_tab
For BIN transfers of fixed length record files it looks like it does use Async $READs, with multiple buffers. That's good, but maybe not as good as RMS RAH can do. (Larger buffers, user control)
Hoff>> As for MBC and MBF settings
I don't think you can set it. There is no obvious logical name or such. "strings" only revealed TCPIP$$FTP_SERVER_MAX_QIO_BLOCKS.
Bob> when a sub-process is created, the RMS parameters ARE NOT copied,
Correct, but not applicable for FTP best I can tell.
regards,
Hein.