Operating System - OpenVMS
Netbackup performance issues

Layne Burleson_1
Regular Advisor

Netbackup performance issues

Anyone had any performance issues using VMS netbackup client? We have seen incredibly slow backups - 270gb @ 13.5 hours. VMS 7.3-2, Netbackup 6.0MP5.
9 REPLIES
Hein van den Heuvel
Honored Contributor

Re: Netbackup performance issues


So you are looking at 5.7 MB/second.
Which direction is the data going?
How many MB/sec can the source deliver?
How many MB/sec can the target accept?
What is the network distance (km) / latency (ms)? (traceroute)
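To put numbers on those questions, a quick back-of-the-envelope sketch (the 10 ms RTT is a made-up example value, not a measurement from this thread):

```python
# 270 GB in 13.5 hours, and the ceiling a single TCP stream can reach
# when throughput is limited by window size and round-trip latency.

def throughput_mb_s(gigabytes: float, hours: float) -> float:
    """Average transfer rate in MB/s (1 GB = 1000 MB here)."""
    return gigabytes * 1000 / (hours * 3600)

def tcp_window_limit_mb_s(window_bytes: int, rtt_ms: float) -> float:
    """Max single-stream throughput = window size / round-trip time."""
    return window_bytes / (rtt_ms / 1000) / 1e6

achieved = throughput_mb_s(270, 13.5)
print(f"achieved: {achieved:.1f} MB/s")   # ~5.6 (5.7 if 1 GB = 1024 MB)
# A 64 KB window over a 10 ms RTT caps out far below GbE wire speed:
print(f"64KB/10ms ceiling: {tcp_window_limit_mb_s(65536, 10):.1f} MB/s")
```

If the window-limited ceiling lands near the observed rate, latency (not bandwidth) is the suspect.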

Find out more on the OpenVMS side...

$MONITOR MODES, DISK
$SHOW RMS
$ANAL/SYS
SDA> SET PROC job-doing-io
SDA> SHOW PROC /CHAN
SDA> SHOW PROC/RMS=(RAB,BDBSUM) ! A few times
SDA> PROCIO ! Google: procio volker


Now put on your thinking cap, and ask for further help with lots more details if still needed.

Cheers!

Hope this helps some,
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting



Hoff
Honored Contributor

Re: Netbackup performance issues

This posting is a continuation of:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1240390

I can easily see this as a network issue, or as a problem on the local box, or on the backup server box.

As a general rule, network people seldom volunteer information about failures or sluggishness. This can be due to the tools they are (or are not) using, or otherwise. Operating system people and application people can have similar sorts of blindnesses, too.

As I mentioned over in the other thread, do benchmark the network path. There is no substitute for benchmarking the path. Seldom is there a magical answer to questions of performance -- well, other than the waggish buy all-new-faster gear from end-to-end, and I'm guessing you don't have the budget for that. :-)

And do learn about and do turn on jumbo frames. They can help network performance, assuming your network infrastructure supports that capability.
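As a rough illustration of what jumbo frames buy (simplified arithmetic counting only IP+TCP header overhead; real gains are larger because Ethernet framing and per-packet interrupt cost also drop):

```python
# Payload efficiency and packet count: standard 1500-byte MTU vs 9000-byte
# jumbo frames, assuming 40 bytes of IP+TCP headers per packet.

def payload_fraction(mtu: int, headers: int = 40) -> float:
    """Fraction of each packet that is actual data."""
    return (mtu - headers) / mtu

def packets_per_gigabyte(mtu: int, headers: int = 40) -> int:
    """Packets (and roughly, interrupts) needed to move 1 GB of data."""
    return round(1e9 / (mtu - headers))

for mtu in (1500, 9000):
    print(mtu, f"{payload_fraction(mtu):.3f}", packets_per_gigabyte(mtu))
```

The sixfold drop in packet count is usually the bigger win on a busy host.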

Current ECOs, too. If you escalate this to most any support organization, they're going to request that. So get there first, as that can reduce the support call reflections and call restarts.

And do verify your quotas are adequate for your application; at least as much as the documentation indicates is required. If the documentation doesn't indicate quota requirements, contact the support folks for formal product-specific recommendations and (in the interim) start with the V8.2 quota recommendations for BACKUP here:

http://64.223.189.234/node/49
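A quick way to inspect and raise quotas on the OpenVMS side (account name and values below are placeholders; use the recommendations from the link above for real numbers):

```
$ SHOW PROCESS/QUOTAS              ! what the current process actually has
$ MCR AUTHORIZE
UAF> SHOW NETBACKUP                ! hypothetical account name
UAF> MODIFY NETBACKUP /WSEXTENT=65536 /FILLM=128 /BYTLM=256000
UAF> EXIT
$ ! new values take effect at the next login / process creation
```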

And again, do benchmark this. If you have a gonzo-speed connection and your network and LAN benchmarks show gonzo-speed, then you know this bottleneck is netbackup, or the host, or something local to the host -- you've eliminated the network. If you see rotten network performance, you now have the needed statistics to toss at the networking folks.
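A minimal "blast data" benchmark along those lines (shown over loopback so it is self-contained; point the client at a listener on the backup server to measure the real path, and clear it with your network folks first):

```python
# Push a fixed amount of zero-filled data through a TCP socket and time it.
import socket
import threading
import time

HOST = "127.0.0.1"
TOTAL = 16 * 1024 * 1024       # 16 MB is enough for a loopback demo
CHUNK = 64 * 1024

srv = socket.socket()
srv.bind((HOST, 0))            # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def sink():
    conn, _ = srv.accept()
    while conn.recv(CHUNK):    # drain until the sender closes
        pass
    conn.close()

t = threading.Thread(target=sink)
t.start()

buf = b"\0" * CHUNK
cli = socket.create_connection((HOST, port))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    cli.sendall(buf)
    sent += CHUNK
cli.close()
t.join()
elapsed = time.perf_counter() - start
mb_s = sent / elapsed / 1e6
print(f"{mb_s:.0f} MB/s")
```

If this raw-socket number is far above what netbackup achieves on the same path, the network is off the hook.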

Or call somebody in for a look.

Stephen Hoffman
HoffmanLabs LLC

Layne Burleson_1
Regular Advisor

Re: Netbackup performance issues

Let me gather some more data and I'll be back soon. I don't think jumbo frames are going to be allowed by our network team. I do need clarification from Hein regarding the RMS settings that he used for his FTP test in the other thread regarding the DEGXA throughput. How did you set the MBC and MBF, and are they documented anywhere to tell you what to set them to?
Robert Gezelter
Honored Contributor

Re: Netbackup performance issues

Burleson,

I concur with Hoff, and would amplify that the OR should be read as an "inclusive or"; to wit, it may be any individual category or any combination of them.

A few years ago, a client called me in to assist with the configuration of their new GS/ES cluster, purchased to "improve performance". In the process, I discovered that the major component of the "performance" problem was the remote shadowing of temporary files to a remote site over the WAN. Indeed, the entire new equipment purchase could likely have been avoided had we investigated the "performance" issue before purchasing hardware.

- Bob Gezelter, http://www.rlgsc.com
Jim_McKinney
Honored Contributor

Re: Netbackup performance issues

> How did you set the MBC and MBF

$ show rms_default
$ help set rms_default
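For a concrete example (block and buffer counts below are illustrative starting points, not recommendations):

```
$ SHOW RMS_DEFAULT                                   ! current MBC / MBF
$ SET RMS_DEFAULT /BLOCK_COUNT=64 /BUFFER_COUNT=4    ! this process only
$ SET RMS_DEFAULT /BLOCK_COUNT=64 /BUFFER_COUNT=4 /SYSTEM  ! system-wide (privileged)
```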
Hoff
Honored Contributor

Re: Netbackup performance issues

FTP would not be my choice of network performance tools. FTP throughput tends to degrade (severely) in the presence of network latency.

http://64.223.189.234/node/181

If you measure, do use a tool that blasts data. (Do check with your network folks, too, as you probably don't want to blast GbE speeds during production.)

As for MBC and MBF settings, the settings are typical of iterative tuning. Try a few tests. For instance, run a test to gather baseline data, and record the results. Double or triple MBC. Try again. Set MBC back, and double or triple MBF. Repeat. If you keep at this for a while, you'll find a knee.

Rinse, lather, repeat.

http://h71000.www7.hp.com/doc/732final/6631/6631pro_contents.html
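The iterative loop above can be sketched as a simple search for the knee; the measurement function here is a stand-in that simulates a plateau (replace it with a timed real transfer):

```python
# Double a parameter until the throughput gain falls under a threshold.

def run_backup_test(mbc: int, mbf: int) -> float:
    """Placeholder: simulated MB/s that plateaus once mbc exceeds 64."""
    return min(mbc, 64) * 0.08 * min(mbf, 8)

def find_knee(values, measure, gain_threshold=1.05):
    """Walk increasing values; stop when the next step gains < 5%."""
    best_value, best_rate = values[0], measure(values[0])
    for v in values[1:]:
        rate = measure(v)
        if rate < best_rate * gain_threshold:
            break                 # knee found: doubling stopped paying off
        best_value, best_rate = v, rate
    return best_value, best_rate

mbc_values = [16, 32, 64, 127]    # MBC tops out at 127 blocks
knee, rate = find_knee(mbc_values, lambda mbc: run_backup_test(mbc, mbf=4))
print(f"MBC knee at {knee} ({rate:.1f} MB/s simulated)")
```

Run the same loop again for MBF with MBC held at its knee value.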

For determining and graphing performance data around various OpenVMS subsystems, T4 can be quite useful.

Or fire up something akin to, say, Cricket and watch the show when your netbackup lights up the LAN...

http://cricket.sourceforge.net/

Your networking folks may have something similar. Various managed switches offer this or similar tools.

For grins, you could load up a big RAM disk with zeros and run that over the wire using netbackup. That bypasses (most of) the local file system.

This could be pretty much anything from the source of the data to the process of writing the bits onto the destination storage medium, or anything in between.
Robert Gezelter
Honored Contributor

Re: Netbackup performance issues

Burleson,

A cautionary note regarding setting the RMS parameters: they are defaultable both at the system level (in the system parameter file, manipulable using SYSGEN, SYSMAN, or AUTOGEN [via MODPARAMS]) and at the per-process level. It is strongly recommended to experiment at the per-process level. However, note that when a subprocess is created, the RMS parameters are NOT copied; they are set from the system defaults.
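You can see the non-inheritance Bob describes for yourself (sketch; file name is arbitrary):

```
$ SET RMS_DEFAULT /BLOCK_COUNT=96            ! per-process, safe to experiment
$ SHOW RMS_DEFAULT                           ! shows MBC = 96 here
$ SPAWN/OUTPUT=SUB.LOG SHOW RMS_DEFAULT
$ TYPE SUB.LOG                               ! shows the SYSGEN defaults
$ !   (RMS_DFMBC / RMS_DFMBF), not the parent's 96
```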

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: Netbackup performance issues

A humble correction for a posting from Hein early on in this thread:

AFAIK, the PROCIO widget is available from JFP, and not from Volker.

http://www.pi-net.dyndns.org/jfp/english/ProcIO.html
Hein van den Heuvel
Honored Contributor

Re: Netbackup performance issues

Hoff>> A humble correction ...
Ah yes, it appears to be originally by Jean-François back in 2001. Volker made a version available through CHAMPS and Encompasserve, but the original attribution was not present. Thanks for clarifying.

Burleson>> I don't think jumbo frames is going to be allowed by our network team.
... Typical. Because they did not invent it?
... Is their mission in life to help or to hinder? To serve or to rule?

Still I would not worry about that just yet. Not until you hit 40+ MB/sec or so.

Burleson>> I do need clarification from Hein regarding the RMS settings that he used for his FTP
I did not make any setting myself; I was just observing an odd, FTP-provided, explicit setting. I tried for a moment to 'see' with the debugger where / how SYS$SYSTEM:TCPIP$FTP_CHILD.EXE sets up the RAB, but that looked like it would be too tedious.
IF that was a typical image, THEN you could do a SET RMS in SYS$SYSTEM:TCPIP$FTP_SERVER.COM, just after the "run:" label. But that does not appear to work.

Still I would not worry about that just yet. Why speculate if we have not even learned whether FTP plays a role?

>> How did you set the MBC and MBF
30 years of RMS experience and 3 years of RMS Engineering. :-).

10 buffers of 50 blocks may have been right for some specific system at some point in time, but it is not right, right now.
Too many, too small... IMHO of course.

http://h30266.www3.hp.com/odl/axpos/network/tcpip56/6526/6526pro_041.html

I now realize why I got in trouble (a little).
My file was RFM=VAR, LRL=0, MRS=0.
With that combination FTP opted to ignore my BIN request, or more precisely, it did not do what I expected BIN to be: raw disk-block bytes. It used $GET to fetch raw record bytes... of which there were none.


Switching to RFM=FIX, LRL=8192, MRS=8192, it switched to block-I/O $READ calls, with a buffer size of 60 blocks (30720 bytes), which is just about the 32-bit RAB max.
Maybe I should have defined TCPIP$FTP_RAW_BINARY to TRUE, or used PUT/RAW.

http://h30266.www3.hp.com/odl/axpos/network/tcpip56/6526/6526pro_041.html#ftp_logicals_tab

For BIN transfers of fixed length record files it looks like it does use Async $READs, with multiple buffers. That's good, but maybe not as good as RMS RAH can do. (Larger buffers, user control)

Hoff>> As for MBC and MBF settings
I don't think you can set it. There is no obvious logical name or such. "strings" only revealed TCPIP$$FTP_SERVER_MAX_QIO_BLOCKS.

Bob> when a sub-process is created, the RMS parameters ARE NOT copied,
Correct, but not applicable for FTP best I can tell.

regards,
Hein.