Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > Re: speed up backups
10-28-2009 11:53 PM
Re: speed up backups
I'd expect the read path to be the bottleneck here. VMS gets a completion status for a write as soon as the write hits the disk controller; the controller can then destage the data to physical disk whenever it chooses. A read, on the other hand, has to go all the way through to the disks unless the data have already been cached.
How fragmented are the input files that you're backing up? If they're continually growing then are the disks full of hundreds of small fragments?
Has anyone looked at locking on the files being backed up?
Has anyone looked at the processor modes during the backup?
Is FastPath enabled? If not, is the primary CPU becoming a bottleneck for operations on the node doing the backup?
10-29-2009 12:03 AM
Re: speed up backups
I've not seen any info on the actual hardware being used on the Alpha side of things, or the SAN switch infrastructure.
Are you still using 1Gb/s HBAs?
Are there lots of interconnected SAN switches?
Cheers,
Rob
10-29-2009 07:26 AM
Re: speed up backups
The disks are set up for Fast Path. The main activity during backups is other processes updating other applications and reporting. The disks are locked by the application, and the application kicks off a backup (file by file). The files are RMS files with pre-allocated space. I checked fragmentation, and some files have 50+ extents with extent sizes of 65535 blocks.
My test, however, is a single file with 2 extents and a size of 31 million blocks (approx. 15 GB). I also created a new 5 GB file on other disks with SYSGEN, just to try the read on differently formatted files.
Late yesterday I made all systems use the same path (fabric A or B) for the backup2 disk, which has a member at each site via shadowing (I will have to set up a script so the paths get set at boot instead of being chosen automatically). The read I/O rate fluctuated between 2,000 and 400 I/Os per second on the shadowed disk. I grabbed a disk local to each site and tested a backup to null on each disk multiple times. On the disks local to the site I was getting 2,000 I/Os per second, and on the remote-site disk 300-400. It didn't matter which site.
So it looks like something slowed down between sites. I now have the network team looking at the lines between sites. We also contacted our carrier to see if they can find something.
I also went back into older historical performance data and found the drop from approximately 1200 disk I/Os per second to an average of 800, from one day to the next, for the main backup job. I captured that data plus a few days around that date and sent it to the network team.
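For a rough sense of scale, those MONITOR I/O rates can be turned into approximate data rates. This is a back-of-the-envelope sketch only, assuming each read I/O moves one BACKUP buffer of 32256 bytes (BACKUP's default block size); actual transfer sizes on the wire may differ.

```python
# Convert MONITOR I/O rates into approximate data rates.
# Assumption: ~32256 bytes per read I/O (BACKUP's default block size);
# real I/O sizes vary, so treat these as order-of-magnitude figures.
BYTES_PER_IO = 32256

def mb_per_sec(ios_per_sec, bytes_per_io=BYTES_PER_IO):
    """Approximate throughput in MB/s implied by a given I/O rate."""
    return ios_per_sec * bytes_per_io / 1e6

print(f"site-local disk:  {mb_per_sec(2000):.1f} MB/s")  # ~64.5 MB/s
print(f"remote-site disk: {mb_per_sec(400):.1f} MB/s")   # ~12.9 MB/s
```

On those assumptions, the remote-site reads are running at roughly a fifth of the local rate, which points at the inter-site path rather than the disks themselves.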
10-29-2009 07:54 AM
Re: speed up backups
You may want to use the HP tool T4 to measure and document performance data. It provides very detailed performance information, on Fibre Channel disks as well.
Volker.
10-29-2009 08:00 AM
Re: speed up backups
10-29-2009 09:18 AM
Re: speed up backups
10-29-2009 12:43 PM
Re: speed up backups
How exactly are the discs at the remote site presented to the local site? Is it an extended SAN using DWDM over dark fibre? Is it an extended SAN using FC-IP gateways? Is it just MSCP serving over SCS?
How are the cluster members linked for SCS layer 2 traffic (it won't be IPCI unless you're a very early V8.4 test site)? In other words, how are you extending the layer 2 LAN? Is it DWDM over dark fibre, or some form of layer 2 (SCS) encapsulation over an IP (layer 3) managed service?
If it's DWDM over dark fibre for both FC extension and LAN extension, then you shouldn't be seeing a big slow-down, unless the telco has re-configured your inter-site links to use much bigger distances.
If it's an IP managed service with SCS encapsulated in IP packets and you're using MSCP over SCS, the traffic could be going pretty much anywhere at the telco's whim.
What does SCACP show you for delays and round-trip times on the virtual circuits? Have those changed significantly? Do you need to increase SCACP's buffering to allow for worst-case round-trip delays with highly variable latency (use SCACP CALCULATE)? The SCS algorithms are pretty good for reasonably consistent latency, but I've found it useful to increase buffer counts for intermittently variable latency, so that you get enough packets in flight before you stop sending and have to wait for ACKs to come back. Conversely, do you have a lot of retransmits going on? Again, SCACP can help you there, by looking at the error counts and the way they change over time.
Are you making use of SCS compression - which can be useful for sending a lot of big packet traffic (such as MSCP serving or lots of mini-copy / mini-merge bitmap data)?
It does sound rather like an inter-site link problem of a randomly variable nature, so a bit of low-level exploration is probably worth doing.
Good luck.
Cheers, Colin (http://www.xdelta.co.uk).
10-29-2009 01:33 PM
Re: speed up backups
10-29-2009 01:44 PM
Re: speed up backups
Network config info:
Each system has 3 NICs. Two NICs are on the DWDM (one for the main IP and one for DECnet) and are on the same network VLAN. The third NIC is local-site only, on a private VLAN for backups (NetBackup).
I'm setting up for a test run tonight, eliminating the remote-site disks by dismounting them from the shadow sets for a couple of hours. The automatic backups will then run, and I will capture the disk I/O via MONITOR. I'll pull up the performance data and T4 tomorrow, hoping for quicker speeds. That should pretty much isolate it.
10-29-2009 02:37 PM
Re: speed up backups
Any contention going on within the LAN side or the FC side - people adding new devices / systems to the SAN or LAN that you're now sharing the bandwidth with?
This is just the kind of situation where you really do want to avoid shared use of bandwidth. Give me well-bounded systems any day, especially for trouble-shooting in high-availability environments...
Cheers, Colin (http://www.xdelta.co.uk).
10-29-2009 02:56 PM
Re: speed up backups
No path switching on VMS - only the manual stuff I have been doing to force all the disks onto the same fabric. (We had that issue on EMC, so I thought it might be similar.)
Had the counters zeroed last night - no update on any errors from today yet. I'll probably get that tomorrow.
Got an update from the telco: they evaluated a card and are replacing it tomorrow. I will have to cross my fingers in the morning. :D
10-29-2009 11:40 PM
Re: speed up backups
Steve
10-30-2009 02:19 AM
Re: speed up backups
Hopefully it's a Telco issue and normal service will be resumed shortly.
Cheers, Colin (http://www.xdelta.co.uk).
10-30-2009 11:20 AM
Re: speed up backups
Today I moved the paths for the disk to what I think is the non-erroring network: 15 GB in 8 minutes. Definitely better.
I sure hope the card fixes both paths.
Thanks everyone for the suggestions - steering where the reads come from to maximize read speed was probably the most used one.
Still looking at tweaking an account just for the backups.
I will update once I can test the replaced card.
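For scale, the "15 GB in 8 min" figure works out to about 32 MiB/s. A quick arithmetic sketch (treating "15 gig" as 15 GiB; the test file is actually 31 million 512-byte blocks, roughly 15.5 GB, so the true rate is slightly higher):

```python
# Throughput implied by "15 GB in 8 min".
size_bytes = 15 * 2**30             # treating "15 gig" as 15 GiB (assumed)
elapsed_s = 8 * 60
rate_mib_s = size_bytes / elapsed_s / 2**20
print(f"{rate_mib_s:.0f} MiB/s")    # 32 MiB/s
```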
11-10-2009 09:32 PM
Re: speed up backups
11-11-2009 07:12 AM
Re: speed up backups
I was given a window where we disabled the circuits one at a time and tested each circuit. I also forced the disk reads through specific fabrics while one path was down. We still have 2 circuits that are slower, one on each fabric. We are trying to get an estimate to test each fibre and see how much light is getting through each cable. We're also looking at where the cables route, to see if there is anything common between the two slower lines.
So for now it's really the luck of the draw whether I get a slow line or a faster line when traffic goes between sites, because the switch uses whichever path. We keep them both up for redundancy.
Using only local disk, my backup takes 42 min.
Using the shadowed disk with the faster line, it's 1:30.
Using the shadowed disk and the slower line, it's 2:30.
The link between switches is a 1 Gb pipe (even though some documentation indicated 2 Gb).
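For reference, a sketch of what a "1 gig" Fibre Channel inter-switch link can actually carry, assuming standard 1GFC signalling at 1.0625 Gbaud with 8b/10b encoding:

```python
# Usable payload bandwidth of a 1GFC inter-switch link.
line_rate_baud = 1.0625e9                  # 1GFC signalling rate
data_bits_per_s = line_rate_baud * 8 / 10  # 8b/10b: 10 line bits carry 8 data bits
mb_per_s = data_bits_per_s / 8 / 1e6
print(f"~{mb_per_s:.2f} MB/s usable")      # ~106.25 MB/s
```

So a single shadowed read stream crossing the ISL can never exceed roughly 100 MB/s, even before any contention or telco problems are added.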
11-11-2009 09:45 AM
Re: speed up backups
Sounds like marketing specmanship. It is probably 1 Gb/s in each direction, so 2 Gb/s is quoted.
11-11-2009 10:20 AM
Re: speed up backups
But with the redundancy I KNOW we can lose a line/circuit and not impact anything. We originally started with only one circuit per switch, but then a backhoe showed us that we needed two each, on different physical paths. Of course, that also showed the people who had to pay for the downtime and the circuits that we needed the redundancy :D
I will update more once we actually find out what the final problems are. So far there are multiple lines with different issues, and 2 other circuits are still impacted.