StoreEver Tape Storage

StorageWorks 1/8 G2 Tape Autoloader

New Member

StorageWorks 1/8 G2 Tape Autoloader

So we have 4 vendors involved with this problem and I would guess most would point the finger at the others.

HP - StorageWorks 1/8 G2 Tape Autoloader SCSI
Apple - G5 OSX 10.3.9 3GB RAM
Dantz - Retrospect 6.1.230
ATTO - ExpressPCI UL4D

So the Autoloader touts a "Sustained Transfer Rate" of 576 GB/hr. I'm getting about 35-75 GB/hr, depending on the machine on the network. Most are on Gigabit.

I've installed the driver that came with the ATTO.

Any ideas?



P.S. This thread has been moved from Tape Backup (Small and Medium Business) to Tape Libraries and Drives. - HP Forum Moderator

Honored Contributor

Re: StorageWorks 1/8 G2 Tape Autoloader

Sorry to have to say this, but the 1/8 G2 is not yet qualified or supported on the ATTO controllers. I can't say whether that's your problem, but I do know that every controller we have tried so far has had problems initially, and we have had to work with the supplier to get them fixed. The ATTO card is in the queue but not yet tested, so I can't say whether there are any issues with it.

If you are pulling data over Gigabit Ethernet, the maximum I would ever expect to see is about 250 GB/hr with a very light non-backup network load. That's quite a bit above what you are getting, so the network doesn't sound like the primary issue.
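A quick back-of-the-envelope check shows where a figure like 250 GB/hr comes from (decimal units assumed throughout, as tape and network vendors typically quote; the ~55% efficiency factor is an illustrative assumption, not a measured value):

```python
# Sanity-check Gigabit Ethernet as a backup transport (decimal units).
LINK_BITS_PER_SEC = 1_000_000_000           # raw Gigabit Ethernet line rate
wire_bytes_per_sec = LINK_BITS_PER_SEC / 8  # 125 MB/s theoretical ceiling

theoretical_gb_per_hr = wire_bytes_per_sec * 3600 / 1e9
print(f"Theoretical ceiling: {theoretical_gb_per_hr:.0f} GB/hr")  # 450 GB/hr

# Real backup streams rarely sustain full line rate (protocol overhead,
# small files, client-side latency); ~55% is an assumed efficiency that
# lands near the ~250 GB/hr figure quoted above.
realistic_gb_per_hr = theoretical_gb_per_hr * 0.55
print(f"Realistic sustained: ~{realistic_gb_per_hr:.0f} GB/hr")
```

Either way, both numbers are well above the 35-75 GB/hr being observed, which supports the conclusion that the network is not the bottleneck.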
New Member

Re: StorageWorks 1/8 G2 Tape Autoloader

Thanks Curtis. Starting to regret buying this device.
New Member

Re: StorageWorks 1/8 G2 Tape Autoloader

Actually, I tried backing up the very machine the drive is connected to and still only got 60 GB/hr, so that tells me the network is not the primary issue. That leaves the controller, or even Retrospect. I'm having Retrospect do data compression, and maybe that extra task is slowing things down.
Respected Contributor

Re: StorageWorks 1/8 G2 Tape Autoloader

A couple of points...

- Compression is a very CPU-intensive task. It's very likely that compressing data on the server is slowing your backup performance down. Use your OS tools to monitor CPU use and compare it with compression on and off.
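One quick way to see how CPU-bound software compression is: measure the throughput of a software compressor on a sample buffer. This sketch uses Python's zlib purely as an illustrative stand-in (Retrospect's own compressor will differ); on hardware of that era the single-core rate is typically well under the drive's 80 MB/s native speed.

```python
# Rough single-core software-compression throughput measurement.
# zlib is a hypothetical stand-in for the backup app's compressor.
import time
import zlib

# Moderately compressible sample data (stand-in for real backup data).
data = ((b"some repetitive backup payload " * 1000)
        + bytes(range(256)) * 100) * 50   # a few MB

start = time.perf_counter()
compressed = zlib.compress(data, level=6)
elapsed = time.perf_counter() - start

mb = len(data) / 1e6
print(f"Compressed {mb:.1f} MB in {elapsed:.3f}s "
      f"({mb / elapsed:.1f} MB/s on one core)")
```

If that per-core rate is lower than what the tape drive can ingest, software compression is the choke point, not the drive.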

- I believe that all modern tape drives perform compression natively in the hardware of the tape drive itself -- certainly that LTO drive does. You only need to ensure that your backup application knows the drive supports compression, and doesn't turn it off! There is no benefit to compressing on the server and then again in the tape drive; in fact, since well-compressed data can't be compressed further, this can actually slow things down (and will certainly make the data on tape larger than after a single compression, since you're wrapping compressed data in two wrappers instead of one).

- The 576 GB/hour figure assumes that the data hitting the tape drive is compressible at 2:1. If you're already compressing the data on the server, it's compressible at *less than* 1:1. The native speed for this tape drive (the speed at which it writes uncompressible data, or data with compression turned off) is 288 GB/hour, or 80 MB/second.
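The relationship between those numbers is just arithmetic (decimal units assumed, as tape vendors typically quote):

```python
# How the headline 576 GB/hr relates to the 288 GB/hr native rate.
native_gb_per_hr = 288
native_mb_per_sec = native_gb_per_hr * 1e9 / 3600 / 1e6
print(f"native: {native_mb_per_sec:.0f} MB/s")        # 80 MB/s

# The marketing number simply assumes the data compresses 2:1 in the drive:
assumed_ratio = 2.0
print(f"headline: {native_gb_per_hr * assumed_ratio:.0f} GB/hr")  # 576 GB/hr
```

So with pre-compressed (effectively incompressible) data, 288 GB/hr is the ceiling, not 576.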

- Get a tool to test how fast you can read from your disks. Although the tape drive itself can write 80 MB/sec native, it's very possible that the source systems can only read data at a fraction of that speed. Because of file-system fragmentation, small files, contention from other processes, etc., even fast servers may not be able to read data from disk faster than 10 or 20 MB/second.
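If you don't have a dedicated benchmark handy, a minimal sequential-read check is only a few lines. This sketch writes its own scratch file so it runs as-is; to measure a real disk, point it at a large existing file instead (and note that the OS page cache can inflate the number for a freshly written file):

```python
# Minimal sequential-read throughput check (self-contained sketch).
import os
import tempfile
import time

CHUNK = 1024 * 1024  # read 1 MB at a time

# Setup: a ~64 MB scratch file. For a real test, replace this with the
# path of a large existing file on the disk you want to measure.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(CHUNK) * 64)

total = 0
start = time.perf_counter()
with open(path, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
os.remove(path)

print(f"Read {total / 1e6:.0f} MB at {total / elapsed / 1e6:.1f} MB/s")
```

If that number comes out at 10-20 MB/s on the source machine, no tape drive or controller will push the backup any faster.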

- I've had no experience with how OSX handles network traffic, but I know that on Windows, trying to back up too many simultaneous clients through a single NIC, or, trying to use too many NICs simultaneously for backup, can cause a lot of CPU load, and therefore decrease the aggregate backup speed once you get to a certain point. Monitor CPU use as well as LAN traffic and backup speed as you go from 1 to 2 to ... simultaneous jobs and find your 'sweet spot'.

What are the solutions, if the network clients are the bottleneck (and assuming you are not using Dantz to do compression!)? I'd suggest a few possibilities.

1) If it supports it, consider using Dantz to interleave or multiplex -- that is, in a single backup job, have the data from several clients go to one tape concurrently. The advantage is that you use your tape drive more efficiently for backups. The disadvantage is that a restore will take longer (since the tape drive has to read through more blocks to get to the data for the one server you want to restore). Or,

2) Consider staging backups to a disk partition, then copying them to physical tape. I don't know if Dantz supports this, but if it does, it's a reasonably inexpensive HW solution that can be scripted to run pretty much automatically. Or,

3) Use a VTL (Virtual Tape Library) product, like the HP D2D Backup System, to give you the ability to run many simultaneous backup jobs to virtual tape targets, and then copy those jobs to physical tape afterward for archiving and disaster recovery.