Around the Storage Block

SSDs or disk? Buffer or cache? A closer look at tape-as-NAS options



By Mark Fleischhauer, HP StoreEver Tape Storage Solutions Engineering Manager


Part 1: A closer look at tape-as-NAS - disks

A few weeks ago I wrote about a new emerging architecture for long-term data retention that combines FLash drives and tAPE, called FLAPE. Flash drives, or solid-state drives (SSDs), offer better read-and-write performance, lower latency and lower power consumption when compared to a standard spinning disk. In traditional transactional environments, these factors make a big difference. But how do they affect an archive workflow? In this two-part series, I’ll show when you may want to use a disk or an SSD depending on your archiving workflow. Part 1 here focuses on disk and Part 2 focuses on SSDs.


Buffer or cache?

Recall that in the StoreEver tNAS solution using QStar Archive Storage Manager (ASM), there is a storage staging area where all data is written as it comes in from the NAS client or is read back from tape. Depending on your access requirements for the archive data on tape, this storage may be used more like a cache (enabling faster access to data for subsequent reads) or a buffer (temporary storage while data is moved to tape). In either case, reads from and writes to the tNAS solution work exactly the same way.


If you want fast access to recent archive data (either most recently written or most recently read) from the tNAS solution, you will need enough storage to hold an adequate amount of content while the data is in demand. In this case, the storage acts more like a cache: subsequent reads result in a “cache hit” and do not require a read from the tape itself for each access.
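The cache-hit behavior described above can be illustrated with a toy least-recently-used (LRU) cache. This is only a sketch of the concept, not QStar's actual implementation; the class name, file names, and capacity are all hypothetical:

```python
from collections import OrderedDict

# Toy illustration of the cache-hit idea: recently read archive files stay
# on the staging disk, so repeat reads are served without touching tape.
# Names, capacity, and data are hypothetical.
class StagingCache:
    def __init__(self, capacity):
        self.capacity = capacity      # number of files the cache can hold
        self.files = OrderedDict()    # filename -> data, in LRU order

    def read(self, name):
        if name in self.files:        # cache hit: serve from disk
            self.files.move_to_end(name)
            return self.files[name], "disk"
        data = f"<{name} from tape>"  # cache miss: simulate a tape read
        self.files[name] = data
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)  # evict least recently used
        return data, "tape"

cache = StagingCache(capacity=3)
cache.read("projectA")                # first read has to come from tape
_, source = cache.read("projectA")    # repeat read is a cache hit
print(source)                         # prints "disk"
```

The larger the staging storage, the more recently used projects survive in the cache, and the fewer reads fall through to a tape load.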


Perhaps you need just enough capacity, with enough throughput, for the staging storage to stream data from multiple clients to the tape drive. The data on the client is no longer active and just needs to be sent to the tape archive. Future reads, if any, would be so infrequent that the chance of a “cache hit” would be very low, so most reads would come from tape. In this scenario, only a small amount of storage is needed, and it acts more like a buffer: it merely holds the data for a short period while it is written to tape.
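A rough way to think about buffer sizing: the buffer must hold one batch of data plus whatever the clients write while that batch drains to tape. The sketch below uses illustrative numbers (the 160 MB/s figure is the assumed native streaming rate of one LTO-6 drive; the ingest rate and batch size are hypothetical, not HP sizing guidance):

```python
# Hypothetical buffer-sizing sketch for a tNAS staging area acting as a buffer.
DRIVE_MBPS = 160     # assumed native streaming rate of one LTO-6 tape drive
CLIENT_MBPS = 40     # assumed aggregate ingest rate from the NAS clients
BATCH_GB = 200       # assumed amount of data collected before a flush to tape

# Time to drain one batch to tape, during which clients keep writing:
drain_seconds = BATCH_GB * 1000 / DRIVE_MBPS
extra_gb = CLIENT_MBPS * drain_seconds / 1000

buffer_gb = BATCH_GB + extra_gb
print(f"Buffer needed: ~{buffer_gb:.0f} GB")  # ~250 GB under these assumptions
```

Because the drive streams faster than the clients ingest in this scenario, the buffer stays small, which is exactly why only modest staging storage is needed in a pure archive workflow.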


Disk for cache

A disk solution is probably the best choice if you want quick access to recently read or written data, much like a traditional cache environment. Reads from the tNAS solution would come directly from the disk without having to read the data from tape. With disks in a RAID configuration, you can create a very large cache with good performance.


You should size the disk storage to match your access needs. For example, if a typical video project you archive is 500 GB, you need to decide how many recently written projects you want available for fast access.


You also need to decide how many recalled projects you want available for fast access. In this case, if you want to keep the three most recent projects readily available, plus one more pulled back from tape, you would need at least 2 TB plus a small amount of space for the metadata.
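The arithmetic above can be written out as a minimal sizing sketch. The 500 GB project size and the 3 + 1 project counts come from the example; the metadata allowance is an assumed placeholder:

```python
# Cache-sizing sketch for the example above. Only the project size and
# counts come from the article; the metadata allowance is an assumption.
PROJECT_SIZE_GB = 500      # size of one video project (from the example)
RECENT_PROJECTS = 3        # most recently written projects kept hot
RECALLED_PROJECTS = 1      # projects recalled from tape kept hot
METADATA_GB = 10           # assumed small allowance for metadata

cache_gb = (RECENT_PROJECTS + RECALLED_PROJECTS) * PROJECT_SIZE_GB + METADATA_GB
print(f"Minimum cache size: {cache_gb / 1000:.2f} TB")  # prints "Minimum cache size: 2.01 TB"
```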


Keep in mind that this storage is “stateless” and can be recreated if a failure occurs. The metadata is stored frequently on tape and can be used to recreate the QStar host environment. It’s important that the solution be configured properly to ensure that data is written to tape before the original data on the client is deleted.



Using disk as a cache in front of your HP StoreEver tape library within the QStar tape-as-NAS solution allows quicker access to data that stays active for a period of time. Sizing this component of the solution depends on your specific workflow and access needs. If you use your tNAS archiving solution in the more traditional “write once, read maybe” scenario, an SSD buffer in front of your StoreEver tape library may be a better solution. I will explore that option, along with useful information on how to choose an appropriate SSD, in Part 2 of this series.




About the Author


Our team of Hewlett Packard Enterprise storage experts helps you to dive deep into relevant infrastructure topics.


As someone who uses StoreOnce NAS for backups, I would like to see transparent integration between StoreOnce and this tNAS, with StoreOnce being the point you'd read from and a sizable disk "cache" on the tNAS to buffer tape writes. (The last time I used tape was a few years ago, and the source media couldn't come close to what the tape drive was capable of in throughput, on paper anyway. That wasn't the worst thing in the world: it was a very, very busy primary storage system, and the slower backups meant other workloads weren't impacted when they ran.)


Also would love to be able to keep the data dehydrated as it goes to tape: back up the deduped bits, get more on there.


I hope to deploy tape again next year. We have backups solved and off-site backups solved; tape will give me offline backups, which I want. I want data backed up in a place that can't be accidentally or intentionally deleted without physically interacting with the tape media. (In the past I always backed up to duplicate tapes, one stored on site and the other off site; I plan to do the same this time around.)


I poked around for some docs on this tNAS the last time I saw your blog post, certainly has me interested, but I don't recall finding a whole lot of info on the topic at the time.


Thanks for your comments. Using StoreOnce as the tNAS cache is an interesting idea; we'll take a look at this concept. Using a tape library for deduped data comes with a set of challenges. First and foremost, rehydrating data may require reads of multiple objects. On a disk system these objects can be accessed readily. On tape, they could be spread across multiple cartridges, which would require multiple cartridge loads and tape reads, taking significant time to retrieve a requested file. The concept is good, but putting it into practice is not that easy. Thanks again for your interest and input.