StoreEver Tape Storage
07-31-2008 05:55 AM
LTO-4 under HP-UX 11.11 and NBU 6.0mp4 - Poor Throughput?
Fibre-connected LTO drives, currently on 2 Gbit links.
When I run dd if=/dev/zero of=/dev/rmt/55mn (or the BEST device file) with block sizes ranging from 256k to 1024k, I can verify that my LTO drives reach the rated speed of their 2 Gbit hookup: 100 to 120 MB/s.
I also have test filesystems, likewise 2 Gbit connected, on a fast storage array. dd tests from both non-striped and concatenated volumes show I can read at close to fibre speed, 100 to 120 MB/s.
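The dd checks described above can be sketched roughly as below. This is a hedged sketch: TARGET stands in for the tape device file (/dev/rmt/55mn in the post) so the commands can be tried anywhere; point it at the real device file, and a real source file, to exercise actual hardware. On HP-UX, wrapping each dd in timex gives elapsed seconds, so MB/s = bytes transferred / elapsed time.

```shell
# Stand-in target for the tape device file; use /dev/rmt/55mn (or the
# BEST density file) on a real system.
TARGET=${TARGET:-/tmp/dd_target}

# Write test: stream zeros in 256 KB blocks (the post swept 256k-1024k).
dd if=/dev/zero of="$TARGET" bs=256k count=16

# Read test: pull the data back into the bit bucket at the same block size.
dd if="$TARGET" of=/dev/null bs=256k
```

Because /dev/zero compresses almost perfectly, the write test measures the interface and drive buffer path rather than realistic media throughput; it is still a fair way to confirm the 2 Gbit link itself is not the bottleneck.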
Under NBU, I set up the LTO drives as type HCART per the Symantec documentation, using the latest device mapping file to auto-discover and type the drives, which it did, as HCART.
But when backing up the filesystems I get a mere 8 to 10 MB/s. Even if I back up the raw device beneath the filesystem (using NBU Advanced Client), my throughput stays at 8 to 10 MB/s.
However, I can stream up to 12 of these filesystems to a single LTO drive and still keep the same per-filesystem throughput, which means the drive is being fed, and can digest data, at or near its rated speed.
I am puzzled. Is my issue with the NBU configuration? I think I have my SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS set up properly: 25k and 32 currently.
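For reference, NetBackup reads those tunables from plain "touch files" on the media server; the documented names are SIZE_DATA_BUFFERS (bytes per buffer, which should be a multiple of 1024) and NUMBER_DATA_BUFFERS. The sketch below uses a scratch directory via CFG so it runs anywhere; the real path is /usr/openv/netbackup/db/config, and the values shown (256 KB, 32 buffers) are illustrative common choices, not a claim about the poster's correct setting.

```shell
# Scratch directory stand-in; real path is /usr/openv/netbackup/db/config
CFG=${CFG:-/tmp/nbu_config_demo}
mkdir -p "$CFG"

# Bytes per tape buffer; 262144 = 256 KB, a common LTO starting point.
echo 262144 > "$CFG/SIZE_DATA_BUFFERS"
# Number of shared memory buffers per drive.
echo 32 > "$CFG/NUMBER_DATA_BUFFERS"

# Show what was written; the bptm log reports the values actually in use.
cat "$CFG/SIZE_DATA_BUFFERS" "$CFG/NUMBER_DATA_BUFFERS"
```

The values take effect for new backup jobs after they are written; checking the bptm log confirms which buffer size and count the media server picked up.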
Initially I thought it was my storage layout, my VxVM or LVM stripes and configuration and kernel tunables, but the dd tests show those filesystems and volumes can be read at very high rates. Why does NBU seem unable to pull data that fast?
Any advice?
Hakuna Matata.