Best Practices for Terabyte file systems?
06-20-2002 08:40 AM
Background: HP-UX 11.0, JFS 3.x ...
I'm wondering what the best practices are for creating file systems. Specifically, how should one plan for file system growth on an extremely large system? I understand that there is a default limit of 16 physical volumes per VG, which cannot be changed after VG creation. Are there other gotchas? What should the physical extent size be set to, especially if TB+ file system sizes are desirable? Are there formulas for inode allocation?
Indeed, what are the logical/theoretical limits of:
- Physical extent size (Depends upon disk size, perhaps?)
- Physical disk size? (Depends upon the state of drive technology, SCSI standards, other?)
- Total space in a volume group?
- Total space on a given logical volume?
This is less a specific question than a theoretical issue, though I would be interested in knowing about actual cases (i.e., vgcreate syntax/specifics providing for a Terabyte file system). I'd like to set myself up for the largest number of options as the system grows.
And, goodness, I forgot to even think about stripes ...
Thanks, all.
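For the "vgcreate syntax/specifics" part of the question, here is a hypothetical sketch of creating a VG with headroom for TB+ growth. The device paths, extent size, and limits below are placeholders chosen for illustration, not a tested recipe:

```shell
# Hypothetical sketch only: device files and sizes are placeholders.
pvcreate /dev/rdsk/c1t0d0

mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000

# -s: extent size (MB); -p: max PVs; -e: max extents per PV.
# None of these can be raised after vgcreate, so size them generously.
vgcreate -s 16 -p 64 -e 65535 /dev/vgdata /dev/dsk/c1t0d0

# Add more PVs later as the VG grows.
vgextend /dev/vgdata /dev/dsk/c2t0d0

# A 1 TB logical volume and a VxFS filesystem with large-file support.
lvcreate -L 1048576 -n lvol_big /dev/vgdata
newfs -F vxfs -o largefiles /dev/vgdata/rlvol_big
```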
Solved!
06-20-2002 08:55 AM
Solution
I am attaching a print manifest output of one of our TB systems. The basic principles we follow are these (based on our EMC frame):
1. Use the normal extents, i.e. the 4 MB ones.
2. Don't make a volume span more than 4 disks.
3. Use a combination of tables (preferably day-wise) to store data, and round-robin them.
4. For example, vg1-3 for dates 1-10, and so on.
5. Use alternate links.
6. Use PowerPath if possible.
7. Tune once the system is in production.
8. Of course, you need to start with at least 3-4 controllers, as I assume the data will grow.
Manoj Srivastava
06-20-2002 08:59 AM
Re: Best Practices for Terabyte file systems?
My following input isn't filled with technical information, but it may give you something to think about...
I recently attended a Veritas disaster recovery seminar in which the DR specialist mentioned one of his clients who "proudly" announced to him that they were his first customer to have terabyte filesystems... until they realized what a nightmare it was to back them up by tape and using replication. They quickly lost that distinction.
Just my $0.02
Chris
06-20-2002 09:07 AM
Re: Best Practices for Terabyte file systems?
While I have not dealt with terabyte file systems, I will offer some guidelines as I see them.
First, you are on-track recognizing that you get "one chance" (during 'vgcreate') to set your 'pe_size', 'max_pe', 'max_pv' and 'max_lv' so choose with the future in mind.
I don't recall any performance penalty for large logical volumes as opposed to smaller ones on a larger number of volume groups, except that about 4-8KB of lockable kernel memory is consumed for each volume group structure.
Remember that if you need or want more than 10 volume groups, you will need to tune the 'maxvgs' kernel parameter.
Remember too, that the LVM structures must fit into one physical disk extent. Thus, you may find that you need to increase 'pe_size' if you are using large physical disks, and/or large values for 'max_pv' and/or 'max_pe'.
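To put rough numbers on those limits, here is a back-of-the-envelope calculation. It assumes the commonly cited 11.0 LVM ceiling of 65535 extents per PV, and uses the 16-PV default from the original question for max_pv:

```shell
# Back-of-the-envelope LVM sizing (assumed ceiling: max_pe <= 65535
# extents per PV; max_pv left at the 16-PV default from the question).
pe_size_mb=4
max_pe=65535
max_pv=16

# Largest addressable single PV: pe_size * max_pe.
pv_limit_gb=$(( pe_size_mb * max_pe / 1024 ))
echo "Max PV size at ${pe_size_mb} MB extents: ${pv_limit_gb} GB"

# Upper bound on the whole VG: pe_size * max_pe * max_pv.
vg_limit_gb=$(( pe_size_mb * max_pe * max_pv / 1024 ))
echo "Max VG size: ${vg_limit_gb} GB"
```

Doubling 'pe_size' to 8 MB doubles both ceilings, which is why large disks force a larger extent size at 'vgcreate' time.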
There's a good paper from Oracle which summarizes to "SAME" - Stripe-And-Mirror-Everything. The paper is titled "Optimal Storage Configuration Made Easy":
http://technet.oracle.com/deploy/availability/pdf/oow2000_same.pdf
I'd certainly plan carefully, since with any kind of striping (true striping, or extent-based striping) expanding a filesystem's size essentially means doubling the physical disks that you start with in order to match LVM's rules.
If, down the road, you find yourself needing to expand, one other option that does work is to use 'lvmerge' to replicate your filesystem from one set of disks to another. As long as the replication (which is really mirroring) involves identically sized filesystems (which is enforced), then you can relax some of these restraints.
In any event, these are some suggestions for you which may help.
Regards!
...JRF...
06-20-2002 09:10 AM
Re: Best Practices for Terabyte file systems?
Marty
06-20-2002 10:11 AM
Re: Best Practices for Terabyte file systems?
Is there an exceptionally good backup solution for a 15TB+ system?
06-20-2002 10:17 AM
Re: Best Practices for Terabyte file systems?
We currently have a 0.75 TB Oracle DB. The backup is scripted to break the backup into four groups of lvols. If and when we add more drives, this will be broken up further.
We found out the "hard way" that this was the only solution to a backup or restore in a reasonable amount of time.
(Veritas Netbackup/HPSurestore 10/180)
Best of luck.
dl
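The split-into-groups idea above can be sketched with generic parallel tar streams. This is a hypothetical illustration in a scratch directory; the paths and group membership are stand-ins for real lvol mount points and a real backup tool:

```shell
#!/bin/sh
# Hypothetical sketch: run one backup stream per group of lvols in
# parallel, so total wall-clock time approaches that of the largest
# group. A real script would list the actual mount points per group.
set -e
WORK=$(mktemp -d)

# Stand-ins for two groups of mounted lvols.
mkdir -p "$WORK/g1/lvol1" "$WORK/g2/lvol2"
echo data1 > "$WORK/g1/lvol1/file"
echo data2 > "$WORK/g2/lvol2/file"

# One tar stream per group, backgrounded; 'wait' collects them all.
( cd "$WORK" && tar -cf group1.tar g1 ) &
( cd "$WORK" && tar -cf group2.tar g2 ) &
wait

ls "$WORK"/group1.tar "$WORK"/group2.tar
```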
06-20-2002 10:17 AM
Re: Best Practices for Terabyte file systems?
Pete
06-20-2002 10:18 AM
Re: Best Practices for Terabyte file systems?
With that much data, use a SAN and dedicated LAN/LANs.
You can use push and pull to get the data copied across ASAP.
Paula
06-20-2002 10:22 AM
Re: Best Practices for Terabyte file systems?
We use, and will expand our usage of, Veritas NetBackup with an ATL P7000. We still have backup groups using SureStore 4/40s, etc. We are also going to a Gig-E network and SAN.
Chris
06-20-2002 10:35 AM
Re: Best Practices for Terabyte file systems?
In multi-TB systems a dedicated backup LAN is not just desirable - it's mandatory. And you really must use gigabit equipment.
We have DBs approaching 4 TB and are always fighting to stay within the backup window. I'm not sure exactly how the data is sliced and diced when it's backed up, as our Backup team handles that. I just know that on our systems with > 1 TB of data we use 1000SX cards on a dedicated LAN exclusively. There's no way we could back up daily without this architecture.
My 2 cents,
Jeff
06-20-2002 11:21 AM
Re: Best Practices for Terabyte file systems?
Mirror, quiesce activity, break the mirror, and back up each disk separately (backing up multiple disks concurrently).
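On HP-UX those steps map naturally onto 'lvsplit'/'lvmerge'. A hypothetical sketch (the lvol path is a placeholder, and the lvol is assumed to already be mirrored, e.g. via 'lvcreate -m 1'):

```shell
# 1. Quiesce the application / put the database in backup mode.

# 2. Split off one mirror copy; with suffix "b", lvsplit creates
#    /dev/vg01/lvol1b from the mirrored /dev/vg01/lvol1.
lvsplit -s b /dev/vg01/lvol1

# 3. Check and mount the split copy read-only, then back it up.
fsck -F vxfs /dev/vg01/rlvol1b
mount -r /dev/vg01/lvol1b /backup_mnt
# ... run the backup of /backup_mnt here (multiple streams in parallel) ...
umount /backup_mnt

# 4. Re-attach and resynchronise the mirror.
lvmerge /dev/vg01/lvol1b /dev/vg01/lvol1
```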
Marty
06-20-2002 11:59 AM
Re: Best Practices for Terabyte file systems?
If possible, use Business Continuity disks, or try using database online backups (RMAN with Oracle).
If you have multiple machines, use dedicated LANs and share the drives using SANs.
Thanks.
Prashant.