Operating System - HP-UX

Best Practices for Terabyte file systems?

 
A. Daniel King_1
Super Advisor

Hi, folks -

Background: HP-UX 11.0, JFS 3.x ...

I am wondering what the best practices are for creating file systems. Specifically, how should one plan for file system growth on an extremely large system? I understand that there is a default limit of 16 physical volumes per VG, which cannot be changed after VG creation. Are there other gotchas? What should the physical extent size be set to, especially if TB+ file-system sizes are desirable? Are there formulas for inode allocation?

Indeed, what are the logical/theoretical limits of:

- Physical extent size (Depends upon disk size, perhaps?)
- Physical disk size? (Depends upon the state of drive technology, SCSI standards, other?)
- Total space in a volume group?
- Total space on a given logical volume?

This is less a specific question than a theoretical issue, though I would be interested in knowing about actual cases (i.e., vgcreate syntax/specifics providing for a Terabyte file system). I'd like to set myself up for the largest number of options as the system grows.

And, goodness, I forgot to even think about stripes ...

Thanks, all.
Command-Line Junkie
12 REPLIES
MANOJ SRIVASTAVA
Honored Contributor
Solution

Re: Best Practices for Terabyte file systems?

Hi Daniel


I am attaching print_manifest output from one of our TB systems. The basic principles we follow are these (based on our EMC frame):

1. Use the normal extents, i.e. the 4 MB ones.
2. Don't make a volume span more than 4 disks.
3. Use a combination of tables (preferably day-wise) to store data, and round-robin them.
4. For example, vg1-3 for dates 1-10, and so on.
5. Use alternate links.
6. Use PowerPath if possible.
7. Tune once the system is in production.
8. Of course, you need to start with at least 3-4 controllers, as I assume the data will grow.
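The principles above can be sketched as HP-UX LVM commands. This is only a hedged illustration: the device paths, VG name, and minor number are hypothetical placeholders, not values from the poster's system.

```shell
# Sketch of the principles above: default 4 MB extents, one VG
# spanning no more than 4 disks, plus alternate links for failover.
# All device paths and the VG name are example placeholders.
pvcreate /dev/rdsk/c4t0d0
pvcreate /dev/rdsk/c4t0d1
pvcreate /dev/rdsk/c4t0d2
pvcreate /dev/rdsk/c4t0d3

mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000

# -s 4 keeps the default 4 MB physical extent size
vgcreate -s 4 /dev/vg01 /dev/dsk/c4t0d0 /dev/dsk/c4t0d1 \
              /dev/dsk/c4t0d2 /dev/dsk/c4t0d3

# Alternate links: add a second hardware path to the same LUNs
vgextend /dev/vg01 /dev/dsk/c6t0d0 /dev/dsk/c6t0d1 \
                   /dev/dsk/c6t0d2 /dev/dsk/c6t0d3
```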


Manoj Srivastava



Christopher McCray_1
Honored Contributor

Re: Best Practices for Terabyte file systems?

Hello,

My following input isn't filled with technical information, but it may give you something to think about...

I recently attended a Veritas disaster recovery seminar in which the DR specialist mentioned one of his clients that "proudly" announced to him that they were his first customer to have terabyte filesystems... until they realized what a nightmare it was to back them up by tape and with replication. They quickly lost that distinction.

Just my $0.02

Chris
It wasn't me!!!!
James R. Ferguson
Acclaimed Contributor

Re: Best Practices for Terabyte file systems?

Hi Daniel:

While I have not dealt with terabyte file systems, I will offer some guidelines as I see them.

First, you are on-track recognizing that you get "one chance" (during 'vgcreate') to set your 'pe_size', 'max_pe', 'max_pv' and 'max_lv' so choose with the future in mind.

I don't recall any performance penalty for large logical volumes as opposed to smaller ones spread over a larger number of volume groups, except that about 4-8 KB of lockable kernel memory is consumed for each volume group structure.

Remember that if you need or want more than 10 volume groups, you will need to tune the 'maxvgs' kernel parameter.

Remember too, that the LVM structures must fit into one physical extent. Thus, you may find that you need to increase 'pe_size' if you are using large physical disks, and/or large values for 'max_pv' and/or 'max_pe'.
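The interplay of those three 'vgcreate' parameters sets a hard ceiling on volume group size, so it is worth doing the arithmetic before creating the VG. A small sketch, using assumed example values (not defaults you must use):

```shell
# VG capacity ceiling = pe_size (MB) * max_pe (per PV) * max_pv.
# The values below are illustrative assumptions for planning only.
pe_size_mb=16
max_pe=65535     # extents allowed per physical volume
max_pv=32        # physical volumes allowed in the VG

per_pv_gb=$(( pe_size_mb * max_pe / 1024 ))
vg_max_tb=$(( per_pv_gb * max_pv / 1024 ))

echo "per-PV ceiling: ${per_pv_gb} GB"
echo "VG ceiling:     ~${vg_max_tb} TB"
```

With 16 MB extents the ceiling is roughly 1 TB per PV, which is why large disks or TB+ plans usually force 'pe_size' above the 4 MB default.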

There's a good paper from Oracle which summarizes to "SAME" - Stripe-And-Mirror-Everything. The paper is titled "Optimal Storage Configuration Made Easy":

http://technet.oracle.com/deploy/availability/pdf/oow2000_same.pdf

I'd certainly plan carefully, since with any kind of striping (true striping, or extent-based striping) expanding a filesystem's size essentially means doubling the physical disks you started with in order to match LVM's rules.
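For concreteness, a minimal sketch of a true-striped logical volume (VG name, sizes, and stripe parameters are assumptions for illustration):

```shell
# True striping across 4 PVs with a 64 KB stripe size; -L is in MB,
# so 409600 MB = 400 GB. Extending later needs free extents spread
# across a multiple of the stripe width, hence the caution above.
lvcreate -i 4 -I 64 -L 409600 -n lvdata /dev/vg01

# largefiles is needed on JFS/VxFS if individual files may exceed 2 GB
newfs -F vxfs -o largefiles /dev/vg01/rlvdata
```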

If, down the road, you find yourself needing to expand, one other option that does work is to use 'lvmerge' to replicate your filesystem from one set of disks to another. As long as the replication (which is really mirroring) involves identically sized filesystems (which is enforced), then you can relax some of these restraints.
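A hedged sketch of that mirror-based replication idea (this assumes MirrorDisk/UX is installed; names and paths are placeholders, and the exact split/merge naming may differ on your release):

```shell
# Mirror the lvol onto the new disk, then split the copy off.
lvextend -m 1 /dev/vg01/lvdata /dev/dsk/c8t0d0   # add mirror on new disk
lvsplit /dev/vg01/lvdata                          # split off the copy
# lvsplit conventionally names the split copy with a 'b' suffix
# (e.g. /dev/vg01/lvdatab); lvmerge would rejoin it later.
```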

In any event, these are some suggestions for you which may help.

Regards!

...JRF...
Martin Johnson
Honored Contributor

Re: Best Practices for Terabyte file systems?

The most challenging task is doing timely backups of the data. With 15 TB and growing, it is a daunting task. We have gone to SAN storage and use robotic Super DLT tape drives. There are issues with network bandwidth on the SAN while doing multiple backups.

Marty
A. Daniel King_1
Super Advisor

Re: Best Practices for Terabyte file systems?

Thanks for the quick replies!

Is there an exceptionally good backup solution for a 15TB+ system?
Command-Line Junkie
Dave La Mar
Honored Contributor

Re: Best Practices for Terabyte file systems?

A follow-up to Christopher's comment.
We currently have a 0.75 TB Oracle DB.
The backup is scripted to break the backup into four groups of lvols. If and when we add more drives, this will be broken up further.
We found out the "hard way" that this was the only solution to a backup or restore in a reasonable amount of time.
(Veritas NetBackup/HP SureStore 10/180)
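The group-splitting approach above can be sketched generically. The group names and the backup command are placeholders (the poster's script is not shown); the sketch defaults to a dry run via echo, and you would substitute fbackup, vxdump, or a NetBackup class invocation:

```shell
# Run backup groups in parallel; BACKUP defaults to echo (dry run).
BACKUP=${BACKUP:-echo}

run_groups() {
    for group in group1 group2 group3 group4; do
        $BACKUP "backing up $group" &    # each group runs concurrently
    done
    wait                                 # block until all groups finish
}

out=$(run_groups)
echo "$out"
```

Splitting into independent groups is what lets a restore of one group proceed without reading the whole 0.75 TB.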

Best of luck.
dl
"I'm not dumb. I just have a command of thoroughly useless information."
Pete Randall
Outstanding Contributor

Re: Best Practices for Terabyte file systems?

While there are many choices, I prefer OmniBack - primarily because of the excellent support available from HP.

Pete

Pete
Paula J Frazer-Campbell
Honored Contributor

Re: Best Practices for Terabyte file systems?

Hi

With that much data, use a SAN and a dedicated LAN (or LANs).


You can use push and pull to get the data copied across ASAP.


Paula
If you can spell SysAdmin then you is one - anon
Christopher McCray_1
Honored Contributor

Re: Best Practices for Terabyte file systems?

Hello, again,

We use, and will expand our use of, Veritas NetBackup with an ATL P7000. We still have backup groups using SureStore 4/40s, etc. We are also going to a Gig-E network and a SAN.

Chris
It wasn't me!!!!
Jeff Schussele
Honored Contributor

Re: Best Practices for Terabyte file systems?

Hi,

In multi-TB systems a dedicated backup LAN is not just desirable, it's mandatory. And you really must use gigabit equipment.
We have DBs approaching 4 TB and are always fighting to stay within the backup window. I'm not sure exactly how the data is sliced and diced when it's backed up, as our backup team handles that. I just know that on our systems with > 1 TB of data we use 1000SX cards on a dedicated LAN exclusively. There's no way we could back up daily without this architecture.

My 2 cents,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Martin Johnson
Honored Contributor

Re: Best Practices for Terabyte file systems?

Backing up 15 TBs+:

Mirror, quiesce activity, break the mirror, then back up each disk separately (backing up multiple disks concurrently).
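As one hedged sketch of that sequence on HP-UX (assumes MirrorDisk/UX and an already-mirrored lvol; names, mount points, and the quiesce step are site-specific placeholders):

```shell
# 1. Quiesce the application (e.g. put the database in backup mode).
lvsplit /dev/vg01/lvdata            # break off one mirror copy
# 2. Resume the application; the split copy is a frozen image.
mount -F vxfs /dev/vg01/lvdatab /backup_mnt
# ... back up /backup_mnt here, concurrently with other split copies ...
umount /backup_mnt
lvmerge /dev/vg01/lvdatab /dev/vg01/lvdata   # resync the mirror copy
```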

Marty
Deshpande Prashant
Honored Contributor

Re: Best Practices for Terabyte file systems?

Hi,
If possible, use the Business Continuity disks, or try database online backups (RMAN with Oracle).
If you have multiple machines, use dedicated LANs or share the drives using a SAN.

Thanks.
Prashant.
Take it as it comes.