Defragmentation utility on jfs
02-08-2001 02:30 AM
Is doing defragmentation of that filesystem useful or not? Is it possible to do it online?
Thanks
Regards
Roberto
02-08-2001 02:45 AM
Solution
Here are some "Student Workbook" lines:
Because blocks are allocated and deallocated as files are added, removed, expanded and truncated, block space can become fragmented. This can make it more difficult for JFS to take advantage of the benefits provided by contiguous extent allocation.
The fsadm utility will bring the fragmented extents of files closer together and group them by type and frequency of access.
fsadm -F vxfs -D /mountpoint : report on directory fragmentation
fsadm -F vxfs -E /mountpoint : report on file extent fragmentation
fsadm -F vxfs -d /mountpoint : directory defragmentation
fsadm -F vxfs -e /mountpoint : file extent defragmentation
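As a rough illustration of how these fit together in practice (the /home mount point is only an example, and option behaviour should be confirmed against fsadm_vxfs(1M) on your release), a typical pass on a mounted, online filesystem looks like:
# report fragmentation first, on the mounted (online) filesystem
fsadm -F vxfs -E /home
fsadm -F vxfs -D /home
# then defragment extents and directories in one run
fsadm -F vxfs -e -d /home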
If your mountpoint contains Oracle datafiles, defragmentation is not useful.
regards,
Patrice.
02-08-2001 03:05 AM
Re: Defragmentation utility on jfs
But why is defragmentation not useful for Oracle files?
02-08-2001 03:09 AM
Re: Defragmentation utility on jfs
Bests,
Fred.
02-08-2001 04:47 AM
Re: Defragmentation utility on jfs
It is not always true that filesystems containing Oracle files don't need to be defragmented.
If any of your datafiles has the AUTOEXTEND flag turned on, it could become fragmented.
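As a sketch only (the /oradata mount point is a placeholder and the sqlplus connection syntax may differ on your site), you could list which datafiles are allowed to autoextend and then look at the extent fragmentation report for the filesystem that holds them:
# which datafiles can grow on their own?
sqlplus -s "/ as sysdba" <<'EOF'
SELECT file_name, autoextensible FROM dba_data_files;
EOF
# how fragmented are the extents on the filesystem holding them?
fsadm -F vxfs -E /oradata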
Best regards,
Dan
02-08-2001 02:33 PM
Re: Defragmentation utility on jfs
You can defragment within Oracle, and that is a worthwhile task.
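One example of what "defragmenting within Oracle" can mean, offered only as a sketch (the tablespace name USERS is a placeholder, and whether coalescing helps depends on how your tablespaces are managed), is coalescing adjacent free extents in a dictionary-managed tablespace:
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users COALESCE;
EOF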
02-08-2001 08:57 PM
Re: Defragmentation utility on jfs
One of the bigger causes of fragmentation is running a file system near its upper limits. If you attempt to create a large file and the file system is past 90% full, there is a good chance that there isn't enough contiguous space for your file to be created unfragmented.
I typically set a cron job to defragment the root file system areas once a month (usually mid-month). Data areas that change in the way I've noted above I also process monthly, after all the end-of-month jobs have completed and a full backup is done. A while back some people were telling horror stories about this, but I have not seen any problems. Again, it's a good idea to have your backups completed before you start, and you certainly don't want to be fighting large amounts of I/O from normal system activity.
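A minimal sketch of the kind of monthly cron job described above, assuming the mount point, schedule, and log file are adapted to your own system (see crontab(1) and fsadm_vxfs(1M)):
# crontab entry: defragment extents and directories on /home at 02:00 on the 15th of each month
0 2 15 * * /usr/sbin/fsadm -F vxfs -e -d /home >> /var/adm/fsadm.log 2>&1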