06-06-2004 10:22 PM
Fragmented Disks...
Am I going to have to do a full file backup and restore to sort this out, or are there any other options? I guess this would take forever.
06-06-2004 10:32 PM
Re: Fragmented Disks...
If your defragmenter can't run, you could find out why it doesn't work.
Otherwise you could:
a) run another defrag tool (I've seen DFU on the HP page but I've never used it);
b) make a full backup/image and restore.
Obviously, you can only make the backup with no users connected to this disk (don't use /IGN=INTERLOCK).
There is no native VMS command to do a defrag.
Antonio Vigliotti
06-06-2004 10:35 PM
Re: Fragmented Disks...
If a file is open, it will not be a candidate for defragmentation. Can you dismount the disk and then do an offline defragment?
Thanks & regards,
Lokesh Jain
06-06-2004 10:46 PM
Re: Fragmented Disks...
Nobody was on the system at the time and I closed all open files.
Anyway, it should still run; it would just skip the open file and defrag the rest.
Peter
06-06-2004 10:51 PM
Re: Fragmented Disks...
Obviously backup and restore is an option, but the only problem is downtime, and I'm not sure how long this would take with there being over 160,000 files on this disk.
Maybe I will try different defrag software...
Peter
06-07-2004 01:07 AM
Re: Fragmented Disks...
If you have 160K files on the disk, I would guess they are not really big files.
So, WHAT exactly IS your fragmentation?
I suspect that the small files are NOT (or hardly) fragmented. Maybe it is your free space that is fragmented?
Related question: what is your _activity_ on these files? Are they fairly static (with perhaps some additional creation), are they continuously created & deleted, or are they static but growing?
IF your files are NOT fragmented, but only free space is, THEN is this really a problem? (It IS if your new files are BIGGER than the average small free fragment, but you just don't care if a new file will, most of the time, fit in the next available free fragment.) Accessing existing files is not in any way influenced by defragging free space.
If most of your files are slowly growing, THEN you are continuously generating file fragmentation.
The best way to cope with that is to have RMS do most of your work: make sure your caches are big enough (look through AUTOGEN's ACP_xxx reports, and remember AUTOGEN still tries to conserve memory, so you can be generous in adjusting the values; that uses memory to buy performance) and enable cathedral windows (SYSGEN ACP_WINDOW = 255).
Setting the volume's extension quantity to slightly bigger than your typical file growth per open-grow-close cycle will help you more than defragging.
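For reference, a rough DCL sketch of those knobs (the device name and the 500-block extension value are just illustrative, and the cleaner route for the SYSGEN parameter is MODPARAMS.DAT plus AUTOGEN):
$ SET VOLUME/EXTENSION=500 DKA100:    ! volume default extend quantity, in blocks
$ RUN SYS$SYSTEM:SYSGEN               ! cathedral windows; takes effect at next boot
SYSGEN> USE CURRENT
SYSGEN> SET ACP_WINDOW 255
SYSGEN> WRITE CURRENT
SYSGEN> EXIT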
Bottom line: on a well-configured VMS system, fragmented disks tend to have much less impact than on most other OSes.
IF you have heavily fragmented, randomly accessed, big files, then defragging can give you a gain.
If your free space is fragmented into chunks that are smaller than your typical allocation (or extent) quantity, THEN you SHOULD defrag.
So: what are you trying to achieve, and IS defragging really the tool to achieve your goal?
hth
Jan
06-07-2004 01:18 AM
Re: Fragmented Disks...
> 160K files - in how many directories?
There can be quite a severe performance penalty if directory files (*.DIR) are over a certain size (I think the threshold is 127 blocks). Defragmentation - if any - won't help in that case, nor would backup/restore.
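If you want to check for oversized directories, something like this should list them (the device name is a placeholder and the /SELECT syntax is from memory, so verify with HELP DIRECTORY):
$ DIRECTORY/SIZE=ALL/SELECT=SIZE=MINIMUM=127 DKA100:[000000...]*.DIR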
Another thought:
DFU requires some free space on the disk; I'm not sure how much, but it will complain if there is too little. It might be 'just enough' - but that could actually be 'just too little' for defragmentation to be possible. You won't see it, but the defragmentation will not take place.
OpenVMS Developer & System Manager
06-07-2004 01:41 AM
Re: Fragmented Disks...
I have just tried installing and running DFU, but that wouldn't run either.
See attached for fragmentation stats....
Over 229000 unwanted fragments!!!!!!
Peter
06-07-2004 02:22 AM
Re: Fragmented Disks...
> I have recently been trying to improve system performance on my machine (Alpha ES40). I decided to try and defrag the disks,
What performance indication did you use to decide that you had a fragmentation problem?
> on five of the disks it has done the job and improved the state of the disk greatly.
Great. Did it change the performance of the system at all?
> Am i going to have to do a full file backup and restore to sort this out or is there any other options as this would take forever i guess??
Does that disk get a 'normal' share of the IO (10% - 20% of all the IO as per MONI DISK)?
If it is below 5% you could try just ignoring that disk, as its overall impact on system performance will be limited.
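For example:
$ MONITOR DISK                       ! I/O operation rate per disk
$ MONITOR DISK/ITEM=QUEUE_LENGTH     ! average I/O queue depth per disk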
Thanks for the fragmentation report listing, but some of us are not so familiar with the Executive Software products' output.
Maybe attach a DFU or DFO (also free) report in a future reply?
That "# of Unwanted Fragments: 229466" seems to largely map to "Avg Fragments per File" minus 1, times "Total Number of Files".
So it wants to turn 2.5 fragments into 1.0 fragments. Considering that you have a large number of small files on this drive, this might not be a critical problem. Are you reading the disk in general with normal (16 or 32 block) IOs, or are you trying to read those files with large IOs (100+ blocks)?
The large reads would indeed appear to suffer unreasonably on this disk.
Even at 32, every other virtual IO will turn into 2 physical IOs, which will be a problem on a heavily used disk.
The fragments for the little files will easily fit in a single header, will not cause too much extra management, and will not cost too much if your usage leans to small IOs (check the XFC histograms! Powerful stuff!).
You do have a fair number of extension headers, which might be caused by fragments or by lots of ACLs/ACEs. The latter cannot be changed by defragging.
Finally, your biggest file is over 1GB. It could be the single worst offender, and it would be pointless to try and defrag it when all you have is lots of little chunks to build from. Please analyse that file in detail. How many headers/fragments? My current recommendation is to copy that large file away to another spindle, delete it, defrag the remaining disk, and move it back if you like. Is it by any chance an indexed file?
If it is, then you may want to use CONVERT to move the file out (to a single-key, highly compressed, large-bucket file) and CONVERT it back to the normal FDL definition (retuned for appropriate ALLOCATION and EXTENSION). Check out my 'indexed_file_backup' tool on the VMS freeware:
http://h71000.www7.hp.com/freeware/freeware50/rms_tools/indexed_file_backup.exe
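As a rough sketch of that CONVERT round trip (file names are made up; you would retune the FDL for ALLOCATION/EXTENSION before converting back):
$ ANALYZE/RMS_FILE/FDL BIGFILE.IDX                          ! writes BIGFILE.FDL
$ EDIT/FDL/ANALYSIS=BIGFILE.FDL/NOINTERACTIVE BIGFILE.FDL   ! let EDIT/FDL optimize it
$ CONVERT/FDL=BIGFILE.FDL/STATISTICS BIGFILE.IDX BIGFILE_NEW.IDX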
Good luck!
Hein.
06-07-2004 02:23 AM
Re: Fragmented Disks...
I just had a quick look at DFU's output.
Seems that the largest contiguous amount of free space is only 6475 blocks. That is too small for a disk defragger to do any work.
I think that the only decent thing to do is a BACKUP/INIT/restore operation.
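For the record, the image-copy variant looks roughly like this (device names are placeholders, and it needs a window where nobody has files open on the source disk):
$ MOUNT/FOREIGN DKA200:               ! scratch output disk
$ BACKUP/IMAGE/VERIFY DKA100: DKA200: ! output is (re)initialized; files come out contiguous
Afterwards either swap the disks or image-copy back the same way.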
Greetz,
Kris
06-07-2004 08:41 PM
Re: Fragmented Disks...
Peter
06-07-2004 09:13 PM
Re: Fragmented Disks...
06-07-2004 09:25 PM
Re: Fragmented Disks...
The large file is actually an Oracle tablespace.
With the given fragmentation, I'm not surprised at the performance problems.
I agree with Kris: your only alternative is to reorganize this tablespace, by allocating a new file of the required size (plus a bit more) and copying all the data into it. Your Oracle DBA should know how to achieve this within Oracle.
Willem
OpenVMS Developer & System Manager
06-08-2004 01:43 AM
Re: Fragmented Disks...
$1$DKA5: ( 2637824/2637845 blocks; 25691 fragments)
That's pretty darn bad.
Find an opportune moment to shut down the DB or to take that tablespace offline.
Now copy that file to a different drive with good big chunks of free space.
Next, delete the original and defrag the original disk.
Then either copy it back and use it, or just use the Oracle rename function or a new control file to point to the new location, and bring Oracle and/or the tablespace back online.
Your DBA (you!?) will know to generate a backup control file 'just in case', and to generate a file number/name/tablespace listing, again 'just in case'.
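In DCL terms, roughly (hypothetical device and file names):
$ COPY/CONTIGUOUS $1$DKA5:[ORACLE]BIGFILE.DBS $1$DKA6:[ORACLE]*   ! park it on a disk with room
$ DELETE $1$DKA5:[ORACLE]BIGFILE.DBS;*
$ ! ... defragment $1$DKA5: with your defragmenter or DFU ...
$ COPY/CONTIGUOUS $1$DKA6:[ORACLE]BIGFILE.DBS $1$DKA5:[ORACLE]*   ! or repoint Oracle at the new location instead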
Enjoy!
Hein.
06-08-2004 03:41 AM
Re: Fragmented Disks...
06-08-2004 08:28 AM
Re: Fragmented Disks...
Delete the physical tablespace file(s).
Defragment as best as you can to get the largest free space(s) that you can achieve.
Startup the database exclusive. Create the tablespace in smaller contiguous chunks.
e.g. if you have free space "chunks" of
1G, 1.1G, and 1.2G
CREATE TABLESPACE XXX ... SIZE 1200M
then
ALTER TABLESPACE XXX add datafile SIZE 1100M;
then
ALTER TABLESPACE XXX add ... SIZE 1000M;
one at a time.
OpenVMS builds Oracle tablespaces (which version of Oracle are you using?) best_try_contiguous.
After each create or add datafile,
$ DUMP/HEAD/RECORD=END:0 XXX*.DBF
to examine the fragmentation of each datafile.
If you end up with a fragmented disk file, shut down Oracle, DFU the file, start Oracle back up, etc.
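If I remember the DFU syntax right, that per-file step is something like the line below (check DFU's built-in HELP; the file name is only an example):
$ DFU DEFRAGMENT DKA5:[ORACLE]EURP01.DBF   ! assumes DFU is defined as a foreign command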
Jim, Data911 OpenVMS and Database Manager
Alameda, CA, USA
06-08-2004 03:08 PM
Re: Fragmented Disks...
the next question you might want to ask yourself is whether you actually want your Oracle DB space on the same spindle as all the many little files occupying this disk. Oracle can be pretty demanding on I/O, and mixing that with I/O to the other files does not sound like a good idea (this does assume that those files are actively used and not just sitting around collecting dust ;-)
Greetings, Martin
06-08-2004 09:19 PM
Re: Fragmented Disks...
How did your database _BECOME_ so fragmented?
You either CREATED a big database on an already very fragmented disk (which you are now paying the price of correcting, and should know how to prevent in future)
OR
you created (way back when, maybe?) a much smaller database, which has been steadily growing in small chunks.
If THAT is the case, and your typical application use is expected to continue more or less the same, then _NOW_ is the time to think about the future!
It seems VERY desirable that you now estimate your database growth for the next 'period', and pre-allocate just over enough for that.
And 'period' should be the time after which you will be prepared to repeat this exercise of internal & external defragging, and forward-looking re-sizing, of your database.
HTH,
Jan
06-08-2004 09:42 PM
Re: Fragmented Disks...
The two tablespaces are called:
EURP_TMP1.DBS
EURP_RBS.DBS
Peter
06-08-2004 09:52 PM
Re: Fragmented Disks...
The largest contiguous space is much too small to defragment properly. Is it possible that some fragmented file (e.g. a log file) is keeping lots of fragments in use all over the disk, breaking up every large contiguous space and thus preventing larger files from being defragmented?
Check on every node of the cluster:
$ show dev xxx/files
and try to get the users keeping those files open to close them.
Also check the defragmenter log file: it may tell you why it is not defragmenting.
Also try to clean up the disk ... if possible.
Wim
06-09-2004 02:30 AM
Re: Fragmented Disks...
> The two tablespaces are called:
>
> EURP_TMP1.DBS
> EURP_RBS.DBS
Looks like you are lucky. If those DBS files live up to their names then they will not contain production data. Any competent DBA can just drop them and recreate them. First in a different spot, then you do your thing, then (if needed) recreate them in the 'right place'.
I'll attach a little SQL script to summarize the database file usage.
Hein.
column type format a4
column Tablespace format a15
column file format a45
column id format 99
column mb format 999999
set pages 9999
set heading off
set FEEDBACK off
select 'Redo', 'group ' || l.group# "Tablespace", l.group# "Id", l.bytes/(1024*1024) "MB",
       MEMBER "File" from v$logfile f, v$log l where l.group# = f.group#
union
select 'Data' "Type", tablespace_name "Tablespace", FILE_ID "Id", bytes/(1024*1024) "MB",
       file_name "File" from dba_data_files
union
select 'Temp' "Type", tablespace_name "Tablespace", FILE_ID "Id", bytes/(1024*1024) "MB",
       file_name "File" from dba_temp_files
union
select 'Ctrl' "Type", 'Control_file' "Tablespace", rownum "Id", 0 "MB",
name "File" from v$controlfile
order by 1,2
/