
Fragmented Disks...

 
Peter Clarke
Regular Advisor

Fragmented Disks...

I have recently been trying to improve system performance on my machine (Alpha ES40). I decided to try to defragment the disks; on five of the disks it has done the job and improved the state of the disk greatly. But the sixth disk will not defragment. I have spoken to the company that supplies the defrag software and they told me to check the disk structure, which I did, and I have now repaired the disk, but I still cannot run the defragmenter.
Am I going to have to do a full file backup and restore to sort this out, or are there any other options? I guess a backup/restore would take forever.
Antoniov.
Honored Contributor

Re: Fragmented Disks...

Hi Peter,
if your defragmenter can't run, the first thing is to find out why it doesn't work.
Otherwise you could:
a) run another defrag tool; I've seen DFU on the HP page, though I have never used it myself (a quick sketch below);
b) make a full backup/image and restore.
Obviously, you can only make that backup with no users connected to the disk (don't resort to /IGNORE=INTERLOCK).
There is no native VMS command to do a defrag.
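For a), something like this should work once you have fetched DFU.EXE from the Freeware CD or the HP site (the path and device name here are only examples; check DFU's own HELP for the exact syntax):

$ DFU :== $MYDISK:[TOOLS]DFU.EXE      ! foreign command; point it at wherever you put DFU.EXE
$ DFU REPORT $1$DKA5:                 ! fragmentation and free-space report for the suspect volume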

Antonio Vigliotti
Antonio Maria Vigliotti
Lokesh_2
Esteemed Contributor

Re: Fragmented Disks...

Hi Peter,

If a file is open, it will not be a candidate for defragmentation. Can you dismount the disk and then do an offline defragment?
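Something along these lines (the device name and volume label are only examples):

$ SHOW DEVICE $1$DKA5: /FILES      ! anything still open on the volume?
$ DISMOUNT/CLUSTER $1$DKA5:        ! release it on all nodes
$ MOUNT $1$DKA5: USERDISK          ! remount privately (no /SYSTEM) for the offline defrag pass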

Thanks & regards,
Lokesh Jain
What would you do with your life if you knew you could not fail?
Peter Clarke
Regular Advisor

Re: Fragmented Disks...

Lokesh,

Nobody was on the system at the time and I closed all open files.
Anyway, it should still run; it would just skip the open file and defrag the rest.

Peter

Peter Clarke
Regular Advisor

Re: Fragmented Disks...

Antoniov,

Obviously backup and restore is an option, but the only problem is downtime, and I'm not sure how long this would take with there being over 160,000 files on this disk.
Maybe I will try different defrag software...

Peter
Jan van den Ende
Honored Contributor

Re: Fragmented Disks...

Peter,

if you've got 160 K files on the disk, I would guess they are not really big files.
So, WHAT exactly IS your fragmentation?
I suspect that small files are NOT (or nearly not) fragmented. Maybe it is your free space that is fragmented?
Related question: what is your _activity_ on these files? Are they very steady (with perhaps some additional creation), are they continuously created & deleted, or are they steady but growing?

IF your files are NOT fragmented, but only free space is, THEN is this really a problem? (It IS if your new files are BIGGER than the average small free fragment, but you just don't care if a new file will, most of the time, fit the next available free fragment.) Accessing existing files is not in any way influenced by defragging free space.

If most of your files are slowly growing, THEN you are continuously generating file fragmentation.

The best way to cope with that is to have RMS do most of your work: make sure your caches are big enough (look through AUTOGEN's ACP_xxx reports, and remember AUTOGEN still tries to conserve memory, so you can be generous in adjusting the values; it uses memory to buy performance) and enable cathedral windows (SYSGEN parameter ACP_WINDOW = 255).
Setting the volume's extension quantity to slightly bigger than your typical file growth per open-grow-close cycle will help you more than defragging.
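For the record, roughly where those knobs live (the values are only examples; let AUTOGEN feedback guide you):

$ ! in SYS$SYSTEM:MODPARAMS.DAT:
$ !     ACP_WINDOW = 255
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS      ! pick up the change
$ SET VOLUME/EXTENSION=512 $1$DKA5:          ! per-volume default extension; 512 blocks is just an example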

Bottom line: on a well-configured VMS system, fragmented disks tend to have far less impact than on most other OSes.
IF you have heavily fragmented, randomly accessed, big files, then defragging can give you a gain.
If your free space is fragmented in chunks that are smaller than your typical allocation (or extent) quantity, THEN you SHOULD defrag.

So: what are you trying to achieve, and IS defragging really your tool to achieve your goal?


hth

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Willem Grooters
Honored Contributor

Re: Fragmented Disks...

Peter,
> 160K files - in how many directories?
There can be quite a severe performance penalty if directory files (*.DIR) grow beyond a certain size (I think the threshold is 127 blocks). Defragmentation - if any - won't help in that case, nor would backup/restore.
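A quick way to spot oversized directories (device name is just the example; 127 blocks is the threshold mentioned above):

$ DIRECTORY/SIZE=ALL/SELECT=SIZE=MINIMUM=127 $1$DKA5:[000000...]*.DIR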
Another shot:
DFU requires some free space on the disk; I'm not sure how much, but it will complain if there is too little. What looks like 'just enough' may in fact be 'just too little' for defragmentation to be possible. You won't see an error, but the defragmentation will not take place.
Willem Grooters
OpenVMS Developer & System Manager
Peter Clarke
Regular Advisor

Re: Fragmented Disks...

Willem,

I have just tried installing DFU and running it, but that wouldn't run either.

See attached for fragmentation stats....

Over 229000 unwanted fragments!!!!!!

Peter
Hein van den Heuvel
Honored Contributor

Re: Fragmented Disks...


> I have recently been trying to improve system performance on my machine(Alpha ES40).I decided to try and defrag the disks,

What performance indication did you use to decide that you had a fragmentation problem?

> on five of the disks it has done the job and improved the state of the disk greatly.

Great. Did it change the performance of the system at all?

> Am i going to have to do a full file backup and restore to sort this out or is there any other options as this would take forever i guess??

Does that disk have a 'normal' share of the IO (10% - 20% of all the IO as per MONI DISK)?
If it is below 5%, you could try just ignoring that disk, as its overall system performance impact will be limited.
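For example (interval is arbitrary; let it run over a representative period):

$ MONITOR DISK/ITEM=OPERATION_RATE/INTERVAL=10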

Thanks for the fragmentation report listing, but some of us are not so familiar with the Executive Software product's output.
Maybe attach a DFU or DFO (also free) report in a future reply?

That "# of Unwanted Fragments: 229466" seems to largely map to "Avg Fragments per File" minus 1 times "Total Number of Files"
So it wants to turn 2.5 fragements to 1.0 fragments. Considering that you have a large nuber of small files on this drive, this might not be a critical problem. Are you reading the disk in general with normal (16 or 32 block IOs or are you trying to read those files with large IOs (100+ blocks).
The large reads would indeed appear to suffer unreasonably on this disk.
Even at 32, every other virtual IO will turn into 2 physical IOs whioch will be a problem on a seriously used disk.
The fragments for the little files will easily fit in a single header and will not cause too much more management and wil not cost too much if your usage leans to small IOs (Check the XFC historgrams! Powerfull stuff!)
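(To look at those: SHOW MEMORY/CACHE gives the XFC overview, and SDA has an XFC extension for the detail. The exact output varies a bit by VMS version, so treat this as a pointer rather than gospel:)

$ SHOW MEMORY/CACHE/FULL        ! XFC summary statistics
$ ANALYZE/SYSTEM
SDA> XFC SHOW SUMMARY           ! more detail from the XFC SDA extension
SDA> EXIT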

You do have a fair number of extension headers, which might be caused by fragments or by lots of ACLs/ACEs. The latter cannot be changed by defragging.

Finally, your biggest file is over 1GB. It could be the single worst offender, and it would be pointless to try to defrag it when all you have to build from is lots of little chunks. Please analyse that file in detail: how many headers/fragments? My current recommendation is to copy that large file away to another spindle, delete it, defrag the remaining disk, and move it back if you like. Is it perchance an indexed file?
If it is, then you may want to use CONVERT to move the file out (to a single-key, highly compressed, large-bucket file) and CONVERT it back to the normal FDL definition (retuned for an appropriate ALLOCATION and EXTENSION). Check out my 'indexed_file_backup' tool on the VMS freeware:
http://h71000.www7.hp.com/freeware/freeware50/rms_tools/indexed_file_backup.exe
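To see the header/fragment count, a header dump with no data is enough, and the CONVERT round trip would look roughly like this (file, directory and FDL names are made up):

$ DUMP/HEADER/BLOCKS=COUNT:0 $1$DKA5:[DATA]BIGFILE.DAT            ! count the map pointers / extension headers
$ CONVERT/FDL=COMPACT.FDL BIGFILE.DAT OTHERDISK:[TMP]BIGFILE.TMP  ! out to a single-key, compressed, big-bucket layout
$ CONVERT/FDL=BIGFILE.FDL OTHERDISK:[TMP]BIGFILE.TMP BIGFILE.DAT  ! back to the production FDL, retuned ALLOCATION/EXTENSION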

Good luck!
Hein.
Kris Clippeleyr
Honored Contributor

Re: Fragmented Disks...

Peter,

I just had a quick look at DFU's output.
Seems that the largest contiguous amount of free space is only 6475 blocks. That is too small for a disk defragger to do any work.
I think that the only decent thing to do is a BACKUP/INIT/restore operation.
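Roughly like this (save-set and device names are only examples; the point is that an image restore re-initializes the output disk, which is what gives you the contiguous layout back):

$ ! save, with the disk quiet - do NOT resort to /IGNORE=INTERLOCK:
$ BACKUP/IMAGE/VERIFY $1$DKA5: OTHERDISK:[BCK]DKA5.BCK/SAVE_SET
$ ! restore; mount the target /FOREIGN and BACKUP initializes it as part of the image restore:
$ DISMOUNT/CLUSTER $1$DKA5:
$ MOUNT/FOREIGN $1$DKA5:
$ BACKUP/IMAGE OTHERDISK:[BCK]DKA5.BCK/SAVE_SET $1$DKA5: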

Greetz,

Kris
I'm gonna hit the highway like a battering ram on a silver-black phantom bike...
Peter Clarke
Regular Advisor

Re: Fragmented Disks...

The large file is actually an Oracle tablespace. Attached is the output from DFU...

Peter
Peter Clarke
Regular Advisor

Re: Fragmented Disks...

dfu file....
Willem Grooters
Honored Contributor

Re: Fragmented Disks...


The large file is actually an oracle tablespace


With the given fragmentation, I'm not surprised by the performance problems.
I agree with Kris: your only alternative is to reorganize this tablespace, by allocating a new file of the required size (plus a bit more) and copying all the data into it. Your Oracle DBA should know how to achieve this within Oracle.

Willem
Willem Grooters
OpenVMS Developer & System Manager
Hein van den Heuvel
Honored Contributor

Re: Fragmented Disks...

Most fragmented file :
$1$DKA5: ( 2637824/2637845 blocks; 25691 fragments)

That's pretty darn bad.
Find an opportune moment to shut down the DB or to take that tablespace offline.
Now copy that file to a different drive with good big chunks of free space.
Next, delete the original and defrag the original disk.
Then either copy it back and use it, or just use the Oracle rename function or a new control file to point to the new location, and bring Oracle and/or the tablespace back online.
Your DBA (you!?) will know to generate a backup control file 'just in case' and to generate a file number/name/tablespace listing, again 'just in case'.
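The file shuffle itself is just this (names are invented here, and the DB or at least that tablespace stays down/offline while you do it):

$ COPY $1$DKA5:[ORADATA]BIGFILE.DBS $1$DKA4:[ORADATA]      ! to a spindle with big free chunks
$ DELETE $1$DKA5:[ORADATA]BIGFILE.DBS;*
$ ! now run the defragmenter over $1$DKA5: - it finally has room to work with
$ COPY $1$DKA4:[ORADATA]BIGFILE.DBS $1$DKA5:[ORADATA]      ! only if you want it back on the original disk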


Enjoy!
Hein.
Dima Bessonov
Frequent Advisor

Re: Fragmented Disks...

If the fragmented file is an Oracle tablespace, its defragmentation will not necessarily solve your problem completely. It may as well be due to the _internal_ fragmentation of database indexes and/or tables. Oracle normally uses big caches in RAM to improve I/O performance, so some disk file fragmentation won't always hurt. On the other hand, Oracle's index and tablespace files are just fixed containers. Inside them, indexes and tables can become badly fragmented and that _will_ affect performance. I'm not an expert in Oracle but I think the simplest (though not the quickest) solution is export/import of the entire database.
Jim Strehlow
Advisor

Re: Fragmented Disks...

You might shut down the database, mount it exclusive, export the tablespace, drop the tablespace, and then shut the database down again.

Delete the physical tablespace file(s).
Defragment as best as you can to get the largest free space(s) that you can achieve.

Startup the database exclusive. Create the tablespace in smaller contiguous chunks.
e.g. if you have free space "chunks" of
1G, 1.1G, and 1.2G
CREATE TABLESPACE XXX ... SIZE 1200M
then
ALTER TABLESPACE XXX add datafile SIZE 1100M;
then
ALTER TABLESPACE XXX add ... SIZE 1000M;
one at a time.

OpenVMS builds Oracle tablespaces (which version of Oracle are you using?) best_try_contiguous.
After each create or add datafile,
$ DUMP/HEAD/RECORD=END:0 XXX*.DBF
to examine the fragmentation of each datafile.
If you still have a fragmented disk file, shut down Oracle, DFU the file, start Oracle up again, etc.

Jim, Data911 OpenVMS and Database Manager
Alameda, CA, USA
Martin P.J. Zinser
Honored Contributor

Re: Fragmented Disks...

Hello Peter,

the next question you might want to ask yourself is whether you actually want your Oracle DB space on the same spindle as all the many little files occupying this disk. Oracle can be pretty demanding on I/O, and mixing that with I/O to the other files does not sound like a good idea (this does assume those files are actively used and not just sitting around collecting dust ;-)

Greetings, Martin
Jan van den Ende
Honored Contributor

Re: Fragmented Disks...

Peter,

How did your database _BECOME_ so fragmented?
You either CREATED a big database on an already very fragmented disk (in which case you are now paying the price of correcting that, and should know how to prevent it in future),
OR
you created (way back when, maybe?) a much smaller database, which has been steadily growing in small chunks.
If THAT is the case, and your typical application use is expected to continue more or less the same, then _NOW_ is the time to think about the future!
It seems VERY desirable that you now estimate your database growth for the next 'period', and pre-allocate just over enough for that.
And 'period' should be the time when you will be prepared to repeat the exercise of internal & external defragging, and forward-viewing re-sizing, of your database.

HTH,


Jan
Don't rust yours pelled jacker to fine doll missed aches.
Peter Clarke
Regular Advisor

Re: Fragmented Disks...

I'm not so sure it is the database, as all the database tablespaces are held on a different disk. There are just two on this disk, which is mainly for reports, user home directories, etc.

The two tablespaces are called:

EURP_TMP1.DBS
EURP_RBS.DBS

Peter
Wim Van den Wyngaert
Honored Contributor

Re: Fragmented Disks...

Peter,

The largest contiguous space is much too low to defragment properly. Is it possible that some fragmented file (e.g. a log file) is keeping lots of fragments in use all over the disk, thus interrupting every large contiguous space and so also preventing larger files from being defragmented?

Check on every node of the cluster :
$ show dev xxx/files
and try to close the users keeping these files open.

Also check the defragmenter's log file: it may tell you why it is not defragmenting.

Also try to cleanup the disk ... if possible.

Wim
Wim
Hein van den Heuvel
Honored Contributor

Re: Fragmented Disks...

Peter wrote...

> The two tablespaces are called:
>
> EURP_TMP1.DBS
> EURP_RBS.DBS

Looks like you are lucky. If those files live up to their names then they will not hold production data. Any competent DBA can just drop them and recreate them: first in a different spot, then you do your thing, then (if needed) recreate them in the 'right' place.

I'll attach a little SQL script to summarize the database file usage.

Hein.


column type format a4
column Tablespace format a15
column file format a45
column id format 99
column mb format 999999
set pages 9999
set heading off
set FEEDBACK off
select 'Redo', 'group ' || l.group# "Tablespace", l.group# "Id", l.bytes/(1024*1024) "MB",
       MEMBER "File" from v$logfile f, v$log l where l.group# = f.group#
union
select 'Data' "Type", tablespace_name "Tablespace", FILE_ID "Id", bytes/(1024*1024) "MB",
       file_name "File" from dba_data_files
union
select 'Temp' "Type", tablespace_name "Tablespace", FILE_ID "Id", bytes/(1024*1024) "MB",
       file_name "File" from dba_temp_files
union
select 'Ctrl' "Type", 'Control_file' "Tablespace", rownum "Id", 0 "MB",
       name "File" from v$controlfile
order by 1,2
/
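
One way to run it (the SQLPLUS symbol comes from your Oracle environment setup on VMS, and DB_FILES.SQL is just whatever name you save the script under):

$ SQLPLUS                     ! it will prompt for username/password; the v$/dba_ views need a DBA-privileged account
SQL> @DB_FILES.SQL
SQL> EXIT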