Operating System - HP-UX
Prasanth V Aravind
Trusted Contributor

migration of data from non-striped LVs to striped LVs

We are facing high %wio on one of our production servers.
The main reason for this is that none of the LVs are striped.
I want to convert all the LVs to striped, with as little downtime as possible. Please suggest a method.

# sar 2 2
HP-UX B.11.11 U 9000/800    10/23/09

12:25:44    %usr    %sys    %wio   %idle
12:25:46      15       6      65      14
12:25:48       8       4      69      18

Average       12       5      67      16
# vmstat 2 2
         procs            memory                  page                         faults            cpu
    r   b   w      avm     free    re   at  pi  po  fr  de  sr     in     sy    cs   us  sy  id
    2  10   0   880331  3562734  1904  195   1   0   0   0   4   9014  57392  6096   12   7  81
    2  10   0   880331  3564218  1505  112   0   0   0   0   0   5174  27238  2488   12   4  84
#


Prasanth V Aravindakshan
6 REPLIES
Michael Steele_2
Honored Contributor

Re: migration of data from non-striped LVs to striped LVs

Hi

I don't like striping myself. It's a pain when you run out of space, extend the VG, and then find out you still can't grow the LV because it's striped and needs free extents on every PV in the stripe.

I personally would spread the file system across more PVs in order to get more spindles involved.
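A minimal sketch of that approach, assuming vg01/lvol1 is the busy LV, /data01 its mount point, and c4t0d0 a spare disk already in the VG (all names illustrative):

# vgdisplay -v vg01 | grep "PV Name"                 # which disks the VG spans
# lvextend -L 8192 /dev/vg01/lvol1 /dev/dsk/c4t0d0   # grow the LV onto the extra spindle
# fsadm -F vxfs -b 8388608 /data01                   # grow the VxFS online (needs OnlineJFS; size in 1 KB sectors)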

But before you do, %wio is not the definitive metric for identifying a disk bottleneck. Let's see what avwait and avserv look like in sar -d.

%wio is more of a metric for distinguishing structured and unstructured data access. A flat file is unstructured; a database is structured. It's often more effective to reorganize a database or defragment a file system to get %wio down.

Read the man page on sar for its definition of avwait, and note that a disk bottleneck appears when avwait is higher than avserv. Note: the big disk arrays like the EMC DMX rarely exhibit a disk bottleneck of any kind. In fact, the last disk bottleneck I saw was three years ago, on an EMC Symmetrix.

From sar -d, isolate the PV. Then use pvdisplay to isolate the file system. Run fuser on the file system and count the processes.
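For example (device and mount point names here are illustrative, not from the original post):

# sar -d 5 5                           # note PVs where avwait stays above avserv
# pvdisplay -v /dev/dsk/c2t1d0 | more  # list the LVs that have extents on that PV
# bdf | grep lvol5                     # map the LV to its mounted file system
# fuser -cu /data01                    # list the processes using it, then count them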
Support Fatherhood - Stop Family Law
Prasanth V Aravind
Trusted Contributor

Re: migration of data from non-striped LVs to striped LVs

The server has 30 data filesystems, and the application processes do not access data on all of them simultaneously.


Maybe today the application is processing files on /data01, and after 3 days it will process files on /data05.

So on the first day I see the bottleneck on the /data01 PVs, and after that the wait moves to another PV.

On top of that, my real pain is the /work filesystem, which has a huge number of very small files. Doing ls -l in directories on this filesystem takes 5 to 10 seconds to return output.
Michael Steele_2
Honored Contributor

Re: migration of data from non-striped LVs to striped LVs

Oh, you insist you have a disk bottleneck. OK.
Support Fatherhood - Stop Family Law
Armin Kunaschik
Esteemed Contributor

Re: migration of data from non-striped LVs to striped LVs

Are your logical volumes mirrored? You can't have mirroring AND striping in 11.11! You need to upgrade to 11.31 to be able to do this!
For the /work directory it may help to increase the buffer cache. Check the kernel parameters dbc_max_pct and dbc_min_pct. Check with Glance (get the trial version if necessary) for busy disks and decide how to proceed after that!
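For example, on 11.11 the current values can be checked with kmtune (kctune replaces it on later releases):

# kmtune -q dbc_max_pct
# kmtune -q dbc_min_pct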
Striping is not a cure for every disk problem! Most of the time it just adds complexity for next to no effect. Any RAID configuration is done on the storage array these days.

My 2 cents,
Armin
And now for something completely different...
TTr
Honored Contributor

Re: migration of data from non-striped LVs to striped LVs

You have not provided any storage (PV) information. You must look into how the PVs are set up: how many spindles, the RAID type, the logical disks (LVM PVs), I/O sharing, etc. What kind of LVM striping are you looking for? LVM extent-based striping does not help; you can search this forum for it. And if the PVs come from the same RAID group, there isn't much you can do.
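For example (VG and device names are illustrative):

# vgdisplay -v /dev/vg01 | more         # LVs and PVs in the volume group
# lvdisplay -v /dev/vg01/lvol1 | more   # extent-to-PV map for one LV
# ioscan -fnC disk                      # physical paths / array LUNs behind the PVs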
Prasanth V Aravind
Trusted Contributor

Re: migration of data from non-striped LVs to striped LVs


Thanks for the support.