
vxiod,vhand in top

 
dattu_1
Regular Advisor

vxiod,vhand in top

Hi guys,
Please tell me what I should do about the vxiod daemons showing up in top. There are 10 volume I/O daemons running (see the note after the top output below).


CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
0 ? 39 root 152 20 1344K 1344K run 32:18 0.31 0.31 vxfsd
0 pts/0 6532 root 178 20 19200K 17244K run 0:00 0.74 0.19 top
1 ? 148 root 152 20 544K 544K run 0:04 0.08 0.08 vxiod
0 ? 64 root 148 20 32K 32K sleep 0:28 0.06 0.06 emcpwdd
1 ? 6461 root 152 20 8260K 664K run 0:00 0.05 0.05 sshd:
0 ? 666 root 152 20 2040K 324K run 0:31 0.04 0.04 syncer
0 ? 0 root 127 20 32K 0K sleep 2:12 0.02 0.02 swapper
1 ? 1 root 168 20 488K 204K sleep 0:00 0.02 0.02 init
0 ? 2 root 128 20 32K 32K sleep 0:02 0.02 0.02 vhand
1 ? 3 root 128 20 32K 32K sleep 1:39 0.02 0.02 statdaemon
0 ? 4 root 128 20 32K 32K sleep 0:03 0.02 0.02 unhashdaemon
1 ? 21 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
1 ? 22 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
1 ? 23 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
1 ? 24 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
0 ? 25 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
0 ? 26 root 147 20 32K 32K sleep 0:00 0.02 0.02 lvmkd
1 ? 28 root 100 20 32K 32K sleep 0:00 0.02 0.02 smpsched
0 ? 29 root 100 20 32K 32K sleep 0:00 0.02 0.02 smpsched
1 ? 32 root 148 20 32K 32K sleep 0:00 0.02 0.02 lvmdevd
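
For reference, the daemon count mentioned above can be checked from the command line with vxiod itself. A minimal sketch, assuming a standard VxVM installation; the "set" line is illustrative only, since VxVM normally sizes the daemon count automatically:

# vxiod
10 volume I/O daemons running
# vxiod set 10      (example only; do not change the count without a specific reason)

The daemons are kernel I/O threads, so seeing them sit in the process list with negligible CPU is expected.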

3 REPLIES
Mridul Shrivastava
Honored Contributor

Re: vxiod,vhand in top

This issue occurs with the vxiod daemons when DRL (dirty region logging) is enabled.
The workaround is to increase the number of dirty regions allowed per volume with DRL logging, as well as the number of dirty regions allowed per system.
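
Before changing anything, it can help to look at the current values. A hedged sketch, assuming a VxVM release that ships the vxtune utility (older HP-UX/VxVM combinations may expose these tunables only through the kernel configuration):

# vxtune                          (list VxVM tunables and their current values)
# vxtune voldrl_max_drtregs       (show just the DRL tunable discussed below)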

Time has a wonderful way of weeding out the trivial
dattu_1
Regular Advisor

Re: vxiod,vhand in top

Hi Mridul,
How do I increase this parameter?
Mridul Shrivastava
Honored Contributor

Re: vxiod,vhand in top

You need to increase "voldrl_max_drtregs".

This is the maximum number of dirty regions that can exist for non-sequential DRL on a volume. A larger value may improve system performance at the expense of recovery time, so this tunable can be used to regulate the worst-case recovery time for the system following a failure.

The default value for this tunable is 2048.
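
As a hedged example of raising it (the exact mechanism depends on the VxVM release: on versions where vxtune can set tunables the change takes effect online, while on some older HP-UX releases the value has to be set in the kernel configuration and is picked up at the next boot; 4096 below is just an illustrative value):

# vxtune voldrl_max_drtregs 4096

Keep the trade-off described above in mind: the more dirty regions you allow, the longer the worst-case resynchronization after a failure.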
Time has a wonderful way of weeding out the trivial