Bob Ferro
Regular Advisor

VxFSD Utilization

We are having I/O issues on an RP8420 server. One vPartition is running 23 Oracle 9.2 instances/DBs, and jobs that run there are much slower than on the RP3440 development server. The RP8420 is attached to an EVA8000. Looking at the Glance process list, I noticed that vxfsd has high numbers, especially "Thd Cnt". Has anyone had similar problems?

B3692A GlancePlus C.04.50.00 15:42:51 phl1s422 9000/800 Current Avg High
---------------------------------------------------------------------------------------------------------------
CPU Util S SNNU U | 17% 15% 37%
Disk Util F F | 96% 93% 100%
Mem Util S SU UB | 51% 51% 51%
Swap Util U UR R | 37% 36% 37%
---------------------------------------------------------------------------------------------------------------
PROCESS LIST Users= 1
                                   User     CPU Util      Cum      Disk             Thd
Process Name    PID  PPID  Pri     Name   (1600% max)     CPU     IO Rate    RSS    Cnt
----------------------------------------------------------------------------------------
oraclemedec   13608     1  148   oratns    55.6/10.6     432.5    892/ 142  21.2mb     1
ora_j000_cd    2546     1  149   supdba    45.0/18.6      65.2    193/ 159  53.2mb     1
oraclecdmis    7836     1  149   oratns    33.0/ 2.6     192.3   1677/92.8  17.3mb     1
vxfsd            64     0  134   root      25.5/13.5    174146    6.4/ 9.9  15.6mb   135
oraclecdmia   20630     1  201   oratns    23.4/ 0.1       1.4    570/ 2.4  52.1mb     1
oraclectwhs   24660     1  149   oratns    22.4/ 0.2      75.1    513/ 2.4  33.1mb     1
oraclemktpc    3236     1  154   oratns    15.3/ 3.8       6.1    8.0/ 2.3  14.2mb     1
ora_j000_ca    4380     1  148   oracle    12.4/ 6.7       0.7    191/95.4  22.8mb     1
ovcd           3992     1  154   root       4.1/ 3.0   38492.9    0.0/ 0.0  15.7mb    28
oraclemktpc   23353     1  154   oratns     1.6/ 0.9       6.6    130/15.7  12.6mb     1
midaemon       3940     1  -16   root       1.4/ 2.3   29317.5    0.0/ 0.0  70.4mb     2
oraclelkupp    4382     1  154   oratns     1.2/ 1.2       0.1    4.6/ 4.6  13.5mb     1
ora_dbw0_cd   17217     1  156   supdba     0.8/ 0.2     158.6   41.2/12.2  61.0mb     1
Page 1 of 25
Steven E. Protter
Exalted Contributor
Solution

Re: VxFSD Utilization

Shalom,

Let's find the hot disk.

http://www.hpux.ws/?p=6
system.perf.sh

Look at the sar -d output and see where the problem is.
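
For example, something like this will sample all disks (the interval and count here are just illustrative):

sar -d 5 10    # report disk activity every 5 seconds, 10 samples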

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Bob Ferro
Regular Advisor

Re: VxFSD Utilization

I know which device is busy; unfortunately, the EVA8000 is at another location and was configured by HP. They told me that they have 240 disks striped for 6 volume groups, but I don't know how they set it up. What tools can I use to see the config of the EVA8000?

HP-UX phl1s422 B.11.23 U 9000/800 11/06/07

08:03:43    device    %busy   avque   r+w/s   blks/s   avwait   avserv
08:03:44    c2t6d0     7.00    0.50      17      194     0.62     9.10
            c0t6d0     4.00    0.50       7       98     0.38    16.62
            c8t0d1    50.00    0.50     470    18880     0.00     1.65
            c10t0d1   26.00    0.50     306    13696     0.00     1.38
            c14t0d1    1.00    0.50       2       96     0.00     0.57
            c8t0d2     2.00    0.50      17      400     0.00     1.82
            c10t0d2    1.00    0.50       5      128     0.00     3.07
            c8t0d3     2.00    0.50      10      160     0.00     1.94
            c8t0d4     7.00    0.50      35      720     0.00     2.21
            c10t0d4    3.00    0.50      16      432     0.00     2.15
            c8t0d5     1.00    0.50       6       96     0.00     0.30

Re: VxFSD Utilization

Bob,

The only 2 disks here with any sort of IO issue are c0t6d0 and c2t6d0, and I'll bet those disks aren't on the EVA but are actually the local boot disks. All the other disks have no wait times and service times averaging below 5ms, which I would consider good IO.

Use diskinfo and pvdisplay to determine what is on those 2 disks:

diskinfo /dev/rdsk/c0t6d0
diskinfo /dev/rdsk/c2t6d0

pvdisplay /dev/dsk/c0t6d0
pvdisplay /dev/dsk/c2t6d0

At this stage I'd guess those are in the root volume group (vg00) and someone has added a non-system-related filesystem to vg00...

If it is part of vg00 then:

bdf | grep vg00

would be interesting...
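
To go a step further and see exactly which logical volumes have extents on a suspect disk, pvdisplay's verbose mode helps (standard LVM command; the device file is just the example from above):

pvdisplay -v /dev/dsk/c0t6d0 | more    # -v lists the LVs and extent distribution on this PV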

HTH


Duncan

I am an HPE Employee
Murat SULUHAN
Honored Contributor

Re: VxFSD Utilization

Hi Bob

Do you have a failover/load-balancing solution like SecurePath? If so, did you apply load-balancing settings with it? 'autopath display' will help you. You can also check the SecurePath patches.

Round-robin load-balancing policies with SecurePath sometimes generate high disk usage, which reduces performance.
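
For example (assuming the SecurePath CLI is installed and in your PATH):

autopath display all    # shows devices, their paths and the active load-balancing policy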

Best Regards
Murat
Murat Suluhan
Tim Nelson
Honored Contributor

Re: VxFSD Utilization

Hey Bob,

From your posted sar info I do not see any issue either; things look good. Other than what Duncan mentioned about what is probably an internal OS disk, all seems well.

Post some more info.

As far as reviewing the EVA, CommandView is the utility. You will need to know the management station address, username and password.

Typically EVAs are configured as 1 or 2 disk groups (there could be more), each group spanning many disks. There is no disk-to-host relationship; if there is, a different array should have been used, as that defeats the main design point of an EVA.
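
If you end up with only command-line access to the Command View station, the Storage System Scripting Utility (SSSU) that ships with Command View can dump the configuration - a rough sketch only, since exact syntax varies by Command View version, and the manager/system names are placeholders:

sssu
SELECT MANAGER <mgmt-station> USERNAME=<user> PASSWORD=<pass>
SELECT SYSTEM <eva-name>
LS DISK_GROUP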



Bob Ferro
Regular Advisor

Re: VxFSD Utilization

Attached is another sar -d. I see %busy as being high. The disks in question are on c8/c10/c12; these are on the EVA. Unfortunately, the DBA ran a script on the RP8420, which is attached to the EVA. The RP8420 is partitioned into 2 servers, each with 16 CPUs and 32GB memory. The RP3440 has 2 CPUs with 12GB memory, yet the RP3440 outperforms the RP8420. The RP3440 is attached to an XP128. According to the DBA, the script updates over 1,000,000 rows and does a rollback (for testing). Attached are some of the stats. The DBs are configured identically on both the test and prod servers. Certainly there are 23 instances/DBs running on this server, but if the EVA8000 is striped over 240 physical disks, I would expect better performance.
Murat SULUHAN
Honored Contributor

Re: VxFSD Utilization

Hi Bob

Are you using SecurePath? If yes, can you post the output of 'autopath display'?

Best Regards
Murat
Murat Suluhan
Bob Ferro
Regular Advisor

Re: VxFSD Utilization

I have a printout of an 'autopath display all'. Unfortunately, we don't have root or sudo access. The RP8420 server with the EVA is at our HQ and was set up by HP. My supervisor wants me to gather as much info as I can and supply it to Team HP for resolution (fix their problems). That ought to be fun; that's like telling the police to fix their radar. Whatever info you need, I can send. I will try to scan it later and send it.

Re: VxFSD Utilization

Bob,

Looking at those new numbers, I still see no problem except what I saw before. The %busy field simply tells you how much time during the interval the disk was in action, and for an EVA LUN (which is no doubt made up of many physical disks behind cache), this number is largely irrelevant.

The *really* important fields in a 'sar -d' output are avque and avserv. Having a nice low number in the queue like you have means there's not much outstanding IO against that LUN, and that's a good thing. Service times under 10ms usually indicate acceptable IO response times, so I don't think the EVA is the source of your problem here.
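
If you want a quick way to pull just the suspect devices out of a sar -d capture, something like this works (the 10ms threshold is only illustrative, and it assumes avserv is the last column, as in your output above):

sar -d 5 10 | awk '/c[0-9]+t[0-9]+d[0-9]+/ && $NF+0 > 10'    # print only device lines with avserv over 10ms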

Now it could be a red herring, but again, what I'd be more concerned about is those 2 non-EVA disks - both have extremely high service times for a very low amount of IO, which is slightly confusing...

How about collecting a similar sar -d output on the rp3440, so you can show comparable IO times there and discount this as the problem...

Incidentally, unless someone from HP actually configured your servers, I think you will find the chances of anyone in HP Support giving you what amounts to free performance consulting very slim indeed.

Of course, if your manager wants to get his checkbook out...

HTH

Duncan

I am an HPE Employee

Re: VxFSD Utilization

Oops, I just missed that post - HP did set it up!

I guess your manager may have a lever for getting some assistance!

HTH

Duncan

I am an HPE Employee
Bob Ferro
Regular Advisor

Re: VxFSD Utilization

Thanks