07-23-2015 01:00 AM
Interpreting Volume Statistics using VxVM
Hi
I have a system running HP-UX 11.31 on which Glance shows 100% disk utilization, and it could be impacting the application: users are complaining that the application is slow, and sometimes some modules do not open; they time out.
The system is a two-node MC/ServiceGuard cluster using the Veritas file system, with which I have little experience; I am more used to LVM. Searching on Google I found the vxstat command, but I am not sure how to interpret its output. Please can you help me do so?
vxstat -g dgrac -i 10 -S -d

                      OPERATIONS            BLOCKS            AVG TIME(ms)
TYP NAME          READ      WRITE      READ        WRITE       READ  WRITE

Thu Jul 23 09:59:35 2015
dm  dgrac01    8517402     891038  199778237k   13138009bl     0.02   0.12
dm  dgrac02    8718779     914675  73222057bl   13421881bl     0.03   0.42
dm  dgrac03   12480527     915683  201289601bl  6604776k       0.00   0.44
dm  dgrac04   10727146      41574  85083700k    369595k        0.02   2.44
dm  dgrac05        719        849  5734k        7241bl         0.40   0.59
dm  dgrac06     679275     193462  70578759bl   6338419bl      0.11   1.38
dm  dgrac07     292477      85577  2357616k     1705645bl      0.38   0.76
dm  dgrac08      44852     118103  3896505bl    2446100k       1.29   2.60
dm  dgrac09      41890     116087  334128k      4868407bl      0.26   2.71
dm  dgrac10      14246      82932  117268k      816315k        7.04   1.05

Thu Jul 23 09:59:45 2015
dm  dgrac01        136          6  1081k        131bl          1.67   0.77
dm  dgrac02         90          6  711k         51bl           1.59   0.54
dm  dgrac03        286          4  2279k        18k            2.06   1.18
dm  dgrac04        238          0  3809bl       3k             1.91   0.56
dm  dgrac05          0          0  0            0              0.00   0.00
dm  dgrac06          0          0  0            0              0.00  13.55
dm  dgrac07          2          0  19k          4k             1.83   0.42
dm  dgrac08          0          0  2k           4k             7.83   4.25
dm  dgrac09          0          0  2k           4k             7.99   0.46
dm  dgrac10          0          0  0            0              0.00   0.00

Thu Jul 23 09:59:55 2015
dm  dgrac01        143          5  1135k        37bl           1.75   0.38
dm  dgrac02        104          6  820k         20k            1.55   0.48
dm  dgrac03        308          4  2453k        31bl           1.58   0.45
dm  dgrac04        248          0  1991k        0              1.75   0.00
dm  dgrac05          0          0  0            0              0.00   0.00
dm  dgrac06          0          0  0            0              0.00   0.51
dm  dgrac07          2          0  19k          11bl           0.55   0.46
dm  dgrac08          0          0  2k           11bl           0.24   0.41
dm  dgrac09          0          0  2k           11bl           0.19   0.44
dm  dgrac10          0          0  0            0              0.00   0.00
07-23-2015 05:08 AM - edited 07-23-2015 05:10 AM
Re: Interpreting Volume Statistics using VxVM
vxstat is interesting but is just confirming what you already know - the VG is busy. And as much as the users would like you to 'fix' it, there is no GO-FASTER button in HP-UX.
First, I would look at sar -d as it is much simpler to interpret. You are looking for abnormally high %busy and avserv times. For a modern disk array (not a SCSI JBOD), avserv should be less than 10ms. There will be spikes that could be up to 20-30 ms but the summary at the bottom of the sar output gives a good picture of LUN performance.
# sar -d 2 10
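If you want to pull out just the offenders, a quick awk filter works (a sketch; it assumes avserv is the last column of each device line, which matches the sar -d output format):
# sar -d 2 10 | awk '/disk/ && $NF+0 > 10 {print}'
That prints only the samples (and Average lines) where a LUN's avserv went above 10 ms.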
In some cases there will be hotspots where the database and/or applications are hammering a particular disk group on the array. Your SAN administrator can look at the statistics for this host and see if that's the case. Heavy usage should correspond to long avserv times. For example, a very write-intensive load on a RAID 5 disk group will always be very slow compared to a RAID 10 or other mirrored layout. (Hopefully this system is *not* using VxVM's software RAID 5.)
Take a look at the busiest LUNs. Are there highly active mountpoints within the same LUN? Moving very busy mountpoints to a high performance RAID group is a good step.
Finally, the most effective step to improve performance is to analyze the application and/or database to see if it is poorly designed. For Oracle, a statspack analysis is mandatory. You may see that the allocated SGA is much too small, or there are sequential searches for certain keys that have not been indexed. Or the database hash lists are badly unbalanced.
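If statspack isn't being collected yet, the DBA can bracket a slow period with two snapshots and then run the report. A rough sketch (assuming the standard perfstat schema is installed; spreport.sql prompts for the begin/end snapshot IDs):
# echo "execute statspack.snap;" | sqlplus -s perfstat
(wait through the busy interval, then take a second snapshot the same way)
# sqlplus perfstat @?/rdbms/admin/spreport.sql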
Bill Hassell, sysadmin
07-23-2015 05:45 AM
Re: Interpreting Volume Statistics using VxVM
Thanks a lot for the response. Running
sar -d 2 10
I got the following output:
14:31:29   device   %busy   avque   r+w/s   blks/s   avwait   avserv
14:31:31   disk6     1.49    0.50       2       28     0.00     9.74
           disk11    1.49    0.50       2       28     0.00     6.58
           disk112  14.43    0.50      97    38860     0.00     1.50
           disk116  29.85    0.50     173    61222     0.00     1.80
           disk122   0.50    0.50       8       64     0.00     0.30
           disk137   1.49    0.50       2       64     0.00     7.70
           disk147   1.00    0.50       4      477     0.00     2.71
           disk151   1.99    0.50       4      477     0.00     4.76
           disk167   0.50    0.50       0       16     0.00    10.26
           disk177   0.50    0.50       2      111     0.00     5.37
14:31:33   disk10    1.01    0.50       2        4     0.00     8.87
           disk12    1.01    0.50       2        4     0.00    17.88
           disk112  21.11    0.50     331    34274     0.00     0.67
           disk116   1.51    0.50      35     4157     0.00     0.55
           disk122  33.67    0.50     495    66788     0.00     0.76
           disk126   0.50    0.50       4      113     0.00     1.32
           disk137   4.02    0.50      63     3055     0.00     1.18
           disk141   1.01    0.50      10      901     0.00     1.02
           disk147   0.50    0.50       4      585     0.00     1.20
           disk151   0.50    0.50       4      714     0.00     1.19
           disk157   3.02    0.50      33     2348     0.00     1.34
           disk167   4.52    0.50      50     3618     0.00     1.25
           disk177   2.01    0.50      18     2685     0.00     1.68
           disk178   1.01    0.50       3       80     0.00     3.48
           disk182   0.50    0.50       6      370     0.00     1.41
14:31:35   disk12    0.50    0.50       0       32     0.00     8.83
           disk112  35.82    0.50     629    34591     0.00     0.62
           disk116  29.35    0.50     290    64454     0.00     1.14
           disk122  27.36    0.50     559     7199     0.00     0.53
           disk137   6.47    0.51      40     1847     0.01     2.12
           disk147   1.00    0.50       6      644     0.00     1.93
           disk151   1.99    0.50       9     1281     0.00     2.32
           disk157   2.49    0.50      27     1910     0.00     1.46
           disk167   3.48    0.50      34     1703     0.00     1.24
           disk177   1.00    0.50       6      796     0.00     3.38
           disk178   1.00    0.50       2       80     0.00     5.64
           disk182   1.99    0.50       5      446     0.00     4.23
14:31:37   disk6     1.00    0.50       2       17     0.00     5.96
           disk10    3.50    0.50       6       41     0.00     8.84
           disk11    1.00    0.50       1       15     0.00     6.05
           disk12    2.00    0.50       4       21     0.00     6.98
           disk112  37.50    0.50     644    46491     0.00     0.62
           disk122  37.50    0.50     826    60581     0.00     0.58
           disk137   2.00    0.50       3       96     0.00     6.12
           disk141   0.50    0.50       4      128     0.00     0.20
           disk147   0.50    0.50       4      487     0.00     0.83
           disk151   0.50    0.50       4      487     0.00     1.36
           disk157   1.00    0.50       2       48     0.00     6.88
           disk167   0.50    0.50      10      289     0.00     0.86
14:31:39   disk6     0.50    0.50       0        8     0.00     6.99
           disk112   7.96    0.50     101    19929     0.00     0.87
           disk116  26.87    0.50     316    63562     0.00     0.98
           disk122   3.98    0.50      91     9981     0.00     0.53
           disk137   3.98    0.61      61     2675     0.08     1.73
           disk147   1.00    0.50       4      396     0.00     2.37
           disk151   1.99    0.50       9     1479     0.00     2.51
           disk157   1.99    0.50      24     1003     0.00     1.72
           disk167   1.49    0.50      28     1146     0.00     1.29
           disk177   0.50    0.50       4      557     0.00     1.07
           disk178   0.50    0.50       3       96     0.00     1.74
           disk182   0.50    0.50       6      302     0.00     1.84
14:31:41   disk6     0.50    0.50       1       16     0.00     4.23
           disk11    0.50    0.50       1       16     0.00     3.96
           disk112  17.09    0.50     105    45112     0.00     1.66
           disk122  17.09    0.50     369    53804     0.00     0.75
           disk137   6.03    0.54      78     3715     0.03     1.30
           disk147   1.01    0.50       6      602     0.00     2.17
           disk151   2.01    0.50       7      859     0.00     2.99
           disk157   2.01    0.50      27     1061     0.00     0.98
           disk167   2.01    0.50      51     2750     0.00     0.86
           disk177   6.53    0.50      28     5403     0.00     6.49
           disk178   0.50    0.50       7      241     0.00     0.64
           disk182   1.01    0.50      10      482     0.00     1.46
14:31:43   disk10    0.50    0.50       1       16     0.00     6.23
           disk12    0.50    0.50       1       16     0.00     7.80
           disk112   7.00    0.50     124    20555     0.00     0.70
           disk116  28.00    0.50     253    64328     0.00     1.20
           disk122   4.00    0.50     223    14858     0.00     0.40
           disk137   2.00    0.50       4      112     0.00     5.57
           disk147   1.00    0.50       2      351     0.00     3.12
           disk151   1.00    0.50       2      351     0.00     3.40
           disk167   0.50    0.50       2       33     0.00     4.01
14:31:45   disk6     0.50    0.50       1       19     0.00     5.64
           disk11    0.50    0.50       1       19     0.00     5.18
           disk12    0.50    0.50       1        2     0.00     8.06
           disk112  16.92    0.50     167    40261     0.00     1.30
           disk116   0.50    0.50       9      160     0.00     0.45
           disk122  23.38    0.50     433    47678     0.00     1.01
           disk137   5.47    0.54      67     3264     0.08     2.65
           disk141   0.50    0.50       3      111     0.00     0.45
           disk147   1.00    0.50       4      541     0.00     2.48
           disk151   2.49    0.50       7     1290     0.00     3.55
           disk157   4.98    0.50      30     1481     0.00     2.85
           disk167   4.48    0.50      39     2086     0.00     2.20
           disk177   1.99    0.50      25     4712     0.00     1.35
           disk178   0.50    0.50       4      143     0.00     1.25
           disk182   0.50    0.50       2      127     0.00     0.79
14:31:47   disk10    1.51    0.50       3       22     0.00     8.46
           disk12    1.01    0.50       2       18     0.00     8.92
           disk112  11.06    0.50      57    24970     0.00     2.00
           disk116  28.64    0.50     245    65222     0.00     1.30
           disk122   4.52    0.50      41    13408     0.00     1.22
           disk137   4.52    0.50      61     2943     0.00     1.46
           disk151   1.51    0.50      10     1420     0.00     1.45
           disk157   3.02    0.50      36     1383     0.00     1.31
           disk167   2.01    0.50      37     1656     0.00     0.85
           disk177   1.01    0.50       2       64     0.00     3.57
           disk182   0.50    0.50       5      257     0.00     1.71
14:31:49   disk6     3.00    0.50       6       71     0.00    11.86
           disk10    0.50    0.50       1       10     0.00     5.69
           disk11    2.50    0.50       5       69     0.00    13.84
           disk12    0.50    0.50       1       10     0.00     5.66
           disk109   0.50    0.50       0       16     0.00    10.28
           disk112  25.00    0.50     156    52845     0.00     1.63
           disk116   0.50    0.50       7       72     0.00     0.31
           disk122  16.50    0.50     330    51130     0.00     0.67
           disk137   2.00    0.50       3       96     0.00     7.16
           disk147   2.50    0.50       4      488     0.00     6.06
           disk151   1.00    0.50       4      488     0.00     3.39
           disk157   0.50    0.50       1       32     0.00     7.71
           disk167   1.00    0.50       2       48     0.00     7.99
Average    disk6     0.70    0.50       1       16     0.00     9.31
Average    disk11    0.60    0.50       1       15     0.00     9.76
Average    disk112  19.39    0.50     241    35786     0.00     0.87
Average    disk116  14.54    0.50     133    32362     0.00     1.20
Average    disk122  16.84    0.50     337    32498     0.00     0.67
Average    disk137   3.80    0.53      38     1785     0.03     1.85
Average    disk147   0.95    0.50       4      457     0.00     2.44
Average    disk151   1.50    0.50       6      885     0.00     2.56
Average    disk167   2.05    0.50      25     1332     0.00     1.32
Average    disk177   1.35    0.50       9     1431     0.00     3.37
Average    disk10    0.70    0.50       1        9     0.00     8.31
Average    disk12    0.60    0.50       1       10     0.00     8.87
Average    disk126   0.05    0.50       0       11     0.00     1.32
Average    disk141   0.20    0.50       2      114     0.00     0.71
Average    disk157   1.90    0.50      18      926     0.00     1.69
Average    disk178   0.35    0.50       2       64     0.00     1.88
Average    disk182   0.50    0.50       3      198     0.00     1.92
Average    disk109   0.05    0.50       0        2     0.00    10.28
If I single out the disks with avserv longer than 10 ms, some of them belong to /u01, and others are in the VxVM layout. As I don't have VxVM skills, it is proving difficult to identify which file systems they belong to. I also looked at:
glance -u
07-23-2015 10:36 AM
Re: Interpreting Volume Statistics using VxVM
The command vxprint will show you which volumes (and therefore which mountpoints) live on a specific disk.
However, the sar report shows nothing unusual. The 10 ms threshold is just a guideline; 12 ms, or even an occasional 20 ms, is not a problem. If you saw averages continuously around 150 ms, then I'd talk to the SAN admin about the slow responses.
The busiest disks are 112, 116 and 122. Use vxprint to see what mountpoint(s) are involved. This is an application/database issue. The application is requesting a lot of disk I/O. This looks like an Oracle database, so what do the statspack numbers report? Are there specific SQL procedures that are killing the machine? Are the procedures badly written? Can the DBA tune SGA usage for better performance?
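Tracing a busy sar device back to a mountpoint takes a couple of hops, because sar on 11.31 reports agile device names (disk112) while your vxprint output uses legacy c#t#d# names. Roughly (a sketch; the c18t0d1/dgrac01/data01 names below are only illustrative, taken from your own listing):
# ioscan -m dsf /dev/disk/disk112        (map the agile DSF to its legacy c#t#d# device)
# vxdisk list | grep c18t0d1             (find the VxVM disk media name, e.g. dgrac01)
# vxprint -g dgrac -ht | grep dgrac01    (see which subdisk/plex/volume sits on it, e.g. data01)
# bdf | grep /dev/vx/dsk/dgrac/data01    (match the volume to its mountpoint)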
If you cannot change the application or move the busiest mountpoints to other volumes, you'll have to replace the system with something faster or live with the performance you have.
Bill Hassell, sysadmin
07-23-2015 12:57 PM
Re: Interpreting Volume Statistics using VxVM
I might be wrong but I think the problem could exist because of the following:
# vxdg list
NAME         STATE                 ID
dgarc        enabled,shared,cds    1257773995.53.rx7641
dgrac        enabled,shared,cds    1254658389.45.rx7641
mgtmp        enabled,shared,cds    1258705503.43.rx7641
and if I do:
vxprint -g dgarc
TY NAME         ASSOC        KSTATE   LENGTH     PLOFFS  STATE     TUTIL0  PUTIL0
dg dgarc        dgarc        -        -          -       -         -       -
dm dgarc01      c18t0d0      -        421424000  -       NOHOTUSE  -       -
dm dgarc02      c18t1d3      -        211713920  -       NOHOTUSE  -       -
dm dgrac11      c18t1d6      -        214884224  -       -         -       -
dm dgrac12      c18t1d7      -        214884224  -       -         -       -
dm dgrac13      c18t2d0      -        214884224  -       -         -       -
v  data11       fsgen        ENABLED  209715200  -       ACTIVE    -       -
pl data11-01    data11       ENABLED  209715200  -       ACTIVE    -       -
sd dgrac11-01   data11-01    ENABLED  209715200  0       -         -       -
v  data12       fsgen        ENABLED  209715200  -       ACTIVE    -       -
pl data12-01    data12       ENABLED  209715200  -       ACTIVE    -       -
sd dgrac12-01   data12-01    ENABLED  209715200  0       -         -       -
v  data13       fsgen        ENABLED  209715200  -       ACTIVE    -       -
pl data13-01    data13       ENABLED  209715200  -       ACTIVE    -       -
sd dgrac13-01   data13-01    ENABLED  209715200  0       -         -       -
v  vol1         fsgen        ENABLED  203776000  -       ACTIVE    -       -
pl vol1-02      vol1         ENABLED  203776000  -       ACTIVE    -       -
sd dgarc02-01   vol1-02      ENABLED  203776000  0       -         -       -
v  vol2         fsgen        ENABLED  408576000  -       ACTIVE    -       -
pl vol2-02      vol2         ENABLED  408576000  -       ACTIVE    -       -
sd dgarc01-01   vol2-02      ENABLED  408576000  0       -         -       -
So here I can see these file systems:
/dev/vx/dsk/dgarc/data11  209715200   93804297  108666731   46%  /data11
/dev/vx/dsk/dgarc/data13  209715200   35689085  163149849   18%  /data13
/dev/vx/dsk/dgarc/data12  209715200  150064672   55922626   73%  /data12
and if I do:
vxprint -g dgrac
TY NAME         ASSOC        KSTATE   LENGTH     PLOFFS  STATE     TUTIL0  PUTIL0
dg dgrac        dgrac        -        -          -       -         -       -
dm dgrac01      c18t0d1      -        106889088  -       NOHOTUSE  -       -
dm dgrac02      c18t0d2      -        106889088  -       NOHOTUSE  -       -
dm dgrac03      c18t0d3      -        106889088  -       NOHOTUSE  -       -
dm dgrac04      c18t0d4      -        106889088  -       NOHOTUSE  -       -
dm dgrac05      c18t0d5      -        106889088  -       NOHOTUSE  -       -
dm dgrac06      c18t0d6      -        106889088  -       NOHOTUSE  -       -
dm dgrac07      c18t0d7      -        106889088  -       NOHOTUSE  -       -
dm dgrac08      c18t1d0      -        106889088  -       NOHOTUSE  -       -
dm dgrac09      c18t1d1      -        106889088  -       NOHOTUSE  -       -
dm dgrac10      c18t1d2      -        106889088  -       NOHOTUSE  -       -
v  data01       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data01-02    data01       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac01-01   data01-02    ENABLED  104791936  0       -         -       -
v  data02       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data02-02    data02       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac02-01   data02-02    ENABLED  104791936  0       -         -       -
v  data03       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data03-02    data03       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac03-01   data03-02    ENABLED  104791936  0       -         -       -
v  data04       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data04-02    data04       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac04-01   data04-02    ENABLED  104791936  0       -         -       -
v  data05       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data05-02    data05       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac05-01   data05-02    ENABLED  104791936  0       -         -       -
v  data06       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data06-02    data06       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac06-01   data06-02    ENABLED  104791936  0       -         -       -
v  data07       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data07-02    data07       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac07-01   data07-02    ENABLED  104791936  0       -         -       -
v  data08       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data08-02    data08       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac08-01   data08-02    ENABLED  104791936  0       -         -       -
v  data09       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data09-02    data09       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac09-01   data09-02    ENABLED  104791936  0       -         -       -
v  data10       fsgen        ENABLED  104791936  -       ACTIVE    -       -
pl data10-02    data10       ENABLED  104791936  -       ACTIVE    -       -
sd dgrac10-01   data10-02    ENABLED  104791936  0       -         -       -
I see another set of file systems.
Shouldn't all these file systems be in the same disk group?
and when I do bdf:
bdf
Filesystem                   kbytes       used      avail  %used  Mounted on
/dev/vg00/lvol3             2097152     600960    1484632    29%  /
/dev/vg00/lvol1             2097152     528752    1556312    25%  /stand
/dev/vg00/lvol8            10485760   10267584     217360    98%  /var
/dev/vg00/lvol7            10485760    3274224    7155272    31%  /usr
/dev/vgora/orabin          71663616   54069996   16551318    77%  /u01
/dev/vg00/lvol6             2097152    1300016     791840    62%  /tmp
/dev/vg00/lvol5            10485760    9028760    1445736    86%  /opt
/dev/vg00/lvol4             4194304      73672    4088536     2%  /home
/dev/odm                          0          0          0     0%  /dev/odm
/dev/vx/dsk/dgrac/data05  104791936   63073031   39111862    62%  /data05
/dev/vx/dsk/dgrac/data01  104791936   96026763    8217857    92%  /data01
/dev/vx/dsk/dgrac/data02  104791936   85397614   18182813    82%  /data02
/dev/vx/dsk/dgrac/data07  104791936   84641530   18891386    82%  /data07
/dev/vx/dsk/dgrac/data10  104791936   72371316   30394847    70%  /data10
/dev/vx/dsk/dgrac/data04  104791936   66531945   35869512    65%  /data04
/dev/vx/dsk/dgrac/data03  104791936   90918355   13006988    87%  /data03
/dev/vx/dsk/dgrac/data09  104791936   60128495   41872263    59%  /data09
/dev/vx/dsk/dgarc/vol1    203776000  115789883   82497420    58%  /arch01
/dev/vx/dsk/dgrac/data08  104791936  104787625       4311   100%  /data08
/dev/vx/dsk/dgrac/data06  104791936   96343286    7920996    92%  /data06
/dev/vx/dsk/dgarc/data11  209715200   93804297  108666731    46%  /data11
/dev/vx/dsk/dgarc/data13  209715200   35689085  163149849    18%  /data13
/dev/vx/dsk/dgarc/data12  209715200  150064672   55922626    73%  /data12
/dev/vx/dsk/mgtmp/vol1    512000000  511938982      61018   100%  /migtemp
/dev/vx/dsk/dgarc/vol2    408576000  331414921   72403516    82%  /backup01
Doesn't this impact performance?
07-23-2015 05:45 PM - edited 07-23-2015 05:46 PM
Re: Interpreting Volume Statistics using VxVM
Since I know almost nothing about your computer (model, HP-UX version, internal and external storage), I can't really comment on the question. Taking a wild guess, I am assuming that vg00 is on internal disks with very little I/O going on, and that the VxVM volumes are possibly on an external array, connected with unknown cables and configured with unknown RAID groups. Changing the layout will require a lot of analysis first.
Can you post the analysis from statspack?
Bill Hassell, sysadmin
07-23-2015 11:32 PM
Re: Interpreting Volume Statistics using VxVM
Sorry for this late reply; it's the time zone (GMT+2). The systems are two rx7640s running HP-UX 11.31 in an MC/ServiceGuard cluster, and the storage system is a NetApp. I will ask my DBA colleague to provide the output of statspack, as I am not a DBA.
I believe this issue started 3 months ago, when those 3 other file systems (data11 to data13) were added to the system.
The other interesting aspect is that this morning (local time), when nobody was at the office and there was no activity on the network, the application was fast... in all modules... so...
07-24-2015 10:28 AM
Re: Interpreting Volume Statistics using VxVM
>> The other interesting aspect is that this morning (local time), when nobody was at the office and there was no activity on the network,
>> the application was fast... in all modules... so...
I'm not sure why you find this interesting. If there is no activity on the system, I would expect everything to run fast. I'm going to take a wild guess that things run really slowly when you are backing up the data, correct?
The performance issue is based on comments from the end users, and it sounds like things were OK before 3 months ago. When you say that disk volumes were added, I suspect the database was significantly changed, and perhaps additional users are now active on the system, or perhaps another instance or two of Oracle is now running, correct?
Bill Hassell, sysadmin
07-26-2015 10:58 PM
Re: Interpreting Volume Statistics using VxVM
Database backups are run after office hours...
Nothing was changed on the database, the DBA assured me, and I was told that the addition of the 3 other file systems in a different disk group does not matter, performance-wise...