<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: VxFSD Utilization in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078069#M601701</link>
    <description>Thanks</description>
    <pubDate>Wed, 28 Nov 2007 15:07:49 GMT</pubDate>
    <dc:creator>Bob Ferro</dc:creator>
    <dc:date>2007-11-28T15:07:49Z</dc:date>
    <item>
      <title>VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078058#M601690</link>
      <description>We are having I/O issues on an RP8420 server.  The one vpartition is running 23 Oracle 9.2 instances/DBs.  Jobs that run here are much slower than on the RP3440 development server.  The RP8420 is attached to an EVA8000.  Looking at the Glance process list, I noticed that vxfsd has high numbers, especially "Thd Cnt".  Has anyone had similar problems?&lt;BR /&gt;&lt;BR /&gt;B3692A GlancePlus C.04.50.00    15:42:51 phl1s422 9000/800                                   Current  Avg  High&lt;BR /&gt;---------------------------------------------------------------------------------------------------------------&lt;BR /&gt;CPU  Util   S     SNNU   U                                                                    | 17%   15%   37%&lt;BR /&gt;Disk Util   F                                                                            F    | 96%   93%  100%&lt;BR /&gt;Mem  Util   S             SU                        UB                                        | 51%   51%   51%&lt;BR /&gt;Swap Util   U         UR                 R                                                    | 37%   36%   37%&lt;BR /&gt;---------------------------------------------------------------------------------------------------------------&lt;BR /&gt;                                                 PROCESS LIST                                      Users=    1&lt;BR /&gt;                              User      CPU Util   Cum     Disk             Thd&lt;BR /&gt;Process Name  PID   PPID  Pri Name    (1600% max)  CPU    IO Rate    RSS    Cnt&lt;BR /&gt;--------------------------------------------------------------------------------&lt;BR /&gt;oraclemedec  13608      1 148 oratns   55.6/10.6   432.5  892/ 142  21.2mb    1&lt;BR /&gt;ora_j000_cd   2546      1 149 supdba   45.0/18.6    65.2  193/ 159  53.2mb    1&lt;BR /&gt;oraclecdmis   7836      1 149 oratns   33.0/ 2.6   192.3 1677/92.8  17.3mb    1&lt;BR /&gt;vxfsd           64      0 134 root     25.5/13.5  174146  6.4/ 9.9  15.6mb  135&lt;BR /&gt;oraclecdmia  20630      1 201 oratns   23.4/ 0.1     1.4  570/ 2.4  52.1mb    1&lt;BR /&gt;oraclectwhs  24660      1 149 oratns   22.4/ 0.2    75.1  513/ 2.4  33.1mb    1&lt;BR /&gt;oraclemktpc   3236      1 154 oratns   15.3/ 3.8     6.1  8.0/ 2.3  14.2mb    1&lt;BR /&gt;ora_j000_ca   4380      1 148 oracle   12.4/ 6.7     0.7  191/95.4  22.8mb    1&lt;BR /&gt;ovcd          3992      1 154 root      4.1/ 3.0 38492.9  0.0/ 0.0  15.7mb   28&lt;BR /&gt;oraclemktpc  23353      1 154 oratns    1.6/ 0.9     6.6  130/15.7  12.6mb    1&lt;BR /&gt;midaemon      3940      1 -16 root      1.4/ 2.3 29317.5  0.0/ 0.0  70.4mb    2&lt;BR /&gt;oraclelkupp   4382      1 154 oratns    1.2/ 1.2     0.1  4.6/ 4.6  13.5mb    1&lt;BR /&gt;ora_dbw0_cd  17217      1 156 supdba    0.8/ 0.2   158.6 41.2/12.2  61.0mb    1&lt;BR /&gt;                                                                                                  Page 1 of 25&lt;BR /&gt;</description>
      <pubDate>Mon, 05 Nov 2007 15:47:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078058#M601690</guid>
      <dc:creator>Bob Ferro</dc:creator>
      <dc:date>2007-11-05T15:47:08Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078059#M601691</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Let's find the hot disk.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.hpux.ws/?p=6" target="_blank"&gt;http://www.hpux.ws/?p=6&lt;/A&gt;&lt;BR /&gt;system.perf.sh&lt;BR /&gt;&lt;BR /&gt;Look at the sar -d output and see where the problem is.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 05 Nov 2007 16:14:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078059#M601691</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-11-05T16:14:30Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078060#M601692</link>
      <description>I know which device is busy; unfortunately, the EVA8000 is at another location and it was configured by HP.  They told me that they have 240 disks striped across 6 volume groups.  I don't know how they set it up.  What tools can I use to see the config of the EVA8000?&lt;BR /&gt;&lt;BR /&gt;HP-UX phl1s422 B.11.23 U 9000/800    11/06/07&lt;BR /&gt; &lt;BR /&gt;08:03:43   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;08:03:44   c2t6d0    7.00    0.50      17     194    0.62    9.10&lt;BR /&gt;           c0t6d0    4.00    0.50       7      98    0.38   16.62&lt;BR /&gt;           c8t0d1   50.00    0.50     470   18880    0.00    1.65&lt;BR /&gt;          c10t0d1   26.00    0.50     306   13696    0.00    1.38&lt;BR /&gt;          c14t0d1    1.00    0.50       2      96    0.00    0.57&lt;BR /&gt;           c8t0d2    2.00    0.50      17     400    0.00    1.82&lt;BR /&gt;          c10t0d2    1.00    0.50       5     128    0.00    3.07&lt;BR /&gt;           c8t0d3    2.00    0.50      10     160    0.00    1.94&lt;BR /&gt;           c8t0d4    7.00    0.50      35     720    0.00    2.21&lt;BR /&gt;          c10t0d4    3.00    0.50      16     432    0.00    2.15&lt;BR /&gt;           c8t0d5    1.00    0.50       6      96    0.00    0.30&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Nov 2007 08:40:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078060#M601692</guid>
      <dc:creator>Bob Ferro</dc:creator>
      <dc:date>2007-11-06T08:40:18Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078061#M601693</link>
      <description>Bob,&lt;BR /&gt;&lt;BR /&gt;The only 2 disks here which have any sort of IO issue are c0t6d0 and c2t6d0, and I'll bet that those disks aren't on the EVA, but are actually the local boot disks. All the other disks have no wait times and service times averaging below 5ms, which I would consider good IO.&lt;BR /&gt;&lt;BR /&gt;Use diskinfo and pvdisplay to determine what is on those 2 disks:&lt;BR /&gt;&lt;BR /&gt;diskinfo /dev/rdsk/c0t6d0&lt;BR /&gt;diskinfo /dev/rdsk/c2t6d0&lt;BR /&gt;&lt;BR /&gt;pvdisplay /dev/dsk/c0t6d0&lt;BR /&gt;pvdisplay /dev/dsk/c2t6d0&lt;BR /&gt;&lt;BR /&gt;At this stage I'd guess those are in the root volume group (vg00) and someone has added a non-system-related filesystem to vg00...&lt;BR /&gt;&lt;BR /&gt;If it is part of vg00 then:&lt;BR /&gt;&lt;BR /&gt;bdf | grep vg00&lt;BR /&gt;&lt;BR /&gt;would be interesting...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Nov 2007 13:49:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078061#M601693</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2007-11-06T13:49:35Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078062#M601694</link>
      <description>Hi Bob&lt;BR /&gt;&lt;BR /&gt;Do you have a failover/load-balancing solution like SecurePath?  If so, did you apply load-balancing settings with it?  The output of autopath display will help you.  You can also check for SecurePath patches.&lt;BR /&gt;&lt;BR /&gt;Round-robin load-balancing policies with SecurePath sometimes generate high disk usage, which reduces performance.&lt;BR /&gt;&lt;BR /&gt;Best Regards&lt;BR /&gt;Murat&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Nov 2007 13:54:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078062#M601694</guid>
      <dc:creator>Murat SULUHAN</dc:creator>
      <dc:date>2007-11-06T13:54:40Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078063#M601695</link>
      <description>Hey Bob,&lt;BR /&gt;&lt;BR /&gt;From your posted sar info I do not see any issue either; things look good.  Other than what Duncan mentioned about what is probably an internal OS disk, all seems swell.&lt;BR /&gt;&lt;BR /&gt;Post some more info.&lt;BR /&gt;&lt;BR /&gt;As far as reviewing the EVA, CommandView is the utility.  You will need to know the management station address, username and password.&lt;BR /&gt;&lt;BR /&gt;Typically EVAs are configured as 1 or 2 disk groups (could be more), each group having many disks.  There is no disk-to-host relationship.  If there is, a different array should have been used, as that defeats the main design of an EVA.&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Nov 2007 14:02:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078063#M601695</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2007-11-06T14:02:36Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078064#M601696</link>
      <description>Attached is another sar -d.  I see %busy as being high.  The disks in question are c8/c10/c12; these are on the EVA.  Unfortunately, the DBA ran a script on the RP8420, which is attached to the EVA.  The RP8420 is partitioned into 2 servers, each with 16 CPUs and 32GB memory.  The RP3440 has 2 CPUs with 12GB memory.  The RP3440 outperforms the RP8420.  The RP3440 is attached to an XP128.  According to the DBA, the script updates over 1,000,000 rows and does a rollback (for testing).  Attached are some of the stats.  The DBs are configured identically on both the test/prod servers.  Certainly there are 23 instances/DBs running on this server, but if the EVA8000 is striped over 240 physical disks, I would expect better performance.</description>
      <pubDate>Tue, 06 Nov 2007 14:27:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078064#M601696</guid>
      <dc:creator>Bob Ferro</dc:creator>
      <dc:date>2007-11-06T14:27:29Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078065#M601697</link>
      <description>Hi Bob&lt;BR /&gt;&lt;BR /&gt;Did you use SecurePath ? If yes can you submit autopath display?&lt;BR /&gt;&lt;BR /&gt;Best Regards&lt;BR /&gt;Murat</description>
      <pubDate>Tue, 06 Nov 2007 14:31:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078065#M601697</guid>
      <dc:creator>Murat SULUHAN</dc:creator>
      <dc:date>2007-11-06T14:31:10Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078066#M601698</link>
      <description>I have a printout of an autopath display all.  Unfortunately, we don't have root or sudo access.  The RP8420 server with the EVA is at our HQ and was set up by HP.  My supervisor wants me to gather as much info as I can and supply it to Team HP for resolution (fix their problems).  That ought to be fun; that's like telling the police to fix their radar.  Whatever info you need I can send.  I will try to scan it later and send it.</description>
      <pubDate>Tue, 06 Nov 2007 14:47:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078066#M601698</guid>
      <dc:creator>Bob Ferro</dc:creator>
      <dc:date>2007-11-06T14:47:34Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078067#M601699</link>
      <description>Bob,&lt;BR /&gt;&lt;BR /&gt;looking at those new numbers I still see no problem except what I saw before. The %busy field simply tells you how much time during the interval the disk was in action, and for an EVA LUN (which is no doubt made up of many physical disks behind cache), this number is completely irrelevant.&lt;BR /&gt;&lt;BR /&gt;The *really* important fields in a 'sar -d' output are avque and avserv. Having a nice low number in the queue like you have means there's not much outstanding IO against that LUN, and that's a good thing. Service times of under 10ms usually indicate acceptable IO response times - so I don't think the EVA is the source of your problem here.&lt;BR /&gt;&lt;BR /&gt;Now it could be a red herring, but again what I'd be more concerned about is those 2 non-EVA disks - both have extremely high service times for a very low amount of IO, which is slightly confusing...&lt;BR /&gt;&lt;BR /&gt;How about collecting a similar sar -d output on the rp3440 so you can show similar IO times there and discount this as the problem...&lt;BR /&gt;&lt;BR /&gt;Incidentally, unless someone from HP actually configured your servers, I think you will find the chances of anyone in HP Support giving you what amounts to free performance consulting very slim indeed.&lt;BR /&gt;&lt;BR /&gt;Of course, if your manager wants to get his checkbook out...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan&lt;BR /&gt;</description>
      <pubDate>Thu, 08 Nov 2007 16:24:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078067#M601699</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2007-11-08T16:24:27Z</dc:date>
    </item>
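Duncan's rule of thumb above (a low avque plus avserv under roughly 10ms means healthy IO, regardless of %busy on an EVA LUN) can be sketched as a small filter over sar -d output. This is a hypothetical helper written for illustration, not something posted in the thread; the column layout is assumed to match the sar -d samples shown earlier, and the thresholds are Duncan's stated rule of thumb, not official HP-UX guidance.

```python
# Hedged sketch: flag suspicious devices in HP-UX "sar -d" output.
# Assumed column layout (from the thread's samples): optional timestamp,
# then device, %busy, avque, r+w/s, blks/s, avwait, avserv.

def flag_slow_disks(sar_lines, avserv_ms=10.0, avque_max=1.0):
    """Return device names whose avserv or avque exceed the thresholds."""
    flagged = []
    for line in sar_lines:
        fields = line.split()
        if len(fields) == 8:
            fields = fields[1:]          # drop the leading timestamp
        if len(fields) != 7:
            continue                     # skip headers and blank lines
        device, busy, avque, rw, blks, avwait, avserv = fields
        try:
            if float(avserv) > avserv_ms or float(avque) > avque_max:
                flagged.append(device)
        except ValueError:
            continue                     # non-numeric row, e.g. the header
    return flagged

sample = [
    "08:03:44   c2t6d0    7.00    0.50      17     194    0.62    9.10",
    "           c0t6d0    4.00    0.50       7      98    0.38   16.62",
    "           c8t0d1   50.00    0.50     470   18880    0.00    1.65",
]
print(flag_slow_disks(sample))  # only c0t6d0 breaches the 10ms avserv rule
```

Run against the sar -d listing posted earlier, a filter like this would single out c0t6d0 (and c2t6d0 at other intervals), matching Duncan's suspicion that the suspect disks are the local boot disks rather than EVA LUNs.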
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078068#M601700</link>
      <description>Oops, I just missed that post - HP did set it up!&lt;BR /&gt;&lt;BR /&gt;I guess your manager may have a lever for getting some assistance!&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 08 Nov 2007 16:26:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078068#M601700</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2007-11-08T16:26:32Z</dc:date>
    </item>
    <item>
      <title>Re: VxFSD Utilization</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078069#M601701</link>
      <description>Thanks</description>
      <pubDate>Wed, 28 Nov 2007 15:07:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-utilization/m-p/5078069#M601701</guid>
      <dc:creator>Bob Ferro</dc:creator>
      <dc:date>2007-11-28T15:07:49Z</dc:date>
    </item>
  </channel>
</rss>

