<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Socket or Disk performance in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158519#M691805</link>
    <description>I'll give you the VxVM counterparts of the LVM command set.  Here's a great URL:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.bhami.com/rosetta.html" target="_blank"&gt;http://www.bhami.com/rosetta.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;pvdisplay     vxdisk list&lt;BR /&gt;vgdisplay     vxdg list / vxprint</description>
    <pubDate>Mon, 17 Mar 2008 20:31:50 GMT</pubDate>
    <dc:creator>Michael Steele_2</dc:creator>
    <dc:date>2008-03-17T20:31:50Z</dc:date>
    <item>
      <title>Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158507#M691793</link>
      <description>I have a Java client application that reads files from directory "A" and sends that data to a C++ application over a socket connection. The Java client then moves these files to directory "B". Directory "A" has subdirectories based on the day and time the files were received. What I am seeing is that when we have 1M+ files the process really slows down. When I take a stack trace I see the following:&lt;BR /&gt;&lt;BR /&gt;"0" prio=10 tid=00055ac0 nid=36 lwp_id=4707656 runnable [21240000..21240738]&lt;BR /&gt;        at java.io.UnixFileSystem.rename0(Native Method)&lt;BR /&gt;        at java.io.UnixFileSystem.rename(UnixFileSystem.java:318)&lt;BR /&gt;        at java.io.File.renameTo(File.java:1212)&lt;BR /&gt;        at com.i.e.R.moveFile(Unknown Source)&lt;BR /&gt;&lt;BR /&gt;It looks like it's taking time to move files. I am not sure why moving files would be slow when there are a lot of files. I am also not sure whether writing to the socket is slow. How can I tell which resource is slowing this app? I tried to look at the sar data, but I didn't know how to relate the device names to the file systems. I've done basic analysis; I am looking for something more advanced that would tell me which resources are slowing this app.</description>
      <pubDate>Mon, 10 Mar 2008 14:17:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158507#M691793</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-10T14:17:56Z</dc:date>
    </item>
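The rename slowdown described in the question above can be sketched with a small, self-contained benchmark. This is plain Python on whatever local filesystem runs it, not HP-UX/JFS, so it only illustrates the general effect (directory-entry maintenance cost growing with directory size); the magnitude varies widely by filesystem:

```python
import os
import tempfile
import time

def time_renames(n_files, n_moves=200):
    """Create n_files empty files in A/, then time moving n_moves of
    them into B/ on the same filesystem (a pure rename, no data copy)."""
    with tempfile.TemporaryDirectory() as root:
        src = os.path.join(root, "A")
        dst = os.path.join(root, "B")
        os.mkdir(src)
        os.mkdir(dst)
        for i in range(n_files):
            open(os.path.join(src, "f%07d" % i), "w").close()
        start = time.perf_counter()
        for i in range(n_moves):
            name = "f%07d" % i
            os.rename(os.path.join(src, name), os.path.join(dst, name))
        return time.perf_counter() - start

# Compare renames out of a small directory vs. a much fuller one.
print("1k-entry dir:  %.4fs for 200 renames" % time_renames(1_000))
print("50k-entry dir: %.4fs for 200 renames" % time_renames(50_000))
```

On a filesystem with hashed or indexed directories the difference may be small; on one that scans directories linearly it grows sharply, which is the behaviour the JFS tuning paper linked below discusses.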
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158508#M691794</link>
      <description>&amp;gt;&amp;gt; What I am seeing is that when we have 1M+ files the process really slows down.&lt;BR /&gt;&lt;BR /&gt;Sure! Those directories have an on-disk structure which needs to be maintained. As they grow that's more work, and there will be less cache to go around.&lt;BR /&gt;&lt;BR /&gt;Check out the tail end of:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/5576/JFS_Tuning.pdf" target="_blank"&gt;http://docs.hp.com/en/5576/JFS_Tuning.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; It looks like it's taking time to move files.&lt;BR /&gt;&lt;BR /&gt;May we assume the moves are just between directories on the same underlying volume?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; I am not sure why it would be slow in moving files if there are lot of files.&lt;BR /&gt;&lt;BR /&gt;Because there is more data to trounce through?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; I am also not sure if writing on socket is slow. How can I tell which resource is slowing this App.&lt;BR /&gt;&lt;BR /&gt;Isolate and measure! Try 1000 moves outside the application context, at shell level.&lt;BR /&gt;&lt;BR /&gt;What else is the application doing besides a 'rename'? Could it be doing readdirs galore? Or doing a stat on all files before moving one? Maybe it is blowing the inode cache?&lt;BR /&gt;&lt;BR /&gt;You might want to try 'truss' or another system-call trace tool.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Mon, 10 Mar 2008 14:51:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158508#M691794</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-03-10T14:51:07Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158509#M691795</link>
      <description>- I haven't looked at the JFS tuning doc yet, but I'll look at it. Thanks.&lt;BR /&gt;- Yes, it's moving files within the same volume.&lt;BR /&gt;- So when a file is moved, does it slow down if that directory has a lot of files?&lt;BR /&gt;- Yes, this application does read files from the directory. How can I tell if it is blowing the inode cache? Also, what is the role of the inode cache, and how can I tell if that's the problem?&lt;BR /&gt;- How can I use truss, and how do I interpret the output in this context?&lt;BR /&gt;- If I find writing to the socket is a problem, then how can I tell if it's a network-related issue? Could it be a network-related issue if client and server are on the same box?</description>
      <pubDate>Mon, 10 Mar 2008 23:52:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158509#M691795</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-10T23:52:54Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158510#M691796</link>
      <description>It's as easy (* for the problem file systems *) as looking for a disk bottleneck with 'sar -d'.  Refer to monthly data (* if you have it *) in /var/adm/sa/sa##.  And use:&lt;BR /&gt;&lt;BR /&gt;sar -d -f /var/adm/sa/sa10 (* for 3/10 *)&lt;BR /&gt;&lt;BR /&gt;-or-&lt;BR /&gt;&lt;BR /&gt;sar -d 5 5 in a cron job every fifteen minutes, with the output redirected to a file.  Look for disk entries where avwait is greater than avserv.  Use 'pvdisplay' to identify the file system on the disk.&lt;BR /&gt;&lt;BR /&gt;avwait    Average time (in milliseconds) that transfer requests waited idly on queue for the device;&lt;BR /&gt; &lt;BR /&gt;avserv    Average time (in milliseconds) to service each transfer request (includes seek, rotational latency, and data transfer times) for the device.&lt;BR /&gt; &lt;BR /&gt;Look into fragmentation.  If you have online JFS you should be defragging once a week.&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -d -D -e -E /filesystem&lt;BR /&gt;&lt;BR /&gt;Note:  Once started, don't interrupt or kill it.  If it takes too long, read the man page on the number of passes and how to reduce it.&lt;BR /&gt;&lt;BR /&gt;As for sockets, 'lsof' (* list open files *) provides part of the information.  You'll also need 'netstat' to count collisions, etc.  Or the GNU HP-UX version of the Sun command 'snoop'.&lt;BR /&gt;&lt;BR /&gt;'lsof' is also a GNU command.  Here's 'lsof':&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/" target="_blank"&gt;http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;As for a 'snoop' equivalent, look into 'tcpdump', 'tcptrace', and 'tcpflow', which can be found here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://hpux.cs.utah.edu/hppd/auto/ia64-11.31-T.html" target="_blank"&gt;http://hpux.cs.utah.edu/hppd/auto/ia64-11.31-T.html&lt;/A&gt;</description>
      <pubDate>Tue, 11 Mar 2008 04:04:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158510#M691796</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-11T04:04:12Z</dc:date>
    </item>
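The avwait-versus-avserv rule of thumb above is easy to automate once sar data is in hand. A minimal sketch in plain Python, assuming the seven-column 'sar -d' data rows like those posted later in this thread (real sar output also carries timestamp, header and average lines, which this skips by column count and keyword):

```python
def flag_slow_devices(sar_lines):
    """Return (device, avwait, avserv) for rows where avwait exceeds
    avserv, i.e. requests queue longer than the device takes to serve
    them. Expects rows shaped like:
    device %busy avque r+w/s blks/s avwait avserv"""
    flagged = []
    for line in sar_lines:
        parts = line.split()
        if len(parts) == 7 and parts[0] != "device":
            try:
                avwait, avserv = float(parts[5]), float(parts[6])
            except ValueError:
                continue  # header or timestamp noise, not a data row
            if avwait > avserv:
                flagged.append((parts[0], avwait, avserv))
    return flagged

sample = [
    "device %busy avque r+w/s blks/s avwait avserv",
    "c0t0d0 12.1 0.5 40 640 4.2 6.8",      # healthy: wait below service
    "c2t1d0 59.7 44.9 515 4094 38.6 3.5",  # queued far longer than served
]
print(flag_slow_devices(sample))
```

The device names and numbers in `sample` are made up for illustration; feed it lines from `sar -d -f /var/adm/sa/sa##` output instead.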
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158511#M691797</link>
      <description>I am trying to relate my file systems to the devices in the sar output. How can I use pvdisplay or lvdisplay to list all the devices that sar shows? I tried 'pvdisplay &amp;lt;file system&amp;gt;' and also 'pvdisplay &amp;lt;mount point&amp;gt;', but I don't get the information.&lt;BR /&gt;&lt;BR /&gt;I was reading about the "Directory Name Lookup Cache" in &lt;A href="http://docs.hp.com/en/5576/JFS_Tuning.pdf" target="_blank"&gt;http://docs.hp.com/en/5576/JFS_Tuning.pdf&lt;/A&gt;. How can I verify whether this is the problem? Could this cache be increased, and is it advisable to increase it? Since the client application reads the file name and then opens the file and writes it to the socket, I was thinking faster lookups from the DNLC could help.</description>
      <pubDate>Tue, 11 Mar 2008 17:24:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158511#M691797</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-11T17:24:48Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158512#M691798</link>
      <description>Could somebody reply to my questions?</description>
      <pubDate>Fri, 14 Mar 2008 17:27:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158512#M691798</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-14T17:27:33Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158513#M691799</link>
      <description>lvdisplay will show you the device files used by each logical volume.&lt;BR /&gt;&lt;BR /&gt;e.g.&lt;BR /&gt;lvdisplay -v /dev/vgabc/lvol1&lt;BR /&gt;&lt;BR /&gt;Or the reverse:&lt;BR /&gt;&lt;BR /&gt;pvdisplay /dev/dsk/cXtYdZ will show you which logical volumes the device is used in.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 14 Mar 2008 18:11:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158513#M691799</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2008-03-14T18:11:35Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158514#M691800</link>
      <description>pvdisplay /dev/dsk/cXtYdZ -or-&lt;BR /&gt;pvdisplay -v /dev/dsk/cXtYdZ&lt;BR /&gt;&lt;BR /&gt;where X, Y and Z come from the sar output.&lt;BR /&gt;&lt;BR /&gt;However, I suggest that you start with:&lt;BR /&gt;&lt;BR /&gt;"bdf /your/filesystem" ...which will tell you what logical volume the filesystem resides in.&lt;BR /&gt;&lt;BR /&gt;Then do an lvdisplay of that logical volume:&lt;BR /&gt;&lt;BR /&gt;"lvdisplay -v /dev/vgXXX/lvolXXX", where the logical volume is taken from the output of the prior "bdf".  That should give you a list of the physical volumes used by the LV.&lt;BR /&gt;&lt;BR /&gt;You can use the PV names listed by lvdisplay to match against the sar output.</description>
      <pubDate>Fri, 14 Mar 2008 18:17:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158514#M691800</guid>
      <dc:creator>OldSchool</dc:creator>
      <dc:date>2008-03-14T18:17:51Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158515#M691801</link>
      <description>Sorry.&lt;BR /&gt;&lt;BR /&gt;'pvdisplay -v /dev/dsk/c#t#d# | more'&lt;BR /&gt;&lt;BR /&gt;This will list several pages; you only need the first and second.  The other pages display PE (* physical extent *) data, which isn't needed for this.</description>
      <pubDate>Fri, 14 Mar 2008 19:16:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158515#M691801</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-14T19:16:18Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158516#M691802</link>
      <description>When I do lvdisplay I get:&lt;BR /&gt;&lt;BR /&gt;$ /usr/sbin/lvdisplay /dev/vx/dsk/app1dg/app1vol02&lt;BR /&gt;lvdisplay: Illegal path "/dev/vx/dsk/app1dg".&lt;BR /&gt;lvdisplay: Cannot display logical volume "/dev/vx/dsk/app1dg/app1vol02".&lt;BR /&gt;&lt;BR /&gt;I took "/dev/vx/dsk/app1dg/app1vol02" from bdf.&lt;BR /&gt;&lt;BR /&gt;Also, can someone help me understand how to diagnose whether the DNLC is a problem, and whether increasing it will help?</description>
      <pubDate>Fri, 14 Mar 2008 23:34:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158516#M691802</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-14T23:34:50Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158517#M691803</link>
      <description>"/dev/vx/dsk/app1dg/app1vol02"&lt;BR /&gt;&lt;BR /&gt;This is a VxVM volume, not an LVM logical volume.</description>
      <pubDate>Sat, 15 Mar 2008 11:28:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158517#M691803</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-15T11:28:03Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158518#M691804</link>
      <description>Then how can I measure performance for VxVM? Could somebody help me understand how I can see how the disks are performing under VxVM? It looks like the previous examples apply to LVM.</description>
      <pubDate>Mon, 17 Mar 2008 17:44:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158518#M691804</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-17T17:44:32Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158519#M691805</link>
      <description>I'll give you the VxVM counterparts of the LVM command set.  Here's a great URL:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.bhami.com/rosetta.html" target="_blank"&gt;http://www.bhami.com/rosetta.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;pvdisplay     vxdisk list&lt;BR /&gt;vgdisplay     vxdg list / vxprint</description>
      <pubDate>Mon, 17 Mar 2008 20:31:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158519#M691805</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-17T20:31:50Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158520#M691806</link>
      <description>So I finally found a way, using "vxdisk list", to get the device names for the volume, then grepped for them in the sar data. I do see those devices having more avwait than avserv. I am not sure what to do next. How can I tell what needs to be tuned?&lt;BR /&gt;&lt;BR /&gt;Snapshot of sar (I've cut the device names)&lt;BR /&gt;--&lt;BR /&gt;device %busy avque r+w/s blks/s avwait avserv&lt;BR /&gt;&lt;BR /&gt; d0 59.69 44.89 515 4094 38.55 3.46&lt;BR /&gt; d1 59.24 42.16 507 3990 36.60 3.43&lt;BR /&gt; d2 59.86 40.40 509 4014 34.65 3.44&lt;BR /&gt; d3 59.43 41.09 509 4029 35.16 3.45&lt;BR /&gt; d4 60.90 44.55 517 4126 37.97 3.50&lt;BR /&gt; d5 60.74 40.59 507 3984 34.69 3.50&lt;BR /&gt; d6 59.01 43.72 513 4117 36.65 3.46&lt;BR /&gt;--</description>
      <pubDate>Mon, 17 Mar 2008 21:55:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158520#M691806</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-17T21:55:45Z</dc:date>
    </item>
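As a quick sanity check on the snapshot above, the wait-to-service ratio per device (the same numbers, just divided) shows every spindle holding requests in queue roughly ten times longer than it takes to serve them, which points at a queueing bottleneck rather than slow media:

```python
# avwait/avserv pairs transcribed from the sar snapshot above.
rows = {
    "d0": (38.55, 3.46), "d1": (36.60, 3.43), "d2": (34.65, 3.44),
    "d3": (35.16, 3.45), "d4": (37.97, 3.50), "d5": (34.69, 3.50),
    "d6": (36.65, 3.46),
}
for dev, (avwait, avserv) in rows.items():
    print("%s: wait/serv = %.1fx" % (dev, avwait / avserv))
```

With waits dominated by queueing, the levers are fewer I/Os per second or more spindles to spread them over, as the follow-up replies suggest.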
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158521#M691807</link>
      <description>Finding a disk bottleneck is not as clear-cut in VxVM as it is in LVM.  You have to go through the /etc/path_to_inst file.  Here's a Solaris disk bottleneck doc that refers to DiskSuite and metastat.  It's related to your situation because /etc/path_to_inst is involved and because slices are used instead of logical volumes.&lt;BR /&gt;&lt;BR /&gt;Paste in your vxdisk list and vxprint data if you can't get what you need.</description>
      <pubDate>Mon, 17 Mar 2008 22:04:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158521#M691807</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-17T22:04:30Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158522#M691808</link>
      <description>The attached document tells how to identify the associated file system. I already know which file systems these disks point to. I am trying to understand how much of a problem this is and how it can be tuned.</description>
      <pubDate>Mon, 17 Mar 2008 23:49:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158522#M691808</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-17T23:49:27Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158523#M691809</link>
      <description>Oh, well, in that case you have a number of options, beginning with your hardware and RAID level.  If this is RAID 5, consider mirroring only, or striping and mirroring: RAID 0, 1, 01 or 10.  RAID 01 and 10 will be striped across disk groups, and disk groups can be tricky when one disk in the group fails.  It depends on the type of disk array that you have.&lt;BR /&gt;&lt;BR /&gt;Check the rotation speed of your disks and consider getting faster disks.&lt;BR /&gt;&lt;BR /&gt;From the O/S level, consider additional file systems on additional disks to load balance better.  This is probably your best choice.&lt;BR /&gt;&lt;BR /&gt;Reviewing your sar -d report, all of your avwait times are 10 times greater than your avserv times.  This is a significant bottleneck.&lt;BR /&gt;&lt;BR /&gt;I'd throw more disks at the problem.  Adding more spindles is what Oracle and other databases will usually recommend as well.&lt;BR /&gt;&lt;BR /&gt;Consult your DBAs for advice on what the database and application recommend for optimal performance and compatibility.  For example, they may not be able to handle a new file system well, especially if code changes are involved.</description>
      <pubDate>Tue, 18 Mar 2008 17:47:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158523#M691809</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-18T17:47:20Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158524#M691810</link>
      <description>One thing I don't understand is that writing files is fast, but moving files slows down. What could explain that? Is there a way to balance I/O within the existing resources?</description>
      <pubDate>Tue, 18 Mar 2008 18:00:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158524#M691810</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-18T18:00:01Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158525#M691811</link>
      <description>If you are using online JFS then you can defrag your file systems to increase performance.&lt;BR /&gt;&lt;BR /&gt;To test for online JFS try&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -D -E /filesystem&lt;BR /&gt;&lt;BR /&gt;What does this return?</description>
      <pubDate>Tue, 18 Mar 2008 18:03:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158525#M691811</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2008-03-18T18:03:42Z</dc:date>
    </item>
    <item>
      <title>Re: Socket or Disk performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158526#M691812</link>
      <description>I don't even see -D as an option.&lt;BR /&gt;&lt;BR /&gt;This is what I get for fsadm -h:&lt;BR /&gt;&lt;BR /&gt;usage: fsadm [-F FStype] [-V] [-o specific_options] special&lt;BR /&gt;</description>
      <pubDate>Tue, 18 Mar 2008 18:36:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-or-disk-performance/m-p/4158526#M691812</guid>
      <dc:creator>MohitAnchlia</dc:creator>
      <dc:date>2008-03-18T18:36:47Z</dc:date>
    </item>
  </channel>
</rss>

