<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Strange DD performance issue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002171#M778071</link>
    <description>Recently ran into something peculiar that almost brought the system to a halt. I have 2 PA-RISC Superdome partitions in a cluster with a large number of disk devices. &lt;BR /&gt;&lt;BR /&gt;   A monitoring job that runs through OVO checks disk connectivity by running the dd command in this format: &lt;BR /&gt;&lt;BR /&gt;dd count=1 if=/dev/dsk/cXtYdZ of=/dev/null&lt;BR /&gt;&lt;BR /&gt;Every time dd runs on a device that is active on the other node, resource utilization shoots up dramatically. For disks active on the current node, it works like a charm. &lt;BR /&gt;&lt;BR /&gt;I was curious to know why this behaviour occurs. &lt;BR /&gt;&lt;BR /&gt;The other question I have is: what is a good way to check disk connectivity? &lt;BR /&gt;&lt;BR /&gt;According to HP, ioscan is not a good way to check it. At first I couldn't believe this response, until yesterday when we found out a PV's status had changed to unavailable, with EMS pouring out error messages while ioscan still showed the disk as CLAIMED. &lt;BR /&gt;&lt;BR /&gt;So the rationale behind us using the dd command was to send I/O to check the disk status. But it now seems that it has some overhead in a cluster environment.</description>
    <pubDate>Fri, 08 Sep 2006 14:13:39 GMT</pubDate>
    <dc:creator>Chetan_5</dc:creator>
    <dc:date>2006-09-08T14:13:39Z</dc:date>
    <item>
      <title>Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002171#M778071</link>
      <description>Recently ran into something peculiar that almost brought the system to a halt. I have 2 PA-RISC Superdome partitions in a cluster with a large number of disk devices. &lt;BR /&gt;&lt;BR /&gt;   A monitoring job that runs through OVO checks disk connectivity by running the dd command in this format: &lt;BR /&gt;&lt;BR /&gt;dd count=1 if=/dev/dsk/cXtYdZ of=/dev/null&lt;BR /&gt;&lt;BR /&gt;Every time dd runs on a device that is active on the other node, resource utilization shoots up dramatically. For disks active on the current node, it works like a charm. &lt;BR /&gt;&lt;BR /&gt;I was curious to know why this behaviour occurs. &lt;BR /&gt;&lt;BR /&gt;The other question I have is: what is a good way to check disk connectivity? &lt;BR /&gt;&lt;BR /&gt;According to HP, ioscan is not a good way to check it. At first I couldn't believe this response, until yesterday when we found out a PV's status had changed to unavailable, with EMS pouring out error messages while ioscan still showed the disk as CLAIMED. &lt;BR /&gt;&lt;BR /&gt;So the rationale behind us using the dd command was to send I/O to check the disk status. But it now seems that it has some overhead in a cluster environment.</description>
      <pubDate>Fri, 08 Sep 2006 14:13:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002171#M778071</guid>
      <dc:creator>Chetan_5</dc:creator>
      <dc:date>2006-09-08T14:13:39Z</dc:date>
    </item>
    <item>
      <title>Re: Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002172#M778072</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;I recommend using EMS to monitor disks. You can use sam to set it up, and it won't touch resources not allocated to the partition it is running in.&lt;BR /&gt;&lt;BR /&gt;I also used to run a script that simply checked the syslog or dmesg output for the string lbolt.&lt;BR /&gt;&lt;BR /&gt;That is a sign that either a hot-swappable disk was swapped or a disk has gone bad.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 08 Sep 2006 14:18:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002172#M778072</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-09-08T14:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002173#M778073</link>
      <description>The dd command won't tell you with 100% certainty whether a disk is good or bad. Sometimes it reports success even though the disk has bad blocks. &lt;BR /&gt;&lt;BR /&gt;It seems dd is running on the passive node, and the disk may be busy with I/O activity from the active node.</description>
      <pubDate>Fri, 08 Sep 2006 14:21:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002173#M778073</guid>
      <dc:creator>IT_2007</dc:creator>
      <dc:date>2006-09-08T14:21:07Z</dc:date>
    </item>
    <item>
      <title>Re: Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002174#M778074</link>
      <description>When multiple hosts access common SCSI devices, all the hosts have to play by the rules. Each of the hosts must use SCSI reserve/release commands to control access. This locks the drive for access by only one host. The other host gets a SCSI reservation conflict status until the current host sends a SCSI release command (or a SCSI bus reset is done). It's the waiting for the release that you are seeing.&lt;BR /&gt;&lt;BR /&gt;It makes much more sense to use EMS and/or syslog to feed OV/O, and that is much less intrusive. &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Sep 2006 15:03:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002174#M778074</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2006-09-08T15:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002175#M778075</link>
      <description>Thanks for all the replies, but Clay's response is what I was looking for. &lt;BR /&gt;&lt;BR /&gt;We already have EMS and syslog monitoring in place, but this script was born to quickly check SAN connectivity for disks in a complex environment like ours, which has gone through a couple of migrations.</description>
      <pubDate>Fri, 08 Sep 2006 16:01:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002175#M778075</guid>
      <dc:creator>Chetan_5</dc:creator>
      <dc:date>2006-09-08T16:01:57Z</dc:date>
    </item>
    <item>
      <title>Re: Strange DD performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002176#M778076</link>
      <description>.</description>
      <pubDate>Fri, 08 Sep 2006 16:02:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-dd-performance-issue/m-p/5002176#M778076</guid>
      <dc:creator>Chetan_5</dc:creator>
      <dc:date>2006-09-08T16:02:42Z</dc:date>
    </item>
  </channel>
</rss>
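The dd-based connectivity probe discussed in this thread can be sketched as a small shell function. This is a hypothetical reconstruction, not the poster's actual OVO script: the function name, block size, and device paths are illustrative, and as the replies note, EMS monitoring is preferable on clustered disks because a read of a device reserved by the other node can stall on the SCSI reservation.

```shell
#!/bin/sh
# Sketch of a dd-based disk connectivity check (assumed form; the thread
# does not show the full monitoring script). check_disk reads a single
# block from the given device (or file) and reports OK/FAIL based on
# dd's exit status. On a cluster, this read can block while the other
# node holds a SCSI reservation on the disk.
check_disk() {
  dev="$1"
  if dd if="$dev" of=/dev/null bs=1024 count=1 2>/dev/null; then
    echo "OK: $dev"
  else
    echo "FAIL: $dev"
  fi
}

# Example usage (device name is a placeholder):
# check_disk /dev/dsk/c0t1d0
```

In practice such a probe would also want a timeout wrapper, since a blocked read on a reserved disk is exactly the resource-utilization spike the original post describes.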

