<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: disk storage solution in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434766#M4510</link>
    <description>Hi Martha,&lt;BR /&gt;&lt;BR /&gt;Firstly, as promised, here is my assessment of the SC10 and HVD10.&lt;BR /&gt;&lt;BR /&gt;The SC10 is Ultra2 SCSI (80MB/s), which is great; the HVD10 by default is FWD (20MB/s), but you can get it with Ultra2 SCSI cards (a must!).&lt;BR /&gt;All the disks inside are the same as the really fast Jamaica disks: 10k rpm, 15MB/s each. Lovely and quick.&lt;BR /&gt;&lt;BR /&gt;The differences are that the HVD10 is much more expandable (8TB) as opposed to only 2TB for the SC10. Both have 2 controllers, but the HVD10 has 2 connections to the host per controller (4 in total) as opposed to 2 for the SC10. The more controllers the better (still, I couldn't imagine 8TB on only 4 controllers!). Also, the HVD10 can have RAID software installed, which means you get a lot more usable space, as opposed to mirroring, where you lose half.&lt;BR /&gt;&lt;BR /&gt;I can't comment on price; you will need to speak to an HP reseller. I'm sure the SC10 will be a lot cheaper. Performance on either, with Ultra2 SCSI and striped lvols over as many disks as possible and both controllers, should certainly be excellent: a large jump over the older Jamaicas (20 MB/s -&amp;gt; 80 MB/s).&lt;BR /&gt;However, a friend of mine looked into all this in detail when they wanted to replace their AutoRAIDs with something really fast: fibre. The HP FC60 is fibre to the server, but its internal disks are only SCSI-connected, and HP wanted 288k for 1TB; the Clariion Nikes were fibre to the host and fibre to their internal disks, and the price was 120k for 1TB. Also, the Nikes came with 512MB of cache, the FC60 with only 128MB. It was no contest. They've now got 2x Model 30s installed, LVM striped (+RAID), and performance is awesome.&lt;BR /&gt;&lt;BR /&gt;Now, for Dragan's question on how to get Nikes working as well as a Jamaica.&lt;BR /&gt;Firstly, both have 2 controllers connected to a server, so maximum throughput is the same (20MB/s x2). Now, let's imagine both have the same size and speed of disks. A Jamaica can take 8 disks, a Model 20 Nike 20 disks. If we use extent-based (or distributed) striping over all the disks in both the Jamaica and the Nike, then when you read a large amount of data you will be accessing 20 disks on the Nike as opposed to only 8 on the Jamaica. As well as having more disks, the Nike has 64MB of cache, so it is reading ahead etc. Can't you see how this will be faster?&lt;BR /&gt;For writes: if you use striping+mirroring on all the Jamaica disks (for disk protection) while the Nike uses RAID (for its disk protection), then every write on the Jamaica sends 2 writes down our SCSI channels; if we're using RAID-S on the Nike we don't need mirroring, so only 1 write goes down our SCSI channel to the Nike. Internally the Nike has to do multiple writes depending on how many disks are in a LUN (usually 5), but it caches it all up, so there is no write delay to the host. So in this example the write throughput down our SCSI channels is double on a Nike system compared to a Jamaica.&lt;BR /&gt;&lt;BR /&gt;And of course a Nike has intelligent controllers that allow alternate paths and controller replacement on the fly, which Jamaicas don't.&lt;BR /&gt;&lt;BR /&gt;I'm not a believer in measuring disk performance by milliseconds-to-access and the like. I prefer to measure it the way we and our users actually use it: through LVM. The number of times I've heard EMC tell a customer that performance is excellent because the access time is 300ms or so, when I simply create a striped lvol over multiple EMC (logical) disks and controllers and the performance through LVM jumps up enormously!&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 08 Aug 2000 11:51:10 GMT</pubDate>
    <dc:creator>Stefan Farrelly</dc:creator>
    <dc:date>2000-08-08T11:51:10Z</dc:date>
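Stefan's write-path argument (mirrored writes go down the SCSI channels twice, RAID-backed writes only once) can be sketched as a quick shell calculation. The 20 MB/s x2 channel figures are the ones quoted in the post; the rest is purely illustrative arithmetic, not a measurement:

```shell
# Back-of-the-envelope check of the write-throughput argument above.
# Assumed figures: 2 SCSI channels at 20 MB/s each, as quoted in the post.
channel_mbs=20
channels=2
total=$((channel_mbs * channels))   # raw host-side channel bandwidth

# LVM mirroring: every logical write goes down the channels twice.
mirrored_write=$((total / 2))
# Array-side RAID: one write per logical write; parity is handled internally.
raid_write=$total

echo "mirrored: ${mirrored_write} MB/s, RAID-backed: ${raid_write} MB/s"
```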
    <item>
      <title>disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434730#M4474</link>
      <description>On my N4000 with three 440 MHz CPUs and 4 GB of RAM, I am using Jamaica subsystems (product A3312AZ) off Fast/Wide/Differential controller cards (A4800A).  The data transfer speed is 20 MB/sec.  These disks are mirrored and hold database data.  I have two tempdb areas that are striped, not mirrored, and their data is on a filesystem (vxfs), not a raw partition, as the rest of the data is.  This was Sybase's suggestion, so that the OS could help with some of the load, and it did help the throughput.  These tempdb areas are each on separate controllers, with no other data.  They are striped across two and four disks, respectively.  The disks are not busy, as shown by perfview's metric BYDSK_UTIL, but the request queue, BYDSK_REQUEST_QUEUE, is constantly around 20.  Alan Riggs had mentioned something about the vxfs journal log having to be written at one location on the disk, so mechanical head movement may account for the long request queue.  CPU usage is about 50%.  Does anyone else have an idea, and does anyone have any suggestions to alleviate the bottleneck?&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Martha</description>
      <pubDate>Fri, 04 Aug 2000 12:11:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434730#M4474</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T12:11:35Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434731#M4475</link>
      <description>What vxfs mount options are you using? Options tmplog and nolog are available (see man mount_vxfs).&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;John&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 12:21:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434731#M4475</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-08-04T12:21:58Z</dc:date>
    </item>
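As a concrete illustration of John's suggestion, a temporary-data filesystem such as a tempdb area could use the tmplog intent-log mode via an /etc/fstab entry along these lines (the volume group, lvol, and mount-point names are hypothetical):

```
/dev/vg01/lvol_tmpdb  /tmpdbdata  vxfs  rw,suid,largefiles,tmplog  0 2
```

tmplog delays intent-log writes (nolog disables them entirely), trading crash recoverability for fewer synchronous writes to the journal, which is an acceptable trade for data that is rebuilt on restart anyway.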
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434732#M4476</link>
      <description>What you have described is about as far as you can go with a Jamaica enclosure.  More spindles in the stripe set could help, but I doubt you would gain much, particularly given the cost involved.  It sounds like time to start considering an array where some hardware cache can help things.  Avoid the Model 12, though (it's slower, if anything).</description>
      <pubDate>Fri, 04 Aug 2000 12:29:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434732#M4476</guid>
      <dc:creator>Tim Malnati</dc:creator>
      <dc:date>2000-08-04T12:29:43Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434733#M4477</link>
      <description>John, &lt;BR /&gt;&lt;BR /&gt;/dev/volumegroup/logicalvolume /tmpdbdata vxfs rw,suid,largefiles,convosync=delay,mincache=tmpcache 0 2&lt;BR /&gt;&lt;BR /&gt;I hope this displays properly, but it should be all on one line.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Martha&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 12:46:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434733#M4477</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T12:46:43Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434734#M4478</link>
      <description>Tim, &lt;BR /&gt;&lt;BR /&gt;Would you have any recommendations?  I would probably like to stick with SCSI, due to the cost of Fibre Channel.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Martha</description>
      <pubDate>Fri, 04 Aug 2000 12:49:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434734#M4478</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T12:49:29Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434735#M4479</link>
      <description>I'm an EMC guy, but I doubt you want to spend that kind of money.  They are just not very cost-effective until you have significant storage needs.  The same is true of the XP256, although the initial frame cost is less.  Avoid the Model 12h; it's slower than the Jamaicas.  I've always liked the Model 20 array in SCSI.  But bear in mind that the info I have is more than a year old.  Maybe HP has upgraded this array to improve the connection throughput, or maybe HP has come out with something else to more effectively handle 80 MB/s (Ultra) drives.  Your sales channel is probably the best bet.</description>
      <pubDate>Fri, 04 Aug 2000 14:04:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434735#M4479</guid>
      <dc:creator>Tim Malnati</dc:creator>
      <dc:date>2000-08-04T14:04:43Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434736#M4480</link>
      <description>&lt;BR /&gt;What speed are the disks in your Jamaica? They can vary by a huge amount. Use ioscan -fknCdisk to find out. The newer models have a much larger cache and are considerably faster. Here is a table of the different Jamaica disks:&lt;BR /&gt;&lt;BR /&gt;Disk_ProductID  Size  Speed(1)&lt;BR /&gt;==========================&lt;BR /&gt;ST15150W   4GB  ~6.5 MB/s&lt;BR /&gt;ST34371W   4GB  ~8.5 MB/s&lt;BR /&gt;ST34572WC  4GB  ~10 MB/s&lt;BR /&gt;ST34573WC  4GB  ~14 MB/s&lt;BR /&gt;&lt;BR /&gt;ST19171    9GB  ~10 MB/s&lt;BR /&gt;ST39173WC  9GB  ~14.5 MB/s&lt;BR /&gt;ST39175LC  9GB  ~18 MB/s&lt;BR /&gt;(1) using time dd if=/dev/rdsk/xxx&lt;BR /&gt;&lt;BR /&gt;As you can see, some of the newer models are massively faster than the older ones. We just replaced some of ours here with the 18 MB/s 9GB models and the performance of our striped lvols increased wonderfully!&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 14:46:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434736#M4480</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2000-08-04T14:46:26Z</dc:date>
    </item>
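The footnote's "time dd" method can be sketched as a small script. /dev/zero is a harmless stand-in here; on an HP-UX box you would point DEV at a real raw device file (the /dev/rdsk path below is illustrative), and since it only reads, the check is non-destructive:

```shell
#!/bin/sh
# Sketch of the "time dd" raw-read check behind the speed column above.
# Default device is /dev/zero as a safe stand-in; pass a real raw device
# (e.g. /dev/rdsk/c1t2d0, hypothetical name) as the first argument.
DEV=${1:-/dev/zero}
COUNT=256                              # 256 x 256k blocks = 64 MB read
time dd if="$DEV" of=/dev/null bs=256k count="$COUNT"
```

Dividing the megabytes read by the elapsed time gives the per-disk MB/s figure used in the table.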
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434737#M4481</link>
      <description>Stefan,&lt;BR /&gt;&lt;BR /&gt;That is very important information.  I have a mixture of disks in one of the tempdb areas:  the slowest is ST34572 at 10 MB/s.  Since this is a striped logical volume, can I assume that the entire logical volume can only write at this slowest disk's speed?</description>
      <pubDate>Fri, 04 Aug 2000 15:44:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434737#M4481</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T15:44:37Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434738#M4482</link>
      <description>&lt;BR /&gt;Hi Martha,&lt;BR /&gt;&lt;BR /&gt;Yes, indeed, any striped lvol will be constrained by the slowest disk in the stripe set. Try using pvmove (online) to move any high-usage lvols to the fastest disks. &lt;BR /&gt;&lt;BR /&gt;If you have a good HP VAR they may swap your disks for the faster ones at a reasonable price, because they can reuse yours in part exchange. It's certainly cheaper than buying and configuring a new disk subsystem. Also, whenever we lose a Jamaica disk I always ask for the fastest model as a replacement. It has worked so far, and it's free!&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;Stefan</description>
      <pubDate>Fri, 04 Aug 2000 15:50:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434738#M4482</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2000-08-04T15:50:23Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434739#M4483</link>
      <description>Stefan,&lt;BR /&gt;&lt;BR /&gt;This is very enlightening.  Would this explain the phenomenon of the long request queue while the disk is not busy?  Or do I need to look further for that answer?&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;Martha</description>
      <pubDate>Fri, 04 Aug 2000 16:01:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434739#M4483</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T16:01:40Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434740#M4484</link>
      <description>Hi Martha.&lt;BR /&gt;&lt;BR /&gt;The performance drop from the slowest disk in the volume will affect spin time and data transfer, so it might be a contributing factor.  I would be surprised, though, if it accounted for the full problem.  I have a couple of questions:&lt;BR /&gt;&lt;BR /&gt;1) Do all disks in the 4-disk stripe show a similar UTIL/QUEUE pattern?&lt;BR /&gt;2) Does sar -d report a similar utilization pattern? (sar queries different structures than the midaemon.)&lt;BR /&gt;3) What are the BYDSK_AVG_SERVICE_TIMEs for the disks?  Is there much divergence between the 4 disks?&lt;BR /&gt;4) What is the BYDSK_PHYS_IO_RATE for each disk?&lt;BR /&gt;&lt;BR /&gt;Unfortunately, there are seldom quick and easy answers to these types of performance issues.  But the above questions might help pin something down.&lt;BR /&gt;&lt;BR /&gt;BTW: please say hello to Sandy, Bob, et al. for me.  I hope you are all doing well.</description>
      <pubDate>Fri, 04 Aug 2000 18:15:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434740#M4484</guid>
      <dc:creator>Alan Riggs</dc:creator>
      <dc:date>2000-08-04T18:15:41Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434741#M4485</link>
      <description>Hi, Alan, nice to hear from you. &lt;BR /&gt;&lt;BR /&gt;1.  They are identical.&lt;BR /&gt;&lt;BR /&gt;2.  sar -d shows that the disks' activity patterns are similar to each other, but they are not necessarily similar to those shown by perfview.  The average queue length on every disk, not just the ones in question, is 0.50 from sar.  The percent busy for all four disks shown by sar was lower than that shown by perfview, but I don't have a good sar sample...I just ran it for a few minutes.&lt;BR /&gt;&lt;BR /&gt;3.  BYDSK_AVG_SERVICE times as shown by glanceplus are about 2.2 msec, but, again, I don't have a long collection time.  This metric isn't available from perfview.  The four disks are within 0.5 msec of each other.&lt;BR /&gt;&lt;BR /&gt;4.  BYDSK_PHYS_IO_RATE is averaging around 4 requests per second, with gusts up to 25.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 18:46:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434741#M4485</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T18:46:55Z</dc:date>
    </item>
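One sanity check on these numbers, offered as a rough sketch rather than a diagnosis: by Little's law (mean number in service = arrival rate x service time), roughly 4 requests/sec at ~2.2 ms each should keep only about 0.009 requests in flight at the disk on average, nowhere near a queue of 20. That is consistent with sar's 0.50 queue and with the idea that sar and the perfview midaemon are counting different things:

```shell
# Little's law check using the figures quoted above (4 req/s, 2.2 ms).
awk 'BEGIN {
    rate = 4            # BYDSK_PHYS_IO_RATE, requests per second
    svc  = 0.0022       # BYDSK_AVG_SERVICE_TIME, in seconds
    printf "implied average requests in service: %.4f\n", rate * svc
}'
```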
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434742#M4486</link>
      <description>hmmm . . . what about cache hit ratio for reads and writes?</description>
      <pubDate>Fri, 04 Aug 2000 18:58:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434742#M4486</guid>
      <dc:creator>Alan Riggs</dc:creator>
      <dc:date>2000-08-04T18:58:52Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434743#M4487</link>
      <description>sar -b shows %rcache around 99 and %wcache around 90.</description>
      <pubDate>Fri, 04 Aug 2000 19:11:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434743#M4487</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T19:11:11Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434744#M4488</link>
      <description>See, I told you this wouldn't be simple.  I am a bit swamped ATM, but I will do some poking through a reference or two.  Probably won't have anything meaningful before Monday -- maybe someone else can come up with something quicker.  There are some smart folks hanging around here some days.</description>
      <pubDate>Fri, 04 Aug 2000 19:22:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434744#M4488</guid>
      <dc:creator>Alan Riggs</dc:creator>
      <dc:date>2000-08-04T19:22:16Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434745#M4489</link>
      <description>I agree with both points...Monday and the VERY smart people out there that are generously sharing knowledge.  I have been spending most of the week just going through the answers on this forum and jotting down notes.</description>
      <pubDate>Fri, 04 Aug 2000 19:23:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434745#M4489</guid>
      <dc:creator>Martha Mueller</dc:creator>
      <dc:date>2000-08-04T19:23:54Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434747#M4491</link>
      <description>My previous posting was garbled by the editor. 2nd take:&lt;BR /&gt;&lt;BR /&gt;I would like to comment in general on what I have observed about disks under HP-UX: &lt;BR /&gt;&lt;BR /&gt;RAIDs are nonsense. They are supposed to reduce the number of disks, as if disks were expensive. Most often the cost of the RAID controller is even higher than the value of the disks it was supposed to save. In a recent offer for 140 GB net, a Model 12 with 5 disks of 36 GB each at RAID level 5 spares 3 disks versus straight mirroring (which can be accomplished with Mirror/UX). The saving of 3 disks ($7,150) was more than offset by the cost of the controller ($13,400). And for this higher price one gets significantly lower performance. &lt;BR /&gt;&lt;BR /&gt;The RAID controllers perforce introduce moderate to severe latencies and are never nearly as fast as Mirror/UX. &lt;BR /&gt;&lt;BR /&gt;My advice is: keep away from RAIDs in general. Just a bunch of the fastest mirrored disks available, professionally installed in a decently redundant enclosure (multiple power supplies, multiple fans, multiple SCSI cables), saves a lot of dough and increases performance. For legacy F/W differential systems my supplier uses an SE/Diff adapter (a passive device) which converts off-the-shelf UW and LVD drives to F/W differential (the HP way). My most recent enclosure was 9 x 73 GB, cyclically stripe-mirrored for a net capacity of 315 GB, at a total price of $16,000. The new disks just fly. &lt;BR /&gt;&lt;BR /&gt;Unfortunately, many system bottlenecks are psychological. IT managers tend to build up their status based on the $$$ invested in the hardware they manage, with scant concern for performance or sound cost/benefit analysis.</description>
      <pubDate>Sun, 06 Aug 2000 10:02:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434747#M4491</guid>
      <dc:creator>Dragan Krnic</dc:creator>
      <dc:date>2000-08-06T10:02:22Z</dc:date>
    </item>
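Dragan's cost comparison can be reproduced directly from the dollar figures he quotes (3 saved disks worth $7,150 in total versus a $13,400 controller):

```shell
# Net extra cost of the RAID-5 offer versus plain mirroring, using the
# dollar figures quoted in the post above.
awk 'BEGIN {
    disk_saving = 7150      # value of the 3 disks RAID-5 saves
    controller  = 13400     # cost of the RAID controller
    printf "RAID option costs $%d more\n", controller - disk_saving
}'
```

So on these numbers the RAID configuration comes out $6,250 dearer before any performance difference is considered.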
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434748#M4492</link>
      <description>&lt;BR /&gt;Hi Martha, &lt;BR /&gt;&lt;BR /&gt;A lot has happened since I went home Friday. Have you made any progress? My only comment is that I think the different speed disks will make a big difference; I have seen this happen before. The next step is to move your critical lvols to a stripe set entirely on same-speed (and the fastest) disks, then see how that improves things. &lt;BR /&gt;&lt;BR /&gt;I'm afraid I have to disagree with Dragan. Here at HP we've got our Nike RAID disks performing much faster than Jamaicas. The hundreds of MB of cache makes a big difference, especially to write performance. For the extra money you get dual pathing, protecting against controller failure, and RAID, so more space for your money. We do have slightly more failures, but this is temperature-related; keep them cool and their failure rate is the same as any other hardware's.&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Aug 2000 06:47:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434748#M4492</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2000-08-07T06:47:53Z</dc:date>
    </item>
    <item>
      <title>Re: disk storage solution</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434749#M4493</link>
      <description>Hello Martha, just to cover your bases, have you tried to approach this problem from a different angle?  Take a look at the process actually doing all the work.  Use Glance and check out the wait states that the process is accumulating.  Check to see if the process is able to work harder than it is.  Look for things like priority, IPC time delays, and local network traffic.  Is your database using the local UNIX domain protocol or the high-overhead TCP/IP to do its local database access?  (This is a common problem with local database access.) &lt;BR /&gt;&lt;BR /&gt;It seems your buffer cache hit rates are very good, but don't let this fool you. I've seen stranger things.&lt;BR /&gt;&lt;BR /&gt;You say CPU usage is about 50%. If the %wio figure from sar -u is not as high as you would expect from an I/O bottleneck, then SOMETHING has to be consuming the resources. (I know, that was a general statement.) This approach will also be valid if the time spent in system mode is higher than expected.&lt;BR /&gt;&lt;BR /&gt;Tony</description>
      <pubDate>Mon, 07 Aug 2000 11:52:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-storage-solution/m-p/2434749#M4493</guid>
      <dc:creator>Anthony deRito</dc:creator>
      <dc:date>2000-08-07T11:52:35Z</dc:date>
    </item>
  </channel>
</rss>

