<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Quality Control on a Server rollout. in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033679#M133083</link>
    <description>I've seen a lot of very detailed how-tos; here is something different.&lt;BR /&gt;&lt;BR /&gt;From a process perspective:&lt;BR /&gt;&lt;BR /&gt;1. The Request&lt;BR /&gt;1a. Is the request valid?&lt;BR /&gt;1b. Verify requirements&lt;BR /&gt;1c. Determine scope for all systems/applications involved&lt;BR /&gt;2. Analyze Requirements and Maintenance&lt;BR /&gt;2a. Any templates/standards to adhere to?&lt;BR /&gt;2b. Determine if the BOM is correct&lt;BR /&gt;2c. Verify infrastructure requirements&lt;BR /&gt;3. Install and Configure&lt;BR /&gt;3a. Install OS&lt;BR /&gt;3b. Patch OS&lt;BR /&gt;3c. Install other OS packages&lt;BR /&gt;3d. Mirror root&lt;BR /&gt;3e. Configure non-root disks&lt;BR /&gt;3f. Implement security features&lt;BR /&gt;3g. Implement data backup solution&lt;BR /&gt;3h. Create and test OS backup&lt;BR /&gt;4. Document and Review&lt;BR /&gt;4a. Update inventory records&lt;BR /&gt;4b. Update Disaster Recovery Plans&lt;BR /&gt;5. Sign-Off&lt;BR /&gt;5a. Obtain customer/client sign-off&lt;BR /&gt;5b. Notify support staff&lt;BR /&gt;&lt;BR /&gt;Hope this helps you and others!</description>
    <pubDate>Thu, 07 Aug 2003 16:13:44 GMT</pubDate>
    <dc:creator>Robert Gamble</dc:creator>
    <dc:date>2003-08-07T16:13:44Z</dc:date>
    <item>
      <title>Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033613#M133017</link>
      <description>I am rolling an rp5450 L2000 server into production this week.&lt;BR /&gt;&lt;BR /&gt;I have done a lot of quality control checks, but I know that 6000 ITRC members are better than just me.&lt;BR /&gt;&lt;BR /&gt;So here is the thread.&lt;BR /&gt;&lt;BR /&gt;3 points for any unduplicated suggestion.&lt;BR /&gt;2 bonus points for those with perfect point-giving records.&lt;BR /&gt;1 bonus point for those like Pete Randall who have more hand-outs than questions.&lt;BR /&gt;&lt;BR /&gt;If your suggestion results in me catching something, I will post a notice and you can come back for a rabbit.&lt;BR /&gt;&lt;BR /&gt;Only a representative of the Chosen People (we who invented buearacracy but can't spell it) could come up with such a convoluted system.&lt;BR /&gt;&lt;BR /&gt;There is no limit to the number of suggestions, but be realistic: if I can't run the check, the best you'll do is 3-6 points. Try to be thoughtful and provide procedures for running the checks.&lt;BR /&gt;&lt;BR /&gt;It's 1 rabbit per every suggestion that actually results in me finding and eliminating a quality control problem.&lt;BR /&gt;&lt;BR /&gt;Please read carefully, because I want you to know what I've done so your suggestion is relevant.&lt;BR /&gt;&lt;BR /&gt;Old System:&lt;BR /&gt;D380 2-way, all apps 32-bit&lt;BR /&gt;11.00 32-bit OS&lt;BR /&gt;Oracle 8.1.7.0&lt;BR /&gt;Oracle 9iAS 1.0.2.2 Patch Level 12&lt;BR /&gt;Cyborg 4.5.3 (staying on this server)&lt;BR /&gt;Adabas/Natural from Software AG (legacy application)&lt;BR /&gt;Not Trusted&lt;BR /&gt;&lt;BR /&gt;New System:&lt;BR /&gt;rp5450&lt;BR /&gt;B.11.11 June 2003 and lots of other patches, 64-bit&lt;BR /&gt;Oracle 8.1.7.4 64-bit&lt;BR /&gt;Oracle 9iAS 1.0.2.2 Patch Level 12 32-bit&lt;BR /&gt;Adabas/Natural from Software AG, 64-bit, fully tested (legacy application)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The system was created with Ignite.&lt;BR /&gt;&lt;BR /&gt;Major issues caught thus far:&lt;BR /&gt;Audit IDs for non-root cron users were not set up correctly; used the ITRC tsconvert utility to fix it. Invented the restart utility because of it.&lt;BR /&gt;&lt;BR /&gt;We have made Ignite correctly distribute the /etc/hosts and nsswitch.conf files.&lt;BR /&gt;&lt;BR /&gt;We have practiced the Oracle 32-bit to 64-bit conversion and the data migration of the Adabas data. Funny how the Adabas database can migrate to 64 bits in 15 minutes while it takes 6 hours for Oracle.&lt;BR /&gt;&lt;BR /&gt;We have fully tested the actual Oracle application with test plans. Same for the legacy app.&lt;BR /&gt;&lt;BR /&gt;We have developed a memo with pictures for the users who will be forced to change their passwords, and the fail rate on that is 50%. We're setting passwords and notifying those users. We didn't try to migrate the non-trusted passwords to the trusted system.&lt;BR /&gt;&lt;BR /&gt;I know we've done a good job, because 60 days ago I put an rp5450 server in for the developers. But I want it to be perfect.&lt;BR /&gt;&lt;BR /&gt;Why? Because I'm anal. Also because I want to pitch a promotion to Senior Systems Specialist and I want this to come off clean.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance,&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Fri, 25 Jul 2003 13:38:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033613#M133017</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T13:38:25Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033614#M133018</link>
      <description>A couple spring to mind:&lt;BR /&gt;&lt;BR /&gt;1. Have you tested the new server under a typical user load (i.e. a normal day's usage with tons of users on) to really see any problems induced by load? That's something very, very hard to test otherwise.&lt;BR /&gt;&lt;BR /&gt;2. If you have done 1, then you have noticed that if you don't tune vx_ninode to 90% of ninode, then as soon as tons of users get on, your used RAM figure will go astronomical, and only setting vx_ninode and rebooting will fix it.&lt;BR /&gt;&lt;BR /&gt;Recently encountered these problems myself on a new server rollout!&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 13:43:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033614#M133018</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-07-25T13:43:52Z</dc:date>
    </item>
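The 90%-of-ninode rule Stefan mentions can be sanity-checked with simple arithmetic; a minimal sketch, assuming an example ninode of 8000 (the kmtune lines in the comments are the HP-UX 11.x way to query/set tunables, shown for orientation only):

```shell
# Sketch: derive a vx_ninode target as 90% of ninode, per the HP recommendation
# quoted in the thread. On HP-UX 11.11 you would inspect and change the
# tunables with kmtune, then rebuild the kernel and reboot:
#   kmtune -q ninode
#   kmtune -s vx_ninode=<target>
ninode=8000                         # assumed current ninode value
vx_ninode=$(( ninode * 90 / 100 ))  # 90% of ninode
echo "suggested vx_ninode: $vx_ninode"
```

With ninode=8000 this prints a suggested vx_ninode of 7200; substitute the real ninode from the kernel before setting anything.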
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033615#M133019</link>
      <description>Things just off the top of my head:&lt;BR /&gt;&lt;BR /&gt;1) Buffer cache: have you reduced dbc_max_pct from the default value of 50?&lt;BR /&gt;&lt;BR /&gt;2) Are any users going to be connecting directly to the box? If so, have you reset the npty, nstrpty and nstrtel kernel parameters, regenerated the kernel, and made sure that the additional pty/tty device files are created?&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 13:44:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033615#M133019</guid>
      <dc:creator>Patrick Wallek</dc:creator>
      <dc:date>2003-07-25T13:44:43Z</dc:date>
    </item>
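To see why Patrick's dbc_max_pct point matters, it helps to put numbers on it: dbc_max_pct caps the dynamic buffer cache as a percentage of physical RAM, so the default of 50 lets the cache claim half the box. A rough sketch, with assumed memory size and target percentage:

```shell
# Sketch: how much RAM the dynamic buffer cache may claim at a given
# dbc_max_pct. The default of 50 is usually far too high for a database
# server, where Oracle's SGA does its own caching.
# (On 11.x the tunable is changed with e.g. kmtune -s dbc_max_pct=10.)
mem_mb=4096        # assumed physical memory in MB
dbc_max_pct=10     # assumed lowered value
echo "buffer cache ceiling: $(( mem_mb * dbc_max_pct / 100 )) MB"
```

At 4 GB of RAM the difference between 50 and 10 is roughly 1.6 GB returned to Oracle and user processes.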
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033616#M133020</link>
      <description>Steve,&lt;BR /&gt;&lt;BR /&gt;(what happened to SEP, or Steven, for that matter?)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;In the past, when I've faced a similar situation, I arranged for a dry run with actual users doing actual work. At the appointed time, I shut down the production server, switched the name and IP of the new server to be that of the production server, and turned the users loose. After a couple of hours of quasi-production work, we had unearthed a few unforeseen problems, which we took care of before the actual roll-out.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 13:47:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033616#M133020</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-07-25T13:47:15Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033617#M133021</link>
      <description>Stefan,&lt;BR /&gt;&lt;BR /&gt;We had our vx_ninode crisis on the prior server. Great idea. We have load tested as best we can, and have further tests set for next week. The default was zero, which means the system sets it, which is ridiculous. We have set a value based on HP's recommendation.&lt;BR /&gt;&lt;BR /&gt;Patrick,&lt;BR /&gt;&lt;BR /&gt;We are pushing a configuration file changing the hostname for all of our users who still use the legacy app, which is telnet (I wish management would switch to ssh).&lt;BR /&gt;&lt;BR /&gt;The dbc_max_pct issue was resolved in the same performance problem above, mentioned by Stefan with regards to vx_ninode.&lt;BR /&gt;&lt;BR /&gt;Good stuff.&lt;BR /&gt;&lt;BR /&gt;This is going to be very helpful.&lt;BR /&gt;&lt;BR /&gt;Some Oracle suggestions would be cool too.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 25 Jul 2003 13:52:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033617#M133021</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T13:52:15Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033618#M133022</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;some checks:&lt;BR /&gt;&lt;BR /&gt;- some "+ +" left in .rhosts &lt;BR /&gt;- MWC and Bad Block Relocation on vg00's lv&lt;BR /&gt;&lt;BR /&gt;  ...thinking...&lt;BR /&gt;&lt;BR /&gt;       Massimo</description>
      <pubDate>Fri, 25 Jul 2003 13:56:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033618#M133022</guid>
      <dc:creator>Massimo Bianchi</dc:creator>
      <dc:date>2003-07-25T13:56:01Z</dc:date>
    </item>
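Massimo's ".rhosts" check is easy to script: a "+ +" line trusts every user on every host, so any occurrence is a finding. A minimal sketch (the find/lvdisplay lines in the comments show where the real checks would run; the demo file below is fabricated for illustration):

```shell
# Sketch: flag wildcard "+ +" entries in .rhosts files (they trust any user
# on any host). On the real server you would scan home directories, e.g.:
#   find / -name .rhosts 2>/dev/null | xargs grep -l '^+ +'
# For Massimo's second check, lvdisplay /dev/vg00/lvolN shows the Mirror
# Write Cache and Bad Block Relocation settings for each vg00 logical volume.
printf '+ +\nhost1 user1\n' > /tmp/demo.rhosts   # demo file, not a real config
if grep -q '^+ +' /tmp/demo.rhosts; then
    echo "insecure .rhosts entry found"
fi
rm -f /tmp/demo.rhosts
```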
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033619#M133023</link>
      <description>Steve - what was your HP recommendation for vx_ninode ? HP told us to set it to 90% of ninode - not the same for you ?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;Stefan&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 13:57:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033619#M133023</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-07-25T13:57:07Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033620#M133024</link>
      <description>Did you test anything from the backup software side? Since you have Oracle, there will be open-file issues, and you may have to come up with a good solution (cold or hot backups). You can run sample backups, see what impact they have on your database, and consult with the DBAs too.&lt;BR /&gt;&lt;BR /&gt;Did you test a disaster recovery?&lt;BR /&gt;&lt;BR /&gt;Did you document all changes you made to the server?&lt;BR /&gt;&lt;BR /&gt;Just some thoughts ...</description>
      <pubDate>Fri, 25 Jul 2003 13:59:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033620#M133024</guid>
      <dc:creator>Helen French</dc:creator>
      <dc:date>2003-07-25T13:59:38Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033621#M133025</link>
      <description>Steve,&lt;BR /&gt;&lt;BR /&gt;I didn't catch where your data resides.  Will you be using existing or is that getting migrated as well?  If new, have you looked at replicating any FS tuning that may have been applied previously?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 14:02:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033621#M133025</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2003-07-25T14:02:49Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033622#M133026</link>
      <description>Massimo,&lt;BR /&gt;&lt;BR /&gt;.rhosts and all Berkeley protocols are totally disabled.&lt;BR /&gt;&lt;BR /&gt;I am checking on your Bad Block Relocation item; possible bunny alert, check back.&lt;BR /&gt;&lt;BR /&gt;Stefan,&lt;BR /&gt;&lt;BR /&gt;vx_ninode was set with HP's assistance to a figure greater than ninode.&lt;BR /&gt;&lt;BR /&gt;This was handled by a support call, and as I recall I resisted setting this figure lower.&lt;BR /&gt;&lt;BR /&gt;I am actually having some issues with this box; it's running some stuff slower than a box with half the memory and the same kernel configuration (swap is bigger).&lt;BR /&gt;&lt;BR /&gt;So Stefan, check for a bunny on that suggestion as well. It might take a few days to figure that one out.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 25 Jul 2003 14:03:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033622#M133026</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T14:03:30Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033623#M133027</link>
      <description>Oracle?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;That's here:&lt;BR /&gt;&lt;BR /&gt;- HP-UX parameters&lt;BR /&gt;&lt;BR /&gt;- Filesystems for Oracle:&lt;BR /&gt;&lt;BR /&gt;Take care with mount options like convosync=direct and mincache=direct. Some say they are useful; others say they can hurt performance if the SGA is not properly sized.&lt;BR /&gt;&lt;BR /&gt;- Oracle parameters:&lt;BR /&gt;&lt;BR /&gt;sessions = 1.2 * processes&lt;BR /&gt;Process number: enough for the connections of all your users, plus 20% for safety. Remember that you need as many semaphores as processes.&lt;BR /&gt;&lt;BR /&gt;db_files&lt;BR /&gt;The default is 256, and you can quickly run into problems if the db is growing; raise it to at least 512.&lt;BR /&gt;&lt;BR /&gt;   Massimo</description>
      <pubDate>Fri, 25 Jul 2003 14:04:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033623#M133027</guid>
      <dc:creator>Massimo Bianchi</dc:creator>
      <dc:date>2003-07-25T14:04:08Z</dc:date>
    </item>
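Massimo's sizing rules chain together, so it is worth working through the arithmetic once; a sketch with an assumed user count of 200 (every other number follows from his rules):

```shell
# Sketch of Massimo's init.ora sizing rules (user count is an assumption):
#   processes = all user connections + 20% headroom
#   sessions  = 1.2 * processes
#   db_files  = at least 512 (default 256 is easily outgrown)
# Remember also one semaphore per process when sizing the kernel sema params.
users=200
processes=$(( users + users * 20 / 100 ))
sessions=$(( processes * 12 / 10 ))
echo "processes=$processes sessions=$sessions db_files=512"
```

With 200 users this works out to processes=240 and sessions=288; round up rather than down when translating into init.ora.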
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033624#M133028</link>
      <description>Not really QC, but CYA:&lt;BR /&gt;If anyone else will have root access, set up a backdoor root login to protect against a password change.</description>
      <pubDate>Fri, 25 Jul 2003 14:05:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033624#M133028</guid>
      <dc:creator>doug mielke</dc:creator>
      <dc:date>2003-07-25T14:05:43Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033625#M133029</link>
      <description>Pete,&lt;BR /&gt;&lt;BR /&gt;Data is stored on a Xiotech Magnitude disk array, dual fibre card connection. PVLINKS is not set up yet.&lt;BR /&gt;&lt;BR /&gt;DR tests have been done with Ignite, fbackup and Veritas NetBackup.&lt;BR /&gt;&lt;BR /&gt;We are running Veritas NetBackup and all backups next week as if it were really production, to work out the kinks.&lt;BR /&gt;&lt;BR /&gt;Looking into Massimo's second suggestion as well.&lt;BR /&gt;&lt;BR /&gt;The gerbil is running as fast as he can.&lt;BR /&gt;&lt;BR /&gt;Security: Secure Shell fully implemented with public keys exchanged. The root password is secure. Bastille was run on the box where the Golden Image was created.&lt;BR /&gt;&lt;BR /&gt;The security audit we had done three years ago is being re-run to make sure all issues were handled.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 25 Jul 2003 14:10:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033625#M133029</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T14:10:06Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033626#M133030</link>
      <description>Steve,&lt;BR /&gt;&lt;BR /&gt;You may want to check your DBA requirements for maxdsiz_64bit, maxssiz_64bit, and maxtsiz_64bit. &lt;BR /&gt;&lt;BR /&gt;Elena.</description>
      <pubDate>Fri, 25 Jul 2003 14:14:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033626#M133030</guid>
      <dc:creator>Elena Leontieva</dc:creator>
      <dc:date>2003-07-25T14:14:35Z</dc:date>
    </item>
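Elena's three tunables cap the data, stack, and text segments of 64-bit processes, and an undersized maxdsiz_64bit will stop Oracle from growing its heap. A sketch of the check (the kmtune lines are the 11.x query mechanism; the required size below is an assumption, not a DBA figure from the thread):

```shell
# Sketch: compare the 64-bit process-size tunables against the DBA's stated
# requirements. On HP-UX 11.11 the current values come from:
#   kmtune -q maxdsiz_64bit   # max data segment for 64-bit processes
#   kmtune -q maxssiz_64bit   # max stack segment
#   kmtune -q maxtsiz_64bit   # max text segment
required_data_gb=1   # assumed DBA requirement for illustration
echo "maxdsiz_64bit must be >= $(( required_data_gb * 1024 * 1024 * 1024 )) bytes"
```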
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033627#M133031</link>
      <description>I need elaboration, and others to discuss, Massimo's FS recommendations. I admit that's a bit over my head.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;vx_ninode is being dropped severely during this weekend's maintenance.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 25 Jul 2003 14:15:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033627#M133031</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T14:15:09Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033628#M133032</link>
      <description>On OnlineJFS:&lt;BR /&gt;&lt;BR /&gt;&amp;gt; - mincache=direct: The default read operation for JFS copies data from disk to the HP-UX buffer cache, and then copies the data to the Oracle SGA. Setting this mount option causes the data to be moved directly into the Oracle SGA; this may provide a minor improvement in performance for non-sequential read operations. In 8.x versions of Oracle, this mount option will cause unnecessary physical I/O for sequential I/Os. This mount option should NOT BE USED with Oracle 8.x tablespace files; however, it is recommended for Oracle 8.x redo and archive file systems.&lt;BR /&gt;&amp;gt; - convosync=direct: This option changes the behavior of files opened with the O_SYNC flag enabled, which Oracle always uses. This enables O_SYNC I/O operations to behave the same as non-O_SYNC file operations and thus use the mincache=direct mount option. In 8.x versions of Oracle, this mount option will cause unnecessary physical I/O for sequential I/Os. This mount option should NOT BE USED with Oracle 8.x tablespace files; however, it is recommended for Oracle 8.x redo and archive file systems.&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; Why does mincache=direct impact the performance of Oracle sequential access (table scans)?&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; 1) Oracle 8.x uses the system call readv rather than the read system call, which is used in Oracle 7.x. In Oracle 7.x, using the readv system call was an option enabled by a parameter in the init.ora file. Oracle 8.x provides no provision for using the read system call.&lt;BR /&gt;&amp;gt; 2) Using readv changes the behavior of large I/Os performed for sequential access on JFS file systems mounted with the mincache=direct option. The readv system call (read vector) passes an array of vectors (blocks) to be transferred for sequential operations. How this works:&lt;BR /&gt;&amp;gt; i) A common value for db_file_multiblock_read_count is 8.&lt;BR /&gt;&amp;gt; ii) When using readv with JFS file systems mounted with mincache=direct, JFS performs a separate physical I/O for each block.&lt;BR /&gt;&amp;gt; iii) This results in 8 physical I/Os of 8k each rather than a single 64k I/O (assuming an 8k block size).&lt;BR /&gt;&amp;gt; 3) When the mincache=direct mount option is not used, the readv system call passes the requests through the HP-UX buffer cache. This allows JFS to coalesce the (8) vectors into a single I/O.&lt;BR /&gt;&amp;gt; 4) An added benefit of using the HP-UX buffer cache is the JFS read-ahead facility. JFS will identify a table scan (sequential access) and initiate 1 MB of read-ahead, further increasing the performance of table scans.&lt;BR /&gt;&amp;gt; 5) Just to keep things interesting:&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; If the Oracle db_file_multiblock_read_count is set to a value greater than 16, Oracle will revert to using the read system call. Using the mincache=direct mount option in this environment will not result in the readv performance penalty for sequential I/O; however, there will not be the benefit of the JFS read-ahead. This may be appropriate for some large data warehouse environments.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;But I suggest waiting for other experts' hints. I have heard many conflicting opinions on this subject, and it looks like the answer is "it depends".&lt;BR /&gt;&lt;BR /&gt;   Massimo</description>
      <pubDate>Fri, 25 Jul 2003 14:19:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033628#M133032</guid>
      <dc:creator>Massimo Bianchi</dc:creator>
      <dc:date>2003-07-25T14:19:22Z</dc:date>
    </item>
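The guidance Massimo quotes reduces to a split: direct I/O for redo and archive filesystems, buffered I/O for 8.x tablespace filesystems. A hedged /etc/fstab sketch of that split (volume and mount-point names are assumptions, not from the thread):

```
# /etc/fstab sketch, HP-UX 11.11 + OnlineJFS (devices and paths are assumed)
# Oracle 8.x redo and archive logs: direct I/O is the recommendation quoted above
/dev/vg01/lvredo   /oracle/redo   vxfs  delaylog,mincache=direct,convosync=direct  0 2
# Oracle 8.x tablespace files: stay buffered so readv I/Os are coalesced
# and JFS read-ahead can help table scans
/dev/vg01/lvdata   /oracle/data   vxfs  delaylog  0 2
```

As the post itself warns, opinions differ; benchmark both ways on the new box before committing.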
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033629#M133033</link>
      <description>Do you have any printer queue setup requirements?&lt;BR /&gt;Did you customize your inetd.conf file?</description>
      <pubDate>Fri, 25 Jul 2003 14:19:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033629#M133033</guid>
      <dc:creator>Ken Hubnik_2</dc:creator>
      <dc:date>2003-07-25T14:19:26Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033630#M133034</link>
      <description>Elena,&lt;BR /&gt;&lt;BR /&gt;Yes.&lt;BR /&gt;&lt;BR /&gt;Ken,&lt;BR /&gt;&lt;BR /&gt;Yes.&lt;BR /&gt;&lt;BR /&gt;Keep 'em coming ....&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 25 Jul 2003 14:22:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033630#M133034</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-25T14:22:17Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033631#M133035</link>
      <description>Steve,&lt;BR /&gt;&lt;BR /&gt;You've got to get your PV links set up and tested under heavy load (i.e. pull a fibre while many dd's are running) to see how LVM behaves, and how it behaves when you re-insert the pulled fibre. That is, it needs to cope with both events without your app going down or HP-UX seeming to grind to a halt or go nuts.&lt;BR /&gt;&lt;BR /&gt;Over the years I've always noticed slightly different LVM behaviour with the patch bundles, and sometimes a larger difference necessitating some more current patches on top of the patch bundle in order to get LVM behaviour back to what's expected (reliable, accurate and quick).&lt;BR /&gt;&lt;BR /&gt;Also, when you add the PV links, aren't you going to balance your VGs across both fibre channels (primary and PV link) so that you get redundancy and a doubling of I/O throughput?&lt;BR /&gt;We always do this. Usually this is done at VG creation time, so PV links need to be set up right at the start; otherwise it's VG recreation or constant use of pvchange -s.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 14:28:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033631#M133035</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-07-25T14:28:18Z</dc:date>
    </item>
    <item>
      <title>Re: Quality Control on a Server rollout.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033632#M133036</link>
      <description>Hi,&lt;BR /&gt;Stefan suggested another check to me:&lt;BR /&gt;&lt;BR /&gt;PV timeouts!!!!&lt;BR /&gt;&lt;BR /&gt;For XP/EMC/similar FC-attached devices, a timeout of 90 seconds or higher is usually recommended.&lt;BR /&gt;&lt;BR /&gt;  Massimo&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Jul 2003 14:32:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/quality-control-on-a-server-rollout/m-p/3033632#M133036</guid>
      <dc:creator>Massimo Bianchi</dc:creator>
      <dc:date>2003-07-25T14:32:19Z</dc:date>
    </item>
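Stefan's PV-link setup and Massimo's timeout follow-up combine into a short command sequence. The sketch below only prints the commands (a dry run), since the device paths are assumptions that must be verified against ioscan output before anything is piped to sh:

```shell
# Dry-run sketch of the PV-link setup discussed above. Device paths are
# assumed (c5t0d0 = primary FC path, c7t0d0 = same LUN via the second card);
# verify them on the real server, then run the printed commands.
cat <<'EOF'
vgextend /dev/vg01 /dev/dsk/c7t0d0   # add the alternate path (PV link) to the VG
pvchange -t 90 /dev/dsk/c5t0d0       # raise the PV IO timeout to 90s for the FC array
pvchange -t 90 /dev/dsk/c7t0d0
EOF
```

Pulling a fibre under load (as Stefan suggests) is still the only real test that failover and recovery behave.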
  </channel>
</rss>

