<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Query regarding response time in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4841655#M102074</link>
    <description>&lt;P&gt;Short (brute-force) answer: upgrade your hardware.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Long (long-winded) answer follows.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You're on ancient and slow hardware, with ancient and slow processors, and running non-solid-state and slow storage. &amp;nbsp;(Ancient? &amp;nbsp;Yes. &amp;nbsp;The AlphaServer ES45 series boxes are about a decade old now.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Given the age of this gear, this is a target-rich environment for hardware upgrades. &amp;nbsp;A QPI-class Xeon in a Mac Pro desktop, Windows or Linux box will very likely outrun this Alpha processor configuration, as will most any recent Integrity server.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you're tied to VMS application software and if the prerequisite products are available, then start looking at Integrity servers, and potentially at upgrading the Fibre Channel Storage Area Network, or (for higher-end loads) at a migration off a SAN. &amp;nbsp;SANs are expensive, often slow, and comparatively awkward and opaque to manage, and the older iterations of the FC HBAs (and the HBAs out at the storage controllers) are particularly slow. &amp;nbsp;The central advantage of SANs is the scale of the storage you can connect. &amp;nbsp;They're not usually good choices for brute-force performance, and they're increasingly becoming the choice for archival storage. &amp;nbsp;But if you're not up in the top-end range and just need fast storage out on your SAN, then start migrating to 8 Gb HBAs on PCIe-class buses, and to 8 Gb storage controllers. &amp;nbsp;(Unfortunately, Alpha lacked both PCIe buses and 8 Gb HBA support, when last I checked.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For I/O speed with random-access requirements, the usual ranking (fastest first) is caches and in-board RAM-based storage, then SSDs (which obliterate hard disk speeds), then direct-attached fast hard disks, then direct-attached slower hard disks, and then something like 10 GbE NAS or SAN storage.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you want faster, then you're going to have to look at your requirements, your applications, and your application designs, and likely spend some money. &amp;nbsp;You might get lucky, but likely only if you profile your applications and see where all the wall-clock time is going, and particularly if you can find a significant bottleneck somewhere in the application or the environment and can remove it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an alternative to SANs that's an option with VMS (and where some redundancy is required), you can also look at multi-host SCSI. &amp;nbsp;For redundancy (and at a performance cost) you can have three shared SCSI buses, one paired between each pair of the three servers, with one volume of a host-based shadowset on each of the three buses. &amp;nbsp;(That's for when redundancy is a requirement; it's a trade-off against performance.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Partitioning your I/O access can also be involved, as clustering comes at a fairly high cost when there's contention among multiple hosts in the cluster. &amp;nbsp;This is a central reason why clustering isn't used at the high end; the coordination nukes the performance. &amp;nbsp;The other option here is to move away from the three-host configuration and run this on one Integrity box. &amp;nbsp;A single rx2800 i2 box can likely run the entire load here, and a bigger Integrity box certainly can. &amp;nbsp;I'd be tempted to try a top-end rx2660, just for grins, given the upgrades to the speeds and feeds in the box over these decade-old AlphaServer boxes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With VMS, I/O tends to be written through to disk, which means the boxes are often hamstrung by the speeds and feeds of the I/O path. &amp;nbsp;RMS is particularly bad (or good, depending on how you look at it) here. &amp;nbsp;RMS record handling is also inherently heavier overhead than unmarshalling the whole file into memory and working on it there. &amp;nbsp;This means you either need to increase your I/O speeds and feeds, or rework your application I/O to avoid RMS or other non-cached writes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And in general, this is a huge topic area. &amp;nbsp;In general terms? &amp;nbsp;Collect data. &amp;nbsp;Collect data. &amp;nbsp;Collect more data. &amp;nbsp;Examine that data for bottlenecks and for trends. &amp;nbsp;Continue tracking that data. &amp;nbsp;Start looking at what your applications are hitting hardest, and at what your peak and average loads are. &amp;nbsp;The (free) tool for trend analysis on VMS (now apparently officially unsupported, but still functional) is T4. &amp;nbsp;Additionally, skim through the OpenVMS Guide to System Performance manual, the T4 documentation, DECset PCA and related tools, and the many previous postings and articles on application monitoring and tuning. &amp;nbsp;Characterize your load. &amp;nbsp;Find the current bottlenecks, and then any trending or impending bottlenecks. &amp;nbsp;Remove them. &amp;nbsp;Iterate.&lt;/P&gt;</description>
    <pubDate>Wed, 27 Jul 2011 15:08:46 GMT</pubDate>
    <dc:creator>Hoff</dc:creator>
    <dc:date>2011-07-27T15:08:46Z</dc:date>
    <item>
      <title>Query regarding response time</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4840933#M102072</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are maintaining an Alpha ES45 cluster running OpenVMS 7.3-2. Storage is HP XP and NetApp for different clusters. We would like to know what the preferred storage response times are.&lt;/P&gt;&lt;P&gt;Specifically:&lt;BR /&gt;-&amp;nbsp;Is there any difference in recommended/best-practice response times between local disk vs. SAN storage?&lt;BR /&gt;-&amp;nbsp;Is there any difference in recommended/best-practice response times between system (O/S) volumes and application (program/data) volumes?&lt;BR /&gt;-&amp;nbsp;Do any database technologies have any special response time requirements?&lt;/P&gt;</description>
      <pubDate>Wed, 27 Jul 2011 08:04:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4840933#M102072</guid>
      <dc:creator>mrityunjoy</dc:creator>
      <dc:date>2011-07-27T08:04:52Z</dc:date>
    </item>
    <item>
      <title>Re: Query regarding response time</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4841529#M102073</link>
      <description>&lt;P&gt;Your question, while interesting, really has no discrete answer.&amp;nbsp; There are too many factors involved.&amp;nbsp; What is the actual performance question you are trying to answer?&amp;nbsp; For example, the size of an I/O may affect response time.&amp;nbsp; Whether or not any caches are involved will have a significant effect.&amp;nbsp; As time goes on, the actual hardware involved seems to have less of an overall impact on transaction performance (assuming typical small-blocksize transfers).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Usually, these types of inquiries start with performance-related issues on a particular system or application.&amp;nbsp; At this point, I would suggest posting a better description of the environment (other than V7.3) along with the areas of performance concern.&amp;nbsp; Armed with that info, we should be able to answer the hardware-path performance questions with more applicability to your situation.&amp;nbsp; Generalities won't help you much.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Dan&lt;/P&gt;</description>
      <pubDate>Wed, 27 Jul 2011 13:52:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4841529#M102073</guid>
      <dc:creator>abrsvc</dc:creator>
      <dc:date>2011-07-27T13:52:59Z</dc:date>
    </item>
    <item>
      <title>Re: Query regarding response time</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4841655#M102074</link>
      <description>&lt;P&gt;Short (brute-force) answer: upgrade your hardware.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Long (long-winded) answer follows.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You're on ancient and slow hardware, with ancient and slow processors, and running non-solid-state and slow storage. &amp;nbsp;(Ancient? &amp;nbsp;Yes. &amp;nbsp;The AlphaServer ES45 series boxes are about a decade old now.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Given the age of this gear, this is a target-rich environment for hardware upgrades. &amp;nbsp;A QPI-class Xeon in a Mac Pro desktop, Windows or Linux box will very likely outrun this Alpha processor configuration, as will most any recent Integrity server.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you're tied to VMS application software and if the prerequisite products are available, then start looking at Integrity servers, and potentially at upgrading the Fibre Channel Storage Area Network, or (for higher-end loads) at a migration off a SAN. &amp;nbsp;SANs are expensive, often slow, and comparatively awkward and opaque to manage, and the older iterations of the FC HBAs (and the HBAs out at the storage controllers) are particularly slow. &amp;nbsp;The central advantage of SANs is the scale of the storage you can connect. &amp;nbsp;They're not usually good choices for brute-force performance, and they're increasingly becoming the choice for archival storage. &amp;nbsp;But if you're not up in the top-end range and just need fast storage out on your SAN, then start migrating to 8 Gb HBAs on PCIe-class buses, and to 8 Gb storage controllers. &amp;nbsp;(Unfortunately, Alpha lacked both PCIe buses and 8 Gb HBA support, when last I checked.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For I/O speed with random-access requirements, the usual ranking (fastest first) is caches and in-board RAM-based storage, then SSDs (which obliterate hard disk speeds), then direct-attached fast hard disks, then direct-attached slower hard disks, and then something like 10 GbE NAS or SAN storage.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you want faster, then you're going to have to look at your requirements, your applications, and your application designs, and likely spend some money. &amp;nbsp;You might get lucky, but likely only if you profile your applications and see where all the wall-clock time is going, and particularly if you can find a significant bottleneck somewhere in the application or the environment and can remove it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an alternative to SANs that's an option with VMS (and where some redundancy is required), you can also look at multi-host SCSI. &amp;nbsp;For redundancy (and at a performance cost) you can have three shared SCSI buses, one paired between each pair of the three servers, with one volume of a host-based shadowset on each of the three buses. &amp;nbsp;(That's for when redundancy is a requirement; it's a trade-off against performance.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Partitioning your I/O access can also be involved, as clustering comes at a fairly high cost when there's contention among multiple hosts in the cluster. &amp;nbsp;This is a central reason why clustering isn't used at the high end; the coordination nukes the performance. &amp;nbsp;The other option here is to move away from the three-host configuration and run this on one Integrity box. &amp;nbsp;A single rx2800 i2 box can likely run the entire load here, and a bigger Integrity box certainly can. &amp;nbsp;I'd be tempted to try a top-end rx2660, just for grins, given the upgrades to the speeds and feeds in the box over these decade-old AlphaServer boxes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With VMS, I/O tends to be written through to disk, which means the boxes are often hamstrung by the speeds and feeds of the I/O path. &amp;nbsp;RMS is particularly bad (or good, depending on how you look at it) here. &amp;nbsp;RMS record handling is also inherently heavier overhead than unmarshalling the whole file into memory and working on it there. &amp;nbsp;This means you either need to increase your I/O speeds and feeds, or rework your application I/O to avoid RMS or other non-cached writes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And in general, this is a huge topic area. &amp;nbsp;In general terms? &amp;nbsp;Collect data. &amp;nbsp;Collect data. &amp;nbsp;Collect more data. &amp;nbsp;Examine that data for bottlenecks and for trends. &amp;nbsp;Continue tracking that data. &amp;nbsp;Start looking at what your applications are hitting hardest, and at what your peak and average loads are. &amp;nbsp;The (free) tool for trend analysis on VMS (now apparently officially unsupported, but still functional) is T4. &amp;nbsp;Additionally, skim through the OpenVMS Guide to System Performance manual, the T4 documentation, DECset PCA and related tools, and the many previous postings and articles on application monitoring and tuning. &amp;nbsp;Characterize your load. &amp;nbsp;Find the current bottlenecks, and then any trending or impending bottlenecks. &amp;nbsp;Remove them. &amp;nbsp;Iterate.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Jul 2011 15:08:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/query-regarding-response-time/m-p/4841655#M102074</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2011-07-27T15:08:46Z</dc:date>
    </item>
  </channel>
</rss>

