Operating System - OpenVMS

Query regarding response time

 
mrityunjoy
Advisor

Query regarding response time

Hi,

 

We are maintaining an Alpha ES45 cluster running OpenVMS 7.3-2. Storage is HP XP and NetApp for the different clusters. We would like to know what the preferred storage response times are.

Specifically:
- Is there any difference in recommended/best-practice response times between local disk and SAN storage?
- Is there any difference in recommended/best-practice response times between system (O/S) volumes and application (program/data) volumes?
- Do any database technologies have any special response time requirements?


Mrityunjoy Kundu -AST (TCS)
2 REPLIES
abrsvc
Respected Contributor

Re: Query regarding response time

Your question, while interesting, really has no discrete answer.  There are too many factors involved here.  What is the actual performance question you are trying to answer?  For example, the size of any I/O may have an effect on response time.  Whether or not there are any caches involved will have a significant effect.  As time goes on, the actual hardware involved seems to have less of an overall impact on the transaction performance (assuming typical small blocksize transfers).
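
To put rough numbers on that, here's a back-of-the-envelope sketch (Python, with purely illustrative latency, bandwidth, and cache-hit figures rather than measurements from any particular ES45 or XP configuration) of why transfer size and cache hit ratio dominate the response time you actually see:

# Back-of-the-envelope model only: the latency/bandwidth/hit-ratio numbers
# below are illustrative assumptions, not measurements from any real system.

def io_time_ms(size_bytes, latency_ms, bandwidth_mb_s):
    """One I/O: fixed positioning/latency cost plus the transfer time."""
    transfer_ms = (size_bytes / (bandwidth_mb_s * 1024 * 1024)) * 1000.0
    return latency_ms + transfer_ms

def cached_time_ms(hit_ratio, cache_ms, disk_ms):
    """Average response time when a fraction of I/Os is satisfied from cache."""
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

small = io_time_ms(2 * 1024, latency_ms=8.0, bandwidth_mb_s=50.0)      # 2 KB
large = io_time_ms(1024 * 1024, latency_ms=8.0, bandwidth_mb_s=50.0)   # 1 MB
print(f"2 KB I/O:  {small:6.2f} ms  (latency-dominated)")
print(f"1 MB I/O:  {large:6.2f} ms  (transfer-dominated)")
print(f"2 KB I/O with 90% cache hits: {cached_time_ms(0.9, 0.2, small):6.2f} ms")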

 

Usually, these types of inquiries start with performance-related issues on a particular system or application.  At this point, I would suggest posting a better description of the environment (other than V7.3) along with the areas of performance concern.  Armed with that info, we should be able to answer the hardware-path performance questions with more applicability to your situation.  Generalities won't help you much.

 

Dan

Hoff
Honored Contributor

Re: Query regarding response time

Short (brute-force) answer: upgrade your hardware.

 

Long (long-winded) answer follows.

 

You're on ancient and slow hardware, with ancient and slow processors, and running non-solid-state and slow storage.   (Ancient?  Yes.  The AlphaServer ES45 series boxes are about a decade old now.)

 

Given the age of this gear, this is a target-rich environment for hardware upgrades.  A QPI-class Xeon in a Mac Pro desktop, Windows or Linux box will very likely outrun this Alpha processor configuration, as will most any recent Integrity server.

 

If you're tied to VMS application software and if the prerequisite products are available, then start looking at Integrity servers, and potentially at upgrading your Fibre Channel Storage Area Network, or (for higher-end loads) at a migration off a SAN.  SANs are expensive and often slow and comparatively awkward and opaque to manage, and the older iterations of the FC HBAs (and the HBAs out in the storage controllers) are particularly slow.   The central advantage of SANs is the scale of the storage you can connect to the SAN.  They're not usually good choices for brute-force performance, and they're increasingly becoming the choice for archival storage.  But if you're not up in the top-end range and just need fast storage out on your SAN, then start migrating to 8 Gb HBAs on PCIe-class buses, and to 8 Gb storage controllers.  (Unfortunately, Alpha lacked both PCIe buses and 8 Gb HBA support, when last I checked.)

 

For I/O speed with random-access requirements, the usual sequence involves caches and in-board RAM-based storage, then SSDs (which obliterate hard disk speeds), then direct-attached and fast hard disks, then direct-attached and slower hard disks, and then something like 10 GbE NAS or SAN storage.
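
For a rough sense of scale across those tiers, here's a small sketch with generic, order-of-magnitude random-access latencies; these are ballpark comparison figures only, not vendor specifications, so measure your own path before drawing any conclusions:

# Generic, order-of-magnitude latencies for comparison only; assumptions,
# not specifications for any particular XP, NetApp, SSD, or disk model.
TIER_LATENCY_MS = {
    "controller cache / in-board RAM": 0.001,
    "SSD": 0.1,
    "fast direct-attached hard disk": 5.0,
    "slower direct-attached hard disk": 10.0,
    "NAS or SAN hop on top of the disk": 12.0,
}

fastest = min(TIER_LATENCY_MS.values())
for tier, ms in TIER_LATENCY_MS.items():
    print(f"{tier:35s} ~{ms:7.3f} ms  ({ms / fastest:>8,.0f}x the cache tier)")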

 

If you want faster, then you're going to have to look at your requirements, your applications, and your application designs, and likely spend some money.   You might get lucky, but likely only if you profile your applications and see where all the wall-clock time is going, and particularly if you can find a significant bottleneck somewhere in the application or the environment, and can remove it.
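
A minimal sketch of that sort of wall-clock profiling, with hypothetical phase names standing in for whatever your application actually spends its time on:

# Minimal wall-clock profiling sketch; the phases and the simulated work are
# hypothetical placeholders, not the poster's application.
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def phase(name):
    """Accumulate wall-clock time spent in a named phase of the job."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] += time.perf_counter() - start

def run_job():
    with phase("read input"):
        time.sleep(0.05)                    # stand-in for record/file reads
    with phase("compute"):
        sum(i * i for i in range(200_000))  # stand-in for CPU work
    with phase("write output"):
        time.sleep(0.02)                    # stand-in for writes/commits

for _ in range(10):
    run_job()

total = sum(timings.values())
for name, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {secs:7.3f} s  ({100.0 * secs / total:5.1f}%)")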

 

As an alternative to SANs that's an option with VMS (and where some redundancy is required), you can also look at multi-host SCSI.  For redundancy (and at a performance cost) you can have three shared SCSI buses here, paired between each pair of the three servers, and with one member of a host-based shadowset on each of the three buses.  (This is where redundancy is a requirement; that's a trade-off against performance.)

 

Partitioning your I/O access can also help, as clustering comes at a fairly high cost when there's contention among multiple hosts in the cluster.  This is a central reason why clustering isn't used at the high end; the coordination nukes the performance.  The other option here is to move away from the three-host configuration, and run this on one Integrity box.  A single rx2800 i2 box can likely run the entire load here, and a bigger Integrity box certainly can.  I'd be tempted to try a top-end rx2660, just for grins, given the upgrades to the speeds and feeds in the box over these decade-old AlphaServer boxes.

 

With VMS, I/O tends to be written to disk, which means that the boxes are often hamstrung by the speeds and feeds of the I/O path.  RMS is particularly bad (or good, depending on how you look at it) here.  RMS record handling is also inherently heavier overhead than is unmarshalling the whole file into memory and working on it there.  This means you either need to increase your I/O speeds and feeds, or rework your application I/O to avoid RMS or other non-cached writes.
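
As a plain illustration of that access-pattern difference (generic Python on an ordinary text file, not the RMS API itself), with a hypothetical file and a simple record filter:

# Illustrates the access pattern only: record-at-a-time processing versus
# pulling the whole file into memory first. The path and the "ERROR" filter
# are hypothetical.

def record_at_a_time(path):
    """Fetch and examine one record at a time, as RMS-style record I/O would."""
    count = 0
    with open(path, "r") as f:
        for record in f:
            if "ERROR" in record:
                count += 1
    return count

def whole_file_in_memory(path):
    """Unmarshal the whole file with one bulk read, then work purely in memory."""
    with open(path, "r") as f:
        data = f.read()
    return sum(1 for record in data.splitlines() if "ERROR" in record)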

 

And in general, this is a huge topic area.   In general terms?  Collect data.  Collect data.  Collect more data.   Examine that data for bottlenecks, and for trends.  Continue tracking that data.  Start looking at what your applications are hitting hardest, and at what your peak and average loads are.  And the (free) tool for trend analysis on VMS (and now apparently officially unsupported, but still functional) is T4.  Additionally, skim through the OpenVMS Guide to System Performance manual, the T4 documentation, DECset PCA and related tools, and the many previous postings and articles on application monitoring and tuning.  Characterize your load.  Find the current bottlenecks, and then any trending or impending bottlenecks.  Remove them.  Iterate.
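
One common way to turn the per-disk data you collect (for example, MONITOR DISK or T4 samples of I/O operation rate and queue length) into an estimated response time is Little's Law: time in the I/O path is roughly queue length divided by throughput.  The device names and figures below are made-up samples for illustration only:

# Little's Law estimate: average time in the queue+service path = L / lambda.
# The devices and sample values are hypothetical, not collected data.

def est_response_time_ms(queue_length, io_per_sec):
    """Estimated average response time, in milliseconds."""
    if io_per_sec <= 0.0:
        return 0.0
    return (queue_length / io_per_sec) * 1000.0

samples = [
    # (device, average queue length, average I/O operations per second)
    ("$1$DGA101", 0.4, 55.0),
    ("$1$DGA102", 2.8, 120.0),
    ("DSA1",      0.1, 18.0),
]

for device, qlen, rate in samples:
    print(f"{device:10s} ~{est_response_time_ms(qlen, rate):6.1f} ms estimated average response time")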