11-05-2003 11:50 PM
may I use FDDI via gigabit sw to make VMS cluster
My inter-switch connection uses Gigabit Ethernet, however. Does the SCS protocol support traffic going from an FDDI interface through a Gigabit Ethernet switch and back to an FDDI interface?
Thanks.
11-06-2003 12:13 AM
Re: may I use FDDI via gigabit sw to make VMS cluster
If your FDDI card is connected to an FDDI concentrator or similar, and that is part of your Gigabit backbone, I would expect SCS to work, as long as it is a valid/supported network topology and the network equipment is not set to block the SCS protocol.
It might be a good idea to get your local HP support to check your network topology and give their recommendations.
11-06-2003 12:18 AM
Re: may I use FDDI via gigabit sw to make VMS cluster
I have managed an FDDI-based cluster. The FDDI adapters were connected to an FDDI concentrator.
Best regards,
Lokesh
11-06-2003 05:41 AM
Re: may I use FDDI via gigabit sw to make VMS cluster
Both FDDI and Gigabit Ethernet are supported cluster interconnects (in fact, we just moved from FDDI to Gigabit Ethernet as the interconnect in our cluster). What you will need to address is the transition from FDDI to Gigabit Ethernet (and back) if you do want to use a mixed setup.
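One quick way to see which interconnect paths SCS is actually using is sketched below; this is only a rough outline, the exact commands and display fields vary by OpenVMS version, and SCACP exists only on 7.3 and later.
$ SHOW CLUSTER/CONTINUOUS          ! interactive cluster display
Command> ADD CIRCUITS              ! add the CIRCUITS class to see SCS circuits per port
Command> EXIT
$ MCR SCACP                        ! VMS 7.3 and later: SCS/PEDRIVER control program
SCACP> SHOW CHANNEL                ! LAN channels (local/remote adapter pairs) currently in use
SCACP> EXIT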
11-06-2003 12:28 PM
Re: may I use FDDI via gigabit sw to make VMS cluster
The Gigabit Ethernet switches must be able to bridge the SCS protocol. A pure IP router will not work -- it must be a bridge or bridge/router.
If you are using "large" packets on FDDI (i.e. your setting for the SYSGEN parameter NISCS_MAX_PKTSZ is greater than 1498) then more care will be required.
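As a quick sanity check, the current value can be displayed with SYSGEN (a minimal sketch; nothing here changes the running system):
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW NISCS_MAX_PKTSZ       ! shows current, default, minimum and maximum values
SYSGEN> EXIT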
In versions of VMS prior to 7.3, VMS depended on the clearing of a bit in the FDDI header when a packet traversed from FDDI to Ethernet to let the cluster software know that packets larger than standard 1498-byte Ethernet size could not pass through that path (but typically only DIGITAL -- now dnpg.com -- switches cleared that bit as expected.) So you could have problems trying to use packet sizes larger than 1498 bytes if the switches don't act as the cluster software expected.
In VMS version 7.3 and above, PEDRIVER actively probes with different-sized packets to ascertain the actual maximum size of packet which gets through, which is a much better method than trying to rely on switch behavior that is not standardized.
Gigabit Ethernet can in theory support Jumbo packets of up to 9 KB in size (twice the maximum size for FDDI), but not all switch hardware actually supports the full maximum packet length the standard allows. For example, some switches only support 2000-byte "Mini-Jumbo" packets. For more information on Cisco switches, see http://www.cisco.com/warp/public/473/148.html . If your Gigabit Ethernet equipment cannot support full-size Jumbo frames, you'll have to settle for lower values of NISCS_MAX_PKTSZ, perhaps even as low as the default setting of 1498. Having to move from 4,474-byte FDDI packets to 1,498-byte Ethernet packets will result in more interrupt-state overhead on the Primary CPU on systems that made heavy use of large packets. Note that if larger-size packets get dropped, connections may still be formed between cluster nodes initially (as that uses small packets), but problems like closed virtual circuits will occur later when load is applied. Things which can use larger-size packets include MSCP-served disk transfers (and other large block data transfers).
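If you do end up having to lower NISCS_MAX_PKTSZ, the usual route is MODPARAMS.DAT plus AUTOGEN rather than patching the active value directly. A sketch, using the default 1498 mentioned above purely as an example value:
$ ! add or edit this line in SYS$SYSTEM:MODPARAMS.DAT:
$ !     NISCS_MAX_PKTSZ = 1498
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
$ ! the new value takes effect at the next reboot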
Potential problems to watch out for include excessive latency and excessive packet loss. I'd recommend checking your existing configuration's round-trip lock latency with the LOCKTIME.COM tool from http://encompasserve.org/~parris/kp_locktools_v6.bck so you'll have a baseline against which to compare the performance of the new configuration. I'd also check the ReXmt and ReRcv counters in SDA (or the equivalents in SCACP for 7.3 and above) to look for retransmitted or duplicated packets. See REXMT.COM from [KP_CLUSTERTOOLS] on the V5 Freeware CD for an example of examining these counters under SDA.
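For reference, something along these lines shows where those counters live (exact display names differ between versions; REXMT.COM wraps the SDA side of this):
$ ANALYZE/SYSTEM                   ! invoke SDA on the running system
SDA> SHOW PORTS                    ! locate the PE (PEDRIVER) port and its virtual circuits
SDA> EXIT
$ MCR SCACP                        ! VMS 7.3 and later
SCACP> SHOW VC                     ! per-virtual-circuit information, including retransmissions
SCACP> EXIT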
I'd also make sure you had the latest version of any patch kits with LAN driver or PEDRIVER content.
11-06-2003 01:12 PM
Re: may I use FDDI via gigabit sw to make VMS cluster
All the network devices we use are switches/bridges.
We are trying to build a VMS cluster with this kind of architecture.
Our VMS machines have FDDI interface cards and connect to different Cisco Catalyst 5500 switches. We connect a Gigabit Ethernet switch between the two Catalyst switches.
So I must make sure the cluster will work properly.
Thanks for your kind support in this case.
11-07-2003 07:28 AM
Re: may I use FDDI via gigabit sw to make VMS cluster
We had two HSJ controllers with multi-pathed disks ranging from 2 to 9 GB, maybe about 40 such disks after shadowing. Each HSJ had one Alpha 1000 CPU running OpenVMS 6.2-1H2 with 64 MB RAM, both acting as MSCP servers, plus one local system disk for booting, swap, and paging activities.
We also had three Alpha 2100s (4 CPUs per box) with three DSSI-bus local disks for boot, page, and swap operations, but all other disks were MSCP served from the Alpha 1000s through FDDI in a twin single-attached configuration (as opposed to one dual-attached FDDI, which took up the same number of slots).
The twin FDDI token rings (aligned to be counter-rotating with respect to the token) gave us two things. First, since they were independent paths, there was no single point of failure. Second, dual-attached paths peaked at 160 Mbps, but twin single-attached paths peaked at 200 Mbps.
This system ran like a champ except that the extra layer between the disks (behind the Alpha 1000s) and the Alpha 2100s (where the applications ran) was a bit slower than some folks liked. But on the other hand, it was still 5 to 20 times faster than the system that it replaced (on VAXen), so I guess it is all relative.
As to setting it up, we had some trouble until the HS241-J was released because that was the first controller in the HSJ class that supported multi-pathing (another part of the "no single point of failure" issue, which was a major hot button for our upper management team). Once that was set up with the right network description, we didn't have to do anything much with it.
We also had to be careful about rebooting it because at least one of the Alpha 1000s had to come up first to support MSCP for the other disks. We also COULD NOT boot from the HS241J even if we had wanted to. At least, not until a much later release of the firmware in the HSJ, and by then we didn't care at all.