11-19-2003 12:45 AM
Gigabit ethernet & FDDI
Options I've considered are:
1. Find a Gigabit Ethernet card for the Alphas and drop FDDI altogether.
2. Use the Alphas' current 100Mb Ethernet cards and connect using Gigabit switches, again dropping FDDI.
3. Find an equivalent of the DECswitch 900EF which has an FDDI interface and ideally a Gigabit Ethernet port instead of the 10Mb one.
Can anyone recommend options for 1 or 3, please?
I started down option 2 and, as an interim step, dropped FDDI and used the 10Mb Ethernet port on the 900EF, adjusting NISCS_MAX_PKTSZ to 1498. The cluster stayed up but ran like a dog - worse than I'd hoped - and I've reverted to the original config for now.
If I can get 100Mb Ethernet connectivity, is there a recommended value for NISCS_MAX_PKTSZ, and/or am I likely to see little improvement over the 10Mb connection?
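(For completeness, the parameter change itself is just a SYSGEN tweak along these lines - it's not dynamic, so it only takes effect after a reboot, and longer term it belongs in MODPARAMS.DAT so AUTOGEN preserves it:)

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW NISCS_MAX_PKTSZ
SYSGEN> SET NISCS_MAX_PKTSZ 1498
SYSGEN> WRITE CURRENT
SYSGEN> EXIT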
11-19-2003 07:14 AM
Re: Gigabit ethernet & FDDI
If you do buy equipment now, have a look at the DEGXA card. It works nicely for us and is better than the older DEGPA.
We have replaced (or will be replacing) FDDI with Gigabit cards.
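To check which LAN adapters a system actually sees before and after a card swap, LANCP's configuration listing is handy (from memory, so check LANCP HELP):

$ MCR LANCP
LANCP> SHOW CONFIGURATION
LANCP> EXIT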
Greetings, Martin
11-20-2003 01:43 AM
Re: Gigabit ethernet & FDDI
The maximum packet payload size for Fast Ethernet is 1498 bytes, compared with 4474 bytes for FDDI. It sounds like you're using large packets already, given your mention of NISCS_MAX_PKTSZ. Things like MSCP-serving disks or tapes over the LAN use large packets for I/Os of 3 512-byte blocks or larger. (For example, Volume Shadowing copy and merge operations use 127-block I/Os; IIRC things like Oracle and Rdb typically use 16- or 64-block I/Os.) Large packets allow these transfers to be done using fewer packets, resulting in fewer interrupts and lower host CPU overhead.
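To put rough numbers on that (the DCL below is purely illustrative arithmetic): a 127-block transfer is 65,024 bytes, which is about 44 packets at the 1498-byte Fast Ethernet payload size but only about 15 packets at FDDI's 4474 bytes.

$ bytes = 127 * 512
$ fe_pkts = (bytes + 1497) / 1498     ! round up at 1498-byte payloads
$ fddi_pkts = (bytes + 4473) / 4474   ! round up at 4474-byte payloads
$ write sys$output "''bytes' bytes = ''fe_pkts' FE packets or ''fddi_pkts' FDDI packets"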
If you move to Fast Ethernet, you'll have to get by with smaller packets. (Until 7.3-2, the LAN adapter and PEDRIVER interrupt workload must all be handled on the Primary CPU in an SMP system, and saturation of the Primary CPU in interrupt state is a bad thing. If you're near to, or already having problems with, Primary CPU saturation in interrupt state, this might drive you off the performance cliff.)
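You can keep an eye on that with MONITOR; something like the following shows time spent in interrupt state per CPU (check HELP MONITOR MODES for the exact per-CPU qualifiers on your version):

$ MONITOR MODES/CPU
$ MONITOR MODES/CPU/INTERVAL=5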
The Gigabit Ethernet standard allows (but does not require) support of packet payloads up to something near 8KB in size, which would actually be larger than FDDI's. Unfortunately, not all Cisco products, for example, support the full maximum size the standard allows. Originally, Cisco called anything larger than 1498-byte payloads "Jumbo Packets", even if they were only, say, 2KB in size. Lately, I've heard Cisco call these "Mini-Jumbo" packets. Details on which Cisco switches support what packet sizes may be found at http://www.cisco.com/warp/public/473/148.html
Before 7.3, PEDRIVER used a specific bit in the FDDI header to tell whether a packet had traversed 1498-byte Ethernet, but that only worked with DIGITAL-brand 10/100 bridges. Fast Ethernet and Gigabit Ethernet have no such bit in the header, so as of 7.3 PEDRIVER actively probes to determine the actual maximum packet size over a path. If you plan to go the Gigabit Ethernet route, I'd recommend running at least 7.3-1 with the latest DRIVER ECO kit.
To make the transition easier, seriously consider simply adding the additional interconnect at first, allowing you to test it while retaining the redundancy of the old equipment in place. You can use SYS$EXAMPLES:LAVC$STOP_BUS to disable FDDI for tests while it's still connected (or, in 7.3 or above, SCACP can be used either to lower FDDI's relative priority, retaining it as a backup, or to disable it entirely). You can also keep the older equipment around as a backup afterwards if you wish. Even a 10-megabit link might hold the cluster together across a bridge reboot or something.
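For example, under 7.3 or later, something along these lines - FWA0 is just a placeholder for your FDDI adapter's device name, and I'm quoting the qualifiers from memory, so check SCACP's built-in HELP; use the SET to de-prefer the adapter but keep it as a backup, or the STOP to take it out of cluster use entirely:

$ MCR SCACP
SCACP> SHOW LAN_DEVICE
SCACP> SET LAN_DEVICE FWA0 /PRIORITY=-1
SCACP> STOP LAN_DEVICE FWA0
SCACP> EXIT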
Note that PEDRIVER as of 7.3 or above can quite effectively use multiple paths (e.g. multiple Fast Ethernet links) at once for higher composite bandwidth, so that is an alternative to Gigabit Ethernet if large-packet support is not crucial. (I know of large clusters with 4 parallel rails of Fast Ethernet, and even some with 2 rails of Gigabit Ethernet). (Since 5.4-3, PEDRIVER has been able to handle multiple LAN adapters, but prior to 7.3 it only transmitted to a given remote node over one adapter at a time, although it could receive on all adapters at once.)
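SCACP is also a handy way to confirm traffic really is being spread across the rails, since it can display the individual adapter-to-adapter paths and the per-node virtual circuits PEDRIVER has open:

$ MCR SCACP
SCACP> SHOW CHANNEL
SCACP> SHOW VC
SCACP> EXIT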
For things like lock requests, you'll find that FDDI and Gigabit Ethernet are not very different in performance. But for activity like shadow copy or merge operations (or locking while these are going on), the 10X greater bandwidth of Gigabit Ethernet gives you much more headroom. I recently worked with a 2-site VMS disaster-tolerant cluster customer that is planning a conversion from GIGAswitch/FDDI to Cisco 6513 hardware with Gigabit Ethernet and full-size Jumbo packets. In our tests, the lock-request latency of FDDI in a fairly unloaded cluster averaged 208 microseconds, and GbE 215 microseconds (although I've seen 200 microseconds for GbE on Cisco 2900-something switches in no-load tests at another site, and 140 microseconds with a cross-over cable eliminating the switch latency). The big difference in our tests came during a shadow merge. FDDI lock-request latency went up from 208 to 3,781 microseconds during our merge tests, while GbE latency went only from 218 to 362 microseconds, an order of magnitude lower impact.
If you like your DIGITAL-designed FDDI hardware as a cluster interconnect and want hardware designed with the same philosophy, seriously consider the products from dnpg.com
11-21-2003 01:47 AM
Re: Gigabit ethernet & FDDI
Because the current setup has a finite lifespan, I'm trying to do this in as 'minimalist' a way as possible, avoiding upgrades from 7.2 in particular. At the same time, I have to keep an eye on costs.
For now I think I'll try to use the existing DE500 Fast Ethernet cards for cluster connectivity, with Gigabit Ethernet switches for the backbone, and then see whether system performance is acceptable. (N.B. For the Alpha 4100s I'm working with, I believe the DE500 cards need to be 'told' at the console prompt to use Fast Ethernet, as below.)
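(From memory, the console setting is along these lines, where EWA0 stands for whichever DE500 port is in use and FASTFD selects 100Mb full duplex; I'll double-check against the AlphaServer 4100 docs.)

>>> SHOW EWA0_MODE
>>> SET EWA0_MODE FASTFD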
I'm hoping the performance problems from the first attempt were primarily due to the 10Mb connectivity rather than NISCS_MAX_PKTSZ being set to 1498, though I'd certainly appreciate anyone's comments on that assumption.
Also, maybe a naive question, but is it acceptable to mix switches capable of dealing with frames above 1498 bytes (jumbo/mini-jumbo) with those that aren't, on the same network? My assumption is that the ones that couldn't handle such frames would simply ignore them.
Thinking ahead, if I did put Gigabit cards into the Alphas and increase NISCS_MAX_PKTSZ, then provided the Alphas were connected directly to jumbo-frame-capable switches, would the cluster run okay on a network which also had non-jumbo switches connected?
Keith, for your - and my - peace of mind, I'll be retaining the FDDI equipment for some time after this attempt.
11-24-2003 11:50 PM
Re: Gigabit ethernet & FDDI
A NISCS_MAX_PKTSZ setting of 1498 would be appropriate for use with Fast Ethernet.
If a Jumbo packet on Gigabit Ethernet, or a packet on FDDI with a payload larger than 1498 bytes, is directed at a node on Fast Ethernet, the switch will simply drop it at the point where it would have to transition to Fast Ethernet, since Fast Ethernet can't handle packets larger than 1498 bytes.
Having a mix of switch hardware capable of larger packets and switches which can't handle larger packets can work OK -- you just want to avoid a case where VMS tries to transmit larger packets but they can't get through. This would typically occur where a Fast or 10-megabit Ethernet segment lies between FDDI and/or Gigabit Ethernet segments while you have NISCS_MAX_PKTSZ set to a value above 1498. With VMS 7.3 and above, PEDRIVER actively probes to determine the maximum-size packet which can actually get through, but before 7.3 PEDRIVER used the setting of a bit in the FDDI header to determine whether a packet had traversed Ethernet (and that bit was typically only set correctly by DIGITAL-brand switches).
To use Jumbo packets on Gigabit Ethernet with PEDRIVER, IIRC you'll have to be running 7.3 or above.
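If and when you do go to Gigabit with Jumbo packets, the settings would be roughly along these lines in SYS$SYSTEM:MODPARAMS.DAT - the exact NISCS_MAX_PKTSZ maximum and the LAN_FLAGS bit for enabling jumbo frames are from memory, so please verify them against the release notes for whichever version you end up on:

NISCS_MAX_PKTSZ = 8192   ! allow PEdriver payloads larger than FDDI's 4474
LAN_FLAGS = 64           ! bit 6 is, if memory serves, "enable jumbo frames"; combine with any bits already in use

and then run AUTOGEN ($ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK) so the change is applied and survives future AUTOGEN runs.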