Switching to Switches!
Date: Jan 24, 1995
Written by: Walter Benton
1. Approved Standards vs De-Facto-Standards
2. Media Hype! Is it hype or not?
3. Switching! What is it?
4. Switching! Good and/or Bad?
5. Your Choice
6. What Kind of Problems Can Occur?
7. Buffering, How Big is Enough?
8. Design Factors
9. The Backbone Syndrome
10. The Front-end Syndrome
11. Solution or Not?
12. Peer-to-Peer vs Client/Server
13. Switching Hub Inter-Connectivity (High-speed)
14. Slow Bridge/Router Performance
15. To Summarize
A word about the author:
Walter Benton is an American with over 17 years of computer experience and 7 years working with NetWare LANs. He specialized in satellite tracking and satellite telecommunications for the U.S. Navy using 3GHz microwave equipment in Yokosuka, Japan 17 years ago. He is now a Manager in the Network System Marketing Department of Memorex Telex Japan, where he has been working for the past 5 years.
1. Approved Standards vs De-Facto-Standards
Many new technologies were released during 1994. A good portion of those technologies have yet to be fully approved by any international standards committee. Standards give us the freedom to mix-and-match various vendors' products as necessary to meet our ever-growing networking needs. Without standards, multi-vendor networks will not interoperate. As networking is now expanding at an ever faster pace, it is all the more imperative that approved standards be used to build our future networks.
2. Media Hype! Is it hype or not?
Looking back at 1994, I found it quite difficult to know whom to believe, what was right and which was best. It was a hard year to safely choose a product based on either publicity or brand name. What will the future bring? How can we feel reasonably assured that what we spend money on today will still be usable in the future?
It seems that during 1994, we abandoned those standards committees. What the future will realistically be like is still a big unknown. We fell prey to mass PR campaigns from a small handful of (up until now) successful vendors who all thought they had good solid strategies. And in doing so, we ended up ignoring everything else that we have held precious in networking over the past several years, "Standards" being just one example. Placing your LAN in possible jeopardy is an option that nobody wants to choose.
During 1994, "Ethernet Switching Hubs" were the center of media attention. But look around today: how many large installations actually have Ethernet switches fully implemented throughout their entire network, from front-end workgroups to the backbone? How many success stories have you read, either here in Japan or anywhere else in the world, of switches replacing bridges and routers in collapsed-backbone architectures? What success rates have these installations really experienced; what troubles do they currently experience; what kind of problems can they expect to experience in the near future?
You don't have to look hard to find switch installations; they're all over the place, but most of them have only a small handful of users (usually 60 or less) physically implemented in a simple workgroup. What about the massive "about to be installed" sites with hundreds or thousands of users who fell for the media hype? Today, you see network managers everywhere going for switches, and you also hear of order backlogs, mainly from Bay Networks (previously Synoptics) for their LattisSwitch 28115! This backlog is over 300 units in Japan alone and over 1,200 world-wide. Why is there such a backlog? It's not just a production backlog where they can't make them fast enough to ship everywhere, but also due to technological problems that have just recently shown up in the United States (end of Nov '94).
3. Switching! What is it?
Switches, as known today, supposedly first came out in 1993. Kalpana and formerly Synoptics (now Bay Networks) are two names well known throughout the world today for their switching hubs. Today, almost all vendors offer some kind of "switching" hub, although the terminology of a "switch" is quite vague in its interpretation.
One vendor, Fibronics Inc. of Israel, actually shipped the world's first true switching hub back in 1991. It was a 12-port Ethernet matrix-switching hub with an FDDI ring attachment. But at that time, switching was not a catch phrase and so it was dubbed the "FX-8610 Workstation Server" and not a "switching hub". It was built around the "store-and-forward" method that many vendors implement in their hubs today. It originally supported only 4 users per port, but was then increased to support up to 32 users per port due to user demands. This hub still ships today with FDDI as the standard high-speed interface.
A newer model, labeled the FX-8616, started shipping in the latter part of 1994 with 16 ports and two optional FDDI or TP-PMD ports. TP-PMD is the officially approved 100Mbit/s twisted-pair standardized version of FDDI.
When you talk about bridges, routers or even repeaters, these terms are quite well known. They are known for what they are, for what they are supposed to do, and for how they are supposed to do it, not to mention what their limitations are. These products have standards committees' approvals regulating their inter-connectivity. You can mix-and-match any "standard compliant" vendor's bridge with any other standardized vendor's bridge, router or repeater today. You can do this because of APPROVED STANDARDS!
But how do you define a "switch"? What is the standard for a switch? What approvals by standards organizations (IEEE, ANSI, etc.) have been given to switches? How do you determine what is and what is not a switch? A switch can use any one or more of the forwarding methods described below.
Basically put, a switch is just that, "a (fast) switching BRIDGE", and nothing more at this time in the eyes of the standards committees. But looking at the performance increases that small test-beds have recently shown, this looks to be a possible way to help solve the bottlenecks that we currently face in today's shared-network environments. A lot of time and money has been spent by several vendors on PR and seminars about LPP (LAN Per Port) and how much money you can save by retro-fitting switching hubs into your current network. Due to the cheaper price-tag and the easier manageability of switches compared to routers, not to mention the "supposed" faster transfer rate than the currently installed bridges and routers offer, switches are becoming an almost obligatory item on the majority of network managers' shopping lists.
4. Switching! Good and/or Bad?
Going back to basics, a switch operates in virtually the same way as a bridge. Each port on a switch is a separate collision domain. Even when switched to another port, each segment is a separate entity. The information is passed at the LLC (Logical Link Control) sublayer of OSI Layer 2, just like a bridge.
Latency times of switches are on the order of 10µs-80µs, much shorter than bridges (averaging around 300µs) and routers (averaging 2ms or greater), because of the switching technology used. The fastest type of switch on the market today runs at 40µs; Kalpana developed this switch and called the technique "Cut-Through" switching, otherwise known as "On-the-Fly" switching. Both terms are used quite frequently today.
"Cut-Through" or "On-the-Fly" switching reads only the first 14-bytes of each packet to decipher the destination address, and then creates an "On-the-Fly" connection (or switched network) while the packet is still being received by the port. The advantages of this method is a low-latency time of 40µs, but the disadvantages are that because only the first 14-bytes are read, packet runts, collision fragments, incomplete packets or otherwise errored packets are switched as well. A bridge, on the other hand, filters out these unwanted packets and does not send them out to other segments.
Another difference is that switches only forward data to the segment(s) that each packet is destined for. This means that the non-destined segments won't receive that data, reducing the amount of unnecessary traffic on those segments. This, in turn, increases the available bandwidth on those unswitched segments. Some intelligent bridges also have this feature.
Some vendors thought that errored packets (runts, short frames, collision fragments, etc.) should not be forwarded and therefore developed another type of "Error Checking" switch. This method reads the first 64 bytes of each packet before deciding whether or not to forward that packet on to the destination segment. This reduces the re-transmission of most bad data, but to gain this advantage there is an overhead latency, making it slower than the "Cut-Through" method. Even so, it is still faster than either a bridge or a router. Note, however, that even this method will forward frames with CRC errors, since the CRC sits at the end of the frame, beyond the first 64 bytes.
Then there is "Store-and-Forward" technology. This method gives a complete integrity check of each packet before sending it to its destination segment (including a CRC check after the complete packet has been received). This is the safest switching method known today, but also the slowest compared to the other switching methods. Even so, "Store-and-Forward" switches are still faster than conventional, standards-approved bridges and routers. Either way, more effective bandwidth utilization is realized than that offered by conventional equipment.
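For contrast with the cut-through sketch above, here is the corresponding store-and-forward check, again only a sketch: the whole frame must be buffered before the length and the trailing FCS can be verified. (Ethernet's FCS is the same CRC-32 that Python's zlib.crc32 computes, transmitted low byte first.)

    import zlib

    MIN_FRAME = 64   # anything shorter is a runt or collision fragment

    def store_and_forward_ok(frame):
        """Accept a frame only after a complete integrity check."""
        if len(frame) < MIN_FRAME:
            return False                     # filter runts and fragments
        body, fcs = frame[:-4], frame[-4:]   # last 4 bytes are the CRC
        return fcs == zlib.crc32(body).to_bytes(4, "little")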
5. Your Choice
If you have only a small number of errors across all segments, the "Cut-Through" method is by far the fastest method available. If you want to reduce the number of errored or bad packets that are received from one segment and keep them from being passed on to their destination segment(s), then the "Error Checking" or, better yet, the "Store-and-Forward" technique is preferable.
All of these switches perform fast switching, but because they are still quite new to the market, they have not been fully tested as to what extent they can be effectively used. What are their limits? Here is where further investigation is required to find out if and how switches can be effectively implemented into large networks. Which methods are most appropriate, what are the actual limitations of switches as they are currently designed? Basically put, more testing needs to be done before standards committees will approve switches. Until such time, should you decide to implement switching into your network, you will be one of the many test-beds (otherwise known as "beta sites") that are currently learning what the limits of switching are the hard way.
6. What Kind of Problems Can Occur?
Regardless of which switch you choose, there are various other considerations that must also be looked into before designing a switched network. Switches are thought of as plug-and-play systems: you just plug them in and they automatically switch for you with minimal configuration. It is easy to assume that these switches can be placed anywhere to relieve bottlenecks, by just dropping them into the troubled area.
If this were truly all there was to switches, standards would already be out and I wouldn't be writing this article. Designing switched networks, on the contrary, is quite complicated due to unforeseen limitations that have only recently begun to crop up in the United States and will soon be seen here in Japan. The reasons why standards have not been issued yet are also becoming clearer to those who decided "We don't need them this time around!".
But whether we follow standards, and even more so if we don't, we must still carefully plan our network design, especially in a switched environment. We must ensure that the design is flexible enough to cater to our present demands and can scale easily into our future needs while ensuring a smooth migration path.
Adnet Technologies, a vendor of LAN systems, found out the hard way what kind of problem their installed customers were having when one of them called and complained that the system Adnet had installed just a week before was almost at a standstill.
After an investigation, Adnet found that the LAN was choking due to packet pileups in the buffers on the backbone switches. Further investigation revealed that it wasn't just this one isolated customer; various other similar installations were also experiencing the same slowdown problem.
Switches came into the marketplace quicker than most people expected. Switches broke all the speed barriers that had been in place for some time, but they were not issued any tickets, or even warnings, for going over the regulated speed. Vendor after vendor embraced the "If we don't have a switch, we're out of business" approach, and the problems associated with switches were ignored as unnecessary "costly" delays by most of the vendors.
Data simply arrives at the switch, and either the first 14 bytes, the first 64 bytes, or the complete packet is read in and switched to its destination. What happens if two users on the same port are simultaneously talking to the same server through a switch on another port? Basically put, while one user is transferring data across the switch, the other user must wait his turn to access the same switch. Because both PCs are on the same port and the server that both want to talk to is on another port, a separate switch must occur for each of them to be connected.
Let's assume that one PC has already sent a request to the server and its packet has been switched properly. The next user then sends his data across the switch, but at that same time, the switch simultaneously receives the answer from the server in response to the first user's request. One of two things will happen: the switch buffers one of the transmissions until the contested port is free, or, if the buffer is already full, packets are simply dropped.
A 1.2 Mbyte file is 1,228,800 bytes in size. That means that if all packets were of the Ethernet maximum size (1518 bytes), it would require 810 packets to deliver one user's data. As both the server and the workstation are transferring 1.2 Mbyte files simultaneously, that amount doubles to 2.4 Mbytes, or 1620 packets (probably more, as very few LANs send at the full Ethernet maximum size). When the buffers become full, the packets that won't fit into the buffer are dropped. These dropped packets must then be detected via protocols and re-transmitted. The re-transmissions in turn increase the amount of traffic, which only exacerbates the situation.
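The arithmetic above is easy to reproduce, and extending it shows how quickly a buffer is exhausted. A minimal sketch, assuming a hypothetical 1 Mbyte of buffering on the contested port:

    FILE_SIZE   = 1228800      # the 1.2 Mbyte file, in bytes
    MAX_FRAME   = 1518         # Ethernet maximum frame size
    BUFFER_SIZE = 1048576      # hypothetical 1 Mbyte port buffer

    frames_one_way = -(-FILE_SIZE // MAX_FRAME)   # ceiling division: 810
    frames_both    = 2 * frames_one_way           # 1620 frames in flight
    frames_held    = BUFFER_SIZE // MAX_FRAME     # only ~690 frames fit

    print(frames_one_way, frames_both, frames_held)
    # Every frame beyond the ~690 the buffer can hold is dropped and
    # must be detected and re-transmitted, adding yet more traffic.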
The above example referred to only two PCs, one sending and one receiving a 1.2 Mbyte file to/from one server. What happens when 20 users and 1 or 2 servers are connected to the switch box? As long as the average amount of traffic sent through each switch is minimal, the effect is usually better throughput than either a bridge or a router, due to the lower latency that switches offer.
What about when traffic increases? To what extent can your LAN be safely switched? How can you know when your buffers overflow? To answer this, you need to purchase expensive sniffers or analyzers and actually measure what peaks and averages you currently have. To properly find the peaks and averages of a network, you have to monitor it for anywhere from several days to several weeks, depending on the daily fluctuations on your LAN. Each segment can and will show different results depending on the users attached to that segment.
It is also normal for end-of-the-month traffic to exceed the daily amounts, so ensure that those statistics are recorded too. But what are the peaks of your switch? Again, this can differ from installation to installation and vendor to vendor, so the only way to tell for sure is to measure throughput after your switch is installed. What if you want to add a new application or one or more new users to your current LAN? You have to perform the long measurement process all over again to see what impact your new application or users have on the network!
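The measurement itself is just bookkeeping once you can sample a segment's byte counter at a fixed interval. A minimal sketch of the peak/average arithmetic (the sample values are illustrative, not from any real trace):

    INTERVAL_S = 10                 # seconds between counter samples
    LINK_BPS   = 10000000           # 10 Mbit/s Ethernet

    bytes_per_interval = [4200000, 9800000, 2100000, 11500000]

    rates = [b * 8 / INTERVAL_S for b in bytes_per_interval]  # bit/s
    avg_pct  = sum(rates) / len(rates) / LINK_BPS * 100
    peak_pct = max(rates) / LINK_BPS * 100
    print("average %.0f%%, peak %.0f%%" % (avg_pct, peak_pct))
    # -> average 55%, peak 92%: the average looks safe while the
    #    peaks are already near the point where buffers overflow.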
What if you are planning a new installation and don't know what the averages and peaks will be? Ethernet was designed to handle bursty traffic. As the number of users increases, the amount of burst traffic also increases, so how can you safely plan a new installation?
As far as switches are concerned, when the buffer becomes full, packets are dropped. These dropped packets have to be detected, by the protocol, the application or some other method, and re-transmitted. This costs valuable time (regardless of how it is detected), and re-transmissions on a busy network only aggravate the situation by taking up bandwidth that would otherwise be free.
7. Buffering, How Big is Enough?
Even if you add 1 Mbyte of RAM per switched port, for large switching hubs this could mean adding 20 Mbytes or more of RAM, which can turn out to be quite costly. What happens when that 1 Mbyte is over-run? Add another Megabyte? How big a buffer is big enough? Can additional RAM be added to the current hardware? Because each installation is different, there is no correct answer here. Therefore, the only real answer is to go back to the drawing board and re-design your current hub layout into a workable design.
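Two quick figures show why RAM alone is a stop-gap; this is a sketch under the 1-Mbyte-per-port assumption above:

    PORT_BPS = 10000000              # 10 Mbit/s switched port
    BUFFER_B = 1048576               # 1 Mbyte of RAM per port
    PORTS    = 20

    drain_s  = BUFFER_B * 8 / PORT_BPS   # ~0.84 s to empty a full buffer
    total_mb = PORTS * BUFFER_B // 1048576
    print("%.2f s to drain one full buffer, %d Mbytes RAM in the hub"
          % (drain_s, total_mb))
    # A full 1 Mbyte buffer adds nearly a second of queuing delay, and
    # a sustained overload simply fills the next Megabyte as well.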
8. Design Factors
The throughput of FDDI (90Mbit/s) is over twice that of 100Base-T (40Mbit/s), and it has been proven time and again to be safe enough for mission-critical use. FDDI offers a dual counter-rotating ring, and you can also concentrate your switching hubs into dual-homed FDDI concentrators to rid yourself of the old FDDI problem of a ring failing in two places and being cut into two separate rings.
FDDI does not limit you with the maximum-length problems that 100Base-T currently faces. And that is without even mentioning that 100Mbit/s Ethernet hubs with high-speed "backbone" connectivity, not just high-speed server attachment, are designed using repeater technology, which means there is a limit to the number of hubs you can inter-connect to expand your network. Not very flexible for networks with large expansion requirements!
If you are installing a Client/Server network, the server segment(s) will always be the busiest. If you attach the server to multiple segments, the load can then be distributed amongst several segments so that no individual segment is overloaded. In NetWare networks, this requires a special NLM (NetWare Loadable Module) to allow the server to distribute the data evenly over several segments all using the same network number.
In the United States, several NIC manufacturers have developed a switching NIC, with several ports built into the card, to fit inside a NetWare server. But even these cards are experiencing buffer overflows, and their vendors are now working with Novell to develop a flow-control method to tell the server to back off when the buffers become full. This back-pressure NLM is still under development, though, and has not yet been accepted by all vendors as a de-facto standard, nor by the standards committees as an approved standard. But this is only for the server side; what about the client side?
On the client side, users differ widely in how much traffic they generate. Simple report generation usually requires sampling various data and reporting on only certain portions of that data, so this type of user would be considered a medium-light to medium user.
End-of-month reports and complex data reporting requires sampling of a lot of data, subtotaling and totaling various different groups of data repetitively to attain the desired results. This type of user is considered medium to heavy, depending on the application, but again, this type of user is usually not an everyday user unless special data requirements exist within certain corporations requiring such heavy data processing.
Programmers and other users who perform a lot of program compiling or re-indexing are usually the heaviest type of users. These users are usually few in number in most corporations, but in software houses or similar application-development environments, the number of such users can be quite large. These types of users warrant special consideration.
9. The Backbone Syndrome
It turns out that switches don't work as quickly as everybody expects them to, especially in backbones. Switches might work well in small front-end installations, but place them where traffic can become enormous (like in a backbone, or even a heavily populated front-end) and the LAN almost stops. When the packets overflow the installed buffers, the end result is a slow-down in performance to such an extent that the older-technology bridges and routers actually work faster. In the front-end, buffer overflows and re-transmissions aren't anywhere near as catastrophic and heavy as in the backbone.
One method being debated to solve this problem is to use standard collision-detection signalling as "back-pressure" when the buffer becomes full. Here, a collision signal is sent before the buffers become completely full. When this signal is received by the transmitting nodes, they all stop sending. If this signal is sent from a switch located on the backbone, it could affect hundreds of users, bringing the overflowed segments to a complete standstill. Even local peer-to-peer traffic that would otherwise not traverse the switch will be affected.
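A sketch of the idea, and of its side effect, under hypothetical thresholds (the high-water mark here does not come from any shipping product):

    HIGH_WATER = 90          # assert back-pressure above this queue depth

    class BackPressurePort:
        def __init__(self):
            self.queue = []
        def receive(self, frame):
            if len(self.queue) >= HIGH_WATER:
                # Jam the ingress segment: every node on it backs off,
                # even peer-to-peer traffic that never crosses the switch.
                return "JAM"
            self.queue.append(frame)
            return "QUEUED"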
Until ATM becomes a finalized standard, the only other option is to increase the bandwidth on the backbone. If 10 Mbit/s is currently used, upgrading it to 100 Mbit/s will prove the most effective.
But coming back to 100 Mbit/s: if you choose 100Base-T, you will only see a fourfold increase, to 40 Mbit/s of effective utilization, and the number of hubs and stations capable of successfully using that network will be limited, as test-bed results are beginning to show. 100VG-AnyLAN will give you an increase of eight (8) times what your current Ethernet LAN offers, but it doesn't seem to be catching on very quickly.
FDDI, on the other hand, is a bit more expensive, but its test-bed results were in long ago, and it has proven time and again to be a stable medium. It will give you nine (9) times the performance capacity of your current Ethernet LAN and has a proven track record of being a successful choice in mission-critical situations. Even for non-critical situations, it is a stable medium on which to plan any LAN, and it will still perform properly when that LAN turns mission-critical in the future. This cannot be said for 100Base-T as of today!
10. The Front-end Syndrome
If you think this kind of problem can only happen in the largest of backbone installations, you're completely wrong. In fact, it quite often happens in smaller front-end installations as well. It can happen almost anywhere on the LAN because, with the currently shipping technology, there is no way to signal the attached devices that the buffers are almost full.
Again, you can decrease the number of users per segment until there is one user attached per port. If this still doesn't solve your problem, then you have to use higher bandwidth in the front-end as well. Depending on the necessary front-end throughput, 100Base-T and maybe 100VG-AnyLAN will probably prove to be good future technologies, but again, the final results on which to base standards have yet to come in. Here too, FDDI-to-the-desktop is available, and it is available for all platforms, not just the limited platforms for which 100Base-T is shipping. (For example, if you have a powerful Macintosh server that you want to connect directly to 100Mbit/s cabling, 100Base-T doesn't have any NICs for Macs yet, but FDDI does.)
11. Solution or Not?
Several companies in the U.S. are currently discussing ways to solve this buffer-overflow problem, but there is still quite a bit of debate as to whether large buffers alone can solve it. One group thinks that with large enough buffers the problem won't crop up, but how large is large enough? That is a very debatable and expensive temporary fix. Another group feels that the only way to completely prevent such problems from re-occurring is to put some form of flow control into effect. But even then, there is no consensus as to which type of flow control to implement or how to implement it.
Other companies are considering creating a new form of flow control to try and mimic ATM's ABR (Available Bit Rate), but this would require new NICs and/or new flow-control software instead of the currently installed architecture. It would also require the majority of the switch vendors to join together in establishing such a procedure. That alone will take several months to decide upon, and then several more months to implement in the equipment. This new equipment must then go back to the test-beds to prove whether the flow control actually works as designed. The complete process will take at least 6 to 8 months, possibly even longer. By the time some form of flow control is finally implemented in a manner that interoperates across each vendor's equipment, ATM will be about ready to be released in usable form!
Flow control can be implemented in the transport layer, because transport protocols are connection-oriented, so Novell's protocols, TCP/IP and others can implement flow control. The industry can define a standard that will enable network devices to send a message to the stations asking them to slow down, and the stations will obey. Such a standard would solve the flow problems for connection-oriented protocols. But what about connection-less protocols?
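The transport-layer mechanism itself is simple; here is a minimal, protocol-neutral sketch of a receiver-advertised window (the names and sizes are illustrative, not taken from any real stack):

    class WindowedSender:
        """Send only as much as the receiver has said it can absorb."""
        def __init__(self, initial_window=8):
            self.window = initial_window   # segments the receiver allows
            self.in_flight = 0
        def can_send(self):
            return self.in_flight < self.window
        def on_send(self):
            self.in_flight += 1
        def on_ack(self, advertised_window):
            self.in_flight -= 1
            self.window = advertised_window   # the "slow down" message

A connection-less protocol has no such acknowledgment path, which is exactly why the quote below argues for pushing flow control down to the LLC level instead.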
Ron Shani (Mktg. Director of Fibronics Inc., Israel), an ATM Forum member, said:
Quote:
"On the other hand the flow control should be implemented in the LLC level (for being protocol independent). If the 802.2 committee will address the issue, and standardize flow control messages over the LAN, than any network queuing device such as bridge, router or switch will be able to notify the congested network segment to slow down."
End Quote
12. Peer-to-Peer vs Client/Server
In peer-to-peer switched environments, the above-mentioned problems occur less often than in client/server environments. This is because in a peer-to-peer system, two stations are communicating with each other while other stations are also communicating with other workstations simultaneously. If each station were on its own port, you would not see the same problems as if you placed 30-50 or more users onto one switched port. Regardless of the switching hub's aggregate throughput, the timing with which data passes through the switch in one direction while data from the other direction queues up, together with the buffer size of the queued port itself (which varies from vendor to vendor), will determine how frequently packets are dropped.
In client/server switched environments, all users must talk to one server. In such environments, backlogs on the server side, due to the frequent receipt of requests and replies to those requests, can strain the server port, especially if the server is sending at 100Mbit/s and each of the client switched ports only handles 10Mbit/s. The server side will want to offload its 100Mbit/s of data quicker than the 10Mbit/s ports are capable of handling, causing pileups in the 10Mbit/s switch buffers as well. Having to re-send this data only adds to the severity of the problem.
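The speed mismatch alone dictates how fast a buffer fills. A quick worked example, again assuming a hypothetical 1 Mbyte client-port buffer:

    IN_BPS   = 100000000      # server bursting at 100 Mbit/s
    OUT_BPS  = 10000000       # client port draining at 10 Mbit/s
    BUF_BITS = 1048576 * 8    # hypothetical 1 Mbyte buffer, in bits

    growth_bps = IN_BPS - OUT_BPS          # queue grows at 90 Mbit/s
    fill_s     = BUF_BITS / growth_bps
    print("buffer full after %.0f ms of sustained burst" % (fill_s * 1000))
    # -> roughly 93 ms; everything after that is dropped and re-sent.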
As you can see, in peer-to-peer environments the traffic is many-to-many, so the number of users talking to any one station is less than in a client/server environment, where it is an all-to-one conversation.
Several Unix machines talking to each other would constitute a peer-to-peer environment, whereas a NetWare environment would be classified as a client/server layout. As long as the amount of data is not that great, and the number of users attached to each port is also not that many, the chances of dropping packets are not as great. But as Ethernet is built around bursts of data during file transfers, again, the timing of bursts from both sides of the switch and the amount of buffering each port is capable of using will determine how often packets are dropped. As long as the number of dropped packets is small, the delay experienced won't be that great, but in congested situations, like in a large backbone, performance almost comes to a standstill.
Understanding the above, we can conclude that the best place to implement switches is in the front-end (within workgroups) and not in the backbone. It is recommended that one PC be attached per port, but this is very expensive. The average and peak traffic amounts, as well as the amount of buffering in the switching hub, will define the maximum number of users allowed per port. It is not an easy task to determine this maximum, as traffic fluctuates with each user's requirements.
Should slow-downs be experienced, the only thing to do is to reduce the number of users per port, which means increasing the number of hubs, which brings us to the next problem.
13. Switching Hub Inter-Connectivity (High-speed)
The majority of hubs on the market have at least one high-speed port (high-speed being defined as 100Mbit/s or faster). Some hubs don't offer high-speed connectivity; other hubs have several high-speed ports. These high-speed ports are usually reserved for server connectivity in a client/server environment, but how do you interconnect three or more hubs?
If the hub has only one high-speed port, you will only be able to connect two hubs together into one high-speed backbone. With two hubs interconnected via the only high-speed port on each hub, how will you connect a high-speed server? You need more than one high-speed port per hub to inter-connect several high-speed hubs and servers.
If you are considering 100Base-T as the high-speed connection, you can and will experience similar problems as mentioned above sooner or later.
Some hub vendors offer 100Mbit/s FDDI/CDDI as the high-speed port. FDDI, even though it is shared, has a higher effective utilization rate than 100Base-T. Using FDDI, you can connect quite a number of hubs together into a single FDDI concentrated ring. If the cost of fiber cabling does not appeal to you, then CDDI (not approved), or the recently ANSI-approved but less well-known "TP-PMD", can be used with Category 5 UTP cabling and span up to 100 meters without any problems.
FDDI and TP-PMD are the only approved, reliable high-speed standards on the market today. Regardless of how well 100Base-T seems to be accepted, it is not yet a fully proven technology, although it holds promise of becoming a future standard. 100VG-AnyLAN hasn't taken off yet, and from the looks of it, might not take off. At this point in time, it is hard to say who will win this battle (100Base-T or 100VG-AnyLAN). All we can do is wait and see what the majority of customers choose, just like the Sony Beta vs VHS Group battle in the video market several years ago. Technologically, Sony's "Beta" was higher-grade, but "VHS" still won! Who will be the winner in the 100Mbit/s Ethernet market?
To say the least, until one or the other drops from the market, it will be a wait-and-see battle. In the meantime, if you need to design and install a high-speed LAN (large or small), FDDI has been around for quite some time now, it is stable enough for mission-critical applications, it has a proven track record in backbones, and it is an approved standard. FDDI will be around for at least the next 2-3 years, until ATM replaces it in the backbone. At that time, depending on how far the cost of FDDI/TP-PMD drops, 100Base-T or possibly 100VG-AnyLAN might replace FDDI in the front-end. If the cost difference is minimal, then with the higher available bandwidth of FDDI out-performing both 100Mbit/s Ethernets, FDDI might just win as the front-end medium of choice as well. But then again, if ATM-to-the-desk has taken off and is priced similarly to FDDI, then ATM might win out instead.
But before ATM can really be used to do what everyone is waiting for, applications must be developed that can handle the necessary protocol tasks (available bit rate, priority setting, virtual path/channel setting, etc.) that ATM requires before it can be used effectively. If you want to use non-ATM applications in an ATM-to-the-desk environment, an ATM NIC and an ATM LAN Emulation program will have to run in the background on your PC, requiring a bit of overhead. But this is still some time off in the distant future, because even though the "Draft" for LAN Emulation was accepted by the ATM Forum (Dec '94), finalization of this draft will be required to ensure compatible LAN Emulation across a distributed multi-vendor LAN/WAN environment. This finalization isn't expected until the June-July 1995 time-frame at the very earliest, and may even be delayed beyond that.
Ironically, 100Mbit/s FDDI (a shared medium), which has been put down by these switching-hub vendors, is the safest thing you can build a reliable LAN upon today. Regardless of the price and complexity of routers, and the slow performance that you think today's FDDI bridges/routers offer, they are the only real means of ensuring that your LAN's integrity is upheld.
One last thing to consider about switches is that throughput is usually measured in "pps" (packets per second) or "fps" (frames per second). This will tell you the real speed of the switch. Figures tested under both heavy and light loads will better help you assess the true throughput of the hub you are looking at. Try to get independent third-party figures, as the throughput values vendors offer are often misleading.
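As a sanity check on vendor pps claims, the theoretical wire-speed maximums are easy to compute: each frame also costs 8 bytes of preamble and a 12-byte inter-frame gap on the wire.

    LINK_BPS = 10000000          # one 10 Mbit/s Ethernet port
    OVERHEAD = 8 + 12            # preamble + inter-frame gap, in bytes

    for frame_bytes in (64, 512, 1518):
        wire_bits = (frame_bytes + OVERHEAD) * 8
        print("%5d-byte frames: %6d pps max"
              % (frame_bytes, LINK_BPS // wire_bits))
    # -> 64-byte frames: 14880 pps; 1518-byte frames: about 812 pps.
    # A vendor quoting more pps per 10 Mbit/s port than this is
    # quoting an aggregate figure, not a per-port one.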
14. Slow Bridge/Router Performance
Today's bridges and routers seem to have bogged the LAN down, but if you really think about it, most of the problems experienced with routers tapped into an FDDI backbone are not on the FDDI side as much as in the concentrated (highly complex) collapsed-backbone routers being offered today. If you design your LAN around simpler, concentrator-attached FDDI bridging equipment, with the same number of ports distributed to the front-end as you would consider with a switch or other collapsed architecture, you will see better performance than that offered by the multi-segment, collapsed-backbone, single-box systems selling today, built around high-complexity, slow-performing collapsed routers.
Don't take my word for it; measure the actual traffic being passed across the FDDI backbone yourself, and you will usually find that most FDDI rings are nowhere near their maximum capacity. Here too, the limits lie more in the concentrated routers, with all their sophistication built into one collapsed-backbone chassis. These units have to perform complex routing between all the segments connected on the front-end while also performing 100Mbit/s high-speed FDDI back-end transmissions. You will usually find the performance problems in the front-end routing of these collapsed environments, or in the sophisticated routing scripts run before the data is allowed onto the FDDI backbone, more so than in the actual backbone itself. FDDI is a BIG PIPE!
Try placing several smaller (not-so-sophisticated) FDDI routers or bridges, strategically connected into one or more FDDI concentrators, and you will usually experience an increase in performance and a better FDDI utilization rate than with a single high-speed collapsed router.
15. To Summarize
I believe in offering systems that are reliable, and I don't particularly care to recommend products when I'm not sure whether they will work properly or not. But recently, I've found that many network managers are clinging less and less to the standards that have kept networks running for quite some time now, and moving more and more into a world of paradigms that I would prefer not to offer my clients.
Switches have their limitations, and not all of them have been discovered yet. Will you allow your office LAN to be a guinea-pig beta-site for not-yet-fully-tested equipment? Standards offer a real solution. De-facto standards will always promise you a better world, but you don't necessarily get everything you were looking for; sometimes you get more problems than you bargained for. There are no assurances with non-standardized equipment.
Implementing switches requires switching design skills that very few people fully understand. Placing switches in the wrong places (i.e. backbones) will bottleneck your LAN even more than it is now. Keep switches in local workgroups (the front-end), with the number of users per port kept down to a usable amount so that you don't overflow the buffers.
Don't go for equipment just because it is boasted as being great or because it has data proving it better than another vendor's, because once you start believing that, you will find that each vendor's product is always better than everybody else's, and everyone cannot be right. Each installation's needs and environments are different, and therefore common sense must come into play to decide which is best for your system. A system proven elsewhere may not necessarily be the best for your system unless you have the exact same, and not just a similar, environment. Numbers and statistics can be re-calculated to look either good or bad depending on the purpose they were created for.
Make sure that you have an upward migration path with the flexibility and scalability that you need. The currently shipping 100Mbit/s Ethernet products have point-to-point length limitations and a restricted number of hubs/users attachable to one network, not to mention only 40 Mbit/s of actually usable bandwidth. The scalability of this type of solution is limited at best.
FDDI is the best thing you've got going for your LAN right now that is approved, and you can rest assured that its field test-beds were completed quite a while back. FDDI's maximum utilization is rated at 90%, or 90 Mbit/s; this is over twice the pipe that 100Base-T's 40 Mbit/s of actual utilization offers, and 100Base-T is not yet completely field-tested! Even if 100Base-T is half the cost of FDDI, its performance is less than half (40 Mbit/s versus 90 Mbit/s), and FDDI supports most major platforms on the market today, a much wider spectrum than either 100Base-T or 100VG-AnyLAN offers.
FDDI will also be switchable into 100 Mbit/s ATM in the future, as the ATM Forum and other standards committees have already decided. Therefore, FDDI will be scaleable into ATM in the future. What about the other 100Mbit/s Ethernet solutions? Will they be scaleable to ATM?
Keep high-speed servers on the fastest link possible. At present, 90 Mbit/s FDDI is the fastest, and it is also approved. And since no single FDDI-attached station fully uses that 90 Mbit/s of throughput, more than one server can usually share the same FDDI attachment. The fastest PC architecture available today only supports a maximum of 42 Mbit/s, and even that only under certain conditions; a more realistic Pentium NetWare server only gives you a maximum of 20~30 Mbit/s.
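Working from those figures, the sharing arithmetic is straightforward (the per-server rates are the ones quoted above, not measurements of any particular machine):

    FDDI_USABLE_MBPS = 90        # effective FDDI utilization

    for server_mbps in (42, 30, 20):
        fit = FDDI_USABLE_MBPS // server_mbps
        print("%d Mbit/s servers: %d per FDDI attachment"
              % (server_mbps, fit))
    # -> 42 Mbit/s: 2 servers; 30 Mbit/s: 3; 20 Mbit/s: 4.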
Don't choke your LAN even further with not-yet-fully-tested equipment, go for tested and approved products and standards, especially in the backbone!
In the past, it was simple: either you followed IBM with Token-Ring, or DEC and the Unix world with Ethernet. When those standards were created, the manufacturers controlled the market. Today, a large number of smaller companies are entering the market, and each wants a piece for itself; thus all the new methods. These methods were created to confuse the customer into believing each vendor's story. By dividing up the older, simple world of Ethernet or Token-Ring, these smaller vendors stood a chance of gaining a bigger portion of the previously tightly woven market.
The only problem is that vendors don't control the markets anymore the way they used to. Market control is decided by the end-users, as was the case in the video market with the Sony "BetaMovie" vs the "VHS Group": the end-users, not the vendors, chose the winner. This time around too, the end-user is the one who controls the market, so I hope that all of you choose something that you won't regret in the future. Your network systems depend on your choice of equipment. Unless you are prepared for the risks involved in beta-site (test-bed) systems, I would think twice before buying into the new emerging technologies without first understanding them for what they fully are.
Some of the technology released on the market today has quite a lot of future potential, but as of today, it is not yet fully proven potential! Can you afford to risk your network's livelihood on unapproved equipment/standards? Can you safely take responsibility for choosing a technology that may turn out not to work properly? What kind of setbacks could it cause you and your company should a major flaw crop up, such as buffer overflows on the backbone choking your network to a grinding halt, or some other as-yet-undiscovered problem?
Shared media is not as bad as it is made out to be. In fact, most simple-task users (word-processing users, small- to medium-sized spreadsheet users, medium-sized database users, etc.) don't require the full 10Mbit/s that LPP offers, making it a relatively expensive solution. You can usually place at least 10 shared users per 10Mbit/s port with an average 5~6 Mbit/s of throughput. Of course, depending on each user's average volume of data, that figure might be either overkill or underkill. When slow-downs do occur, monitor whether they are relatively constant and quite frequent, or just temporary, time-to-time bursts of data that occasionally occur.
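A back-of-the-envelope version of that sizing rule (the per-user demand figure is an illustrative assumption; measure your own users before trusting it):

    PORT_MBPS     = 10         # one shared 10 Mbit/s port
    TARGET_MBPS   = 5.5        # aim for the 5~6 Mbit/s average above
    PER_USER_MBPS = 0.5        # assumed average simple-task user

    users = int(TARGET_MBPS / PER_USER_MBPS)
    print("about %d shared users per port" % users)   # -> about 11
    # Heavier users (compiles, re-indexing) blow this average apart,
    # which is why the user types described earlier matter.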
Placement of current switching technology is not just plug-and-play; you MUST take into account the averages and peaks of traffic relative to the buffers built into the proposed equipment. Manual flow monitoring from time to time is required. Even if measured once, users' needs change over time (the addition of a new application, a new user, a newer faster NIC, a newer faster PC, etc.) in ways that can change the already-measured averages and peaks. Equipment alone will not make a good LAN; good design is the best preventative medicine for a LAN. Creating a flexible structure that is easily expandable within your current framework, and that will offer you painless growth scalability without a lot of extra cost, is the most important thing you could invest your money in today. The equipment that you need will reside within that resilient structural design!
I hope this article has helped to put into perspective the importance of standards and also to help those of you out there who want to choose what is best for your system, not just because of what some vendor offers as their best solution. Especially in today's ever changing market, with all the new protocols and new technologies coming forth, somebody needs to step in and clarify the situation.
Remember, YOU help control the markets today, so make sure that you make the right decision!!!