The Drawbacks of ATM in Today's Corporate Networks
Date: Nov 17, 1998
Written by: Walter Benton
Several years ago, when networks were first able to start using ATM, a mini boom occurred. Since then, switched 100Mbps Ethernet and Gigabit Ethernet have taken the front row in high-speed networks. In the meantime, ATM was more or less relegated to the back burner, where those who had already invested worked long and hard to get their investments into some workable order, albeit not exactly in the way they had expected.
A lot of corporations that originally invested in ATM wished they hadn't after installing it, as they found incompatibilities among manufacturers, proprietary methods, bugs galore and, worst of all, throughput far lower than they had ever expected.
If you look into the basics of ATM and its merits, you'll find three (3) major points which were supposed to show that ATM is superior to all other protocols:
1. The first protocol with unlimited bandwidth expandability.
2. A single protocol for both LAN and WAN.
3. The capability to transmit voice, video and data simultaneously.
These three (3) major points were supposed to catapult ATM over all other protocols on the market back towards the end of 1995, but here we are three years later, and although ATM is being used more and more, it isn't being used in the way most people expected it to be.
To help understand why, let's look at these three major points to see how they really stand up to their originally planned uses.
1. The first protocol with unlimited bandwidth expandability.
When ATM was first released, there were numerous expensive 25Mbps systems and some 155Mbps systems, and there was also talk of a 51Mbps SONET system. These systems were all limited in capability and suffered from ILMI bugs and other problems, not to mention performance issues galore. Don't forget that the final specifications for ATM traffic management had not yet been released, as two groups were pushing in opposite directions: one for credit-based management, the other for rate-based management. The two approaches were entirely different, and a consensus was not easily found at the very beginning. Equipment built for one was not compatible with the other, and the sniffers used to help find the problems were outrageously priced even with their limited capability. Problems flourished, and only a small handful of specialists were able to offer any help whatsoever.
Still, many major corporate investors decided that this was the way to go. Media hype swelled, topping the announcement levels of previous protocols. All the major manufacturers saw this as a way to draw quick future income, and thus the ATM boom was born.
Recap of the Era
As previously mentioned, when you outgrew your 10Mbps segment, you had to divide it into several segments and tie them all together at the backbone, but tying ten (10) 10Mbps segments to a 10Mbps backbone just wouldn't cut it. FDDI was outrageously expensive at the time, and only those who had no other choice invested in it. Those who couldn't wait started their 100Mbps backbone structures with only limited capital, and the FDDI market started to grow. Its growth was expected to skyrocket, but one more thing happened during 1995: the debut of switching hubs brought a new way of configuring networks at only a fraction of the cost, boasting speed increases that could out-perform FDDI hands down.
The only problem was that FDDI didn't drop packets, FDDI allowed approximately 95% of its total bandwidth to be used by user traffic (low maintenance overhead), and FDDI had a built-in backup feature called DAS (Dual Attached Station) which automatically kicked in when any one of the FDDI bridges/routers went down. Very few of these features are found on switching hubs even today. To find out what kinds of havoc these shortcomings can cause, please refer to another "Drawback Series" article of mine titled "The Drawbacks of Backbone Switching Hubs".
While this was a problem at the time, Ethernet has since leapt from 100Mbps into the Gigabit zone, essentially making the problems explained in "The Drawbacks of Backbone Switching Hubs" disappear into the woodwork. Notice that I say "disappeared" and not "solved". Basically put, as long as there is ample bandwidth remaining in a switching hub, the problems mentioned in that article won't surface. Thus, as long as you can keep ahead of the game and ensure that you always have much more bandwidth than you'll ever need (including peaks), switching hubs will work in the backbone; however, whenever that bandwidth fills up, the exact same problems will reappear regardless of whether you are using 100Mbps, 1Gbps, 1Tbps, etc.
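In other words, whether a switched backbone holds up reduces to a simple headroom check: does the summed peak demand of the attached segments stay comfortably under backbone capacity? Here is a minimal sketch of that check (illustrative Python; the 0.8 safety margin and the example numbers are my own placeholders, not figures from any vendor):

```python
def backbone_has_headroom(segment_peaks_mbps, backbone_mbps, margin=0.8):
    """True while the summed peak demand of all attached segments stays
    under a chosen fraction of backbone capacity; once it doesn't, the
    switching-hub drawbacks return no matter how fast the backbone is."""
    return sum(segment_peaks_mbps) <= backbone_mbps * margin

# Ten segments peaking at 10 Mbps against a 100 Mbps backbone: no headroom left.
print(backbone_has_headroom([10] * 10, backbone_mbps=100))    # False
# The same load against a Gigabit backbone: comfortable, for now.
print(backbone_has_headroom([10] * 10, backbone_mbps=1000))   # True
```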
But back to the point I was getting at: "scalable bandwidth" was supposed to be the main feature. This would solve the problem of outgrowing OC3@155Mbps, in that you could keep the ATM protocol and, with only a few additions, bump the backbone speed up to OC12@622Mbps. Nice and easy as it sounded, the specs for OC3 had only just been finalized, while OC12 was still a figment of our imagination that was supposed to be available when we needed it.
Let's look at what this allows us. If you have 100 PCs with ATM NICs installed (25Mbps or 155Mbps) and each one comes up requesting a minimum of 10Mbps (what you would get out of a LAN-per-port switch), the 16th user would already push the aggregate to 160Mbps, over the 155Mbps limit, and that doesn't even take into account the overhead required to set up and tear down the virtual circuits, nor the overhead required for management, which was about 25Mbps at the time (in its frail beginning status).
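To put rough numbers on it, here is a back-of-the-envelope sketch (illustrative Python) using the figures quoted above: a 155Mbps OC-3 line rate, a dedicated 10Mbps per LAN-per-port user, and roughly 25Mbps of early management overhead.

```python
# Back-of-the-envelope check of the "16th user" problem described above.
OC3_MBPS = 155        # nominal OC-3 line rate
PER_USER_MBPS = 10    # dedicated "LAN-per-port" allocation per PC
MGMT_MBPS = 25        # early ATM management overhead figure quoted in the article

# Without counting any overhead at all, 16 users already overshoot the pipe.
print(16 * PER_USER_MBPS > OC3_MBPS)        # True: 160 Mbps > 155 Mbps

# Reserving the quoted management overhead leaves room for only 13 users.
usable = OC3_MBPS - MGMT_MBPS               # 130 Mbps left for user traffic
print(usable // PER_USER_MBPS)              # 13
```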
Thus, if your network has fewer than 16 users, fine; if your network is all ATM, fine. But what about your hundreds of Ethernet users, Ethernet switches, Ethernet routers, etc.? These devices don't speak ATM, so an ILMI server must be set up to chop Ethernet packets into ATM cells and give them ATM addresses. In the beginning this overhead was humongous, because the specs were barely out and nobody had a hardware solution; with the specifications only half complete, the risk of imprinting them onto circuit boards was just too great. The end result of performing this translation in software, against incomplete specifications, was that performance ranged anywhere between 12Mbps and 75Mbps. Either way, it was not a constant speed, and it didn't perform anywhere near the 155Mbps bandwidth it was supposed to be capable of.
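The chopping itself is not free, either. The sketch below (illustrative Python, assuming standard AAL5 framing: an 8-byte trailer, padding to a whole number of 48-byte cell payloads, and a 5-byte header on every 53-byte cell) shows the per-frame "cell tax" any software translation layer had to pay on top of its own processing cost:

```python
import math

CELL_TOTAL = 53        # one ATM cell: 5-byte header + 48-byte payload
CELL_PAYLOAD = 48
AAL5_TRAILER = 8       # AAL5 trailer appended to every frame before slicing

def cell_tax(frame_bytes: int) -> float:
    """Fraction of the bytes on the wire that is ATM/AAL5 overhead
    when a single Ethernet frame is segmented into cells."""
    cpcs_pdu = frame_bytes + AAL5_TRAILER
    cells = math.ceil(cpcs_pdu / CELL_PAYLOAD)   # pad up to a whole number of cells
    return 1 - frame_bytes / (cells * CELL_TOTAL)

for size in (64, 576, 1500):                     # common Ethernet frame sizes
    print(f"{size:5d}-byte frame -> {cell_tax(size):.1%} overhead")
```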
After that, numerous ILMI modifications, specification changes and additions, and the basic traffic-management specifications allowed PVCs (Permanent Virtual Circuits) to be established, but SVCs (Switched Virtual Circuits) were still unheard of because the signalling protocol had yet to be finalized.
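The practical difference: a PVC is nailed up by hand, with the administrator provisioning a fixed path end to end, while an SVC is built and torn down on demand through signalling. A toy sketch of what "provisioned by hand" means (illustrative Python, not any vendor's configuration syntax):

```python
# Toy illustration of the PVC/SVC difference. A PVC lives in a static table
# that an administrator provisions by hand; an SVC would need working
# signalling to create an equivalent entry on demand.
pvc_table = {
    # (in_port, VPI, VCI) -> (out_port, VPI, VCI)
    (1, 0, 100): (4, 0, 200),      # hand-provisioned permanent circuit
}

def forward(in_port, vpi, vci):
    """Look up the circuit a cell belongs to; with only PVCs available,
    there is no way to build a missing circuit on the fly."""
    circuit = pvc_table.get((in_port, vpi, vci))
    if circuit is None:
        raise LookupError("no circuit provisioned and no signalling to set one up")
    return circuit

print(forward(1, 0, 100))          # (4, 0, 200)
```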
Then came more and more traffic management, in various forms of packet shaping and packet shaving techniques, which again caused incompatibility issues until the specifications were more complete.
Finally, about a year and a half later, a corporate-level, usable form of ATM was within reach. After an agonizing year and a half of low performance and vendor incompatibility problems experienced by all those who jumped the gun, OC3@155Mbps ATM was finally usable (albeit still in limited capacity) in corporate networks.
Even so, 15 LAN-per-port PCs with a dedicated 10Mbps rate just couldn't beat 15 PCs attached to a 16-port 10Mbps switching hub with a 100Mbps Fast Ethernet backbone, because Fast Ethernet just didn't require the overhead that ATM did and it was much cheaper. Thus the boom in the 100Mbps Fast Ethernet market, and a setback in sales for all ATM vendors.
The planned OC12@622Mbps ATM protocol was still in final testing when, all of a sudden, a new specification for Gigabit Ethernet came out. Again, Gigabit Ethernet was chosen over OC12 because Ethernet was a known quantity whereas ATM was an unknown. Another advance for the hub manufacturers and another setback for ATM.
By the time OC12@622Mbps was finally marketable in usable form, the majority of buyers were using it to replace their bandwidth-starved (and over-priced) OC3@155Mbps backbones.
3. The capability to transmit voice, video and data simultaneously.
a. CBR: Constant Bit Rate
b. VBR: Variable Bit Rate
c. ABR: Available Bit Rate
d. UBR: Unspecified Bit Rate
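These four service categories are what allow voice, video and data to share one wire. As a rough illustration of how they tend to be assigned, here is a common mapping of traffic types to categories (my own summary sketch in Python, not a quote from the ATM Forum specifications):

```python
# Illustrative mapping of traffic types to the four ATM service categories
# listed above. The assignments are a common textbook summary, not a spec quote.
SERVICE_CATEGORY = {
    "voice (fixed-rate PCM)":     "CBR",  # steady cell stream, tight delay and jitter bounds
    "compressed video":           "VBR",  # bursty, but with a declared sustained rate
    "bulk file transfer":         "ABR",  # adapts to whatever rate the network feeds back
    "ordinary LAN data, e-mail":  "UBR",  # best effort, no guarantees at all
}

for traffic, category in SERVICE_CATEGORY.items():
    print(f"{traffic:27s} -> {category}")
```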