TRUE STORY: Back in the 90s I was on a team involved in converting an aging IBM Token Ring network to all Ethernet at a large company. When presenting the investment case to leadership, the Corp IT Director said the following to justify the conversion: "We have way too much traffic on the existing token ring network, and congestion is causing some of the tokens to FLY OFF THE RING DUE TO CENTRIFUGAL FORCE!". Of course an insanely laughable statement, but the leadership folks just nodded and moved forward to approve the large investment. This absolutely happened.
I remember a Microsoft book from when I was studying for my network certificate: it showed a picture of a thinnet BNC T connector with the caption "tokens must turn right to go around the ring counter-clockwise"... or something like that... Like electrons know how to follow an arbitrary convention... Whoever wrote that knows nothing of token ring, or electricity... I love Dilbert as much as Far Side... funny and sad cuz the stupid is real!!
Not forgetting that when assembly line robots were first deployed (like in automobile factories) they all used token ring, because it guaranteed when the data would arrive at the robot. That was important, because if the data was delayed because the network was choked, the car chassis could get crushed, or the robot damaged. Apparently, token ring was unchoke-able. Almost all supermarkets used token ring too, because of the bar code scanners. Again, guaranteed data round trip time, and unchoke-able.
@@RetroBytesUK And now replaced by EtherCAT with the same guarantees. Also, Token Ring doesn't guarantee that any arbitrary amount of data would always arrive, obviously. So you still had to carefully estimate the amount of data your net would have to handle. The big guarantee was just that there was a deterministic time frame in which you had the chance to send your data, whereas on a non-switched Ethernet, due to the nondeterministic nature of CSMA/CD, some nodes could starve the rest of the net of the ability to send anything.
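That determinism is easy to put into rough numbers. A minimal sketch of the worst-case media-access delay on a token ring; the station count, maximum frame size and per-station repeat delay below are illustrative assumptions, not figures from this thread:

```python
# Worst-case wait for the token: every other station sends a
# maximum-size frame before the token returns to you. The bound is
# fixed, unlike CSMA/CD under heavy load. All figures are assumptions.

STATIONS = 50
RING_SPEED_BPS = 16_000_000     # 16 Mb/s Token Ring
MAX_FRAME_BITS = 4500 * 8       # assumed max frame for the example
STATION_DELAY_S = 2.5e-6        # assumed per-station repeat latency

frame_time = MAX_FRAME_BITS / RING_SPEED_BPS
worst_case_wait = STATIONS * (frame_time + STATION_DELAY_S)

print(f"worst-case media-access delay ~ {worst_case_wait * 1000:.1f} ms")
```

However loaded the ring gets, that figure is a hard ceiling; on a shared Ethernet segment the equivalent number is only probabilistic.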
@@kubiedubie IEEE 1588 is a different use case. It might exist in a network that for example does motion control, so that multiple drives can act synchronously.
Token ring was very expensive and was not without its troubles. The cabling was thick and heavy and would pull the plugs out of the patch panel, or the connector would fall apart, plus if by accident you had two machines with the same address it would cause havoc. We had a situation where a machine on another floor was rarely used and had the same address as another on the network. No problems if they were both on the network at different times, but if they happened to be on the network at the same time the network crashed. It took some while to sort out. It was the expense that killed it off, as you could get many more Ethernet machines networked than you could token ring for the same money.
Yup! I just found the channel and I’m just shocked YouTube failed to recommend a single video until just a few months ago! I finally was lucky enough to stumble across a video while scrolling through some of the videos above the comments on mobile while I was watching another amazing channel, Adrian’s Digital Basement. Both channels are massively underrated!
@@RetroBytesUK love Adrian as well, and everyone seems to really speak highly of Adrian, including other YouTubers like yourself, and his viewers/supporters. You also seem to be right up there with Adrian for being an awesome human! So keep crushing it, and I look forward to seeing what you upload next!
Working IT in a hospital, my friend and I once spent about 3 hours toner probing to find a jack from a side office. After many many closets, we found a storage room with what we later learned was a token ring patch panel from many many years ago! Our most senior IT person on site got a big smile on his face when we brought one of the RJ45 to Type 1 plugs to him!
I did IT in a bank and was there for around 12 years. We had a token ring "hub" in our data center and I always looked at it as I walked by. Well, around 2018 we were going around and cleaning up out-of-service equipment. I decided to just power off that piece of equipment without authorization because I knew it wasn't used and it was the only active thing left in the middle of a rack. Fast forward just an hour later: I'm back at my desk and there is a trouble call in the queue from an external client to call Joe at whatever bank... A token ring network was down and they needed investigation! I went and powered it back on and called the guy up to say it should be good. He shared a laugh with me at my disbelief that it was still in use (it was there as part of a backup emergency connection).
I heard tell of an episode at Georgia Tech where a device was showing up on the network that nobody recognized. Took forEVER to track it down, but they eventually found a closet that had somehow been renovated out of existence (something like two different crews closed off two doors, neither crew noticing that there was now NO door at ALL?!?!?! Ah, higher academentia), that still had a Netware server chugging away in it?!?!? Might be apocryphal, but it's a great story nonetheless. :D
@@mrz80 the original "Netware server runs for years after being walled in" story appeared in April 2001, and it was at the University of North Carolina. I believe that original story was genuine, but over the years since it's been embellished and attributed to various organisations. If you hear an older IT worker refer jokingly to "server 54", it's a callback to that story.
At the college I worked at we initially used token ring - our (idiot) boss was a big IBM fan. The MAU boxes were not intelligent - they were just a bunch of latching relays that would insert a node onto the ring when the PC TR adapter initialised. The only good thing was they were passive and didn't have a power supply. Problem was it wouldn't always detach from the ring and bypass the socket in some circumstances. The MAUs came with a small plug-in device with a PP3 battery inside, which you could plug into each port in turn to reset the port's relay. ISTR the cable was horribly stiff solid-core stuff, which was terminated into the weird square plugs with IDC contacts, piercing the cable insulation when assembled. I think the plugs were hermaphroditic - they would plug into each other. I recall getting some of the Madge cards, which were a lot cheaper than IBM, but it wasn't long before it got replaced with Novell NE2000s and later, NE2000 clones (one of the more dubious of which set the MAC address via DIP switches, repeating the same 8 switch bits across multiple bytes!)
400Gbps currently being deployed and 800Gbps being standardised with router line cards already supporting it. I remember building 64kbps frame-relay circuits for customers (or 128k for the posh ones!) what seemed like not all that long ago. I feel old :(
I remember when the county I used to live in had a T1 for the entirety of the county... home cable is 10x faster now! No more driving to the next county to park my laptop at a friend's house for a 3-day download of a CD; now it takes 15 seconds.
I don't know anyone who has got their hands on 400GbE kit yet, although someone else in the comments mentioned Arista has started shipping switches now. I'm quite surprised they have been able to go from announcing to shipping so quickly, given everything that's going on with the supply chain. I had assumed Q1 next year at the earliest.
@@EVPaddy I saved a bill for nostalgia: over 5k for a T1... A three-hop (17 mile total) wireless link just to get to the endpoint... Never took off, went broke fast... But in the end I just had 4 of my own dialup lines bonded together for backhaul for myself and one wireless customer, outsourced all but a local # to a virtual ISP service and even used their RADIUS servers for my local... Held on at breakeven for a few years like that... Interest didn't start to pick up till the local telco upgraded and they got DSL; my wireless was better than their DSL... believe it or not. Then cable and a fiber provider came through so I gave up... My main customers were AOL users that switched to their bring-your-own-ISP plan, as AOL didn't have a local number. For wireless I was using DEC, later Lucent, 915MHz equipment (it was before 802.11b); 2.4 gig wouldn't ever have worked around here... When cable internet came there was no way to compete, and there was no such thing as long distance phone service anymore either...
@@EVPaddy what bothered me the most was that customer interest picked up because of DSL, a commercial that just said to get it: just a black screen and the letters flew across, never said what it was or anything, just said to get it, and people did... I gave people too much credit... I went to a business plan competition while seeking funding; the winners' plans read like a perfume ad, you couldn't tell what they were selling. I should have known... If I'd put the same money in the stock market I'd be a millionaire now instead of having $20 and a few bits of legacy equipment that didn't make it to the scrap metal yard with the 700kg that did... (I didn't even get to scrap my own towers, I never found who did)... I think my old dialup POP is still in the crawlspace of a foreclosed house...
Years before token ring, there was another ring technology introduced in the mid-70s, the Cambridge Ring. Developed at the University of Cambridge, this became widely used in UK higher education, but I don't think it had much commercial success outside this sector. It was an example of a "slotted ring": there was a fixed number of packets continually circulating around the ring, put there by a master station, and each packet had one bit in the header which flagged whether this "slot" was occupied or not. A station which wanted to transmit waited until it received a packet whose flag indicated that it was empty, then stuck its payload in the packet, flipped that flag, and passed the packet on down the ring. The only two vendors I can remember were Camtec and SEEL. At 1:18 you mention "one thin piece of coax" used for ethernet. We used to call this "thinwire ethernet", formally "10BASE2", but it wasn't ethernet's first form. Before 10BASE2 there was 10BASE5, and boy was that horrible! The expensive coax cable was nearly 10 mm in diameter, and difficult to bend. Also, you had to be careful not to bend it too tightly, because that would cause signal reflections, killing performance. To attach to it, you literally drilled into the cable and attached a "vampire tap", which had a spike which reached the solid core. It was dreadful stuff to work with, so the arrival of thinwire ethernet was very welcome, even though anyone could break a thinwire network by disconnecting the cable from the T-piece at the back of their workstation.
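The slotted-ring scheme described above can be modelled in a few lines. A toy sketch with invented names; it only captures the occupied-flag mechanics, not the real Cambridge Ring framing:

```python
from collections import deque

class Slot:
    """One fixed slot circulating on the ring."""
    def __init__(self):
        self.full = False
        self.payload = None

def send(ring, payload):
    """Let slots pass by until an empty one arrives, then claim it
    by filling in the payload and flipping the occupied flag."""
    for _ in range(len(ring)):
        slot = ring[0]
        ring.rotate(-1)           # the slot moves on downstream either way
        if not slot.full:
            slot.full = True
            slot.payload = payload
            return True
    return False                  # every slot was occupied this revolution

ring = deque(Slot() for _ in range(4))   # master station created 4 slots
send(ring, b"hello")
```

Once all four slots are claimed, `send` returns `False` for a whole revolution, which is the slotted ring's version of "the network is busy".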
I talked to a few people who were involved in the Cambridge Ring when I was working on my video about Econet. I must admit I did not mention 10BASE5, as it was a video on token ring and it was already feeling a little ethernet heavy; also there are not many people who ever experienced thick coax. The only time I ever did was ripping a section of it out to replace it with fiber.
The advantage of that IBM Token Ring connector was that it would short across if it was unplugged so that you didn't lose the entire ring network if you didn't have both ends connected at every network location. Without that connector if somebody removed a computer and didn't connect the in and out ports the ring was broken and everybody lost the network. One other advantage the Ethernet gained that helped it was the ability to use the thin "cheapernet" cabling. Originally you had to use a thick cable with a tapped drop line for each network location.
The other nice thing was that all the connectors were the same - so if you wanted a long Type 1 token ring cable you could just plug two shorter ones together rather than having to use special adapters. Pity they were about the size of a small planet.
I don't think it was the connector doing that. Making sure the ring was complete as stations were added or removed was what the MAU was for. When you turned the PC on and once the TR card drivers were loaded then the TR card would send a voltage to the MAU that triggered it to switch some relay contacts and add the new station to the ring. When the PC got turned off, or was disconnected, then the voltage on the MAU port disappeared so the MAU's relay contacts reset to bypass the now disused port.
@@jrstf Very temporarily, yes. There were mechanisms in Token Ring to detect this kind of issue and to automatically recover but it did take a finite amount of time. It usually wasn't a problem though.
@@1anwrang13r Yeah - there was also a little accessory ("Setup aid", IIRC) that you plugged into the MAU ports to check if the relays were operating correctly.
50-100 plus coax ethernets worked surprisingly well at scene parties. Sure, the sparks when hooking up were spectacular, but it still managed to move warez all right!
I loved setting up those old BNC coax cable networks at LAN parties. But you really had to trust the guys at both ends of the cable not to be sore losers and twist the terminators off to ruin the game for everyone else.
@@pfefferle74 I HATED 10Base-2. The amount of time I spent crawling around on the floor of computer labs trying to figure out which of the BNCs was causing issues because a student had kicked it under the table... it was a bloody nightmare.
@@pfefferle74 had that happen while playing quake back in the 90s. A friend pulled the terminator off his end of the cable at the other side of the house. Took us about 30 minutes to figure out what had gone wrong.
Having installed Novell Netware LANs on Token-Ring, Ethernet and Arcnet back in the day, here is what we used to say about Token-Ring... "One ring to rule them all, one ring to find them, One ring to bring them all, and in the darkness bind them; In the Land of IBM where the Mainframes lie."
@@TH3C001 Funny, I worked at Mod-Tap at the time and we used to make application notes with a full list of the applications and parts to put Token Ring over unshielded twisted pair cabling; we had the Type-1A connector to UTP, the DB-9 to UTP, etc. On the cover of that application note was a cartoon with almost that statement. You brought me back to a time when we smoked in the office.
My first job here at the University of Football was doing workstation support. Our office had mostly IBM PCs and PS/2s (Ah, Microchannel, we hardly knew ye!). DOS 6.22 was the joy of my existence for the flexibility it gave in stuffing device drivers into odd nooks and crannies of unused RAM. My finest hour was getting a token ring-attached PC to talk TCP/IP, Netware, and AS/400 PC Support (basically SNA with some extra tweaks) all running at the same time on a PC running DOS 6.22 and Windows for Workgroups 3.11. devicehigh and loadhigh are your FRIENDS! :D
@@popquizzz I understand literally zero parts of that comment having only been born in '94 lol, but I'm glad to have been clever enough to have brought that memory back for you.
Typically complex and expensive IBM technology. Their research labs came up with many interesting ideas, but very few of these seemed to make it into their products.
I think you oversimplified the CSMA/CD collision detection slightly on the coaxial cable. It's such a crazy simple and interesting design that it deserves some appreciation, I think. It's not about sensing that what's on the cable isn't the data you sent; it's much more interesting than that. 10Base ethernet signal is Manchester coded, so it's very clearly an AC signal, but when there's another transmitter on the same wire, the interference causes drift in the DC component and the receiver detects this out-of-normal voltage (yes, voltage) on the bus. This also sets the minimum ethernet packet length: each ethernet packet transmitted on a 10Base5 cable must be long enough that it occupies twice the whole length of the wire when transmitted from the far end. This ensures that every node on the bus sees the collision and there's no ambiguity where some nodes would consider that a packet was sent correctly and others would consider it a collision. This was later reinforced by requiring each node to signal a collision on the bus when it sees one. This guarantees that the packet is crapified everywhere and a retransmission is then expected and attempted. CSMA/CD ethernet is a fascinating system - no acknowledgements, no arbitration, no flow control, just conditioning of electrical waveforms. So very 1970s.
"This also sets the minimum ethernet packet length: each ethernet packet transmitted on a 10Base5 cable must be long enough that it occupies twice the whole length of the wire when transmitted from the far end." That was not just done for collision detection but also for detection of reflections -- it ensures that a reflected packet results in a garbled packet and thus a collision being detected. If the packet were shorter, it would be transmitted and received successfully, and then the reflection would be received a second time, or repeated even more times.
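That round-trip requirement is exactly where Ethernet's 64-byte minimum frame comes from, and it's a quick back-of-envelope check. A sketch using the classic 10 Mb/s figures (2500 m maximum network diameter, propagation at roughly 0.66c in coax); the standard's 512-bit slot time also budgets for repeater and transceiver delays on top of the cable:

```python
BIT_RATE = 10_000_000       # 10 Mb/s
DIAMETER_M = 2500           # max network span with repeaters
PROP_SPEED_MPS = 2.0e8      # roughly 0.66c in coax

# Pure cable propagation, there and back again.
round_trip_s = 2 * DIAMETER_M / PROP_SPEED_MPS     # 25 microseconds
round_trip_bits = round_trip_s * BIT_RATE          # 250 bit times

# The standard rounds the whole budget (cable + repeaters + margin)
# up to a 512-bit slot time, i.e. the 64-byte minimum frame.
min_frame_bytes = 512 // 8

print(f"{round_trip_bits:.0f} bit times on the cable alone; "
      f"minimum frame = {min_frame_bytes} bytes")
```

A transmitter still sending its 512th bit when a collision (or reflection) makes it back is guaranteed to notice it.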
@@Madmark50484 You mean Alohanet. But Alohanet is not CSMA/CD. In fact it has nothing of it, Alohanet nodes just transmitted data whenever they wanted. But experiences with Alohanet showed that you get terrible channel capacity with just dumb shouting, giving impetus to developing carrier sense first to get CSMA (Carrier Sense Multiple Access) and then collision detection to get CSMA/CD and that's Ethernet then.
@@nmosfet5797 may be a little memory fade in action. It was explained to me as the communication method for the Aloha Islands in my A+ class of 2006. I’m going to check the books that accompanied the class, but I appreciate the explanation.
I didn’t know about the DC factor in interference detection. I thought it was just a matter of listening to your own signal, and if what you heard was not what you were sending, then that meant somebody else was interfering. Apple invented an even cheaper system, called “LocalTalk”, which was classified as “CSMA/CA”. Instead of detecting collisions, it simply transmitted and hoped for the best. The “CA” (“Collision Avoidance”) part was in proactively waiting some random time after you heard someone else transmit (rather than diving in immediately as with CSMA/CD), to try to minimize the chance of a collision.
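That proactive random deferral can be sketched in a couple of lines; the slot duration and slot count here are invented for illustration, not LocalTalk's real timings:

```python
import random

def ca_deferral_us(slot_us=100, max_slots=8):
    """CSMA/CA style: after hearing the medium go idle, wait a random
    number of slots before transmitting, rather than diving in
    immediately the way CSMA/CD does."""
    return random.randrange(max_slots) * slot_us

# Two stations that happen to draw different slot numbers
# won't collide at all; equal draws still collide, so this only
# reduces the probability rather than detecting anything.
a = ca_deferral_us()
b = ca_deferral_us()
print("station deferrals (us):", a, b)
```

The trade-off is that without detection, a collision is only discovered by a higher layer noticing the missing reply, which is why CA suited a cheap, short, lightly loaded bus like LocalTalk.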
I remember the thick yellow hosepipe cable and vampire taps, a whole computer room running on 10Mbit - and another technology which ethernet also destroyed, 100Mbit FDDI (which was also a ring, actually a double ring with a redundant reverse ring allowing it to use foldback if the line was broken).
@@SomeMorganSomewhere I remember when we renovated the old Physics building at UF in the early 2000s and found a couple of labs that were still on the building's ancient original thicknet. Pulled about a quarter mile of RG-8 out of the ceilings, with a slew of vampire taps clinging on for dear life. I think that was the last hurrah of coax ethernet on campus..
@@mrz80 Thankfully they ripped it out before I got there but where I work now I have tons of spare coax networking equipment. I actually have enough supplies to boot up a network assuming I could find PCs that have ISA interfaces lol. I have hundreds (!) of terminators I've just collected from random places I've been (fixing cabling/etc).
I was in hospital during a power failure. Bedside TV and phones shut down. When power returned, every station tried to network reboot simultaneously, which took more than an hour.
Nice video and good memories! Among the first enterprise network devices I was honoured to work on were Xylan OmniSwitch or OmniSwitch/Router which had a "any-to-any switching" capability. You could have Ethernet, Token Ring, FDDI or even ATM/FrameRelay in such devices (each on their own line cards, of course). IBM became a reseller of Xylan devices and even branded them, probably up to the point when Alcatel purchased Xylan (along with Packet Engines a few months before) to form the Network Business Division that still lives on today in Alcatel-Lucent Enterprise (which I'm an employee of).
The OmniSwitch was a very nice bit of kit back then. My only use of FDDI was as a MAN technology in the mid 90s. Yorkshire Cable used it to deliver data to their cabinets. We had the equivalent of one of their cabinets in a rack in our office, bringing in our leased lines and ISDN PRI lines.
@@Routerninja after the merger of Alcatel and Lucent there was a lot of duplicate (or legacy/obsolete) technology around. Lucent brought also Bell Labs to the "merger of equals" (Alcatel and Lucent) with a lot of history and significance. Alcatel brought another big player, called Timetra, which is also called "IPD". Alcatel-Lucent (including Bell Labs) later got integrated with Nokia. You can still find many of the products/solutions today in Nokia, obviously upgraded and often leading the market (I have to mention Nokia IPDs FlexPath 5 (FP5) which is a very impressive Networking chipset). The previously mentioned OmniSwitch product lines have been updated under the brand of Alcatel-Lucent Enterprise and range from Access to Core network including Wireless, but are exclusively addressing Ethernet today. :)
I was not aware that Lucent still exists. Anyway, good for them. I designed a regional HQ back in the day, so still have fond memories of them. Just realised BITD = 24 years ago 😵💫
@@Routerninja Oh wow.. Cascade is a name I haven't heard in a long while! I feel like all of us commenting on this video are old people who should retiring and yelling "get off my yard" by now.. :-)
Unless I missed it, there is one important note about the "Type-1" bulky connectors that is worth mentioning. Most of their design was around the fact that because the "loop" had to be maintained, the jacks physically shorted the IN to the OUT when the cable was unplugged. This meant that when you pulled the cable from the wall, the wall jack would just passively let the token go through to the next node. One of the worst things you could do was to unplug the DB9 from the back of the PC, because the token ring would be severed at that point. This was quickly fixed once we started using token ring switches, but for a while this could be a problem, if my memory serves.
At some point one of the offices I worked for bought a couple of IBM 8250 concentrators to replace a half a rack full of 8228 MAUs. The 8250 was a pretty neat bit of kit, while it lasted. :)
I worked for IBM from the mid-80s in the team that installed and maintained the internal IBM networks in Europe. IBM had sites with 1000+ nodes on token ring, mostly bridged networks; some of these rings were huge, cabinets full of MAUs clicking like crazy! We went on to install ATM backbone routers with TR and Ethernet cards, and intelligent token ring hubs with 100Mb uplinks, before swapping the TR hubs out for Ethernet and using Ethernet/TR bridges to keep the old TR devices alive for many years, into the 2000s. That MAU/ATM/TR/Ethernet upgrade program took me all over Europe.
It's a good thing Ethernet "won" that non-existent race, because switched networks make CSMA/CD completely unnecessary, while Token Ring would have been quite the liability in the long run. Frame queues are just a lot more efficient at pushing through data, and even allow for QoS considerations.
You could switch, and queue, Token Ring just as easily as you could switch Ethernet. Each port on the Token Ring switch was effectively its own ring. One benefit of Token Ring over Ethernet was that the maximum frame size was 16KB compared to Ethernet's 1.5KB.
@@1anwrang13r Ethernet is arguably still easier to parse and forward. You're right that it wouldn't have been a problem if Token Ring was also switched. Frame size I agree, and still not there with all equipment supporting jumbo frames. Although the small frame size might have made switches more affordable. In a typical application that isn't 10G+ though, you'll see that average frame size is well below 1.5K either way, and I rather have switches optimized for small frame throughput.
@@graealex Growth in IPv6 rollout will be the driver of larger frames sizes. As almost all exchanges have agreed a minimum frame size of 9k for IPv6. The primary driver being larger frames means less overhead per byte moved. Which at the ISP and exchange level is fairly important.
@@RetroBytesUK Again, this is only relevant for max throughput. But average packet size remains small, and the way modern webpages are structured, there is a tendency for even more and smaller packets, instead of a stream of very large ones. Not saying that agreeing to a larger guaranteed MTU is a bad thing, in this day and age. Oh, and Internet routers supporting a minimum MTU is different from Ethernet equipment supporting certain frame sizes, although they are obviously related. Internet exchanges don't necessarily agree on "frame" size, but packet sizes. Although in many cases, Ethernet is used at exchanges.
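The overhead-per-byte point in this sub-thread is easy to quantify. A sketch using the standard Ethernet per-frame cost: 18 bytes of header plus FCS, and 20 byte-times of preamble and inter-frame gap on the wire:

```python
PER_FRAME_OVERHEAD = 18 + 20     # header+FCS, plus preamble and IFG

def wire_efficiency(payload_bytes):
    """Fraction of wire time spent carrying actual payload."""
    return payload_bytes / (payload_bytes + PER_FRAME_OVERHEAD)

# Minimum payload, standard MTU, and a 9k jumbo frame.
for size in (46, 1500, 9000):
    print(f"{size:>5}-byte payloads: {wire_efficiency(size):.2%} efficient")
```

The jump from 1500 to 9000 bytes only buys a couple of percent at full-size frames, which supports the point above: the win matters for sustained bulk transfer at exchange scale, while typical traffic full of small packets sees little of it.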
Back in the 90s, on a Novell over Token Ring site I worked at, the Financial Director managed to take out the whole site's networking (other than production's PDP-11 network) by repeatedly pulling the PCMCIA networking card to take his Thinkpad over to his PA, while yelling at us about the network going out. He plugged back in, and the problem just stopped. There was a lot of competition in the department as to who was going to tell him that he was the root cause. Alas, it wasn't me.
Let's try to remember that Betamax was not the disaster it's made out to be. Apart from selling millions of domestic machines, Betamax morphed into the studio formats Betacam, BetacamSP, Digital Betacam, HDCAM and more, which were installed in every TV studio in the world. Machines selling for seriously huge amounts of money, many using tapes which are mechanically almost identical to domestic Beta. Professional versions of VHS such as MII and D9 were all failures. Sony made more money from Beta than anyone ever did from VHS. Not such a flop after all. I do remember Token Ring on some old HP workstations back in the day.
I think that's why I liked the comparison between token ring and Betamax. Both successful in their niche, both considered unsuccessful in the wider market.
@@RetroBytesUK They are similar in that perspective, but that doesn't make them superior to the solutions which won out in the wider market. They both had characteristics which made them superior for those niches, but others that made them inferior for the wider market. Betamax won in the professional market because picture quality was king there, but it had other shortcomings which meant it wasn't the superior solution for the wider market.
@@tss20148 Betamax and Betacam are very different animals. But both were fundamentally proprietary to Sony whereas Ethernet and VHS were basically open-standards, so many more companies became involved driving rate of development up and costs down.
@@EuroScot2023 Oh that old story again. Only Sony made Beta and VHS was free to license. Except that the business models for VHS and Beta were essentially the same. If you wanted to build machines you paid the license. Domestic Beta (Betamax) machines were made by various companies including Sanyo (who made machines in vast numbers) and Toshiba, as well as smaller players such as NEC and Aiwa. Betacam machines were only made by Sony (some badge engineering aside) but the same was true of the VHS derived professional machines which were only made by JVC and Panasonic who were under Matsushita. And as for Betamax and Betacam being completely different, well they were incompatible at a recording level, but they had very many connections. The linear tracks were in the same places. Betamax and Betacam Oxide tapes were interchangeable, and so too were BetacamSP and ED-Beta. The auto-stop mechanism was the same. Even some accessories had connections including interchangeable camera batteries and internal components.
One of my previous jobs had token ring for their entire administration office site (like a dozen buildings and a few hundred employees) till about the mid-2000s, while the hundreds of other sites had ethernet and we had already upgraded many from 10 to 100 and then 100 to GbE. One really funny part of the board's reluctance to upgrade HQ is that the PCI token ring NICs were like $800-1200 each, and we easily installed 300+ new PCs there in just a couple of years; the cost of just the NICs could have more than paid for an entire GbE install. Instead they spent even more money to split rings into even smaller rings with their own ethernet backhaul. I think we ended up where a single small dept would have 1-3 rings, a whole building might have 5-10, and the intranet would still be slow AF.
My first employer had a 16Mb/s token ring network. It was superior to Ethernet at the time, but it was the development of affordable switches that swung the needle towards Ethernet, and the 100Mb versions of it in particular. We did look at a token ring switch too, but they didn't really offer anything better and were more expensive, so eventually we moved to Ethernet. We operated an FDDI network too (joining the token ring networks together across a metropolitan area) - this was 100Mb/s and was pretty good. We even deployed FDDI interfaces on key servers so that they could sit directly on the high speed network. The key technology is switching, which resolves the collision problem that all shared access topology networks had - switching essentially gives each node a point-to-point connection with addressable frames - the improvements you can get now are just limited to bandwidth and latency.
I always liked FDDI. I worked at an ISP in the mid 90s and the core (SPARC) servers were on a FDDI backbone. A decade later (in the mid-2000s) I had some FDDI in my homelab. FDDI was the only way to get 100mbit on my DEC Turbochannel systems, so as a big dork I got some DECSwitch FDDI equipment, and a Bay Networks FDDI to 100mbit Ethernet bridge to get it online. I could only handle U/D topologies and didn't have the double-port boards that could do double rings, but it was pretty fun. I miss playing with it sometimes, but don't miss the monthly power bill or noise.
My first job out of undergrad was with IBM working on the 1996 Atlanta Olympics. We were doing everything on AS/400, OS/2 on PCs, and Token Ring. During the games I was assigned to the aquatics center. So the 2nd week of the games, I'm monitoring stuff during one of the Platform Diving events in the evening. I suddenly get a call on the radio from the scorekeeping room saying that they lost connectivity to the scoreboard, in fact between all 7 computers used to enter scoring data, run the scoreboard and other things (I forget all the functions being done). Hmm. Weird... I run to the network closet, open it, and just as I look at the MAU, I get a call on the cell phone from the TOC (operations center downtown) saying that they've lost the entire Olympics network! I look at the MAU and it's flashing E6 E6 E6 over and over... I figure "What's the worst that can happen at this point???" I power cycle it, wait 30 seconds, and get another call on the radio from the scorekeeping room saying things were working again. I grab the phone and tell the person on the line from the TOC to reboot all their MAUs network-wide. The next morning I get in and get a phone call from the head of ACOG, the head IBM person for ACOG, and my boss. They asked what happened, and I told them the long version of the above... They told me that I basically saved the network and thanked me! During the games, IBM sent a LOT of their employees to "help out" those of us who had been working it for the prior 2+ years, but with almost no training or real assignment based on skills. Apparently one of these volunteers had plugged a piece of network diagnostic gear into the network at the TOC that hadn't been tested on the network beforehand. It started spitting out bad data packets which hosed up the MAUs. That was (thankfully) my last experience with Token Ring... The connectors on TR were HUGE - like 4x the size of an Ethernet RJ45 plug in each direction. The cables were also super thick and bulky.
@@RetroBytesUK Yes, IBM was a major sponsor and also the main Technology Integrator. Many of the team of developers I worked with were offered to continue the code and work on Barcelona, which was largely using the same setup to keep costs down.
I love the old "vintage" stories. It's amazing how delicate some of these old networks were, and yet 99.99% of the time everything worked great. Good times!
I've still got a shrinkwrapped box of OS2 4.0 that an IBM rep friend of mine gave me at the intro event. :D Ran OS/2 on one of my work PCs for a while, assessing if it was any better at talking to our AS/400 with/without Comm Damager :) vs. our cobbled pile of DOS/WfWG, IP, IPX, and SNA. Fun times.
@@mrz80 Thats cool.. I used OS/2 versions 1.2, 1.3, and then a little into 2.x.. I do remember loading 4.0 at home, and liking it, but by then, Microsoft really had established themselves, even if the product wasn't as good! I remember being so impressed with OS2 being able to handle the multitasking SO much better than anything Microsoft could offer..
A bit over 10 years ago as I was finishing up my studies, there was a "technology incubator" company in my town. They'd have cheap offices for tech start-ups. Said technology incubator had its entire building hooked up with Token Ring. It was kinda hilarious.
I have just read a 90’s PC Magazine article comparing networks and here you are with Token Ring. Love reading about the incompatible late 80’s and early 90’s protocols, apps and file formats and the expensive single purpose software solutions that were for sale to remedy this. The PC frontier was wild.
To add to the confusion you had Novell and IBM using their own proprietary terminology designed to make it harder to compare and making a network engineer trained on one system have to relearn a bunch of terms just to know what each vendor meant. Then you get to add that a lot of things were not yet really standardized in the OS side. For example, does each window in a windowing system have its own network session or do they share them. At one point the one window/one connection option was fairly popular which allowed such things as different applications having the same drive identifier mapped to different network file shares and since these were determined by the network driver provided by the network vendor and not by the OS itself it would even vary between installations using the same OS. One network vendor actually made it a configuration choice in loading the driver so the behavior could vary between computers in the same building on the same network. Ah, nostalgia.
@@RetroBytesUK Well, at least there was real change and advancement back then, as well as even old hardware staying fully usable as long as you were satisfied with their lower specs. Nowadays, for the last couple of decades, all we have had has been changes for sake of change, worsened and worsened software undoing decades worth of hardware advancements, artificial need for hardware upgrades purely to be able to run lessened-functionality software gotten more bloated merely due to unprofessional development we are forced to use just because of the support and compatibility droppings of old versions for sake of dropping. Today's software industry is purely horrible disaster.
@@TheSimoc I don’t completely agree. We are at a point where standards are in place and interoperability between systems has never been more common. No one is trying to replace TCP/IP, DNS, expansion slots, memory modules etc. Where standards change such as processor socket types are purely physical engineering which is necessary. But you can freely run your DOS machine today and browse the internet on it for the sole reason that standards are in place. You are not forced to upgrade anything.
Any time someone mentioned a "ring" in the context of a network to me I always just automatically thought of a ring topology, and since it rarely came up I never really thought much of it. Until I was assisting our head developer with a problem and while chatting I somehow got him on the whole tangent of Token Ring and how it was so much better. This same gentleman worked at IBM for like 18 years right out of Uni so I guess I now understand why he had an even more vested interest in the tech. Very interesting stuff regardless, though.
I had totally forgotten about Token Ring. The last time I used that was back in '91, when we were implementing a production control system for a highly automated injection molding factory. The system was running on a cluster of about a dozen 386 machines with IBM ARCTIC cards, using OS/2. These machines were connected among each other, and to an IBM mainframe using 16Mbps Token Ring, even using optical fibre for some of the longer connections (IIRC). The ARCTIC cards were basically 80186 machines on a card, with 512KB of memory and 8 serial ports, running a proprietary real-time OS. These cards were handling the communication to the machines on the plant floor, while the software running on the OS/2 implemented the business logic.
Man, Sega really did go all in on their Daytona USA arcade boards. They not only used token-ring networks, but token-ring networks using TOSLINK cables.
What seems like a lifetime ago in the early 1990s, I encountered a token ring network at an insurance company I worked at. A short time later, they upgraded to ethernet and a Novell network when their IBM mainframe was outsourced, then that operation was terminated a short time later. Around the same time, I also worked with a thin-net network. My family's graphics business went from a Varityper Epics 20/20 typesetter to desktop publishing which output to an imagesetter. The RIP (Raster Image Processor) PC utilized a 3c509 NIC with a BNC connector and I configured their thin-net network. This was slow but reliable until a building tenant decided to put a file cabinet on top of a network cable in the shared office space. That crushed the cable and brought the whole network down! Then of course nearly a decade later in 2000, I configured a 10/100 ethernet network with those really reliable 3-com 3300 24-port switches. My manager allowed me to take some home and one of the 3300s is still in operation today more than two decades later. The 100 Base/T network may be a bit slow but for the few of us on the network, it suffices for watching videos, printing and file sharing.
3c509 was such a groundbreaking card.. no more jumpers!! LOVED those cards.. especially since I got to take home some of the 3c501 and 3c503 cards they replaced to use in my own network! :-)
@@SteveJones172pilot They were great to work with. The software I used them with though was awful. I had to edit a bindings file and put their interrupts into the file so the software could talk to the card. When everything was loaded, there was something like 320K left of RAM on the system. I did have a 3c503 and upgraded that to the 3c509. Speaking of networks... I worked briefly with thick-net also. The company was installing a network for some imaging equipment and the imagers used thick-net and not ethernet or thin-net. With thick-net, you terminate both ends of the network then connect the various equipment via "vampire taps". These things actually drill through the coaxial cable and into the metal core with cables connecting from the taps to the devices.
@@Clavichordist Oh wow.. yeah, thick net I saw, but never had to work with.. That was a whole different level of inconvenient! :-) I used plenty of AUI cables, but thankfully all the transceivers were either 10Base-2 or 10Base-T 🙂
@@SteveJones172pilot It sure was inconvenient to work with. In addition to the vampire tap system, there was a bending radius you had to watch out for. Unlike other cabling such as CAT-5 or even thin-net, thick-net couldn't be bent too tightly otherwise it did something inside the cable and that would degrade the signal. It was fun to work with and I'm glad my encounter was very brief. I did run into it once but from an "Oh I know what that is" point of view when the network tech I hired was inspecting the old wiring closet.
Nice video, thank you! I also had a small TR network at home, as I was given some used TR hardware for free in the late 90s :-) But as I bought a PowerBook G4 and I couldn't use my PCMCIA TR card with it (no drivers for OSX...), I also switched to Ethernet. Btw, FDDI also used the Token passing technology, AFAIR. That could be a followup video? :-)
When you showed the MAU unit, my first thought was: Wow, IBM could have invented the switch right there and then! They could have moved token management responsibility from the nodes to the MAU, and be done with lost tokens and elections. The implementation could also be backwards compatible: just pass a token on each port, so cards that still thought they were on an actual shared circuit would continue to function. Newer cards would be implemented to negotiate with the MAU to do away with the token altogether. At the most extreme, the MAU could bridge Token Ring and Ethernet seamlessly, allowing for a transparent heterogeneous network, and the nodes would be none the wiser, while network engineers would be able to migrate from one technology to the other at their own leisure. Hindsight and all that :P
The problem IBM had was that they didn't want to create something that was cheaper and would take business away from one of their existing EXPENSIVE/PROFITABLE products. So somebody else came along and ate their lunch.
You never could really bridge token ring and ethernet "seamlessly" because of the MTU mismatch problem. Token ring ran with a 2k or 4k (or 16k on 16mbps) MTU, while ethernet's was 1500 bytes. If you had a brain dead DOS or Windows IP stack that wouldn't allow on-the-fly MTU negotiation, or that pegged the "DO NOT FRAGMENT" bit, then any traffic that crossed from token ring to ethernet would get broken. Mixed token ring/ethernet environments that didn't put a *router* between the topologies was destined to be a bleeding-ulcer generator for us poor network engineers. :D
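The MTU mismatch described above can be sketched in a few lines of Python. This is a simplified illustration of the per-packet decision an IPv4 router makes when forwarding from a large-MTU link onto a smaller one, not any vendor's actual implementation; the fragment count ignores per-fragment header overhead:

```python
import math

def forward(packet_len: int, df_bit: bool, out_mtu: int = 1500) -> str:
    """Decide what happens to a packet leaving on a smaller-MTU link."""
    if packet_len <= out_mtu:
        return "forwarded"
    if df_bit:
        # With "Don't Fragment" set, the router must drop the packet and
        # send back ICMP "fragmentation needed" -- exactly the breakage
        # described when a stack pegs the DF bit and can't renegotiate.
        return "dropped (ICMP frag-needed sent)"
    # Otherwise split into fragments that each fit the outgoing MTU
    # (rough count; a real router repeats the 20-byte IP header per fragment).
    n = math.ceil(packet_len / out_mtu)
    return f"fragmented into {n} pieces"

print(forward(1400, df_bit=False))  # fits Ethernet: forwarded
print(forward(4000, df_bit=True))   # big token ring frame, DF set: dropped
print(forward(4000, df_bit=False))  # DF clear: fragmented
```

A stack that couldn't fragment (or negotiate a smaller MTU) hit the middle case constantly, which is why a router between the two topologies was the safer design.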
I am glad you showed those terrible early Token ring cables - not only were the connectors huge, and always right-angled, but the cable was stiff and springy with a lousy bend radius!
Heh, I worked at IBM at the time, but for my home network I never moved beyond 16mbit token ring, and indeed went 100mbit ethernet. The real game changer in this was affordable ethernet switches. Once those existed, there were few reasons left for token ring.
Awesome video, thx for bringing back memories. Additional note: The original TR connectors are one of the very few hermaphroditic connectors that allows every cable to be used as an extension. Also iirc some of the contacts are meant to switch on a relay to break up the ring for pretty dumb MAU designs.
You missed IEEE 802.4, Token Bus. Token bus worked on a standard ethernet backbone but created a logical ring. Token bus solved the problems of token ring, and did not require a physical ring. The reason it did not get used was that it came out in 1989 when the ethernet people were solving the large network problem in a way that was less expensive than token bus, with the advent of the bridge and the later switch.
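The "logical ring on a shared bus" idea can be sketched as below. This is only an illustration of the concept, not the 802.4 protocol itself (which also handles stations joining and leaving, lost tokens, and priority classes); 802.4 passes the token in descending station-address order, wrapping from the lowest address back to the highest:

```python
def logical_ring(addresses):
    """Map each station address to its token successor on the logical ring."""
    ring = sorted(addresses, reverse=True)  # descending address order
    return {a: ring[(i + 1) % len(ring)] for i, a in enumerate(ring)}

# Four stations wired to one bus, no physical ring anywhere:
succ = logical_ring([5, 42, 17, 99])
station, visited = 99, []
for _ in range(len(succ)):
    visited.append(station)
    station = succ[station]
print(visited)  # token visits: [99, 42, 17, 5], then wraps back to 99
```

Because the ring exists only in the successor table, a station failure just means patching one entry rather than re-wiring anything.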
You've hit the nail on the head there, Token Bus never became relevant due to bad timing. Well, outside of General Motors. It's an area I decided to keep out of the video for the sake of brevity.
@@RetroBytesUK The only reason I bring it up is because I was one of the engineers that worked on the Siemens token bus MAC chip. We got all the way to production ready silicon, including installed microcode, in the Fall of 1989 when the project was canceled. I worked at what was then Siemens Semiconductor in Santa Clara. A small group in Munich had also just completed a working phy. We were able to prove our implementation in a physical network before fabricating silicon by using a Daisy Megalogician in 'reverse', taking our logical simulation out to the physical world, plugging it into the ISA PC network card we had already built. It only operated at 1 bps, but that was enough to do all of the tasks needed to verify the design to the 802.4 spec before going to fab. Those were fun times.
@@RetroBytesUK We were production ready, with final microcode in the MAC and a phy that passed specs in mid September 1989. We had engineering samples of both chips and built a working 10Mbps network with four or five nodes. All told, the MAC had 50 person years in it. Siemens Munich cancelled the project and laid off all of the American workers right at the start of the tech crash of 1989. I did not find work again until mid 1992, at TDK Semiconductor (formerly Silicon Systems) in Nevada City, CA.
@@letstrytouserealscienceoka3564 It must have been more than a little upsetting to have done all that work and come so far for them to just cancel it. Glad to hear you did eventually end up staying in the industry, I'm guessing that was not an easy 2 year period.
Migrated a few companies from TR to Ethernet in 1999/2000. Once the first NetWare server went in the TR usually came out soon after. Doesn't feel that long ago and now I feel old. Madge branded hardware always made me think of Neighbours with Madge and Harold Bishop 😂
I started my IT career around 1997, and 10base2 using coax and bnc connectors was on its way out. Once I really got into it around 2001, 100mb Ethernet with 1Gb interconnects was really the standard. I suppose I'm surprised that apart from specific use cases, we've not seen too many advancements in speed, particularly in Soho environments, where 1Gb to the desk and maybe 10Gb uplinks seem to be the norm. In fact, we're moving away from wires to the end point and switching to wifi.
I think a big part is people like the convenience of wifi, and were willing to accept lower speeds as the trade off. That and all the faster-than-1Gb ethernet standards have a very short max distance over copper; they basically all need fiber to the desk to work, and re-wiring is cost prohibitive. In the DC 10/40/100Gb are now fairly common, what you select is around the bandwidth required and the port-density/cost/space/power for the switching to support that bandwidth.
All wifi is CSMA/CA, with the drawbacks that entails. And until recently 1G was faster than any disk a typical consumer would have around. Also the first gen of 10G was finicky and would drop to 1G if the wires weren't perfect. The new gen of multi-G fixes a lot of that but it's probably a case of too little, too late. Relatively cheap MIMO routers are decent at reducing carrier interference in a moderately crowded environment.
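The nondeterminism being discussed in this thread comes from Ethernet's collision recovery. A minimal sketch of classic half-duplex Ethernet's truncated binary exponential backoff (constants per the 802.3 half-duplex rules; the random draw is exactly why no station ever gets a *guaranteed* transmission slot, unlike token passing):

```python
import random

def backoff_slots(collisions: int) -> int:
    """Slot times to wait after the n-th successive collision.

    Classic Ethernet draws uniformly from [0, 2^min(n,10) - 1] and
    discards the frame after 16 failed attempts.
    """
    if collisions > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    return random.randrange(2 ** min(collisions, 10))

# The possible wait grows rapidly, so a busy segment punishes unlucky stations:
for n in (1, 3, 10):
    print(f"after collision {n}: wait up to {2 ** min(n, 10) - 1} slots")
```

A station that keeps losing the coin flip can starve while a luckier neighbour transmits repeatedly, which is the starvation point made in the pinned discussion above.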
True, but you shouldn't forget, portables (and even more so laptops) were prohibitively expensive for a long time. So now that laptops, with even great batteries, are the standard, the wish comes true: using them mobile. And yeah, it's shared media, but with enough access points, roaming and even 802.11n, the throughput is enough for them to work. Else, the docking station is cabled...
Awesome review! I had a customer in 1999 who had mixed 4 and 16, ran HP-UX and AIX servers (I was reviewing their Y2K Lotus Notes system) and a global Novell NDS. So cool at the time! Only time I got in contact with Token Ring. I remember Byte Magazine in the 90s ran an article with the calculations for ring size. I'll look for it in my collection…
Betamax is apt. By the late 90s, VHS had continued to evolve and absorb new improvements. If you compared Betamax to VHS by that time, VHS would often come out looking better. Likewise, when Ethernet had switches, it took away its biggest problem, and there was no reason to bother with Token Ring anymore.
This was an excellent video! I used to teach token ring many many moons ago and this really took me back. Just fun stuff, thanks for taking the time to put it out there
I worked with Token Ring at the end of the 80’s, we used a system from Nine Tiles on the British Rail IECC signalling system. Many jokes about the token falling out when the quite fragile single core wires broke in our test and development environment…
Scary thought, I think I have a Token Ring PCI card in a box still somewhere. Fun fact, up to the early 2000s, Unichem used Token Ring. A common theme amongst companies that had suckled on the IBM AS/400 and S36 offerings and rolled on with them. When you have to set up SNA on AIX over token ring, over a leased line to an AS/400, you know you have landed in hell.
We had a hardware router from IBM that handled bridging SNA on to our token ring network. Being able to emulate 3270 terminals on a PC with just a regular LAN connection was a big leap forward for us at the time.
@@RetroBytesUK Speaking of 3270 terminal emulation on a PC, that sure did push OS/2 sales, that emulation was solid. Having worked mostly up to that stage upon Honeywell and ICL mainframes and terminals of the day (Newbury iirc did well there), and flavours of Unix. The first time I encountered Token Ring was in the year 2000 at a major company with a single ISDN channel for its Internet. With me at the time on duplex 10MB and bonded 128K ISDN dual channel internet (some hooky 0800 dialup thing that sold shares for lifetime access, and I found if you bought 2 (£24 outlay) you could do bonded ISDN on an 0800 number - yay those early times) it felt like I was working in a Museum in places. But that was my first real and the last encounter with Token Ring, within a couple of years it went to all Ethernet latest and greatest FOTM CISCO of the time, but darn the price of Token Ring adapters compared to the built-in ethernet in business desktop offerings of PCs back then. That alone justified the move. Also, remember MADGE as I knew the chap who was the authorised reseller, Peter, can't recall the surname offhand. He did well in the Token Ring days I do believe.
@@PaulGrayUK My first encounter with Token ring was when I was working for ICL. Every site in the company ran it. The last time I encountered it was in the late 2000's as Maersk group was still using it in a number of their offices and warehouses. They were still heavily mainframe based, and also were still running lotus notes on a collection of OS/2 servers. It really did feel like working in a museum.
@@RetroBytesUK I spent some time at the Feltham ICL office, though was contracted to work on the Videoconferencing side, those ICL out dials, were wonderful.
@@PaulGrayUK LOL 3270 emulation HAD to be solid on OS/2! ALL IBM mainframe customers had green screen CICS, VM, or IMS apps, and all of their users HAD to have 3270 capability on their shiny new (and hideously expensive) PS/2s. AND, apart from Lotus SmartSuite for OS/2, users couldn't do much else on OS/2 except run any corporate apps the company developed for OS/2 themselves... and there were frankly NEVER very many of those. At one point back in the late 1980s, IBM came out (pre-OS/2) with a machine called the "3270 PC". I had one at work... it was a modified PC AT with a 3270 ISA card in it, and it gave the user 4 3270 terminals they could switch between, plus DOS on their desktop... and I think a couple of scratchpad sessions you could switch to as well. No windows tho... you switched to a session using some hotkey, and it took up the entire display. It was a LONG time ago and it sounds crazy, but it's true! In fact, I worked with a LOT of crazy products at IBM back then... including an absolute FRANKENSTEIN of a product in the early 1990s called "OS/2 For Windows". It was intended to run Windows under OS/2 somehow in an address space or something, to enable users to have both OSes on their PS/2. I can't even BEGIN to describe that absolute monstrosity! 🤕
Fantastic video that took me back in time - Hermaphrodite connectors, FDDI, Cambridge Slotted Ring,... Surprised you didn't also bring up the long running court cases over royalties with Mr Olaf Soderblom, helping to keep innovators out and costs high. Thank you!
I decided to avoid going into all the royalty stuff, as I try to keep these things to 20-25 mins. There are a fair few things to talk about in that area, ethernet also has some battles on that front too. So it felt like a bit too much of a time hole. Although Olaf Soderblom seems like a really interesting guy, who apparently flies Hawker Hunters, which is an aircraft my Dad worked on.
WOW! I haven't heard "hermaphroditic connectors" in a long time. I mentioned them to someone once years ago. I got a blank stare and thought they were going to call HR or something! 😂
Great video and very well explained. I worked for IBM New Zealand back in the 80's and 90's and was heavily involved with Token-Ring. I even had buttons made to be given out at trade shows... "IBM Token Ring. Where no LAN has gone before" (Yes, I am a Star Trek fan.)
Another great video from RetroBytes, now officially my favorite tech channel - only you didn't say PCBway correctly, still needs the upwards inflection
Great video on TR! My 1st introduction to TR was back in the late 90s, when I had to replace it with Ethernet. The MAUs in the closet were massive and took up so much space and that cabling… WOW! Fun fact: Kalpana invented many of the LAN technologies we still use 34 years later like Etherchannel/LAG/Port-channel and VLANs. Their product evolved into the Cisco Catalyst 3000 series. Good times!
Brilliant video! Not so many years ago, I was teaching this stuff to technical high school students. It also seems we attended the same University! And yes, I remember the NFS boot disks...
@@MostlyPennyCat Sun workstations came with ethernet built in, so it was most probably ethernet to connect them, as alternative interfaces cost more to add.
@@MostlyPennyCat You could get other cards, but most SPARC workstations that had built-in networking had Ethernet. You could get Token Ring, FDDI and ATM cards for them. Certainly easier when they started coming with PCI and PCIe card slots. SBus was likely very expensive because it was not widely used. I know the SPARCStation4 and SPARCStation5 had both twisted pair and AUI Ethernet ports on the back. Only one could be used and the system would attempt to figure out which you were using. My first exposure was the SPARCStation2 and it only had an AUI connector, meaning we had to have an AUI cable back to the 10Base5 or 10Base2 coax.
I was a network engineer who had been directed to get a large Token Ring network installed and up and running for our agency, with offices scattered on multiple floors of two towers of the office building we were in, in the early 1980s. Then right at the end of that decade I was told that we had to change everything to Ethernet in about four months -- with minimal work disruptions and limited amounts of overtime or off-hours work. By installing Ethernet switches in the wiring closets alongside the existing MAUs and using two types of BALUNs we were able to move the PC workstations to Ethernet over the IBM Type 2 cable, then when the new cabling was installed switch almost immediately over to it and all Ethernet. Although I didn't have to do much of the physical work I was able to get pretty good at installing either an IBM hermaphrodite Token Ring connector or, if necessary, an RJ-45. I worked on Ethernet stuff for the rest of my career and I liked it much better than the Token Ring network, especially since it was a lot easier to identify, isolate and replace a faulty Network Interface Card. As some of the previous comments have said there were cases where the Token Ring network was technically better, but as the speed of Ethernet increased that became almost moot. One other thing about Ethernet... Similar, but not exactly the same, types of protocols were used in the late 1960s through the mid 1970s to monitor sensors that the USAF had planted along the Ho Chi Minh trail in Laos during the Vietnam war. I know, because I was there and worked on that system.
Nice walk down memory lane! In the late 90s some people thought ATM would take over the world. Fortunately it never happened - it was a terrible idea from the start. 😅And there was the FDDI standard which was basically token ring with two rings over fibre. It saw some success in campus networks and Internet exchanges in the mid-late 90s. I believe gigabit ethernet still implements CSMA/CD so theoretically it should be possible to have a gigabit hub. I've never seen one though.
@@tripplefives1402 We got our first long range ethernet circuits in the early 2000s but most places still used ATM, especially ISPs that provided ADSL services. As ADSL DSLAMs were all ATM based, most feeds from the telco to the ISP were over ATM. When VDSL came in, that's when BT switched most stuff from ATM to ethernet. ATM never really seemed to take off in the LAN space.
@@RetroBytesUK Interesting, where I live (Scandinavia) only the very first generations of DSL (99-02) were built with ATM but we quickly switched to DSLAMs with ethernet backend. By late 00s I think all ATM was gone. Fun fact - my first DSL modem in 2000 had an "ATM25" user port - 25 Mbps ATM over UTP. 😂 Never got to use it though.
I unfortunately at one point supported an ATM to the desktop LAN. I unfortunately also at one point had a customer that had deployed a nationwide ATM WAN with several hundred locations using ATM LAN Emulation. Even when used for its intended purpose ATM LANE is an evil of biblical proportions. That someone would use it for a WAN was pretty much unfathomable. Fortunately, my job was to replace the thing and not to maintain it. I also supported several FDDI campus backbones back in the day and several customers with metro area networks connected via carrier provided FDDI which was quite common then. Gigabit Ethernet does still support CSMA/CD but it was the last ethernet version to do so.
@@JayJay-88 late 00s early 10s is when BT brought VDSL widely (and thus moved to ethernet), the switch started in.. 05? 06? in some areas. So the timelines aren’t _that_ different.
When I think of token ring, I think of the Dilbert comic where Dilbert tells the PHB that his computer isn't connected to the network because the token has fallen out of the token ring network, leaving the PHB to crawl around on the floor looking for it.
I worked at IBM in the Thinkpad department in the early 2000s. IBM was famous at the time for hanging on to every bit of technology it had invented. They were still using servers running on Warp OS, and the OS was still being developed internally. We were still using Token Ring at the Research Triangle Park location. In the Thinkpad department, we used those 16Mbps PCMCIA token ring cards. As a general rule, the network was pretty stable and reliable. We would regularly ship large Windows software packages from our lab to the laptop manufacturing storage servers dubbed Area 51 using Token Ring. That is, until a few of my coworkers decided to play a lan game of Worms during a slow period. That would have been fine with TCP/IP, but they chose to use IPX/SPX. The combination of token ring and IPX/SPX broadcasts from a few machines brought our network down pretty quickly. Halfway through the game, one of our IT guys walked in and asked if anyone had any big file transfers going, because most of the token ring network was at a standstill and no one could access Area 51. My coworkers quickly quit their game of Worms, and the network came back up shortly afterwards.
I worked in networking and servers through the 90's, it was crazy. I had to know Ethernet, ARCnet and token ring and all the protocols like IPX, TCP and NetBEUI, and all the OSes like 20 different versions of Unix, DOS/Windows, OS/2 & Warp, Novell, MacOS, and all the various platforms and architectures they ran on. Some clients would have a Unix server, an NT server or LANtastic and a NetWare server all in the same company, with SGI, Sun, IBM PS/2 machines and PCs and Mac workstations, and you were just expected to know everything about all of this, and get them all working together and speaking on the same network, and building bridges and routers with Linux was pretty much the only solution in most cases. I worked on a lot of Token Ring networks at banks, they mostly ran OS/2 on IBM PS/2 machines and they were the worst when it came to cabling.
We had a network like that back in the same time period. Every protocol imaginable along with bridging (not routing) between sites. Thickwire and thinwire initially. I brought them kicking and screaming into 10BaseT because cat-3 cable was already in the floor. Someone brought in an Apple printer that used Apple's networking protocol (Appletalk?), luckily they brought in a contractor who plugged in a network analyzer, looked at all the crap already there, and said "Do not add Appletalk to this mess, send the printer back!" I eventually got them to buy routers to eliminate the continuous broadcast storm. The TCP/IP server admins moaned a little about re-addressing. Nonroutable protocols like DEC LAT and some IBM stuff got tunneled then fixed ASAP. The DEC and IBM people understood and worked with me. For some reason every Netware admin resisted mightily and HAD to be the default network which was 1. Finally I just "decreed" (and I am a lousy dictator) that there would be NO Novell network one and everyone had to move if they wanted to be routed. Finally we became a TCP/IP network. "You want on the Internet? You use TCP/IP".
@@davidg4288 I had lots of clients with networks like that, and I think the big problem was software, the engineering guys used CAD on Unix workstations and Unix Servers but had some dos/windows machines for other stuff, the Stores/Warehousing people used inventory software on terminals connected to Unix servers but still had a windows machine for half their work, the accounts people had windows machines and used netware servers and had a terminal to access the unix for inventory stuff, and the designers would have macs for half their work and windows machines for the other half, and everyone was still running around with floppies and CDRs to share files with each other, except the designers, cause no one could read their files and the all used zip drives and you could clean it all up and get everyone on the same protocol and servers, but it was only in the later part of the 90s that people could start having 1 computer to do everything they needed to.
@@RachaelSA Yes I remember the old removable disk "sneaker net". Someone even bought a 9-track tape drive for their IBM PC so they could transfer large files to the IBM mainframe. Before that we had a TRS-80 Model II with an RJE (remote job entry) card that could send files to the mainframe via modem. The IBM PC couldn't talk to the IBM mainframe yet but Radio Shack could. Later the 3270 PC (and the IRMA card) sorta fixed that. The DEC people could always have their PC emulate a VT-100 terminal. In either case it made for a very slow data transfer. The most frustrating was when they were finally all on the same network but the PC's used either netbios or Novell IPX, the DEC VAX used DECnet (or LAT), and the IBM mainframe used SNA. The tower of babble! Oh yeah, and Token Ring and Ethernet used "different endian" MAC addresses. Probably for a good reason other than annoying network admins.
@@davidg4288 Linux was awesome for all that back then. I would come in and replace all their servers with 1 big linux box, or pick the best server they have and install linux on it and change all the workstations to tcp/ip and run NFS and samba and marsnwe on the same shares and everyone had access to everything in 1 place and i could copy all Unix stuff to the linux and scrap the terminals and give everyone a terminal emulator program. Often I would also set up a mail server on the linux so they could all mail files to each other and if they had internet I would set up a gateway and proxy and fetchmail and some even got irc servers. I would also set up a modem so I could just dial in from anywhere to do support.
When I started my first programming job in 2007 we had a few legacy token ring machines connected to the larger network with one of those bridges. Also an Ethernet machine that used BNC connectors. Fun times.
It would have been nice to hear a little bit about bus vs star vs ring topologies. Obviously the strengths (or perhaps design luck) of the star topology played into the development and success of the Ethernet switch. You kind of touched on it, but a mention of bus topology was missed, and some may be interested to learn about those differences. Maybe diagrammatic aids could have shown it. Great content BTW.
It looks to me like the bus topology was described at around 1:15, where machines using Ethernet were connected over a shared bus, which was just a single piece of coaxial wire that all machines were attached to.
I tend to multitask working, listening to music and learning something on youtube. Having the information in this video be presented without background music would have suited this habit a lot better.
I remember in 1993 my workplace replaced our simple ring Ethernet network with one long cable per device threaded through the ceiling and down into a network closet. So the multiport switch hardware must've existed even that early.
There were thicker Ethernet cables (10Base5) vs the 10Base2 coax mentioned in the video. I do know there were devices that acted as a sort of multiport MAU (Medium Attachment Unit) hub where you then had multiple AUI (Attachment Unit Interface) drops. Who knows what you had in the ceiling. It could have been 10Base5 cable with a MAU on a vampire tap, or one of the hub-like devices that could provide several AUI drop cables to connect to Ethernet cards. I can say I don't miss 10Base2, or 10Base5, or any of the MAU and AUI nonsense. In terms of cable thickness, we are certainly getting back there with 100Gb QDR copper cables. But we don't have to drill and tap thick coax or trace down where things have come disconnected. AUI cables were notorious for falling off the card.
@@buffuniballer I have an "AUI Transceiver" here, which plugs into the back of an old MicroVAX & converts to Ethernet (RJ45 or co-ax, take your pick). It has a nifty locking mechanism on the back of it to prevent it from simply falling off! I presume other cards/plugs also had this thing - like a sliding metal shield with 2 T-shaped pins which engaged in 2 slots. Of course, the biggest problem with the transceiver is, now the machine has to sit 6 inches away from any surface, to give it room!
@@theelectricmonk3909 Oh, our AUI adapters had the same. I think the issue was a combination of folks wouldn't get them well latched AND coax was a bit heavier than TP cables. Not that TP doesn't have its own issues. But those are more about not being able to release the tab on 1U servers with lots of cables. Think Cat5 cables for ILOM and Serial Mgmt covered by QDR cables for production networks. But they seldom just fall out unless you don't get that tab seated, which can be difficult in the above described environment.
@@buffuniballer Been there, many times: Sometimes the tab gets its end broken off (= can't be released without a small screwdriver, and incredible patience), entirely broken off (= falls out), or just loses its "spring" (= falls out unless manually seated with aforementioned screwdriver, and who remembers to do that?)... then there's the randomly failing cable, because one of the conductors has broken but works if positioned *just so* ... Not to mention wiring up the connectors in the back of a patch panel is an exercise in frustration, trying to remember if the other end is wired to T-568A or T-568B (with hilarious consequences if you get it wrong)... then you accidentally put the insertion tool in the wrong way around & snip off the cable **before** it goes into the IDC block.... Ah, networks. Loathe 'em or hate 'em... can't live without 'em!
Exactly. We had to keep them running while extending the nets with new advancements. I recall running Netware 2.15 on arcnet, token-ring and ethernet at the same time!
We had arcnet running all over campus for all that Johnson Controls building HVAC management stuff. There was a huge celebration when the last of the arcnet went away. (At least, I *think* it went away; Facilities might STILL quietly be supporting bits and pieces of it here and there :D )
Thank you for this fun explanation! I am a network engineer, but I started my career in 2014, so I have only dealt with Ethernet; even study materials only vaguely mentioned token ring networks, nothing beyond that. I was always curious as to how these older technologies actually worked.
Token ring was a pain - we had Madge and IBM kit and we kept getting broken rings (unpleasant) - it got so bad we ripped the whole company's out over a few weekends and replaced it with 3Com Ethernet. No idea what caused it; it even happened on dumb MAUs cabled locally between 2 PCs. Glad to see the back of it.
It was a pain. Having limited or practically no IT maintenance budget in the early 1990s, I was able to locate some Racore cards that allowed me to move the T/R off of coax to twisted pair. Black Box had a T/R-over-TP MAU that allowed long extension connections. I thought that was freeing until SMC began shipping low-cost Ethernet on TP. I need TP. I need TP for my BH. (sorry, couldn't help it).
Love the background music. It's upbeat and conveys the relentless march of time. I feel sorry for those who can't concentrate without absolute silence. Who else digs that 2/2 ragtime that sounds so cheery and bright?
Nice video, but I couldn't watch past the 12th minute. The background music is too loud and too annoying and the visuals are too 'busy' and distracting. I really liked the subject and actual content but the background music and visuals are way too distracting. Sorry.
There were also managed multiport Ethernet repeaters with 10base2 Coax cables. I remember the DEChub 90 modular series, which also was fully managed and you could read out the stats via SNMP. When I was a student in the 1990s, there were lots of them in use in our University as the twisted-pair cabling was still in the planning phase but Ethernet connectivity was needed everywhere and as "emergency measure" we installed them and Cheapernet cabling. It was always segmented to a few rooms so cabling issues usually did not disconnect everyone.
A lot of IBM sites were using Token Ring until the mid 00s (when I left Big Blue). Prior to that I'd worked on a site that had Novell (the frame type Novell warned you on the courses not to use) over Token Ring. No clue if they ever bit the bullet and went to TCP/IP over Ethernet (they were mooting it back when I left in 1998). I know the chap who designed their Token Ring network died in the mid 00s, and they couldn't pay me enough to go back there, so I don't know its final fate.
When I moved into a dorm at Carnegie Mellon in 1994, they had token ring physical layer connectors. Huge weird IBM connectors. You had to get the ports in your dorm room activated. You could choose between Token Ring, 10BaseT Ethernet, LocalTalk (230.4kbps Apple proprietary) or RS-232 serial. You could literally plug a terminal into your dorm port. 10BaseT required a balun - balanced/unbalanced transformer adapter. LocalTalk cables were much cheaper. A year or two after I got there, they removed Token Ring. They had a trade in program where you could swap your token ring card for an Ethernet one. Moving away from the dorms back then meant getting dial up at 56k or so. Definitely worth staying in the dorms.
I'm not sure that I would agree that Token Ring was designed to solve the problems associated with Ethernet; they were two approaches to networking that were developed more or less at the same time. Token Ring is to a certain extent a typical IBM solution to a problem: logical and safe. Ethernet is a lot more radical, because the collision problem is an obvious downside, but let's face it, the collision problem was addressed from day one and, for large networks, improved on very quickly, so it was never really a problem. The way I had the difference explained to me in the early 90s was that if you had twenty people sitting around the table at a dinner party, token ring would be everyone taking turns to speak, going round clockwise, whereas with Ethernet people spoke when they had something to say, and if two people spoke at the same time they would apologise and try again. For me, one just felt more "real" than the other, and I came to appreciate how much simpler cabling an office for Ethernet was than what Token Ring required. So I am not sure Token Ring was the Betamax, because Betamax was in many ways the superior technology to VHS. Ethernet dominated because it is technically superb.
Ethernet was cheap. Token Ring was expensive. That, more than anything else, made Ethernet preferable. Collisions were a huge problem with big Ethernet networks and that wasn't really resolved until switches became commonplace in the mid-90s.
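For context on how those collisions were handled before switches: a classic CSMA/CD station, after detecting a collision, waits a random number of slot times chosen by truncated binary exponential backoff, and under heavy load those random ranges grow, which is exactly how busy segments could starve unlucky stations. A rough sketch of the rule (parameter names are mine):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """Pick a random slot count for the given retransmission attempt,
    per truncated binary exponential backoff: uniform in [0, 2^k - 1],
    where k = min(attempt, max_exponent). On 10 Mbps Ethernet one slot
    is 512 bit times (51.2 microseconds)."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)
```

After 16 failed attempts a real 802.3 interface gives up and reports an error rather than backing off any further.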
Routers moving traffic around at layer 3 really did help, but they were too expensive to be practical for most institutions. Also, they did not solve the problem of broadcasts; far too many protocols at the time depended on broadcast to operate correctly even if they used a layer 3 protocol. Then there were the protocols that had no concept of a network address, only station addresses that were the same as the MAC address. Routers did not help with those at all, as they were not routable.
Very nice, I need to dial into the local bulletin board system and post (if the line isn’t busy). Will be great to see the responses posted as people individually comment over the next several days.
I like the CAN bus implementation of multiple access. The bus is a wired-AND: a device drives the line to its dominant level to send a 0, and leaves it recessive for a 1. Once the last packet is sent, any device is free to start transmitting, which means multiple devices can transmit on the same line at the same time. The instant a device tries to transmit a recessive 1 but detects the line driven to a dominant 0, it stops transmitting and lets the other device continue. This means there are never any destructive collisions and the implementation is super simple.
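A toy simulation of that arbitration, using the standard CAN convention that the driven (dominant) level is a logical 0 and the undriven (recessive) level is a 1, so the numerically lowest identifier always wins. The names are mine, and distinct IDs are assumed:

```python
def arbitrate(ids, width=11):
    """Simulate CAN bitwise arbitration over a wired-AND bus.
    Identifiers are sent MSB-first; a node sending recessive (1) that
    observes dominant (0) drops out. With distinct IDs, exactly one
    node - the numerically lowest ID - survives and is returned."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):               # MSB first
        levels = [(i >> bit) & 1 for i in contenders]
        bus = min(levels)                               # wired-AND: any 0 wins
        if bus == 0:
            # nodes that sent a recessive 1 but saw dominant 0 back off
            contenders = [i for i, lvl in zip(contenders, levels) if lvl == 0]
    return contenders[0]

# e.g. arbitrate([0x500, 0x123, 0x7FF]) -> 0x123
```

Note that the "losers" aren't destroyed or retried the way an Ethernet collision is; they simply retry arbitration on the next frame boundary.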
CAN Bus is so cool that even HARLEY went to it back in 2014 with their new gen "Rushmore" bikes. It eliminated a LOT of individual wires from the formerly-thick wiring bundles and made things MUCH simpler. Unfortunately, my 2012 CVO Street Glide is not CAN bus - but it's a LOT PRETTIER than anything that's come out of the MoCo before nor since IMHO.
You never covered the reasons it became known as "Broken Ring". The biggest one was MAU failures, when a MAU failed you would have to go and reboot each MAU and wait for a new token to be generated, until you found the one with the issue. The next big issue was the Type 1 connectors, they were easily broken and would sometimes not close when they were unplugged. If this happened you would have to go and inspect every connection on the ring or unplug everyone from the MAUs and then plug them back in one at a time until you found the faulty cable.
The Type 1 connectors were stupidly large and fragile. When you could easily fit 48 RJ45 connectors into 1U of rack space but only 10 Type 1 connectors then you know there's something badly wrong. The cable itself was also way thicker than it needed to be so it made cable routing difficult.
Thanks for this entertaining and informative video! Your analogy of the Token Ring LAN being "the Betamax of Networking" is absolutely brilliant!

I was an IBMer in the 80s and 90s, and I worked with Token Ring pretty extensively starting about 1990, in the IBM SouthLake/Westlake technical support center for office systems, west of the DFW airport. Those connectors and cabling were big and clumsy, were a pain to run and connect, and it was expensive to install the MAUs in a closet. But Ethernet was still using coax cabling at the time.

I actually set up a TR LAN in our office in Atlanta when I moved there in 1992, because I had started working with a product called Lotus Notes back in Westlake after the failure of the OfficeVision/2 product IBM had been developing in Westlake. IBM started selling Lotus Notes in 1994, and eventually acquired Lotus in 1995. I call Notes "the Betamax of Distributed Document Sharing"! It was a brilliant product for the time, invented by Ray Ozzie of Iris Associates, which was acquired by Lotus in 1994. Notes enabled document-based databases to be built, with threaded conversations, etc. These databases could be "replicated" across Notes servers and even to Notes workstations. I built many Notes databases with some pretty sophisticated forms, etc. I also installed a product called "Hoover", which provided categorized news articles that got downloaded to the Notes server via a dial-up connection overnight, and these articles were then available for people to read when they arrived at the office the next morning. It was kind of leading edge back in 1992/93. You should do a video on Lotus Notes if you haven't already!

Anyway, back in 1992 I needed a real LAN so that my PS/2 (yeah...) could communicate with the Notes server machine over the network. Everyone else around me only had IBM 3270 cards in their PS/2s so they could use the IBM mainframe 3270 terminal applications over a coax connection.
Those coax connections went to every office and cubicle, but NOT so for the big new TR cabling. It took me a while to help my coworkers and my boss understand what a "network" connection even was, why it was better than just having a 3270 coax connection, and how they could make use of it with new network-based applications like Lotus Notes. In IBM in those days, there were not yet any servers on a LAN in field sales offices, so it took a while for LAN infrastructure to become the norm. Even in the mid to late 90s, many IBMers were still just using 3270 connections to IBM's ancient mainframe 3270-based email system "PROFS", which later became "OfficeVision/VM" and "OfficeVision/370".

The IBM field sales offices I worked in back in the late 90s and early 2000s had pretty much never installed much TR LAN infrastructure. They seemed to go straight to Ethernet when they installed a LAN. I also NEVER saw a TR LAN at a customer site - they were ALWAYS Ethernet. Our ThinkPad laptops by the early-to-mid 90s all had a built-in Ethernet port, but of course no TR port - so that kind of made even IBM go with Ethernet outside of their lab and development sites by the mid 90s.

Speaking of IBM PS/2s... they and the IBM operating system they ran - OS/2 - would also be interesting topics for a video. I haven't looked to see if you've already done one yet, but I will. The PS/2 and OS/2 are very interesting stories and I know quite a lot about them, having lived through that entire crazy era at IBM. And yeah, I bought a Betamax VCR back around 1980 - because it was SOOO much better technology than VHS! 🤦♂
I was the "network guy" back in those days which usually meant Ethernet (10Base-5, 10Base-2, 10Base-T) but occasionally Token Ring would pop up, especially near the IBM mainframe. Strangely I understood it because we once used Paradyne terminals on the IBM mainframe. The Paradynes were configured in a bidirectional ring and used a token passing protocol, complete with a "beacon" alert if one direction failed. I don't remember the data rate but it was much lower than 4Mbps, they were character displays. Later I remember FDDI using bidirectional fiber rings, it wasn't a "token ring" though. The speed was 100Mbps so FDDI was made quickly obsolete by cheaper 100Mbps Ethernet.
FDDI survived longer than you think, I'm aware of FDDI rings that were still operating within large orgs at least until 2012. By then it was mostly for legacy workloads that didn't translate well to packet-switched networks though. ISTR there were also some attractive synchronicity properties but I may be thinking of SONET rather than FDDI there.
@@SomeMorganSomewhere You made me look! I know we used FDDI between buildings but I didn't know FDDI could go up to 200 kilometers. I assume that would be over single mode fiber. And apparently it could do switched voice and video as well as Ethernet. SONET can carry all kinds of things, T1, T3, Ethernet, among many, many others. Maybe FDDI over SONET? SONET was originally a telephone company thing, we used it for a corporate backbone but I've been retired for awhile so who knows. I remember MPLS but that's probably SD-WAN by now. I was never an ISP or telco employee, those would be the experts.
@@davidg4288 Yeah, you are correct single mode fibre. ISTR you could only do 100km for a full dual-ring setup, not entirely sure why it'd be different to a single ring setup, some weird protocol quirk syncing the rings I assume. FDDI was the tech these companies went (one of them is a Telco so they likely still have it lying around today, along with several other strata of legacy tech ;) ) to for their initial MAN technology, eventually got mostly replaced by MPLS and Ethernet, though they kept their rings around for some legacy stuff that worked better in a circuit-switched network (FDDI-II I think... added circuit-switching capabilities) MPLS is still around today but it's slowly falling out of favour. A couple of universities here were using SONET rings to link campuses back in the day, and they definitely had some weird stuff that required synchronicity going on so I may well be conflating the two, it's been many years...
When I started working for the University System in 2000, we still had computer labs on both hubs and token ring switches. One of my first jobs was to move them all over to Ethernet switches. I also recall we had a now dead piece of core networking gear in an asynchronous transfer mode (ATM) switch in the mix between buildings.
I was sold on the Ethernet frame in the late 80s, but there were a lot of passionate magazine managers pushing ATM in the late 1990s, and some thought it was the (bomb) best tech in networking. It bombed alright after the famous Denver airport project. Fortunately I stuck it out with the Ethernet frame, which continues to serve me well.
During the late 1990s/early 2000s I was sys admin for a system that used token-ring to support an OpenVMS cluster in which 3 work-horse application CPUs talked to two servers acting as disk managers. We got it because it was massively faster than the Ethernet we had available at the time. We had TWIN token rings, counter-rotating, which meant we could reach 2x the normal bandwidth of the Ethernet at the same bit rate. Had we NOT used the counter-rotating twin rings but run them as co-rotating (same direction), we would have had about 1.4x the Ethernet bandwidth. If we had used Ethernet as such, collisions would have left us with about 0.35 of the token bandwidth. The really nice part was that we could endure a system failure because there were TWO paths to every member. The CPUs were DEC/COMPAQ Alphas. Darned things ran circles around everything else in the site. Our applications started to run so fast that our customers wondered if something was broken. But no, everything ran about 30x faster. Not 30%. 30x! When we switched to Gigabit Ethernet and Fibre Channel disk connectivity, we eventually upgraded to more traditional non-ring configurations... but for a while our token ring was outrunning other systems like Secretariat at the Belmont Stakes.
I don't recall experiencing TokenRing anywhere when I was younger but I can very well tell the improvements the school networks had when things went from BNC to ethernet. Don't you dare to turn off that machine in the middle of a ring, otherwise your friends can't do stuff anymore.
I remember back in the early nineties my company was running token ring (we were an IBM shop) and had rings on each of 12 floors, all tied to a server ring. One night the ring on the legal floor began beaconing. I cannot recall how we determined which card was the culprit, but we did, and unplugged it. Our euphoria at quickly fixing the problem was short lived, as the computer plugged into the next port on the MAU began to beacon... To make a long story short, we ended up unplugging every computer on the floor (around 60), disconnecting all of the MAUs from each other, and then, using the initialization tool (yes, there was a special tool you had to use to initialize each port on the MAU before using it), we reinitialized every port on the 8 MAUs on the floor. Having done this, reconnected everything and restarted all of the computers, the ring again worked. We never were able to figure out the root cause. We just chalked it up to bad "juju."
The college I used to work at went all in on token ring back in the 80s. Even to this day (as of a few years ago) many of the older buildings have those IBM chunky connectors on wall plates and in the comms closets. I think the funniest thing is, since you can easily adapt those to RJ45 jacks, it's actually running Ethernet on top of that physical hardware designed for token ring, and they're even doing PoE over those connectors too. Crazy stuff. The company that made those patch panels was quite smart. The backend is Cat5e twisted pairs, which attach to the panel via an 8-pin edge connector. Into the patch panel goes a little adapter module, to adapt it to IBM connectors, RJ45, RJ11, dual RJ11, coax, or even RJ11 + coax on a single module. So it's no wonder they keep using it there, since it's so flexible. Only when there's a full renovation do they pull it out and run new lines. I've only just come across this channel, and I'll be binging your content soon enough. This is very much something I've been doing as well. I've set up my own dial-up internet, ISDN BRI and PRI connectivity, and I'm nearly there on adding ADSL connectivity as well. I hope to combine it all into a single portable box for demonstrating these now-defunct WAN technologies. Let me know if you want to chat about it!
I had a job back in the 90s at a large department store holding company. They had an IBM mainframe room which did all the transactions for all the POS terminals for around 500+ department stores. I worked in the real estate division. We were charged with developing malls, new department stores, renovations etc. Because of IBM, it was wired with Token Ring even though our division ran off a Novell network. The Token Ring hub was made by 3Com. Came in one morning, and the hub had failed. There was an acrid smell in my computer room, so I figured the power supply had died. We had a terrible time finding _any_ company by that time who still made token ring hubs. The cabling was coax with BNC T-shaped connectors. We finally did manage to find a hub and swapped it out. Meanwhile the department was dead in the water.
My father worked at a university when the internet was growing, and he told me years ago that their network was once changed from a linear to a circular layout, and how much it improved things. I feel like I finally understand what they did.
This video just went through the beginnings of my career... including the Kalpana 10Mbps switches. I was in a NetWare environment with about 8 servers, and each department basically had its own network. Some were ThinNet (10Base-2), but in '91 when our company moved to a new headquarters, we moved everyone to 10Base-T using SynOptics hubs. We had one group who was still using a small LANtastic network on thinnet because of proprietary support, and another which had a proprietary IBM system which required token ring. I had configured our NetWare 3.11 servers with multiple network cards to allow each network to route to one another. We had Attachmate gateways for 3270 emulation over IPX (we were all IPX/SPX at this point) so all the networks had to be connected. Rebooting a server would bring down connectivity for anyone on the "other side", so finally I convinced my boss to let me buy a Kalpana switch just like the one in the video, and we dedicated one port to each server and one to each SynOptics hub, so that basically every node had equal access to every server, and nobody had to route between networks anymore. It was awesome. When I left in '94 we had started to dabble in TCP/IP, but mostly for network management. I was young and excited about tech, and lucky to be in a place that let me spend the money if I'd write a business justification for it. I learned so much and lived through all of what was in this video!
Fun fact: The first token ring switch was based on the Kalpana switch fabric, front ended with the IBM token ring MAC chip. The IBM 8272 was a joint dev between IBM and Kalpana.
What a blast from the past. I worked with a lot of Token Ring in the 90s, it was pretty common in Australia. The last-gasp I was given to implement was 155Mbps ATM LANE Token Ring running into a Windows NT Server running MPR also plugged into the Ethernet network. The core devices were a pair of Bay networks Centillion 100 chassis. Super clunky and expensive. Fortunately we were given a new budget and transitioned everything to 100Mbps Ethernet with the servers on GbE.
O dear LORD... hateHateHATE ATM! We had a very senior faculty member who had a position on some IBM consortium board; he wrangled us a half dozen 8265 ATM boxes and got the administration to insist we build our core network around 'em and make 'em talk to our Cisco Cat5000s with LANE modules. OC12 was way better than 100 Ethernet, right?!?!! After 6 weeks of trying, with our Cisco SE in my office and a conference call with the IBM ATM developers in France, we gave up. I shut down the never-quite-functional ATM devices and my boss ran 'round converting our 100 Ethernet out-of-band management network into the campus backbone - a huge ring with a spanning-tree block halfway round. Had everything cut over in the time it took him to drive 'round to all the core router sites. We couldn't surplus those ATM switches fast enough!
@@mrz80 Sounds like we've chewed some common mud. Things are a lot easier today but you still run into people who drink the Kool-aid and try to implement something dumb due to a good sales pitch
When I was in college in the early '90s, we talked a lot about Token Ring and a lot of other fancy network configurations in some of my classes. One day, in class, we talked a little bit about Ethernet. The point in this class is that even though you could get more efficiency from these fancy network designs, Ethernet was going to win out because of its simplicity. It was so simple that it was barely worth talking about in a graduate level computer engineering class.
In my school, they managed to defeat the problem of multiple PCs booting over the network with something quite nice: all PCs booted into Windows 3.11 without a hard drive (the PCs had a DOS boot chip), and the PCs all received their Windows session at the same time, probably via some sort of broadcast. By the way, the other problem with Ethernet in that old implementation: all PCs that are not meant to receive data receive it anyway and throw it away. It just needs one malicious PC that does not throw the packets away and, congratulations, you've got somebody sniffing your net.
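That sniffing point comes down to the NIC's destination filter: on a shared medium every station physically receives every frame, and it is only the adapter's politeness that discards the ones not addressed to it, which promiscuous mode switches off. A toy sketch of that decision, ignoring multicast group filtering (the names are mine):

```python
def nic_accepts(frame_dst: str, my_mac: str, promiscuous: bool = False) -> bool:
    """Decide whether a NIC delivers a received frame up to the host.
    Normally it keeps only frames addressed to its own MAC or to the
    broadcast address (real adapters also match subscribed multicast
    groups, not modelled here). In promiscuous mode it keeps everything,
    which is all a packet sniffer on a shared segment needs."""
    if promiscuous:
        return True
    dst = frame_dst.lower()
    return dst == my_mac.lower() or dst == "ff:ff:ff:ff:ff:ff"
```

Switched Ethernet blunted this attack mostly by accident: the switch simply stops forwarding unicast frames to ports that don't need them.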
TRUE STORY: Back in the 90s I was on a team involved in converting an aging IBM Token Ring network to all Ethernet at a large company. When presenting the investment case to leadership, the Corp IT Director said the following to justify the conversion: "We have way too much traffic on the existing token ring network, and congestion is causing some of the tokens to FLY OFF THE RING DUE TO CENTRIFUGAL FORCE!". Of course an insanely laughable statement, but the leadership folks just nodded and moved forward to approve the large investment. This absolutely happened.
🤣
There was a Dilbert cartoon strip with a similar joke about an engineer telling a manager that the token had fallen out of an unplugged cable.
Gee, business people not understanding tech, say it isn't so! ^-^
@@theantipope4354 Well, you had to terminate the ends of coax Ethernet cables, so here it's relevant - or some troll stole the terminator cap.
I remember a Microsoft book from when I was studying for my network certificate:
a picture of a thinnet BNC T connector was shown, with the subtitle "tokens must turn right to go around the ring counter-clockwise"... or something like that...
Like electrons know how to follow an arbitrary convention... Whoever wrote that knows nothing of token ring, or of electricity...
... I love Dilbert as much as The Far Side... funny and sad, cuz the stupid is real!!
Not forgetting that when assembly line robots were first deployed (like in automobile factories) they all used token ring because it was guaranteed when the data would arrive at the robot. That was important, because if the data was delayed because the network was choked, the car chassis could get crushed, or the robot damaged. Apparently, token ring was unchoke-able. Almost all supermarkets used token ring too, because of the bar code scanners. Again, guaranteed data round trip time, and unchoke-able.
Thats a very good point about factory automation.
@@RetroBytesUK And now replaced by EtherCAT with the same guarantees. Also, Token Ring doesn't guarantee that any arbitrary amount of data would always arrive, obviously. So you still had to carefully estimate the amount of data your net would have to handle. The big guarantee was just that there was a deterministic time frame at which you had the chance to send your data, whereas in a non-switched Ethernet due to the nondeterministic nature of CSMA/CD, some nodes could starve the rest of the net on the ability to send anything.
@@graealex Or Ethernet with IEEE 1588 support.
@@kubiedubie IEEE 1588 is a different use case. It might exist in a network that for example does motion control, so that multiple drives can act synchronously.
Token ring was very expensive and was not without its troubles. The cabling was thick and heavy and would pull the plugs out of the patch panel, or the connector would fall apart. Plus, if by accident you had two machines with the same address it would cause havoc. We had a situation where a machine on another floor was rarely used and had the same address as another on the network. No problems if they were both on the network at different times, but if they happened to be on the network at the same time, the network crashed. It took some while to sort out. It was the expense that killed it off, as you could network many more Ethernet machines than token ring ones for the same money.
This channel is stupidly underrated
It’s because of the music bed. Having vocals with spoken word is really disorienting. Content is 10/10 tho
It's easy to forget there aren't at least 100k subs. The content is so well done.
Yup! I just found the channel and I'm just shocked YouTube failed to recommend a single video until just a few months ago! I finally was lucky enough to stumble across a video while scrolling through some of the videos above the comments on mobile while I was watching another amazing channel, Adrian's Digital Basement.
Both channels are massively underrated!
@@letthetunesflow Also Ardrian is the nicest guy.
@@RetroBytesUK Love Adrian as well, and everyone seems to really speak highly of Adrian, including other YouTubers like yourself, and his viewers/supporters. You also seem to be right up there with Adrian for being an awesome human! So keep crushing it, and I look forward to seeing what you upload next!
Working IT in a hospital, my friend and I once spent about 3 hours toner probing to find a jack from a side office. After many many closets, we found a storage room with what we later learned was a token ring patch panel from many many years ago! Our most senior IT person on site got a big smile on his face when we brought one of the RJ45 to Type 1 plugs to him!
I did IT in a bank and was there for around 12 years. We had a token ring "hub" in our data center and I always looked at it as I walked by.
Well around 2018 we were going around and cleaning up out-of-service equipment. I decided to just power off that piece of equipment without authorization because I knew it wasn't used and it was the only active thing left in the middle of a rack.
Fast forward just an hour later, I'm back at my desk and there is a trouble call in the queue from an external client to call Joe at whatever bank... A token ring network was down and they needed investigation!
I went and powered it back on and called the guy up to say it should be good. He shared a laugh with me at my disbelief that it was still in use (it was there as part of a backup emergency connection).
@@volvo09 How dare you! XD
@@volvo09 Wow and scary.
I heard tell of an episode at Georgia Tech where a device was showing up on the network that nobody recognized. Took forEVER to track it down, but they eventually found a closet that had somehow been renovated out of existence (something like two different crews closed off two doors, neither crew noticing that there was now NO door at ALL?!?!?! Ah, higher academentia), that still had a Netware server chugging away in it?!?!? Might be apocryphal, but it's a great story nonetheless. :D
@@mrz80 the original "Netware server runs for years after being walled in" story appeared in April 2001, and it was at the University of North Carolina. I believe that original story was genuine, but over the years since it's been embellished and attributed to various organisations. If you hear an older IT worker refer jokingly to "server 54", it's a callback to that story.
At the college I worked at we initially used token ring - our (idiot) boss was a big IBM fan. The MAU boxes were not intelligent - they were just a bunch of latching relays that would insert a node onto the ring when the PC TR adapter initialised. Only good thing was they were passive and didn't have a power supply. Problem was it wouldn't always detach from the ring and bypass the socket in some circumstances. The MAUs came with a small plug-in device with a PP3 battery inside, which you could plug into each port in turn to reset the port's relay. ISTR the cable was horribly stiff solid-core stuff, which was terminated into the weird square plugs with IDC contacts, piercing the cable insulation when assembled. I think the plugs were hermaphroditic - they would plug into each other. I recall getting some of the Madge cards, which were a lot cheaper than IBM, but it wasn't long before it got replaced with Novell NE2000s and later, NE2000 clones (one of the more dubious of which set the MAC address via DIP switches, repeating the same 8 switch bits across multiple bytes!)
I remember the MAUs. Never had to use the reset plug. My detox prescription included Racor and SMC. There was no turning back.
400Gbps currently being deployed and 800Gbps being standardised with router line cards already supporting it. I remember building 64kbps frame-relay circuits for customers (or 128k for the posh ones!) what seemed like not all that long ago. I feel old :(
I remember when the county I used to live in had a T1 for the entirety of the county.. home cable is 10x faster now! No more driving to the next county to park my laptop at a friend's house for a 3-day download of a CD; now it takes 15 seconds..
I don't know anyone who has got their hands on 400Gbe kit yet, although someone else in the comments mentioned Arista has started shipping switches now. I'm quite surprised they have been able to go from announcing to shipping so quickly given everything that's going on with the supply chain. I had assumed Q1 next year at the earliest.
@@petevenuti7355 we were an internet provider and we had 34 MBit/s in the end. Cost some 160k per month.
@@EVPaddy I saved a bill for nostalgia, over 5k for a T1... A three hop (17 mile total) wireless link just to get to the endpoint..
Never took off, went broke fast...
But in the end I just had 4 of my own dialup lines bonded together for backhaul for myself and one wireless customer , outsourced all but a local # to a virtual ISP service and even used their radius servers for my local...
Held on at breakeven for a few years like that..
Interest didn't start to pickup till the local Telco upgrade and they got DSL, my wireless was better then their DSL... believe it or not. Then cable and a fiber provider came through so I gave up...
My main customers were AOL users that switched to their bring your own isp plan , as AOL didn't have a local number. For wireless I was using DEC, later Lucent 915mhz equipment, (it was before 802.11b) , 2.4 gig wouldn't ever have worked around here...
When cable internet came there was no way to compete, and there was no such thing as long distance phone service anymore either...
@@EVPaddy what bothered me the most was that customer interest picked up because of DSL, a commercial that just said to get it , just a black screen and the letters flew across, never said what it was or anything, just said to get it, and people did...
I gave people too much credit..
I went to a business plan competition while seeking funding; the winner's pitch read like a perfume ad, you couldn't tell what they were selling. I should have known....
If I put the same money in the stock market I'd be a millionaire now instead of having $20 and a few bits of legacy equipment that didn't make it to the scrap metal yard with the 700kg that did... (I didn't even get to scrap my own towers, I never found who did)... I think my old dialup Pop is still in the crawlspace of a foreclosed house...
Years before token ring, there was another ring technology introduced in the mid-70s, the Cambridge ring. Developed at the University of Cambridge, this became widely used in UK higher education, but I don't think it had much commercial success outside this sector. It was an example of a "slotted ring": there was a fixed number of packets continually circulating around the ring, put there by a master station, and each packet had one bit in the header which flagged whether this "slot" was occupied or not. A station which wanted to transmit waited until it received a packet whose flag indicated that it was empty, then stuck its payload in the packet, flipped that flag, and passed the packet on down the ring. The only two vendors I can remember were Camtec and SEEL.
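The slotted-ring behaviour described above is simple enough to sketch as a toy model. This is a hypothetical illustration, assuming exactly the mechanism in the comment (a fixed set of circulating slots, each with an occupied flag that a station flips to claim it); all the names here are made up:

```python
# Toy model of a Cambridge-style slotted ring: a fixed number of slots
# circulates, and a station may only fill a slot whose flag says "empty".

class Slot:
    def __init__(self):
        self.occupied = False   # the one-bit header flag from the comment
        self.payload = None

class SlottedRing:
    def __init__(self, n_slots):
        self.slots = [Slot() for _ in range(n_slots)]
        self.pos = 0            # index of the slot currently passing "our" station

    def advance(self):
        """One slot goes past the station and the next arrives."""
        self.pos = (self.pos + 1) % len(self.slots)

    def try_send(self, payload):
        """Transmit only if the passing slot is empty; flip the flag to claim it."""
        slot = self.slots[self.pos]
        if not slot.occupied:
            slot.occupied = True
            slot.payload = payload
            return True
        return False            # slot full: wait for the next one to come around

ring = SlottedRing(n_slots=4)
assert ring.try_send("hello")       # first slot is free, claimed
assert not ring.try_send("again")   # same slot is now occupied
ring.advance()
assert ring.try_send("again")       # next slot is free
```

A real ring would also have the master station marking slots empty again after delivery; this sketch only shows the claim-an-empty-slot step.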
At 1:18 you mention "one thin piece of coax" used for ethernet. We used to call this "thinwire ethernet", formally "10base2", but it wasn't ethernet's first form. Before 10base2 there was 10base5, and boy was that horrible! The expensive coax cable was nearly 10 mm in diameter, and difficult to bend. Also, you had to be careful not to bend it too tightly, because that would cause signal reflections, killing performance. To attach to it, you literally drilled into the cable, and attached a "vampire tap", which had a spike which reached the solid core. It was dreadful stuff to work with, so the arrival of thinwire ethernet was very welcome, even though anyone could break a thinwire network by disconnecting the cable from the T-piece at the back of their workstation.
As someone who once installed 10baseT (the other name for it), can confirm it was not fun to work with.
@@truckerallikatuk 10baseT is ethernet over twisted pair, nothing to do with coax.
I talked to a few people who were involved in the Cambridge ring when I was working on my video about Econet. I must admit I did not mention 10base5 as it was a video on token ring, and it was feeling a little ethernet heavy; also not many people ever experienced thick coax. The only time I ever did was ripping a section of it out to replace it with fiber.
It was called a beesting tap btw
@@davidrobertson5700 well, our MAU vendors always called them vampire taps, and there's an entry under that name in wikipedia
The advantage of that IBM Token Ring connector was that it would short across if it was unplugged so that you didn't lose the entire ring network if you didn't have both ends connected at every network location. Without that connector if somebody removed a computer and didn't connect the in and out ports the ring was broken and everybody lost the network.
One other advantage the Ethernet gained that helped it was the ability to use the thin "cheapernet" cabling. Originally you had to use a thick cable with a tapped drop line for each network location.
The other nice thing was that all the connectors were the same - so if you wanted a long Type 1 token ring cable you could just plug two shorter ones together rather than having to use special adapters. Pity they were about the size of a small planet.
I assume that also means any transition at the node, connect/disconnect/power down, would temporarily trash the entire network.
I don't think it was the connector doing that. Making sure the ring was complete as stations were added or removed was what the MAU was for. When you turned the PC on and once the TR card drivers were loaded then the TR card would send a voltage to the MAU that triggered it to switch some relay contacts and add the new station to the ring. When the PC got turned off, or was disconnected, then the voltage on the MAU port disappeared so the MAU's relay contacts reset to bypass the now disused port.
@@jrstf Very temporarily, yes. There were mechanisms in Token Ring to detect this kind of issue and to automatically recover but it did take a finite amount of time. It usually wasn't a problem though.
@@1anwrang13r Yeah - there was also a little accessory ("Setup aid", IIRC) that you plugged into the MAU ports to check if the relays were operating correctly.
50-100 plus coax ethernets worked surprisingly well at scene parties.
sure the sparks when hooking up were spectacular but it still managed to move warez alright!
I loved setting up those old BNC coax cable networks at LAN parties. But you really had to trust the guys at both ends of the cable not to be sore losers and twist the terminators off to ruin the game for everyone else.
@@pfefferle74 I HATED 10Base-2 the amount of time I spent crawling around on the floor of computer labs to try to figure out which of the BNCs was causing issues because a student had kicked it under the table. it was a bloody nightmare.
don't forget your 50 ohm resistors
@@pfefferle74 had that happen while playing quake back in the 90s. A friend pulled the terminator off his end of the cable at the other side of the house. Took us about 30 minutes to figure out what had gone wrong.
Ah yes. The times before grounding pcs were a hoot.
When Token Ring was the Betamax of Networking Token Bus ( 802.4) was the Video 2000 of Networking
Nice video 2000 reference there.
Having installed Novell Netware LANs on Token-Ring, Ethernet and Arcnet back in the day, here is what we used to say about Token-Ring...
"One ring to rule them all, one ring to find them, One ring to bring them all, and in the darkness bind them; In the Land of IBM where the Mainframes lie."
Token-Ring? Nah, _Tolkien-Ring!_
@@TH3C001 Funny, I worked at Mod-Tap at the time and we used to make application notes with a full list of the application and parts to put Token Ring over unshielded twisted pair cabling; we had the Type-1A connector to UTP, the DB-9 to UTP, etc. On the cover of that application note was a cartoon with almost that statement. You brought me back to a time when we smoked in the office.
My first job here at the University of Football was doing workstation support. Our office had mostly IBM PCs and PS/2s (Ah, Microchannel, we hardly knew ye!). DOS 6.22 was the joy of my existence for the flexibility it gave in stuffing device drivers into odd nooks and crannies of unused RAM. My finest hour was getting a token ring-attached PC to talk TCP/IP, Netware, and AS/400 PC Support (basically SNA with some extra tweaks) running all at the same time on a PC running DOS 6.22 and Windows for Workgroups 3.11. devicehigh and loadhigh are your FRIENDS! :D
@@popquizzz I understand literally zero parts of that comment having only been born in '94 lol, but I'm glad to have been clever enough to have brought that memory back for you.
Typically complex and expensive IBM technology. Their research labs came up with many interesting ideas, but very few of these seemed to make it into their products.
I think you oversimplified the CSMA/CD collision detection slightly on the coaxial cable. It's such a crazy simple and interesting design that it deserves some appreciation, I think. It's not about sensing that what's on the cable isn't the data you sent; it's much more interesting than that. 10Base ethernet signal is Manchester coded, so it's very clearly an AC signal, but when there's another transmitter on the same wire, the interference causes drift on the DC component and the receiver detects this out-of-normal voltage (yes, voltage) on the bus. This also sets the minimum ethernet packet length: each ethernet packet transmitted on a 10Base5 cable must be long enough that it occupies twice the whole length of the wire when transmitted from the far end. This ensures that every node on the bus sees the collision and there's no ambiguity where some nodes would consider that a packet was sent correctly and others would consider it to be a collision. This was later emphasized by requiring each node to signal a collision on the bus when it sees one. This guarantees that the packet is crapified everywhere and a retransmission is then expected and attempted. CSMA/CD ethernet is a fascinating system - no acknowledgements, no arbitration, no flow control, just conditioning of electrical waveforms. So very 1970's.
"This also sets the minimum ethernet packet length: each ethernet packet transmitted on a 10Base5 cable must be long enough that it occupies twice the whole length of the wire when transmitted from the far end."
That was not just done for collision detection but also for detection of reflections -- it ensures that a reflected packet results in a garbled packet and thus a collision detected. If the packet was shorter, it would be transmitted and received successfully, and then reflected and received twice, or repeated even more times.
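The "frame must occupy twice the wire" point can be made concrete with a rough back-of-the-envelope calculation. The figures below (2500 m maximum span, ~0.66c propagation in coax) are commonly quoted approximations, not taken from this thread, so treat them as illustrative assumptions:

```python
# Why classic 10 Mbit/s Ethernet needs a minimum frame: the sender must
# still be transmitting when a collision from the far end gets back to it.

BITRATE = 10e6        # bits/s on 10Base5
MAX_SPAN_M = 2500     # max end-to-end span: 5 segments of 500 m (assumed)
PROP_SPEED = 2e8      # signal speed in coax, roughly 0.66c, in m/s

one_way_s = MAX_SPAN_M / PROP_SPEED          # worst-case one-way delay
round_trip_bits = 2 * one_way_s * BITRATE    # bits sent during a full round trip

print(f"round-trip delay: {2 * one_way_s * 1e6:.1f} us")     # 25.0 us
print(f"bits in flight for a round trip: {round_trip_bits:.0f}")  # 250

# The standard's 512-bit slot time (a 64-byte minimum frame) comfortably
# covers this, with headroom for repeater and transceiver delays.
SLOT_TIME_BITS = 512
assert round_trip_bits < SLOT_TIME_BITS
```

So the cable-length reasoning in the comment above lands, with margin, on the familiar 64-byte minimum frame.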
And yet you make no mention of the Aloha islands that used this technology first.
@@Madmark50484 You mean Alohanet. But Alohanet is not CSMA/CD. In fact it has nothing of it, Alohanet nodes just transmitted data whenever they wanted. But experiences with Alohanet showed that you get terrible channel capacity with just dumb shouting, giving impetus to developing carrier sense first to get CSMA (Carrier Sense Multiple Access) and then collision detection to get CSMA/CD and that's Ethernet then.
@@nmosfet5797 may be a little memory fade in action. It was explained to me the communication method for the Aloha Islands in my A+ class of 2006.
I’m going to check the books that accompanied the class but I appreciate the explanation.
I didn’t know about the DC factor in interference detection. I thought it was just a matter of listening to your own signal, and if what you heard was not what you were sending, then that meant somebody else was interfering.
Apple invented an even cheaper system, called “LocalTalk”, which was classified as “CSMA/CA”. Instead of detecting collisions, it simply transmitted and hoped for the best. The “CA” (“Collision Avoidance”) part was in proactively waiting some random time after you heard someone else transmit (rather than diving in immediately as with CSMA/CD), to try to minimize the chance of a collision.
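The avoidance-versus-detection difference described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model (the slot-time and deferral numbers are invented for illustration, not LocalTalk's actual timings):

```python
# CSMA/CD transmits the moment the medium goes idle and relies on
# detecting collisions; CSMA/CA instead defers for a random interval
# after the medium goes idle, making simultaneous starts less likely.

import random

def csma_ca_deferral(slot_time_us=100.0, max_slots=8, rng=random.random):
    """Random deferral (in microseconds) after the carrier drops."""
    return rng() * max_slots * slot_time_us

# Two stations that both saw the channel go idle will usually pick
# different deferral times, so one hears the other start first and waits.
random.seed(1)  # seeded so the example is reproducible
a = csma_ca_deferral()
b = csma_ca_deferral()
assert a != b                  # they back off by different amounts
assert 0 <= a <= 800 and 0 <= b <= 800
```

Collisions can of course still happen when the random deferrals land close together, which is why "CA" only reduces, rather than eliminates, the problem.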
I remember the thick yellow hosepipe cable and vampire taps, a whole computer room running on 10Mbit - and another technology which ethernet also destroyed, 100Mbit FDDI (which was also a ring, actually a double ring with a redundant reverse ring allowing it to use foldback if the line was broken).
Somewhere I have a 10Base-5 patch lead you could bludgeon somebody to death with ;)
@@SomeMorganSomewhere I remember when we renovated the old Physics building at UF in the early 2000s, and found a couple of labs that were still on the building's ancient original thicknet. Pulled about a quarter mile of RJ8 out of the ceilings, with a slew of vampire taps clinging on for dear life. I think that was the last hurrah of coax ethernet on campus..
@@mrz80 Thankfully they ripped it out before I got there but where I work now I have tons of spare coax networking equipment. I actually have enough supplies to boot up a network assuming I could find PCs that have ISA interfaces lol. I have hundreds (!) of terminators I've just collected from random places I've been (fixing cabling/etc).
I was in hospital during a power failure. Bedside TV and phones shut down. When power returned, every station tried to network reboot simultaneously, which took more than an hour.
Nice video and good memories! Among the first enterprise network devices I was honoured to work on were Xylan OmniSwitch or OmniSwitch/Router which had a "any-to-any switching" capability. You could have Ethernet, Token Ring, FDDI or even ATM/FrameRelay in such devices (each on their own line cards, of course). IBM became a reseller of Xylan devices and even branded them, probably up to the point when Alcatel purchased Xylan (along with Packet Engines a few months before) to form the Network Business Division that still lives on today in Alcatel-Lucent Enterprise (which I'm an employee of).
The OmniSwitch was a very nice bit of kit back then. My only use of FDDI was as a MAN technology in the mid 90s. Yorkshire Cable used it to deliver data to their cabinets. We had the equivalent of one of their cabinets in a rack in our office, bringing in our leased lines and ISDN PRI lines.
Or when Lucent bought Cascade, we used a TON of their FR and ATM switches at UUNET back in the day..
@@Routerninja after the merger of Alcatel and Lucent there was a lot of duplicate (or legacy/obsolete) technology around. Lucent brought also Bell Labs to the "merger of equals" (Alcatel and Lucent) with a lot of history and significance. Alcatel brought another big player, called Timetra, which is also called "IPD". Alcatel-Lucent (including Bell Labs) later got integrated with Nokia. You can still find many of the products/solutions today in Nokia, obviously upgraded and often leading the market (I have to mention Nokia IPDs FlexPath 5 (FP5) which is a very impressive Networking chipset). The previously mentioned OmniSwitch product lines have been updated under the brand of Alcatel-Lucent Enterprise and range from Access to Core network including Wireless, but are exclusively addressing Ethernet today. :)
I was not aware that Lucent still exist. Anyway, good for them. I designed a regional hq back in the day, so still have fond memories of them.
Just realised BITD = 24 years ago 😵💫
@@Routerninja Oh wow.. Cascade is a name I haven't heard in a long while! I feel like all of us commenting on this video are old people who should retiring and yelling "get off my yard" by now.. :-)
Unless I missed it, there is one important note about the "Type-1" bulky connectors that is worth mentioning. Most of their design was around the fact that because the "loop" had to be maintained, the jacks physically shorted the IN to the OUT when the cable was unplugged. This meant that when you pulled the cable from the wall, the wall jack would just passively let the token go through to the next node. One of the worst things you could do would be to unplug the DB9 from the back of the PC, because the token ring would be severed at that point. This was quickly fixed once we started using the token ring switches, but for a while, this could be a problem if my memory serves..
At some point one of the offices I worked for bought a couple of IBM 8250 concentrators to replace a half a rack full of 8228 MAUs. The 8250 was a pretty neat bit of kit, while it lasted. :)
Who else finds the music distracting from the great content?
I'm deaf and use the captions. Sometimes a disability is an advantage.
I worked for IBM from the mid-80’s in the team that installed and maintained the internal IBM networks in Europe. IBM had sites with 1000+ nodes on token ring, mostly bridged networks; some of these rings were huge, cabinets full of MAUs clicking like crazy!
We went on to install ATM backbone routers with tr and Ethernet cards, and intelligent token ring hubs with 100mb uplinks, before swapping the TR hubs out for ethernet, and using ethernet/TR bridges to keep the old TR devices alive for many years, into the 2000’s. That MAU/ATM/TR/Ethernet upgrade program took me all over Europe.
It's a good thing Ethernet "won" that non-existing race, because switched networks make CSMA/CD completely unnecessary, while Token Ring would have been quite the liability in the long-run. Frame queues are just a lot more efficient in pushing through data, and even allow for QoS considerations.
You could switch, and queue, Token Ring just as easily as you could switch Ethernet. Each port on the Token Ring switch was effectively its own ring. One benefit of Token Ring over Ethernet was that the maximum frame size was 16KB compared to Ethernet's 1.5KB.
@@1anwrang13r I'm glad you said that, because the OP's statement could easily be argued the other way.
@@1anwrang13r Ethernet is arguably still easier to parse and forward. You're right that it wouldn't have been a problem if Token Ring was also switched.
Frame size I agree, and still not there with all equipment supporting jumbo frames. Although the small frame size might have made switches more affordable. In a typical application that isn't 10G+ though, you'll see that average frame size is well below 1.5K either way, and I rather have switches optimized for small frame throughput.
@@graealex Growth in IPv6 rollout will be the driver of larger frame sizes, as almost all exchanges have agreed on a minimum frame size of 9k for IPv6. The primary driver is that larger frames mean less overhead per byte moved, which at the ISP and exchange level is fairly important.
@@RetroBytesUK Again, this is only relevant for max throughput. But average packet size remains small, and the way modern webpages are structured, there is a tendency for even more and smaller packets, instead of a stream of very large ones.
Not saying that agreeing to a larger guaranteed MTU is a bad thing, in this day and age.
Oh, and Internet routers supporting a minimum MTU is different from Ethernet equipment supporting certain frame sizes, although they are obviously related. Internet exchanges don't necessarily agree on "frame" size, but packet sizes. Although in many cases, Ethernet is used at exchanges.
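The "less overhead per byte moved" point in this exchange is easy to quantify. Using the usual per-frame fixed costs on Ethernet (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap, quoted here from memory rather than from this thread):

```python
# Wire efficiency of Ethernet for a given MTU: the ~38 byte-times of
# per-frame overhead are fixed, so bigger frames amortise them better.

PER_FRAME_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + IFG, in bytes

def efficiency(mtu):
    """Fraction of wire time carrying payload for back-to-back full frames."""
    return mtu / (mtu + PER_FRAME_OVERHEAD)

print(f"1500-byte MTU: {efficiency(1500):.2%}")   # about 97.5%
print(f"9000-byte MTU: {efficiency(9000):.2%}")   # about 99.6%
assert efficiency(9000) > efficiency(1500)
```

Which supports both sides of the thread: jumbo frames only buy you a couple of percent at full load, but at exchange-scale traffic volumes a couple of percent is real capacity.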
Loved 16 Mbs Token Ring - used it for performance testing at IBM. Repeatability and reliability was key
Back in the 90s, on a Novell over Token Ring site I worked at, the Financial Director managed to take out the whole site's networking (other than production's PDP 11 network) by repeatedly pulling the PCMCIA networking card to take his Thinkpad over to his PA, while yelling at us about the network going out. He plugged back in, and the problem just stopped.
There was a lot of competition in the department as to who was going to tell him that he's the root cause. Alas, it wasn't me.
Let's try to remember that Betamax was not the disaster it's made out to be. Apart from selling millions of domestic machines, Betamax morphed into studio formats Betacam, BetacamSP, Digital Betacam, HDCAM and more, which were installed in every TV studio in the world. Machines selling for seriously huge amounts of money, many using tapes which are mechanically almost identical to domestic Beta. Professional versions of VHS such as MII and D9 were all failures. Sony made more money from Beta than anyone ever did from VHS. Not such a flop after all. I do remember Token Ring on some old HP workstations back in the day.
I think that's why I liked the comparison between token ring and betamax. Both successful in their nitch, both considered unsuccessful in the wider market.
@@RetroBytesUK They are similar in that perspective, but that doesn't make them superior to the solutions which won out in the wider market. They both had characteristics which made them superior for those niches but others that made them inferior for the wider market. Betamax won in the professional market because picture quality was king there but had other shortcomings which didn't make it the superior solutions for the wider market.
@@RetroBytesUK Niche!!!!!!!!! A nitch is something I scratch! 😀
@@tss20148 Betamax and Betacam are very different animals. But both were fundamentally proprietary to Sony whereas Ethernet and VHS were basically open-standards, so many more companies became involved driving rate of development up and costs down.
@@EuroScot2023 Oh that old story again. Only Sony made Beta and VHS was free to license. Except that the business models for VHS and Beta were essentially the same. If you wanted to build machines you paid the license. Domestic Beta (Betamax) machines were made by various companies including Sanyo (who made machines in vast numbers) and Toshiba, as well as smaller players such as NEC and Aiwa. Betacam machines were only made by Sony (some badge engineering aside) but the same was true of the VHS derived professional machines which were only made by JVC and Panasonic who were under Matsushita.
And as for Betamax and Betacam being completely different, well they were incompatible at a recording level, but they had very many connections. The linear tracks were in the same places. Betamax and Betacam Oxide tapes were interchangeable, and so too were BetacamSP and ED-Beta. The auto-stop mechanism was the same. Even some accessories had connections including interchangeable camera batteries and internal components.
One of my previous jobs had token ring for their entire administration office site (like a dozen buildings and a few hundred employees) till about the mid-2000's. Meanwhile the hundreds of other sites had ethernet and we had already upgraded many from 10 to 100 and then 100 to gbe.
One really funny part of the board's reluctance to upgrade HQ is that the PCI token ring NICs were like $800-1200 each, and we easily installed 300+ new PCs there in just a couple years; the cost of just the NICs could have more than paid for an entire gbe install. Instead they spent even more money to split rings into even smaller rings with their own ethernet backhaul. I think we ended up where a single small dept would have 1-3 rings, a whole building might have 5-10, and the intranet would still be slow AF.
My first employer had a 16Mb/s token ring network. It was superior to ethernet at the time, but it was the development of affordable switches that swung the needle towards ethernet, and 100Mb versions of it in particular. We did look at a token ring switch too, but they didn't really offer anything better and were more expensive, so eventually we moved to Ethernet. We operated an FDDI network too (joining the token ring networks together across a metropolitan area) - this was 1Gb/s and was pretty good. We even deployed FDDI interfaces on key servers so that they could sit directly on the high speed network. The key technology is switching, which resolves the collision problem that all shared access topology networks had - switching essentially gives each node a point to point connection with addressable frames - the improvements you can get now are just limited to bandwidth and latency.
I always liked FDDI. I worked at an ISP in the mid 90s and the core (SPARC) servers were on a FDDI backbone.
A decade later (in the mid-2000s) I had some FDDI in my homelab. FDDI was the only way to get 100mbit on my DEC Turbochannel systems, so as a big dork I got some DECSwitch FDDI equipment, and a Bay Networks FDDI to 100mbit Ethernet bridge to get it online. I could only handle U/D topologies and didn't have the double-port boards that could do double rings, but it was pretty fun. I miss playing with it sometimes, but don't miss the monthly power bill or noise.
FDDI was 100Mbps only - there never was 1Gb/s FDDI - it was replaced by ethernet
@@gorak9000 I think you are right - my memory from back then is fading. The fibres were repurposed to run 1GB ethernet a year or two after.
Your videos fluctuate alarmingly between the history of my childhood and my career. Christ, but I feel old.
My first job out of undergrad was with IBM working on the 1996 Atlanta Olympics. We were doing everything on AS400, OS/2 on PCs and Token Ring.
During the games I was assigned to the aquatics center. So the 2nd week of the games, I'm monitoring stuff during one of the Platform Diving events in the evening. I suddenly get a call on the radio from the scorekeeping room saying that they lost connectivity to the scoreboard, in fact between all 7 computers used to enter scoring data, run the scoreboard and other things (I forget all the functions being done). Hmm. Weird... I run to the network closet, open it and just as I look at the MAU, I get a call on the cell phone from the TOC (Operations center downtown) saying that they've lost the entire Olympics Network! I look at the MAU and it's flashing E6 E6 E6 over and over... I figure "What's the worst that can happen at this point???" I power cycle it, wait 30 seconds and get another call on the radio from the scorekeeping room saying things were working again. I grab the phone, tell the person on the line from the TOC to reboot all their MAUs network-wide.
The next morning I get in and get a phone call from the head of ACOG, the head IBM person for ACOG and my boss. They asked what happened, I told them the long version of above... They told me that I basically saved the network and thanked me! During the games, IBM sent a LOT of their employees to "help out" those of us who were working it for the prior 2+ years, but with almost no training or real assignment based on skills. Apparently one of these volunteers had plugged in a piece of network diagnostic gear into the network at the TOC that hadn't been tested on the network beforehand. It started spitting out bad data packets which hosed up the MAUs.
That was (thankfully) my last experience with Token Ring... The connectors on TR were HUGE - like 4x in each direction the size of a Ethernet RJ45 plug. The cables were also super thick and bulky.
That's a really interesting story, thanks for sharing it with us. I seem to remember seeing IBM branding all over the coverage of that year's Olympics.
@@RetroBytesUK Yes, IBM was a major sponsor and also the main Technology Integrator. Many of the team of developers I worked with were offered to continue the code and work on Barcelona, which was largely using the same setup to keep costs down.
I love the old "vintage" stories. It's amazing how delicate some of these old networks were, and yet 99.99% of the time everything worked great. Good times!
I've still got a shrinkwrapped box of OS2 4.0 that an IBM rep friend of mine gave me at the intro event. :D Ran OS/2 on one of my work PCs for a while, assessing if it was any better at talking to our AS/400 with/without Comm Damager :) vs. our cobbled pile of DOS/WfWG, IP, IPX, and SNA. Fun times.
@@mrz80 Thats cool.. I used OS/2 versions 1.2, 1.3, and then a little into 2.x.. I do remember loading 4.0 at home, and liking it, but by then, Microsoft really had established themselves, even if the product wasn't as good! I remember being so impressed with OS2 being able to handle the multitasking SO much better than anything Microsoft could offer..
A bit over 10 years ago as I was finishing up my studies, there was a "technology incubator" company in my town. They'd have cheap offices for tech start-ups. Said technology incubator had its entire building hooked up with Token Ring. It was kinda hilarious.
The B-roll with the dancers and the boombox had me LOL every time!
I have just read a 90’s PC Magazine article comparing networks and here you are with Token Ring. Love reading about the incompatible late 80’s and early 90’s protocols, apps and file formats and the expensive single purpose software solutions that were for sale to remedy this. The PC frontier was wild.
Everything was changing so rapidly then; you had no idea if you were buying something that would be wildly successful or a massive flop.
To add to the confusion you had Novell and IBM using their own proprietary terminology designed to make it harder to compare and making a network engineer trained on one system have to relearn a bunch of terms just to know what each vendor meant.
Then you get to add that a lot of things were not yet really standardized on the OS side. For example, does each window in a windowing system have its own network session, or do they share them? At one point the one-window/one-connection option was fairly popular, which allowed such things as different applications having the same drive identifier mapped to different network file shares. Since these were determined by the network driver provided by the network vendor, and not by the OS itself, it would even vary between installations using the same OS. One network vendor actually made it a configuration choice in loading the driver, so the behavior could vary between computers in the same building on the same network.
Ah, nostalgia.
@@RetroBytesUK Well, at least there was real change and advancement back then, as well as old hardware staying fully usable as long as you were satisfied with its lower specs.
Nowadays, for the last couple of decades, all we have had is change for the sake of change: software getting steadily worse and undoing decades' worth of hardware advancement, plus an artificial need for hardware upgrades just to run ever more bloated, reduced-functionality software that we are forced to use because support and compatibility for the old versions get dropped purely for the sake of dropping them.
Today's software industry is simply a disaster.
And we're going back to that age too. lol
@@TheSimoc I don’t completely agree. We are at a point where standards are in place and interoperability between systems has never been more common. No one is trying to replace TCP/IP, DNS, expansion slots, memory modules etc. Where standards change such as processor socket types are purely physical engineering which is necessary. But you can freely run your DOS machine today and browse the internet on it for the sole reason that standards are in place. You are not forced to upgrade anything.
Any time someone mentioned a "ring" in the context of a network to me I always just automatically thought of a ring topology, and since it rarely came up I never really thought much of it. Until I was assisting our head developer with a problem and while chatting I somehow got him on the whole tangent of Token Ring and how it was so much better. This same gentleman worked at IBM for like 18 years right out of Uni so I guess I now understand why he had an even more vested interest in the tech. Very interesting stuff regardless, though.
I had totally forgotten about Token Ring. The last time I used that was back in '91, when we were implementing a production control system for a highly automated injection molding factory. The system was running on a cluster of about a dozen 386 machines with IBM ARCTIC cards, using OS/2. These machines were connected to each other, and to an IBM mainframe, using 16Mbps Token Ring, even using optical fibre for some of the longer connections (IIRC). The ARCTIC cards were basically 80186 machines on a card, with 512KB of memory and 8 serial ports, running a proprietary real-time OS. These cards were handling the communication to the machines on the plant floor, while the software running on OS/2 implemented the business logic.
Man, Sega really did go all in on their Daytona USA arcade boards. They not only used token-ring networks, but token-ring networks using TOSLINK cables.
What seems like a lifetime ago in the early 1990s, I encountered a token ring network at an insurance company I worked at. A short time later, they upgraded to Ethernet and a Novell network when their IBM mainframe was outsourced, and that operation was terminated shortly afterwards. Around the same time, I also worked with a thin-net network. My family's graphics business went from a Varityper Epics 20/20 typesetter to desktop publishing which output to an imagesetter. The RIP (Raster Image Processor) PC utilized a 3c509 NIC with a BNC connector and I configured their thin-net network. This was slow but reliable, until a building tenant decided to put a file cabinet on top of a network cable in the shared office space. That crushed the cable and brought the whole network down!
Then of course nearly a decade later in 2000, I configured a 10/100 Ethernet network with those really reliable 3Com 3300 24-port switches. My manager allowed me to take some home and one of the 3300s is still in operation today, more than two decades later. The 100Base-T network may be a bit slow but for the few of us on the network, it suffices for watching videos, printing and file sharing.
It's amazing how many Ethernet 10Base2 networks were taken out by a desk, cabinet, or chair damaging a cable.
3c509 was such a groundbreaking card.. no more jumpers!! LOVED those cards.. especially since I got to take home some of the 3c501 and 3c503 cards they replaced to use in my own network! :-)
@@SteveJones172pilot They were great to work with. The software I used them with though was awful. I had to edit a bindings file and put their interrupts into the file so the software could talk to the card. When everything was loaded, there was something like 320K left of RAM on the system.
I did have a 3c503 and upgraded that to the 3c509.
Speaking of networks... I worked briefly with thick-net also. The company was installing a network for some imaging equipment and the imagers used thick-net and not ethernet or thin-net. With thick-net, you terminate both ends of the network then connect the various equipment via "vampire taps". These things actually drill through the coaxial cable and into the metal core with cables connecting from the taps to the devices.
@@Clavichordist Oh wow.. yeah, thick net I saw, but never had to work with.. That was a whole different level of inconvenient! :-) I used plenty of AUI cables, but thankfully all the transceivers were either 10Base-2 or 10Base-T 🙂
@@SteveJones172pilot It sure was inconvenient to work with. In addition to the vampire tap system, there was a bending radius you had to watch out for. Unlike other cabling such as CAT-5 or even thin-net, thick-net couldn't be bent too tightly otherwise it did something inside the cable and that would degrade the signal. It was fun to work with and I'm glad my encounter was very brief. I did run into it once but from an "Oh I know what that is" point of view when the network tech I hired was inspecting the old wiring closet.
Nice video, thank you! I also had a small TR network at home, as I was given some used TR hardware for free in the late 90s :-) But as I bought a PowerBook G4 and I couldn't use my PCMCIA TR card with it (no drivers for OSX...), I also switched to Ethernet.
Btw, FDDI also used the Token passing technology, AFAIR. That could be a followup video? :-)
So many years spent wiring up banks & insurance companies with Token Ring. 😄
When you showed the MAU unit, my first thought was: Wow, IBM could have invented the switch right there and then! They could have moved token management responsibility from the nodes to the MAU, and be done with lost tokens and elections. The implementation could also be backwards compatible: just pass a token on each port, so cards that still thought they were on an actual shared circuit would continue to function. Newer cards would be implemented to negotiate with the MAU to do away with the token altogether. At the most extreme, the MAU could bridge Token Ring and Ethernet seamlessly, allowing for a transparent heterogeneous network, and the nodes would be none the wiser, while network engineers would be able to migrate from one technology to the other at their own leisure.
Hindsight and all that :P
Eventually that was a thing for token ring, but far too late in the day.
The problem IBM had was that they didn't want to create something that was cheaper and would take business away from one of their existing EXPENSIVE/PROFITABLE products. So somebody else came along and ate their lunch.
You never could really bridge token ring and ethernet "seamlessly" because of the MTU mismatch problem. Token ring ran with a 2k or 4k (or 16k on 16Mbps) MTU, while Ethernet's was 1500 bytes. If you had a brain-dead DOS or Windows IP stack that wouldn't allow on-the-fly MTU negotiation, or that pegged the "DO NOT FRAGMENT" bit, then any traffic that crossed from token ring to ethernet would get broken. Mixed token ring/ethernet environments that didn't put a *router* between the topologies were destined to be a bleeding-ulcer generator for us poor network engineers. :D
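That MTU mismatch is easy to demonstrate. Here's a minimal sketch (my own illustration in Python, not any real bridge's or router's code) of the IPv4 forwarding rule that caused the grief: a full-size Token Ring packet with the Don't Fragment bit set simply cannot cross onto a 1500-byte Ethernet segment.

```python
# Illustrative constants - 4464 bytes was a common default MTU on 4 Mbps rings.
TOKEN_RING_4MBPS_MTU = 4464
ETHERNET_MTU = 1500

def forward_decision(packet_len: int, out_mtu: int, df_bit: bool) -> str:
    """What an IPv4 router does with a packet bound for a smaller-MTU link."""
    if packet_len <= out_mtu:
        return "forward"                           # fits as-is
    if df_bit:
        return "drop + ICMP Fragmentation Needed"  # sender must shrink the packet
    return "fragment"                              # split into <= out_mtu pieces

# A ring-sized packet with DF set dies at the Ethernet boundary...
print(forward_decision(4000, ETHERNET_MTU, df_bit=True))
# ...while the same packet without DF just gets fragmented:
print(forward_decision(4000, ETHERNET_MTU, df_bit=False))
```

A stack that ignored the ICMP "Fragmentation Needed" message, or never retried with smaller packets, would just see its large transfers silently die - hence the router recommendation.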
I am glad you showed those terrible early Token ring cables - not only were the connectors huge, and always right-angled, but the cable was stiff and springy with a lousy bend radius!
Heh, I worked at IBM at the time, but for my home network I never moved beyond 16Mbit token ring, and indeed went straight to 100Mbit Ethernet after that.
The real game changer in this was affordable ethernet switches. Once those existed, there were few reasons left for token ring.
Awesome video, thx for bringing back memories. Additional note: the original TR connectors are one of the very few hermaphroditic connectors, which allows every cable to be used as an extension. Also, iirc, some of the contacts are meant to switch on a relay to break up the ring for pretty dumb MAU designs.
My first job in 1996 was on a TR network with an AS/400 and a Novell Netware server- I remember endless piles of 50 ohm coax cable being everywhere.
If it was using 50ohm coax then that was more likely Ethernet than Token Ring. Token Ring used twisted pair.
You missed IEEE 802.4, Token Bus. Token Bus worked on a standard ethernet-style bus backbone but created a logical ring. Token Bus solved the problems of token ring, and did not require a physical ring. The reason it did not get used was that it came out in 1989, when the ethernet people were solving the large network problem in a way that was less expensive than token bus, with the advent of the bridge and the later switch.
You've hit the nail on the head there, Token Bus never became relevant due to bad timing. Well, outside of General Motors. It's an area I decided to keep out of the video for the sake of brevity.
@@RetroBytesUK The only reason I bring it up is because I was one of the engineers that worked on the Siemens token bus MAC chip. We got all the way to production ready silicon, including installed microcode, in the Fall of 1989 when the project was canceled. I worked at what was then Siemens Semiconductor in Santa Clara. A small group in Munich had also just completed a working phy.
We were able to prove our implementation in a physical network before fabricating silicon by using a Daisy Megalogician in 'reverse', taking our logical simulation out to the physical world, plugging it into the ISA PC network card we had already built. It only operated at 1 bps, but that was enough to do all of the tasks needed to verify the design to the 802.4 spec before going to fab. Those were fun times.
@@letstrytouserealscienceoka3564 That must have been a really interesting project to work on, I had no idea Siemens had got that far along.
@@RetroBytesUK We were production ready, with final microcode in the MAC and a phy that passed specs in mid September 1989. We had engineering samples of both chips and built a working 10Mbps network with four or five nodes. All told, the MAC had 50 person years in it. Siemens Munich cancelled the project and laid off all of the American workers right at the start of the tech crash of 1989. I did not find work again until mid 1992, at TDK Semiconductor (formerly Silicon Systems) in Nevada City, CA.
@@letstrytouserealscienceoka3564 It must have been more than a little upsetting to have done all that work and come so far, only for them to just cancel it. Glad to hear you did eventually end up staying in the industry; I'm guessing that was not an easy 2-year period.
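For anyone curious how 802.4 made a ring without a ring: every station on the bus knows a successor address, and the token frame is simply addressed to that successor, rotating through the stations in descending address order and wrapping around. A rough sketch (my own simplification, not the actual 802.4 state machine):

```python
def successor(addresses: list[int], me: int) -> int:
    """Next station to get the token: the highest address below mine,
    wrapping around to the overall highest when I'm the lowest."""
    lower = [a for a in addresses if a < me]
    return max(lower) if lower else max(addresses)

# Four stations on one shared bus; the token visits each in turn,
# forming a logical ring with no physical ring wiring at all.
stations = [10, 30, 50, 70]
order = [70]
for _ in range(len(stations) - 1):
    order.append(successor(stations, order[-1]))
print(order)  # -> [70, 50, 30, 10]
```

The real protocol adds machinery for stations joining and leaving the logical ring and for lost-token recovery, but the rotation itself is just addressed frames on the shared medium.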
Migrated a few companies from TR to Ethernet in 1999/2000. Once the first NetWare server went in, the TR usually came out soon after. Doesn't feel that long ago and now I feel old. Madge branded hardware always made me think of Neighbours with Madge and Harold Bishop 😂
I started my IT career around 1997, and 10base2 using coax and bnc connectors was on its way out. Once I really got into it around 2001, 100mb Ethernet with 1Gb interconnects was really the standard.
I suppose I'm surprised that apart from specific use cases, we've not seen too many advancements in speed, particularly in Soho environments, where 1Gb to the desk and maybe 10Gb uplinks seem to be the norm. In fact, we're moving away from wires to the end point and switching to wifi.
I think a big part is people like the convenience of wifi, and were willing to accept lower speeds as the trade-off. That, and all the faster-than-1Gb Ethernet standards have a very short max distance over copper; they basically all need fiber to the desk to work, and re-wiring is cost prohibitive. In the DC 10/40/100Gb are now fairly common; what you select is around the bandwidth required and the port-density/cost/space/power for the switching to support that bandwidth.
All wifi is CSMA/CA, with the contention drawbacks that entails. And until recently 1G was faster than any disk a typical consumer would have around. Also the first gen of 10G was finicky and would drop to 1G if the wires weren't perfect. The new gen of multi-gig fixes a lot of that but it's probably a case of too little too late. Relatively cheap MIMO routers are decent at reducing carrier interference in a moderately crowded environment.
True, but you shouldn't forget, portables (and even more so laptops) were prohibitively expensive for a long time. So now that laptops, with even great batteries, are the standard, the wish of using them mobile comes true. And yeah, it's shared media, but with enough access points, roaming and even 802.11n, the throughput is enough for them to work. Else, the docking station is cabled...
Awesome review! I had a customer in 1999 who had mixed 4 and 16, ran HP-UX and AIX servers (I was reviewing their Y2K Lotus Notes system) and a global Novell NDS. So cool at the time! Only time I got in contact with TokenRing. I remember Byte Magazine in 90s ran an article with the calculations for ring size. I look for it in my collection….
Betamax is apt. By the late 90s, VHS had continued to evolve and absorb new improvements. If you compared Betamax to VHS by that time, VHS would often come out looking better.
Likewise, when Ethernet had switches, it took away its biggest problem, and there was no reason to bother with Token Ring anymore.
This was an excellent video! I used to teach token ring many many moons ago and this really took me back. Just fun stuff, thanks for taking the time to put it out there
It turns out there are far more people interested in this than I thought there would be.
I worked with Token Ring at the end of the 80’s, we used a system from Nine Tiles on the British Rail IECC signalling system. Many jokes about the token falling out when the quite fragile single core wires broke in our test and development environment…
Scary thought, I think I have a Token Ring PCI card in a box still somewhere.
Fun fact, up to the early 2000s, Unichem used Token Ring. A common theme amongst companies that had suckled on the IBM AS/400 and S36 offerings and rolled on with them.
When you have to set up SNA on AIX over token ring, over a leased line to an AS/400, you know you have landed in hell.
We had a hardware router from IBM that handled bridging SNA on to our token ring network. Being able to emulate 3270 terminals on a PC with just a regular LAN connection was a big leap forward for us at the time.
@@RetroBytesUK Speaking of 3270 terminal emulation on a PC, that sure did push OS/2 sales, that emulation was solid.
Having worked mostly up to that stage on Honeywell and ICL mainframes and terminals of the day (Newbury, iirc, did well there), and flavours of Unix, the first time I encountered Token Ring was in the year 2000 at a major company with a single ISDN channel for its Internet. With me at the time on duplex 10MB and bonded 128K dual-channel ISDN internet (some hooky 0800 dialup thing that sold shares for lifetime access - I found if you bought 2 (£24 outlay) you could do bonded ISDN on an 0800 number - yay those early times), it felt like I was working in a museum in places.
But that was my first real and last encounter with Token Ring; within a couple of years it went to all Ethernet, latest and greatest FOTM Cisco of the time. But darn, the price of Token Ring adapters compared to the built-in Ethernet in business desktop PCs back then. That alone justified the move.
Also, remember MADGE as I knew the chap who was the authorised reseller, Peter can't recall the surname offhand. He did well in the Token Ring days I do believe.
@@PaulGrayUK My first encounter with Token Ring was when I was working for ICL. Every site in the company ran it. The last time I encountered it was in the late 2000s, as Maersk group was still using it in a number of their offices and warehouses. They were still heavily mainframe based, and also were still running Lotus Notes on a collection of OS/2 servers. It really did feel like working in a museum.
@@RetroBytesUK I spent some time at the Feltham ICL office, though was contracted to work on the Videoconferencing side, those ICL out dials, were wonderful.
@@PaulGrayUK LOL 3270 emulation HAD to be solid on OS/2! ALL IBM mainframe customers had green screen CICS, VM, or IMS apps, and all of their users HAD to have 3270 capability on their shiny new (and hideously expensive) PS/2s.
AND, apart from Lotus SmartSuite for OS/2, users couldn't do much else on OS/2 except run any corporate apps the company developed for OS/2 themselves... and there were frankly NEVER very many of those.
At one point back in the late 1980s, IBM came out (pre-OS/2) with a machine called the "3270 PC". I had one at work... it was a modified PC AT with a 3270 ISA card in it, and it gave the user 4 3270 terminals they could switch between, plus DOS on their desktop... and I think a couple of scratchpad sessions you could switch to as well. No windows tho... you switched to a session using some hotkey, and it took up the entire display.
It was a LONG time ago and it sounds crazy, but it's true! In fact, I worked with a LOT of crazy products at IBM back then... including an absolute FRANKENSTEIN of a product in the early 1990s called "OS/2 For Windows". It was intended to run Windows under OS/2 somehow in an address space or something, to enable users to have both OSes on their PS/2. I can't even BEGIN to describe that absolute monstrosity! 🤕
Fantastic video that took me back in time - Hermaphrodite connectors, FDDI, Cambridge Slotted Ring,...
Surprised you didn't also bring up the long running court cases over royalties with Mr Olaf Soderblom, helping to keep innovators out and costs high.
Thank you!
I decided to avoid going into all the royalty stuff, as I try to keep these things to 20-25 mins. There are a fair few things to talk about in that area; Ethernet also has some battles on that front too. So it felt like a bit too much of a time hole. Although Olaf Soderblom seems like a really interesting guy, who apparently flies Hawker Hunters, which is an aircraft my Dad worked on.
WOW! I haven't heard "hermaphroditic connectors" in a long time. I mentioned them to someone once years ago. I got a blank stare and thought they were going to call HR or something! 😂
Great video and very well explained.
I worked for IBM New Zealand back in the 80's and 90's and was heavily involved with Token-Ring.
I even had buttons made to be given out at trade shows...
"IBM Token Ring. Where no LAN has gone before"
(Yes, I am a Star Trek fan.)
Another great video from RetroBytes, now officially my favorite tech channel - only you didn't say PCBway correctly, still needs the upwards inflection
Do you know how hard it is not to pronounce it that way, or say what PCB stands for 😅
Perifractic would be happy one of his minions is out correcting the presenters.
Great video on TR! My 1st introduction to TR was back in the late 90s, when I had to replace it with Ethernet. The MAUs in the closet were massive and took up so much space and that cabling… WOW! Fun fact: Kalpana invented many of the LAN technologies we still use 34 years later like Etherchannel/LAG/Port-channel and VLANs. Their product evolved into the Cisco Catalyst 3000 series. Good times!
I've actually got (and used in the late 90s) a Kalpana 10 megabit ethernet switch. It was really interesting gear.
Brilliant video! Not so many years ago, I was teaching this stuff to technical high school students. It also seems we attended the same University! And yes, I remember the NFS boot disks...
The picture of the department was probably a dead giveaway 🤣
Haha I was just thinking this reminds me of my high school technology classes!
Our uni was all Sun Solaris workstations, SPARCs and some UltraSPARCs. I don't know what the network was, probably ethernet.
It was 1997.
@@MostlyPennyCat Sun workstations came with ethernet built in, so it was most probably ethernet to connect them, as alternative interfaces cost more to add.
@@MostlyPennyCat You could get other cards, but most SPARC workstations that had built-in networking had Ethernet.
You could get Token Ring, FDDI and ATM cards for them. Certainly easier when they started coming with PCI and PCIe card slots.
SBus was likely very expensive because it was not widely used.
I know the SPARCStation4 and SPARCStation5 had both twisted pair and AUI Ethernet ports on the back. Only one could be used and the system would attempt to figure out which you were using.
My first exposure was the SPARCStation2 and it only had an AUI connector, meaning we had to have an AUI cable back to the 10Base5 or 10Base2 coax.
I was a network engineer who had been directed to get a large Token Ring network installed and up and running for our agency, with offices scattered on multiple floors of two towers in an office building we were in, during the early 1980s. Then right at the end of that decade I was told that we had to change everything to Ethernet in about four months -- with minimal work disruptions and limited amounts of overtime or off-hours work. By installing Ethernet switches in the wiring closets alongside the existing MAUs and using two types of baluns, we were able to move the PC workstations to Ethernet over the IBM Type 2 cable, then when the new cabling was installed switch almost immediately over to it and all Ethernet. Although I didn't have to do much of the physical work, I got pretty good at installing either an IBM hermaphroditic Token Ring connector or, if necessary, an RJ-45. I worked on Ethernet stuff for the rest of my career and I liked it much better than the Token Ring network, especially since it was a lot easier to identify, isolate and replace a faulty Network Interface Card. As some of the previous comments have said, there were cases where the Token Ring network was technically better, but as the speed of Ethernet increased that became almost moot.
One other thing about Ethernet... A similar, but not exactly the same, types of protocols were used in the late 1960s through the mid 1970s to monitor sensors that the USAF had planted along the Ho Chi Minh trail in Laos during the Vietnam war. I know, because I was there and worked on that system.
Nice walk down memory lane!
In the late 90s some people thought ATM would take over the world. Fortunately it never happened - it was a terrible idea from the start. 😅And there was the FDDI standard which was basically token ring with two rings over fibre. It saw some success in campus networks and Internet exchanges in the mid-late 90s.
I believe gigabit ethernet still implements CSMA/CD so theoretically it should be possible to have a gigabit hub. I've never seen one though.
@@tripplefives1402 We got our first long-range Ethernet circuits in the early 2000s, but most places still used ATM, especially ISPs that provided ADSL services. As ADSL DSLAMs were all ATM based, most feeds from the telco to the ISP were over ATM. When VDSL came in, that's when BT switched most stuff from ATM to Ethernet. ATM never really seemed to take off in the LAN space.
@@tripplefives1402 Sucks to be you. Glad you finally got rid of it. 😅😅😅
@@RetroBytesUK Interesting, where I live (Scandinavia) only the very first generations of DSL (99-02) were built with ATM but we quickly switched to DSLAMs with ethernet backend. By late 00s I think all ATM was gone.
Fun fact - my first DSL modem in 2000 had an "ATM25" user port - 25 Mbps ATM over UTP. 😂 Never got to use it though.
I unfortunately at one point supported an ATM-to-the-desktop LAN. I unfortunately also at one point had a customer that had deployed a nationwide ATM WAN with several hundred locations using ATM LAN Emulation. Even when used for its intended purpose, ATM LANE is an evil of biblical proportions. That someone would use it for a WAN was pretty much unfathomable. Fortunately, my job was to replace the thing and not to maintain it.
I also supported several FDDI campus backbones back in the day and several customers with metro area networks connected via carrier provided FDDI which was quite common then.
Gigabit Ethernet does still support CSMA/CD (in half-duplex mode), but it was the last Ethernet version to do so.
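For contrast with token passing's determinism, half-duplex CSMA/CD resolves collisions with truncated binary exponential backoff, which is exactly where the randomness comes from. A toy model (my own sketch of the classic 802.3 rule, not code from the standard): after the n-th collision a station waits a random number of slot times in [0, 2^min(n,10) - 1], giving up after 16 attempts.

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, for scale

def backoff_slots(collision_count: int, rng=random.randrange) -> int:
    """Slot times to wait after the given collision number (1-based)."""
    if collision_count > 16:
        # "excessive collision" - the frame is discarded
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)   # exponent is truncated at 10
    return rng(2 ** k)             # uniform over [0, 2^k - 1]
```

Under heavy load an unlucky station can keep drawing large waits (or keep colliding) while others transmit, which is the starvation behaviour the comment at the top of this thread describes.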
@@JayJay-88 late 00s early 10s is when BT brought VDSL widely (and thus moved to ethernet), the switch started in.. 05? 06? in some areas. So the timelines aren’t _that_ different.
When I think of token ring, I think of the Dilbert comic where Dilbert tells the PHB that his computer isn't connected to the network because the token has fallen out of the token ring network, leaving the PHB to crawl around on the floor looking for it.
Was keen to watch this but the background visuals and music are too distracting
I worked at IBM in the Thinkpad department in the early 2000s. IBM was famous at the time for hanging on to every bit of technology it had invented. They were still using servers running OS/2 Warp, and the OS was still being developed internally. We were still using Token Ring at the Research Triangle Park location. In the Thinkpad department, we used those 16Mbps PCMCIA token ring cards. As a general rule, the network was pretty stable and reliable. We would regularly ship large Windows software packages from our lab to the laptop manufacturing storage servers dubbed Area 51 using Token Ring.
That is, until a few of my coworkers decided to play a LAN game of Worms during a slow period. That would have been fine with TCP/IP, but they chose to use IPX/SPX. The combination of token ring and IPX/SPX broadcasts from a few machines brought our network down pretty quickly. Halfway through the game, one of our IT guys walked in and asked if anyone had any big file transfers going, because most of the token ring network was at a standstill and no one could access Area 51. My coworkers quickly quit their game of Worms, and the network came back up shortly afterwards.
I worked in a Token Ring shop in the early/mid '90s. It was certainly... an experience.
Wow! I've been reading about computers for almost 30 years and never even heard of the term. Thanks for this excellent piece of IT history!
I worked in networking and servers through the 90's; it was crazy. I had to know Ethernet, ARCnet and token ring, all the protocols like IPX, TCP and NetBEUI, all the OSes like 20 different versions of Unix, DOS/Windows, OS/2 & Warp, Novell, MacOS, and all the various platforms and architectures they ran on. Some clients would have a Unix server, an NT server or LANtastic and a NetWare server all in the same company, with SGI, Sun and IBM PS/2 machines alongside PCs and Mac workstations, and you were just expected to know everything about all of this, get them all working together and speaking on the same network - and building bridges and routers with Linux was pretty much the only solution in most cases. I worked on a lot of Token Ring networks at banks; they mostly ran OS/2 on IBM PS/2 machines and they were the worst when it came to cabling.
Linux was the Swiss Army knife of networking back then. I know a few banks that still have pockets of Token Ring actively deployed even now.
We had a network like that back in the same time period. Every protocol imaginable along with bridging (not routing) between sites. Thickwire and thinwire initially. I brought them kicking and screaming into 10BaseT because cat-3 cable was already in the floor. Someone brought in an Apple printer that used Apple's networking protocol (Appletalk?), luckily they brought in a contractor who plugged in a network analyzer, looked at all the crap already there, and said "Do not add Appletalk to this mess, send the printer back!"
I eventually got them to buy routers to eliminate the continuous broadcast storm. The TCP/IP server admins moaned a little about re-addressing. Nonroutable protocols like DEC LAT and some IBM stuff got tunneled then fixed ASAP. The DEC and IBM people understood and worked with me. For some reason every Netware admin resisted mightily and HAD to be the default network which was 1. Finally I just "decreed" (and I am a lousy dictator) that there would be NO Novell network one and everyone had to move if they wanted to be routed.
Finally we became a TCP/IP network. "You want on the Internet? You use TCP/IP".
@@davidg4288 I had lots of clients with networks like that, and I think the big problem was software, the engineering guys used CAD on Unix workstations and Unix Servers but had some dos/windows machines for other stuff, the Stores/Warehousing people used inventory software on terminals connected to Unix servers but still had a windows machine for half their work, the accounts people had windows machines and used netware servers and had a terminal to access the unix for inventory stuff, and the designers would have macs for half their work and windows machines for the other half, and everyone was still running around with floppies and CDRs to share files with each other, except the designers, cause no one could read their files and the all used zip drives and you could clean it all up and get everyone on the same protocol and servers, but it was only in the later part of the 90s that people could start having 1 computer to do everything they needed to.
@@RachaelSA Yes I remember the old removable disk "sneaker net". Someone even bought a 9-track tape drive for their IBM PC so they could transfer large files to the IBM mainframe. Before that we had a TRS-80 Model II with an RJE (remote job entry) card that could send files to the mainframe via modem. The IBM PC couldn't talk to the IBM mainframe yet but Radio Shack could. Later the 3270 PC (and the IRMA card) sorta fixed that. The DEC people could always have their PC emulate a VT-100 terminal. In either case it made for a very slow data transfer. The most frustrating was when they were finally all on the same network but the PC's used either netbios or Novell IPX, the DEC VAX used DECnet (or LAT), and the IBM mainframe used SNA. The tower of babble!
Oh yeah, and Token Ring and Ethernet used "different endian" MAC addresses. Probably for a good reason other than annoying network admins.
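The "different endian" part is that Ethernet transmits each byte least-significant-bit first while Token Ring transmits most-significant-bit first, so the same address gets written down with every byte bit-reversed (the "canonical" vs "non-canonical" forms). A small sketch of the conversion that bridges and confused admins had to deal with (illustrative Python, not any particular bridge's code):

```python
def bit_reverse_byte(b: int) -> int:
    """Reverse the 8 bits of one byte, e.g. 0x01 -> 0x80."""
    return int(f"{b:08b}"[::-1], 2)

def to_non_canonical(mac: str) -> str:
    """Canonical (Ethernet-style) MAC -> bit-swapped (Token Ring-style) form."""
    return ":".join(f"{bit_reverse_byte(int(octet, 16)):02x}"
                    for octet in mac.split(":"))

print(to_non_canonical("00:0a:95:9d:68:16"))  # -> 00:50:a9:b9:16:68
```

The conversion is its own inverse, so running it twice gets you back to the canonical form - which is also why a MAC address read off a Token Ring sniffer never matched the label on the Ethernet card.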
@@davidg4288 Linux was awesome for all that back then. I would come in and replace all their servers with 1 big linux box, or pick the best server they have and install linux on it and change all the workstations to tcp/ip and run NFS and samba and marsnwe on the same shares and everyone had access to everything in 1 place and i could copy all Unix stuff to the linux and scrap the terminals and give everyone a terminal emulator program. Often I would also set up a mail server on the linux so they could all mail files to each other and if they had internet I would set up a gateway and proxy and fetchmail and some even got irc servers. I would also set up a modem so I could just dial in from anywhere to do support.
When I started my first programming job in 2007 we had a few legacy token ring machines connected to the larger network with one of those bridges. Also an Ethernet machine that used BNC connectors. Fun times.
It would have been nice to hear a little bit about bus vs star vs ring topologies. Obviously the strengths, or perhaps design luck, of the star topology played into the development/success of the Ethernet switch. You kind of touched on it, but a mention of bus topology was missed and some may be interested to learn about those differences. Maybe diagrammatic aids could have shown it. Great content BTW.
It looks like to me that the bus topology was described at around 1:15, where machines using Ethernet were connected over a shared bus, which was just a single piece of coaxial wire that all machines were connected to.
@@leohuangchunwang True.
I tend to multitask working, listening to music and learning something on youtube. Having the information in this video be presented without background music would have suited this habit a lot better.
We only ran Madge MAUs, and those automatically closed off the port to broken links, which stopped the broken ring being a regular thing, unfortunately.
I remember in 1993 my workplace replaced our simple ring Ethernet network with one long cable per device threaded through the ceiling and down into a network closet. So the multiport switch hardware must've existed even that early.
There were thicker Ethernet cables (10Base5) vs the 10Base2 coax mentioned in the video.
I do know there were devices that acted as a sort of MAU (Medium Attachment Unit) hub where you then had multiple AUI (Attachment Unit Interface) drops.
Who knows what you had in the ceiling. It could have been 10Base5 Cable with a MAU on a vampire tap, or one of the hub like devices where you could then provide several AUI drop cables to connect to Ethernet cards.
I can say I don't miss 10Base2, or 10Base5 or any of the MAU and AUI nonsense.
In terms of cable thickness, we are certainly getting back there with 100Gb QDR copper cables. But we don't have to drill and tap thick coax or trace down where things have come disconnected. AUI cables were notorious for falling off the card.
@@buffuniballer I have an "AUI Transceiver" here, which plugs into the back of an old MicroVAX & converts to Ethernet (RJ45 or co-ax, take your pick). It has a nifty locking mechanism on the back of it to prevent it from simply falling off! I presume other cards/plugs also had this thing - like a sliding metal shield with 2 T-shaped pins which engaged in 2 slots. Of course, the biggest problem with the transceiver is, now the machine has to sit 6 inches away from any surface, to give it room!
@@theelectricmonk3909
Oh, our AUI adapters had the same. I think the issue was a combination of folks wouldn't get them well latched AND coax was a bit heavier than TP cables.
Not that TP doesn't have its own issues. But those are more about not being able to release the tab on 1U servers with lots of cables. Think Cat5 cables for ILOM and Serial Mgmt covered by QDR cables for production networks.
But they seldom just fall out unless you don't get that tab seated, which can be difficult in the above described environment.
@@buffuniballer Been there, many times: Sometimes the tab gets its end broken off (= can't be released without a small screwdriver, and incredible patience), entirely broken off (= falls out), or just loses its "spring" (= falls out unless manually seated with aforementioned screwdriver, and who remembers to do that?)... then there's the randomly failing cable, because one of the conductors has broken but works if positioned *just so* ... Not to mention wiring up the connectors in the back of a patch panel is an exercise in frustration, trying to remember if the other end is wired to T-568A or T-568B (with hilarious consequences if you get it wrong)... then you accidentally put the insertion tool in the wrong way around & snip off the cable **before** it goes into the IDC block.... Ah, networks. Loathe 'em or hate 'em... can't live without 'em!
@@theelectricmonk3909 "The Network is the Computer" :)
ArcNet over Cat 3 twisted pair was a thing. Spent many days troubleshooting a 40 person LAN in the early 90’s. What doesn’t kill you makes you strong.
Exactly. We had to keep them running while extending the nets with new advancements. I recall running Netware 2.15 on arcnet, token-ring and ethernet at the same time!
We had arcnet running all over campus for all that Johnson Controls building HVAC management stuff. There was a huge celebration when the last of the arcnet went away. (At least, I *think* it went away; Facilities might STILL quietly be supporting bits and pieces of it here and there :D )
Thank you for this fun explanation! I am a network engineer, but I started my career in 2014, so I have only dealt with Ethernet; even study materials vaguely mentioned token ring networks, but nothing beyond that. I was always curious as to how these older technologies actually worked.
Token ring was a pain - we had Madge and IBM cards and we kept getting broken rings (unpleasant) - it got so bad we ripped the whole company's out over a few weekends and replaced it with 3Com Ethernet. No idea what caused it; it even happened on dumb MAUs cabled locally between 2 PCs. Glad to see the back of it.
didn't get the jitter under control - likely had substandard cables
It was a pain. Having a limited or practically nonexistent IT maintenance budget in the early 1990s, I was able to locate some Racore cards that allowed me to move the T/R off of coax to twisted pair. Blackbox had a T/R-over-TP MAU that allowed long extension connections. I thought that was freeing until SMC began shipping low-cost Ethernet on TP. I need TP. I need TP for my BH. (sorry, couldn't help it).
Love the background music. It's upbeat and conveys the relentless march of time. I feel sorry for those who can't concentrate without absolute silence. Who else digs that 2/2 ragtime that sounds so cheery and bright?
Nice video, but I couldn't watch past the 12th minute. The background music is too loud and too annoying and the visuals are too 'busy' and distracting. I really liked the subject and actual content but the background music and visuals are way too distracting. Sorry.
Agreed, particularly when the music has vocal sections which make it hard to focus on the narration.
The stock video of 'network technicians' is gloriously hilarious the more you watch lol!
I found the background music on this video to be very distracting, particularly the vocal parts.
There were also managed multiport Ethernet repeaters with 10base2 Coax cables. I remember the DEChub 90 modular series, which also was fully managed and you could read out the stats via SNMP. When I was a student in the 1990s, there were lots of them in use in our University as the twisted-pair cabling was still in the planning phase but Ethernet connectivity was needed everywhere and as "emergency measure" we installed them and Cheapernet cabling. It was always segmented to a few rooms so cabling issues usually did not disconnect everyone.
A lot of IBM sites were using Token Ring until the mid-00s (when I left Big Blue). Prior to that I'd worked on a site that had Novell (the package type Novell warned you on the courses not to use) over Token Ring. No clue if they ever bit the bullet and went to TCP/IP over Ethernet (they were mooting it back when I left in 1998). I know the chap who designed their Token Ring network died in the mid-00s, having been made redundant, and they couldn't pay me enough to go back there, so I don't know its final fate.
@@James_Knott Autocorrect - seems it also use the word 'hat' more than I use 'had' after the word 'that'.
When I moved into a dorm at Carnegie Mellon in 1994, they had token ring physical layer connectors. Huge weird IBM connectors.
You had to get the ports in your dorm room activated. You could choose between Token Ring, 10BaseT Ethernet, LocalTalk (230.4kbps Apple proprietary) or RS-232 serial. You could literally plug a terminal into your dorm port.
10BaseT required a balun - balanced/unbalanced transformer adapter. LocalTalk cables were much cheaper.
A year or two after I got there, they removed Token Ring. They had a trade in program where you could swap your token ring card for an Ethernet one.
Moving away from the dorms back then meant getting dial up at 56k or so. Definitely worth staying in the dorms.
I'm not sure that I would agree that Token Ring was designed to solve the problems associated with Ethernet; they were two solutions to networking that were developed more or less at the same time. To a certain extent Token Ring is a typical IBM solution to a problem: logical and safe.
Ethernet is a lot more radical because the collision problem is an obvious downside, but let's face it, the collision problem was resolved on day one and for large networks improved on very quickly, so it was never really a problem.
The way I had the difference explained to me in the early 90s was that if you had twenty people sitting around the table at a dinner party, token ring would be that going round clockwise, everyone took turns speaking, whereas with Ethernet, people spoke when they had something to say, and if two people spoke at the same time they would apologise and try again. For me, one just felt more "real" than the other and I came to appreciate the simplicity that cabling an office to use ethernet was over that required for Token Ring.
So I am not sure Token Ring was the Betamax, because Betamax was in many ways the superior technology to VHS. Ethernet dominated because it is technically superb.
Ethernet was cheap. Token Ring was expensive. That, more than anything else, made Ethernet preferable. Collisions were a huge problem with big Ethernet networks and that wasn't really resolved until switches became commonplace in the mid-90s.
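The dinner-party story and the collision complaints above are both describing CSMA/CD's listen-then-talk rule plus truncated binary exponential backoff. Here's a toy sketch of the backoff idea - purely illustrative, with made-up slot mechanics rather than real Ethernet timing:

```python
import random

def contend(num_stations, rng):
    """One CSMA/CD contention: every station draws a random backoff
    slot from its current window; a unique minimum wins the medium,
    ties are collisions that double the colliders' windows."""
    windows = [1] * num_stations           # backoff window, in slots
    collisions = 0
    while True:
        draws = [rng.randrange(w) for w in windows]
        lowest = min(draws)
        contenders = [s for s, d in enumerate(draws) if d == lowest]
        if len(contenders) == 1:
            return contenders[0], collisions   # exactly one talker: success
        collisions += 1
        for s in contenders:                   # binary exponential backoff
            windows[s] = min(windows[s] * 2, 1024)
```

In this toy model twenty stations always collide on the first slot (everyone starts with a one-slot window), which is a decent intuition for why busy shared segments crawled until switches split up the collision domains.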
Routers shipping traffic round at layer 3 really did help, but were too expensive to be practical for most institutions. Also they did not solve the problem of broadcast; far too many protocols at the time depended on broadcast to operate correctly even if they used a layer 3 protocol. Then there were the protocols that had no concept of a network address, only station addresses that were the same as the MAC address. Routers did not help with those at all, as they were not routable.
Very nice, I need to dial into the local bulletin board system and post (if the line isn’t busy). Will be great to see the responses posted as people individually comment over the next several days.
I like the CAN-bus implementation of multiple access. The bus effectively wire-ANDs all the drivers: a device pulls the line to the dominant level to send a 0 and leaves it recessive to send a 1. Once the last packet is sent, any device is free to start transmitting, so multiple devices can transmit on the same line at the same time. The instant a device tries to transmit a recessive 1 but detects the line pulled down to a dominant 0, it stops transmitting and lets the device sending the 0 continue. This means there are never any destructive collisions and the implementation is super simple.
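That wired-AND arbitration can be sketched in a few lines. This is a simplification (and `arbitrate` is a made-up name): real CAN arbitrates bit by bit on the 11-bit identifier field as it's transmitted, so the lowest-numbered ID always wins without anyone's frame being damaged:

```python
def arbitrate(ids, width=11):
    """Toy CAN-style arbitration: the bus wire-ANDs every driver,
    so a dominant 0 beats a recessive 1 at each bit position.
    Survivors of all bit positions = the single lowest ID."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):       # MSB first
        # bus level is the AND of all transmitted bits (0 is dominant)
        bus = min((i >> bit) & 1 for i in contenders)
        # anyone who sent a recessive 1 but sees a dominant 0 drops out
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]

assert arbitrate([0x65A, 0x123, 0x3FF]) == 0x123
```

The nice property is that the "collision" resolves itself mid-frame: the winner never even notices the losers were there.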
CAN Bus is so cool that even HARLEY went to it back in 2014 with their new gen "Rushmore" bikes. It eliminated a LOT of individual wires from the formerly-thick wiring bundles and made things MUCH simpler.
Unfortunately, my 2012 CVO Street Glide is not CAN bus - but it's a LOT PRETTIER than anything that's come out of the MoCo before or since IMHO.
This reminds me a TON of Dave’s Garage channel and both are great. This is the UK doppelgänger.
Now there is a channel I don't mind being compared to.
You never covered the reasons it became known as "Broken Ring". The biggest one was MAU failures, when a MAU failed you would have to go and reboot each MAU and wait for a new token to be generated, until you found the one with the issue. The next big issue was the Type 1 connectors, they were easily broken and would sometimes not close when they were unplugged. If this happened you would have to go and inspect every connection on the ring or unplug everyone from the MAUs and then plug them back in one at a time until you found the faulty cable.
The Type 1 connectors were stupidly large and fragile. When you could easily fit 48 RJ45 connectors into 1U of rack space but only 10 Type 1 connectors then you know there's something badly wrong. The cable itself was also way thicker than it needed to be so it made cable routing difficult.
@@1anwrang13r they certainly were large, but maybe you guys were rough. 🙂
@@Design_no Surely, a well designed system starts from the premise that the average user hasn't undergone a 3 year qualification course.
Thanks for this entertaining and informative video! Your analogy of the Token Ring LAN being "the Betamax of Networking" is absolutely brilliant!
I was an IBMer in the 80s and 90s, and I worked with Token Ring pretty extensively starting about 1990, in the IBM SouthLake/Westlake technical support center for office systems, west of the DFW airport. Those connectors and cabling were big and clumsy, were a pain to run and connect, and it was expensive to install the MAUs in a closet. But Ethernet was still using coax cabling at the time.
I actually set up a TR LAN in our office in Atlanta when I moved there in 1992 because I had started working with a product called Lotus Notes back in Westlake after the failure of the OfficeVision/2 product IBM had been developing in Westlake. IBM started selling Lotus Notes in 1994, and eventually acquired Lotus in 1995. I call Notes "the Betamax of Distributed Document Sharing"! It was a brilliant product for the time, invented by Ray Ozzie of Iris Associates, which was acquired by Lotus in 1994. Notes enabled document-based databases to be built, with threaded conversations, etc. These databases could be "replicated" across Notes servers and even to Notes workstations. I built many Notes databases with some pretty sophisticated forms, etc. I also installed a product called "Hoover", which provided categorized news articles that got downloaded to the Notes server via a dial-up connection overnight, and these articles were then available for people to read when they arrived at the office the next morning. It was kind of leading edge back in 1992/93. You should do a video on Lotus Notes if you haven't already!
Anyway, back in 1992 I needed a real LAN so that my PS/2 (yeah...) could communicate with the Notes server machine over the network. Everyone else around me only had IBM 3270 cards in their PS/2s so they could use the IBM mainframe 3270 terminal applications over a coax connection. Those coax connections went to every office and cubicle, but NOT so for the big new TR cabling. It took me awhile to help my coworkers and my boss understand what a "network" connection even was, why it was better than just having a 3270 coax connection, and how they could make use of it with new network-based applications like Lotus Notes. In IBM in those days, there were not yet any servers on a LAN in field sales offices, so it took awhile for LAN infrastructure to become the norm. Even in the mid to late 90s, many IBMers were still just using 3270 connections to IBM's ancient mainframe 3270-based email system "PROFS", which later became "OfficeVision/VM" and "OfficeVision/370".
The IBM field sales offices I worked in back in the late 90s and early 2000s had pretty much never installed much TR LAN infrastructure. They seemed to go straight to Ethernet when they installed a LAN. I also NEVER saw a TR LAN at a customer site - they were ALWAYS Ethernet. Our ThinkPad laptops by the early-to-mid 90s all had a built-in Ethernet port, but of course no TR port - so that kind of made even IBM go with Ethernet outside of their lab and development sites by the mid 90s.
Speaking of IBM PS/2s... they and the IBM operating system they ran - OS/2 - would also be interesting topics for a video. I haven't looked to see if you've already done one yet, but I will. The PS/2 and OS/2 are very interesting stories and I know quite a lot about them, having lived through that entire crazy era at IBM.
And yeah, I bought a Betamax VCR back around 1980 - because it was SOOO much better technology than VHS! 🤦♂
I was the "network guy" back in those days which usually meant Ethernet (10Base-5, 10Base-2, 10Base-T) but occasionally Token Ring would pop up, especially near the IBM mainframe. Strangely I understood it because we once used Paradyne terminals on the IBM mainframe. The Paradynes were configured in a bidirectional ring and used a token passing protocol, complete with a "beacon" alert if one direction failed. I don't remember the data rate but it was much lower than 4Mbps, they were character displays.
Later I remember FDDI using bidirectional fiber rings, it wasn't a "token ring" though. The speed was 100Mbps so FDDI was made quickly obsolete by cheaper 100Mbps Ethernet.
FDDI survived longer than you think, I'm aware of FDDI rings that were still operating within large orgs at least until 2012. By then it was mostly for legacy workloads that didn't translate well to packet-switched networks though. ISTR there were also some attractive synchronicity properties but I may be thinking of SONET rather than FDDI there.
@@SomeMorganSomewhere You made me look! I know we used FDDI between buildings but I didn't know FDDI could go up to 200 kilometers. I assume that would be over single mode fiber. And apparently it could do switched voice and video as well as Ethernet.
SONET can carry all kinds of things, T1, T3, Ethernet, among many, many others. Maybe FDDI over SONET? SONET was originally a telephone company thing, we used it for a corporate backbone but I've been retired for awhile so who knows. I remember MPLS but that's probably SD-WAN by now. I was never an ISP or telco employee, those would be the experts.
@@davidg4288 Yeah, you are correct single mode fibre. ISTR you could only do 100km for a full dual-ring setup, not entirely sure why it'd be different to a single ring setup, some weird protocol quirk syncing the rings I assume.
FDDI was the tech these companies went (one of them is a Telco so they likely still have it lying around today, along with several other strata of legacy tech ;) ) to for their initial MAN technology, eventually got mostly replaced by MPLS and Ethernet, though they kept their rings around for some legacy stuff that worked better in a circuit-switched network (FDDI-II I think... added circuit-switching capabilities)
MPLS is still around today but it's slowly falling out of favour.
A couple of universities here were using SONET rings to link campuses back in the day, and they definitely had some weird stuff that required synchronicity going on so I may well be conflating the two, it's been many years...
I seem to remember that you had to "energize" the token ring port before you plugged a new computer into it....
When I started working for the University System in 2000, we still had computer labs on both hubs and token ring switches. One of my first jobs was to move them all over to Ethernet switches. I also recall we had a now dead piece of core networking gear in an asynchronous transfer mode (ATM) switch in the mix between buildings.
I was sold on the Ethernet frame in the late 80s, but there were a lot of passionate magazine managers pushing ATM in the late 1990s and some thought it was the (bomb) best tech in networking. It bombed alright after the famous Denver airport project. Fortunately I stuck it out with the Ethernet frame, which continues to serve me well.
The background is giving me a migraine.
It’s god-awful.
During the late 1990s/early 2000s I was sys admin for a system that used token-ring to support an OpenVMS cluster in which 3 work-horse application CPUs talked to two servers acting as disk managers. We got it because it was massively faster than the Ethernet we had available at the time. We had TWIN token rings, counter-rotating, which meant we could reach 2x the normal bandwidth of the Ethernet at the same bit rate. Had we NOT used the counter-rotating twin rings but ran them as con-rotating (same direction), we would have about 1.4x the Ethernet bandwidth. If we had used Ethernet as-such, collisions would have had about 0.35 of the token bandwidth. The really nice part was that we could endure a system failure because there were TWO paths to every member. The CPUs were DEC/COMPAQ Alphas. Darned things ran circles around everything else in the site. Our applications started to run so fast that our customers wondered if something was broken. But no, everything ran about 30x faster. Not 30%. 30x!
When we switched to Gigabit ethernet and fiber-channel disk connectivity, we eventually upgraded to more traditional non-ring configurations... but for a while our token ring was outrunning other systems like Secretariat at the Belmont Stakes.
Excellently explained, but very loud unnecessary music and very fast, distracting images.
Thumbs up anyway.
Nice video, thanks. It explains much of what was developing in my early years of discovering pcs and networking.
Would be a great, informative video...... if the background music was not so loud.
I don't recall experiencing TokenRing anywhere when I was younger but I can very well tell the improvements the school networks had when things went from BNC to ethernet.
Don't you dare turn off that machine in the middle of the ring, otherwise your friends can't do stuff anymore.
Please, no more of the same Dixie music in your videos...
I remember back in the early nineties my company was running token ring (we were an IBM shop) and had rings on each of 12 floors, all tied to a server ring. One night the ring on the legal floor began beaconing. I cannot recall how we determined which card was the culprit, but we did, and unplugged it. Our euphoria at quickly fixing the problem was short lived, as the computer plugged into the next port on the MAU began to beacon... To make a long story short, we ended up unplugging every computer on the floor (around 60), disconnecting all of the MAUs from each other, and then using the initialization tool (yes, there was a special tool you had to use to initialize each port on the MAU before using it) we reinitialized every port on the 8 MAUs on the floor. Having done this, reconnecting everything and restarting all of the computers, the ring again worked. We never were able to figure out the root cause. We just chalked it up to bad "juju."
"Token Ring is a problem in search of a problem." - Me, after reading the IEEE 802.5 standard in 1989.
The college I used to work at went full in on token ring back in the 80s. Even to this day (as of a few years ago) many of the older buildings have those IBM chunky connectors on wall plates and in the comm closets. I think the funniest thing is, since you can easily adapt those to rj45 jacks, it's actually running Ethernet on top of that physical hardware designed for token ring, and they're even doing PoE over those connectors too. Crazy stuff.
The company that made those patch panels was quite smart. The backend is CAT5e twisted pairs, which attach to the panel via an 8-pin edge connector. Into the patch panel goes a little adapter module, to adapt it to IBM connectors, RJ45, RJ11, dual RJ11, coax, and even RJ11 + coax on a single module. So it's no wonder they keep using it there, since it's so flexible. Only when there's a full renovation do they pull it out and run new lines.
I've only just come across this channel, and I'll be binging your content soon enough. This is very much something I've been doing as well. I've set up my own dial-up internet, ISDN BRI and PRI connectivity, and I'm nearly there for adding in ADSL connectivity as well. I hope to combine it all into a single portable box for demonstrating these now-defunct WAN technologies. Let me know if you want to chat about it!
I had a job back in the 90's for a large department store holding company. They had an IBM mainframe room which did all the transactions for all the POS terminals for around 500+ department stores. I worked in the real estate division. We were charged with developing malls, new department stores, renovations etc. Because of IBM it was wired with Token Ring even though our division ran off a Novell network. The Token Ring hub was made by 3COM. Came in one morning, and the hub had failed. There was an acrid smell in my computer room, so I figured the power supply had died. We had a terrible time finding _any_ company by that time who still made token ring hubs. The cabling was coax with BNC T-shape connectors. We finally did manage to find a hub and swapped it out. Meanwhile the department was dead in the water.
My father worked at a university when the internet was growing, and he told me years ago that their network was once changed from a linear to a circle layout, and how much it improved. I feel like I finally understand what they did.
This video just went through the beginnings of my career... including the Kalpana 10Mbps switches. I was in a NetWare environment with about 8 servers, and each department basically had its own network. Some were ThinNet (10Base-2), but in '91 when our company moved to a new headquarters, we moved everyone to 10Base-T using SynOptics hubs. We had one group who was still using a small LANtastic network on thinnet because of proprietary support, and another which had a proprietary IBM system which required token ring. I had configured our NetWare 3.11 servers with multiple network cards to allow each network to route to one another. We had Attachmate gateways for 3270 emulation over IPX (we were all IPX/SPX at this point) so all the networks had to be connected. Rebooting a server would bring down connectivity to anyone on the "other side", so finally I convinced my boss to let me buy a Kalpana switch just like the one in the video, and we dedicated one port to each server, and one to each SynOptics hub, so that basically every node had equal access to every server, and nobody had to route between networks anymore. It was awesome. When I left in '94 we had started to dabble in TCP/IP, but mostly for network management. I was young and excited about tech, and lucky to be in a place that let me spend the money if I'd write a business justification for it. I learned so much and lived through all of what was in this video!
Fun fact: The first token ring switch was based on the Kalpana switch fabric, front ended with the IBM token ring MAC chip. The IBM 8272 was a joint dev between IBM and Kalpana.
What a blast from the past. I worked with a lot of Token Ring in the 90s, it was pretty common in Australia. The last-gasp I was given to implement was 155Mbps ATM LANE Token Ring running into a Windows NT Server running MPR also plugged into the Ethernet network. The core devices were a pair of Bay networks Centillion 100 chassis. Super clunky and expensive. Fortunately we were given a new budget and transitioned everything to 100Mbps Ethernet with the servers on GbE.
O dear LORD... hateHateHATE ATM! We had a very senior faculty member who had a position on some IBM consortium board and he wrangled us a half dozen 8265 ATM boxes and got the administration to insist we build our core network around 'em, make 'em talk to our Cisco Cat5000s with LANE modules. OC12 was way better than 100 ethernet, right?!?!! After 6 weeks of trying, with our Cisco SE in my office and a conference call with the IBM ATM developers in France, we gave up. I shut down the never-quite-functional ATM devices and my boss ran 'round converting our 100 ethernet out-of-band management network into the campus backbone - a huge ring with a spanning-tree block halfway -round. Had everything cut over in the time it took him to drive 'round to all the core router sites. We couldn't surplus those ATM switches fast enough!
@@mrz80 Sounds like we've chewed some common mud. Things are a lot easier today but you still run into people who drink the Kool-aid and try to implement something dumb due to a good sales pitch
When I was in college in the early '90s, we talked a lot about Token Ring and a lot of other fancy network configurations in some of my classes. One day, in class, we talked a little bit about Ethernet. The point in this class is that even though you could get more efficiency from these fancy network designs, Ethernet was going to win out because of its simplicity. It was so simple that it was barely worth talking about in a graduate level computer engineering class.
It's now been essentially reinvented in Industrial Ethernet, which instead sends one big packet around the ring and each node glues on more data as it passes through.
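That "one frame circulates the ring, each node glues its data on" idea can be sketched like this - a toy model with invented names, nothing like the actual EtherCAT datagram format:

```python
def ring_cycle(frame, nodes):
    """Toy industrial-Ethernet cycle: a single frame visits every
    node in ring order; each node reads its own slot (e.g. a command)
    and writes its response into that slot on the fly."""
    for node_id, handler in nodes:
        frame[node_id] = handler(frame.get(node_id))
    return frame

# Hypothetical ring: a temperature sensor and a motor controller.
ring = [("temp", lambda _: 21.5), ("motor", lambda cmd: "ack")]
out = ring_cycle({"motor": "start"}, ring)
assert out == {"temp": 21.5, "motor": "ack"}
```

Because the frame's transit time around the ring is fixed by the topology, every node gets its data slot on a deterministic schedule - the same guarantee the Token Ring fans in this thread were after.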
In my school, they managed to defeat the problem of multiple PCs booting over the network with something quite nice: all PCs booted into Windows 3.11 without a hard drive (the PCs had a DOS boot chip), and the PCs all received their Windows session at the same time, probably by some sort of broadcast.
By the way, the other problem with Ethernet in that old implementation: all PCs that are not meant to receive data will receive it anyway and throw it away. It just needs one malicious PC that does not throw the packets away and, congratulations, you've got somebody sniffing your net.