@@RootBoyJim [1] true, but it is still available in pretty much all Linux distros [2] it was rarely used because there was not much software using Tektronix graphics mode on Unix [3] my deepest condolences 8-)
When my Grandma worked for IBM in the 60’s in the programming department, debugging a program meant getting a printout of a memory dump from the IBM 1401 and going through it line by line to see what went wrong. She would then re-write the program using her chart, send it to a secretary who passed it on to the punch card department, and then wait until her punch cards could be tested on one of the on-site machines. Wash, rinse, repeat.
In the summer job I had at the end of my first year, I tried sending my programs to the data-entry clerks to be punched on to cards, but they had a lot of trouble with typos like distinguishing “O” from “0” and “I” from “1”. So in the end I gave up and punched the cards myself.
My father used to work with 1401s. I want to say at his high school but I don't recall. I had a brown three ring binder with some of his punch cards and reference material. Pretty sure it is either in my basement or I gave it to my brother.
@@themax4677 My Grandmother was also trained on the 360. She told me she would much rather program on the 360 because there was less wasted time; she said it could do more and it wasn’t a “glorified calculator”. The 360 had more RAM, which meant it could, as she said, “multi task”, where the 1401 was stuck doing things one at a time due to its limited RAM. When she migrated to the 360, she spent a lot of time developing a program that would literally convert old 1401 code into something the 360 could understand and run.
Serial terminals enjoyed a small resurgence in the mid-90s when people started installing Linux and discovered that they could have a second screen by finding an old terminal for cheap or free. One summer I had one outside on the patio.
@@HybridBattery Actually, it's exactly the same. DEC machines connected terminals to a serial port, just like you do on a Linux box. One of the major advantages of Linux over most of the DEC machines is that Linux includes SSH and telnet servers that allow you to make a similar connection over a network, which wasn't a default feature of DEC machines. You could add it, however.
@@evensgrey I’m specifically referring to the Pro 325/350 which predate Linux (1983). They run a stripped down RSX-11M. The serially connected terminal is a totally separate user. It’s not a dual-screen scenario.
I still like X windows for the fact I can run a GUI application on one machine and get the GUI displayed on another. I find myself using it every so often to get a GUI application on my phone, which is mainly done "because I can", or more practically to run something with a GUI from a remote server, like an image viewer for example.
I use X11 all the time - it's so practical. I basically keep my work stuff in a VM in my rack and, instead of having to risk carrying data with me, I simply VPN back home. 30Mbps is enough for ok performance. I can run MS Teams in Chromium via X, coding happens in vim, git is there, and I truly enjoy the fact that, exactly because performance is not great for animations and too much screen refresh, I actually am not tempted to get distracted with too much web browsing or multitasking. It's also a godsend for retrocomputing. I can be using an SGI machine and use a modern browser via X. Or any sort of combo. I am sad to see X11 slowly dying...
I started using computers in school while using a TTY with a dial-up line. They had a special sound-proof room (kind of like what they use for music practice) that they put the thing in just to keep from driving people crazy. There was a day when they completely ran out of the rolls of paper, but we were undeterred. I went into the men's washroom and took a roll of hand towels, and we used that in the interim until new supplies arrived.
I remember the era before CRT displays. The first minicomputer I used was a PDP-11 running RSTS with a DECWriter. I also used an Apple II in that era as it had just come out.
I started on a PDP 11/70, using DECwriter terminals and green-bar fan-fold paper that came up from a box underneath, then scrolled off the back of the machine as you used it. After working for an hour, you might have 40+ ft of paper behind the terminal, and you could tear it off to take home as your session log, to review and work with offline. While you were still logged in, you could also just reach behind the paper terminal, grab 4 ft of paper and stand up to "scroll back" and read code you were working on 10 min ago. For the next 4 years, we had a VAX 11/8650 with VT100/VT220 terminals and the VMS operating system... a really great system at the time!!
I still come across DECWriters from the 1970s that still work even though the ribbons are nearly worn out. It's incredible that something so mechanical and noisy could be so reliable. They simply will not die.
That's why if you want to do something serious under the hood in Linux, you still reach for the terminal. I'm glad that's built in and no external teletype has to be used.
An interesting aspect of this is that most distros also have a number of virtual consoles that run alongside your main X or Wayland session. They run a terminal session in a much less abstracted display mode, but with all the features of whatever shell they're running. I've used them a number of times when my X session becomes unresponsive to restart the session and kill a misbehaving program instead of having to force a shutdown. They're usually accessed with ctrl-alt-Fn button combos, although the number that each distro puts the main display on varies.
@@Hugehead1234 however, on modern displays the text on virtual consoles is so small they're virtually useless unless you (a) know what you're doing well enough to do it blind and (b) have decent enough typing skills that typos aren't going to leave you up a certain creek without a propellant; and apparently many IT people aren't touch typists.
Well, as a desktop OS it's never going to make it as long as people think that putting shortcuts in vim that you can click constitutes a user experience. But as a server, using web-based UIs, which have reached a real level of sophistication now, the path is clear for Linux, and it probably doesn't need to bother trying to be a desktop OS.
@@Hugehead1234 Yah, I've done that before when a fullscreen game or something has locked up and won't let me see the desktop. Ctrl-Alt-F2 to get to tty2, log in, open htop, kill the malfunctioning program, exit, then Ctrl-Alt-F7 to go back to my desktop session. ...Or a worse case, hit Ctrl-Alt-Del in one of the tty's to shut down _everything_ and reboot.
@tripplefives no, virtual terminals are not managed by X/Wayland. They are managed by your kernel and the init system. You would have a display without X/Wayland, it would be a VT. Technically yes, when X is running it has to manage the transition over to a TTY should you Ctrl+Alt+F2,F3 etc, since it fully controls the keyboard at that point. Note that switching _from_ a TTY does not require the Ctrl, just Alt+Fx
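If you'd rather script the switch than use the keyboard, the same virtual terminals can be driven from a shell; a minimal sketch (chvt and fgconsole come from the kbd package and generally need root):
    # show which virtual terminal currently owns the display
    fgconsole
    # switch the display to VT 3, equivalent to Ctrl+Alt+F3
    sudo chvt 3
    # and back to the VT the graphical session usually lives on (often 1, 2 or 7)
    sudo chvt 7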
When I was in my 4th year of high school I learned the escape sequence to move the cursor around on our video terminals. This was on a HP 2000E timeshare system. I used that to write a game I called Logan, which was similar to Robotron (before Robotron) where bots chased you around a yard and you had to make them run into electric poles. Even running on the primitive video terminals of the day, this became insanely popular and eventually took over all 5 terminals in our lab all day long. The computer instructor decided that couldn't happen and deleted it from the system. Rather than recognizing he had a budding computer game author on his hands he decided he wanted his lab back. Not his best moment.
The game you wrote might have become part of the teaching for the class the next year. It wouldn't have been the first time students were shown an example from a previous year and were encouraged and challenged by it to do better.
Would love a video on the evolution of the X server and how it went from what it was to what it is today, the motivations behind the migration away from it, and the caveats that process faces, possibly due to the longevity of X being so deeply ingrained in all the graphics libraries and applications written to use it. I'm using Linux Mint, which as far as I can tell still uses Xorg instead of Wayland.
To this day (mid 2023), only KDE Plasma, GNOME, and some tiling window managers like Sway and Hyprland support Wayland. MATE and Xfce are working on it. Linux Mint ships with either Cinnamon (by default), Xfce or MATE.
@@MasterGeekMX The Cinnamon devs said that they weren't planning on it any time soon. I imagine they're going to be forced to sooner or later. Wayland is already daily drivable for most people with little configuration despite the bugs and driver issues, and with major programs like wine and web browsers slowly implementing Wayland support, X11's dominance feels like it's going to end any day now. For now, though, I can understand why they wouldn't want to commit to porting the entire desktop environment when they could focus on continuing to improve what they already have.
One motivation in the move towards Wayland is security. This is why it often seems limited in features, or why you can't do something in Wayland that you could do in X: not because the devs want to limit features, but because some features cannot be implemented in a secure manner. Many changes in GNOME also seem motivated by security. For example, GNOME will not let any other application take screenshots on its own; it will prompt the user to share a screenshot with another application if that application requested it. Imagine a malicious application taking screenshots without your knowledge. Now if a malicious application tried that, GNOME would ask if you want to share the screenshot with it, giving you the option to prevent it and a clue that you have a problem. In Wayland you cannot reparent windows. A useful feature, but one that can be used maliciously too. I ran across this because I wanted to reparent some windows for an embedded GUI application. Unfortunately in my case the thing I wanted to reparent only works with Wayland, so I can't even drop back to X; I simply cannot do what I want. My workaround was to control the placement of the other window, but it seems you can't do that in Wayland either. It seems writing my own Wayland compositor would work, but that is beyond my skill set.
The history of X11 is easy: First there was "W", which was slow. Then came X1 to X10, which were shit and not upward compatible. And then came X11R1 to X11R4, which were usable but full of caveats - I started with R4. From X11R4 to R7.7 we mostly got only bugfixes. End of history. en.wikipedia.org/wiki/X_Window_System#Release_history
@@ulbuilder IMO Wayland should add a lot of those "insecure" features but lock them down heavily so that only trusted apps like the desktop environment or other explicitly authorized utilities can use them. As it is though, however good Wayland is, it's super limiting. For instance, let's say something basic, like you want to rebind some keys for an application. Great, that should be super easy, barely an inconvenience! Except the application doesn't let you rebind things. Well, easy solution, just make a macro so that when you hit one key, your OS tells the application you did something else. For instance you could rebind a useless "menu" key to instead perform some keyboard shortcut to do what you want! Then your DE is basically able to rebind the key for you, whether the application likes it or not! Simple problem and a simply elegant solution! Except no DE does this natively, not even KDE with its otherwise impressive shortcuts, and on Wayland that requires things you can't readily do (like detecting which window has focus).
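For what it's worth, under plain X the macro trick can be cobbled together with off-the-shelf tools; a minimal sketch (assuming xbindkeys and xdotool are installed, and using ctrl+w purely as an example shortcut) for ~/.xbindkeysrc:
    # send Ctrl+W to whatever window has focus whenever the otherwise unused Menu key is pressed
    "xdotool key --clearmodifiers ctrl+w"
        Menu
Run xbindkeys once per session to activate it. Nothing equivalent works portably on Wayland yet, which is exactly the limitation described above.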
The mainframe at Caulfield Institute of Technology in 1975 was an ICL 1904A (?). It used teletypes but had a handful of vacuum tube terminals. I entered programs in BASIC and got output on a tty or the VDU, which didn't use any paper. The people doing EDP used FORTRAN on cards; there were these big card readers which started with a whine noise.
This video is longer than what you normally put out. I have to say though, I thoroughly enjoyed it. It's more in-depth than you normally go. I also enjoyed the original BSG reference.... Keep it up!!!!
My first full time job was running a computer graphics lab using Vision Control's Conjure graphics system. It was an Australian system that was so rare and expensive that there is very little information about it on the web. We used it for generating SCODL files to feed to a Matrix film recorder, which used a tiny and perfectly flat 4K screen to expose film one pixel at a time and via separate RGB filtered passes, so it took about 15 minutes to image a single 24 bit colour 4K frame.
Awesome work as always retrobytes, I’d love to see a follow on from this going into the development of pc graphics standards, CGA, VGA, SVGA, etc. (I’m sure there’s more). Many thanks
Was at the National Museum of Computing last year. Was a blast. Low number of visitors, super knowledgeable staff eager to show running machines and able to explain to not-as-knowledgeable people like me. Unfortunately it is lacking some of the more recent history. Would deffo recommend. Seems to be a little underfunded though, as it doesn't get government funds. Buy a coffee and a souvenir from the store inside ❤
This was fun. I worked at DEC from 1981-1985, and during that time I worked on the graphics firmware for the DEC Professional 350 and later was one of the designers of what became the VT340 (the original prototype was much more capable, but too expensive to produce). The Tek 4010, 4010A and 4014 were storage tube vector displays. Vectors could be drawn at low intensity, in which case they would fade unless refreshed, or at high intensity, in which case they would remain displayed until the screen was cleared, without being refreshed. The tube was cleared not by removing power, but by flooding it with a higher intensity beam causing the phosphor to release lots of photons and quickly return to a low energy state. The text cursor and cross-hair locator cursor used the low intensity so they were not retained. If I recall correctly (and I may not), there was a way to erase individual vectors using a higher intensity beam that would overload just the phosphor along the vector, resulting in the phosphor returning to the base state after emitting a lot of photons. In the late 1980s and early 1990s, I was working at a company that produced terminal emulation software for PCs (Persoft; the product was SmartTerm), and for the VT340 emulation product, I wrote a very accurate Tektronix 4010/4014 emulator (the DEC VT240 and VT340 had a Tek emulation mode); though it did not end up in the final product, I originally had an "Easter Egg" in the code that would make the screen flash when cleared, had the cursors leave ghosts when left in one place for too long, and a few additional quirks unique to those storage tube terminals. I also wrote the ANSI, ReGIS and SIXEL parsers for the SmartTerm 240 and 340.
I'm 99% certain Tektronix did not provide a way to erase a vector. Not until the Tektronix 4025 but it wasn't storage tube and plotted about the same speed as a pen plotter.
We had a Teletype machine exactly like that at school in 1980. It had an acoustic coupler modem so it could log into the computer at the local university. We would feed the punch tape back into the machine over and over, making copies, to produce confetti.
My school had one too. Although the number of people who knew about the "computer room" was minuscule, and I believe the school couldn't afford the telephone call charges to use it. I think there were a couple of Research Machines computers in the room as well that were used.
Speaking of games on teletype, my dad told me how he used to play lunar lander on a teletype machine and the way it worked just sounded a little painful and very slow.
Teletype lunar lander was a beast. Printed a line with all the flight parameters as quick as it could. At the start of the descent it was nearly "real time". Near the end, you'd make a giant crater before you could correct the engine burn. Don't get me started on "fps" :-D
@@MalloryCallas That's literally half of what my dad remembered of it. A HUGE waste of paper and ink. But it was the university machine so he and his classmates did not care one bit
X is incredibly old now, but it has kept evolving. It keeps gaining extensions to modernise it, yet it's still compatible with X apps written decades ago.
@@lawrencedoliveiro9104 yep. I'm using KDE on Wayland on Fedora, and unlike a few years ago it's so stable you won't know the difference unless you poke around. It's still a mess on FreeBSD, however; but then FreeBSD on the desktop is a mess in general. OpenBSD is much better, if slow, and if memory serves there's no Wayland in sight for it.
X11 is the best thing for running programs on a system that is dual-homed on a protected, local network, where you don't have access to the console. You can ssh into the computer and tunnel the X traffic back across the network, using your X terminal as the console. I do this all the time, and it saves a great deal of headache.
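For anyone who hasn't tried it, the whole trick is one ssh flag; a minimal sketch (the host and user names are made up, and the remote sshd needs X11Forwarding enabled):
    # log in to the dual-homed box and tunnel X11 back over the ssh connection
    ssh -X admin@gateway.internal
    # on the remote end ssh points DISPLAY at the tunnel, e.g. localhost:10.0
    echo $DISPLAY
    # any GUI program started here now draws on the local screen
    xclock &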
Certainly brought back memories! My first job was using CP/M on a vt100 compatible terminal. I have particular memories of the Rosy tty and writing CAD software to use the 4014 as a display. Punch cards, paper tape, tty's, 4014's were still in use even in the early 80's so I got to get a taste of it all. Thanks for a great video!
Your video was very fascinating. I worked for Visual Technology from 1980 to 1987. I started as an assembler and ended as a lead technician. What fed Visual's growth was their development of hardware emulation of DEC's terminals as well as those of other companies. The Visual V-100, for example, emulated the DEC VT-100 nearly exactly. Going into the setup allowed the user to switch emulation to ADS, or Lear Siegler, if needed, as well as set other parameters. They had a complete line of terminals, including those made under contract for various companies including Burroughs. These were called SPRs or special production runs. What also set Visual's terminals apart from the others was the quality. They used Key-tronics ergonomic keyboards and also high-quality CRTs. In addition to standard ANSI and ASCII terminals, Visual produced a line of graphics terminals that were both a standard DEC compatible terminal and a graphics terminal. Their V-102 with the graphics option, an add-on daughter card that sat on deadly pin headers that could rip one's hands apart, handled the GTCO and DEC ReGIS graphics. Their V550 line used a special low persistence CRT to allow for high-resolution graphics, and their V-450 and V-480 series utilized dual Z-80s, with one dedicated to the standard terminal and the other to graphics, the V-480 being the color version. In 1982, Visual purchased Ontel Corporation. Ontel produced "intelligent" terminals that were beyond a regular terminal. Ontel's products were more like small PCs complete with hard disks, Phoenix and Hawk drives, floppy drives, and a multitude of add-on cards. The systems had dedicated slots for the main cards as well as additional slots for the add-on cards such as word-mover-controllers for word processing, special communications cards such as SDLC controllers, and many other interesting cards including RAM cards covered with static RAM chips. Shortly after this purchase, Visual entered the personal computer market with their V-1050 CP/M 3.0 (Plus) computer and later their Commuter Computer, a transportable IBM compatible. I had a V-1050 (kicks self for selling it!). It was a great system that came bundled with all kinds of software including a Z-80 assembler and C-BASIC. The system had a unique hardware setup with a 6502 for graphics with a dedicated 32K of video memory. I still have my Commuter complete with user manuals, and that system is still operational today as it was when I first got it. I only wish now that I kept my V-1050. By the end of the 1980s, Visual was a bit worse for the wear and made one more shot at the waning terminal market. Their foray into bit-mapped graphics and X terminals, while successful, was a bit too little and too late. They never had the sales they expected, as that market was terminal due to PCs being able to emulate the very terminals they were producing through software instead of hardware. They closed their doors in 1991 or 1992.
There are many excellent channels on yt but very few are done at the level this one is. Concise and to the point, perfect narration, witty humour and no worn out jokes and YouTube clichés. I salute you Sir !
Same here. I liked mine so much that I didn’t let them give me a VT220 when it became available. By the time I left the company, my 100 was so old that they let me take it with me. 😊
This is incredible stuff. Thank you for taking the time to dive into this history! I've been taking a deep dive into the history of computing for quite a while now and your content definitely scratches that itch. I'm now a new subscriber, and I look forward to checking your content out. You definitely deserve more recognition!
Absolutely fascinating, so well done. I loved the SGI references, still regret selling mine. I think X gets more grief than it perhaps deserves, there's something to be said about opening the display of a 30 year old system on your modern system, but then I'm far from your typical use case. 🙂
Again a great episode! Noteworthy is the SAGE computer system (AN/FSQ-7), which predates the PDP-1 (by probably a couple of months). The system had vector display terminals with their vector lists stored in rotating drums within the display stations.
SAGE was a very interesting machine/project. What I could not find was a trustworthy timeline of which bits of SAGE became operational when. There is a date most sources seem to use; what I could not determine was whether that date covered all the parts of SAGE (including the vector displays), or just some core components of the system.
Unix 7th Edition refers to typewriters quite a lot. init in V7, for example: “When init first is executed the console typewriter /dev/console. is opened for reading and writing and the shell is invoked immediately. This feature is used to bring up a single-user system. If the shell terminates, init comes up multi-user and the process described below is started. When init comes up multiuser, it invokes a shell, with input taken from the file /etc/rc. This command file performs housekeeping like removing temporary files, mounting file systems, and starting daemons.”
I was expecting this to be about display hardware like CRTs, like how you explained at the end, but what this turned out to be was even more interesting, for the reasons you listed. Subscribed!
The IBM world was also a lot more wide than 3270 or SNA; there was also Twinax for the minis (S/36 - S/38 - AS/400 etc). Both 3270 and 5250 terminals were block mode instead of character mode, and they effectively worked like hardware-based HTML forms. You'd send a display list to the terminal with input and output records, and you would get the records back when the user pressed a key that raised an interrupt (i.e. the F keys, submit, etc). Stuff like arrow keys and a lot of line/screen editing was done locally on the terminal.
Block-mode terminals were designed very much for computer efficiency, not user efficiency. They were a typically IBM way of doing “interaction”. Meanwhile, DEC had terminals where an interrupt was sent to the CPU for every key you pressed. Anathema to an expensive mainframe, but a good fit for user-friendly interactive computing on a DEC mini.
@@RogerioPereiradaSilva77 I have my own personal pet mainframe - I know ;-). CHUNGUS is a z14-ZR1 that lives in my garage. There's also BACKPAIN, which is a P8 running IBM i
Block mode 3270 and 5250 IBM terminals make a lot of sense. The upstream hardware was only interrupted once per screen full of input rather than on every character. There are far more applications than you might think that still use these protocols via terminal emulators. Most Costco stores have a workstation with a 5250 emulator interface for inventory and other stuff that customers can see, usually near the food court and or restrooms.
A few additional notes (and corrections) on the subject of random access displays (as opposed to raster CRTs): What all these have in common is that you address a random point and activate it. In its most basic implementation, this is a "point plotting" display or "X/Y display". This just displays a dot at the coordinates in question, maybe at a selected brightness/intensity. This is also what's found on the PDP-1. Notably, the dots drawn are discontinuous, it's just a dot and the next dot. Everything else, e.g., drawing a line or refreshing what's on the screen, is left to the software in the computer. Otherwise, the dots just fade away (these displays typically feature a slow phosphor to stabilize the image), to be replaced by what's drawn next. (Therefore, these displays are also sometimes called "animated displays" or "painted displays".) Technically, it's an elaborate oscilloscope with a digital input, typically using the kind of CRT that had been developed for RADAR. (Remember the slow phosphor? This is a feature you want to have for both applications, as is a high display resolution.) Most notably, there is no memory or register whatsoever in the display, besides the buffers for the current display coordinates. (Given the quite astronomical cost of core memory for what would be required for a screen buffer, as well as the rather slow speed of this memory, we can probably see why this was a typical setup for early computer displays. On the other hand, as resolution wasn't limited by any RAM, these displays also had quite a high resolution, in the case of the PDP-1 1024 x 1024, and there was even an option for a small, high density 4096 x 4096 display with a display area of just 3" x 3". We'll see later why that had to be this small.) The next step is adding some memory and processing power in order to off-load the refresh cycle to a dedicated piece of hardware. These were typically cabinet sized, just like the computers they were connected to. Now you could also have vector commands, while the display hardware was still just drawing dots. Also, this was quite a complex and costly piece of hardware. On the other hand, there were actual vector displays, which provided continuous activation, while the beam swept over the screen, from one location to the next one. So, instead of discrete dots, we get a series of lines connecting these dots. In order to do so, there had to be some memory, as well, where we can store a list of coordinates that should be visited in turn, as the image is redrawn on the screen. This is a true vector display, and it comes a bit later in history (because of memory technology). There were even variants, where a sub-circuit would add the required modulations for drawing characters at a given display location, providing a much faster way to draw text than the display would otherwise allow for. Here, speed translates to the amount of text that can be displayed at once, without the display starting to flicker, as the list of display coordinates to visit in a redraw cycle becomes too long to keep up with the kind of sustain the phosphor of the CRT can provide. On the downside, these characters were often crude and not that great to read. A limiting factor for all these random access displays was image stability.
While the beam of a raster display travels just a minimal amount between two display locations, and in a continuous motion, the beam of a random access display has to address various locations, which may be anywhere on the display area of the screen (as implied by "random access"). Meaning, it may have to travel quite a bit at high speed, which is prone to all kinds of overshoot and undershoot as the deflection coils are energized to the required level to address a given location. But, if we want a stable and sharp image, the location illuminated as we refresh the screen should be exactly where we illuminated it the cycle before. In the case of a point plotting display, where we are displaying discrete dots, we can address this by adding some time between moving the beam to a location and activating it to a level where we excite the phosphor to a visible level. Here, in terms of the precision and the resolutions at which we can achieve a stable image, the simple point plotting displays really shine, while this also puts a limit on the number of dots we can put on the screen without too much flicker. (This also introduces an additional constraint of display size versus precision and stability. Remember the small high-resolution display of the PDP-1?) With vector displays, not so much, as we can't just stop before the next location, which is also why there are often blurry edges, especially with the more cost-effective variety. Last, but not least, there were also memory display tubes, where the CRT "remembers" where it had been activated and redraws this on its own, without any further input. As this is determined by the phosphor, there is no need to store these locations and it's quite precise. But, as in the video, this is a story of its own and this comment is already quite long as it is… Also, thanks for a great video! I always wanted some coffee table book on the subject.
This was a surprisingly interesting video, awesome job! Thank you. I really didn't think there was so much to this seemingly simplistic subject that I was completely unaware of. Very enthralling.
28:26 DEC’s terminals actually implemented a superset of the ANSI-standard escape sequences. The ones with the question mark after the CSI are DEC-specific ones. They also became part of the _de-facto_ standard.
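Both families still work in any modern terminal emulator, so the difference is easy to see for yourself; a small sketch (ESC [ ... is the ANSI CSI form, ESC [ ? ... the DEC private form):
    # ANSI-standard sequences: clear the screen, then move the cursor home
    printf '\033[2J\033[H'
    # DEC private mode (note the ?): hide the text cursor, then show it again
    printf '\033[?25l'; sleep 2; printf '\033[?25h'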
EDSAC had bitmap displays in 1949, the SAGE early warning system had displays with light pens (guns) in the mid 50's... Tektronix 1960's... Evans and Sutherland 1970's...
Without looking it up, I'd guess the SAGE displays were much more CRTs driven by analog radar circuitry than actual computer displays. It would have been easier, probably. Light pens don't really need anything from a computer to get an X-Y coordinate-as-a-voltage from a scanning CRT.
18:45 On the Windows side it still echoes in text files. If you create a text file on Linux or another UNIX-like system, the end of a line is marked with just a line feed ("new line") character, while on Windows it is a line feed following a "carriage return" character. As you might guess, that character tells the (in this case non-existent) teleprinter to return the carriage to the leftmost position, before the line feed tells it to go down to the next line. The HTTP and Telnet protocols also use the carriage return + line feed scheme.
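Easy to check for yourself; a minimal sketch using od to dump the actual bytes:
    # Unix-style line ending: just a line feed (\n)
    printf 'hello\n' > unix.txt
    # Windows-style: carriage return then line feed (\r\n)
    printf 'hello\r\n' > dos.txt
    od -c unix.txt   # ends ... h e l l o \n
    od -c dos.txt    # ends ... h e l l o \r \n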
IIRC, the vt100 could not position the cursor on the screen. The text that went to the screen simply advanced line after line. It was only with the vt102 that you could jump around the screen to place the text where you wanted. This is why people were still using line by line editors like ed with the vt100. Only with the vt102 could you emulate something like a full screen text editor. I actually had both a vt100 and a vt102 that I used through the serial ports of an early 90s Linux computer for additional live terminals, years after regular CGA and better displays, and PS2 keyboards and the like were readily available. So I had the main display and keyboard hooked up to the Linux box, but also the vt100 and vt102 hooked up so that other people could work on it at the same time.
I need to correct myself. Now that I'm thinking about it, what I had was a TTY dumb terminal on one port, and then either a vt100 or vt102 on the other. You were right that the vt100 had full screen text positioning, like the vt102. Other than that, my uninvited reminiscence is accurate. I would, for example, use tail -f to have a constant display of a log on the TTY display, and have top running constantly on the vt102. I could keep my eye on them without having to switch away on my main display. Note that I didn't run X on that linux box, which was the primary server for the ISP I owned.
38:25 This was based on an actual standard called “GKS”. I think the “GSX” name denoted a superset. The problem with GKS was, it was never designed for highly interactive graphics, with things changing on the screen all the time. It was more oriented towards CAD-style applications. You wouldn’t want to use it for games, for example.
Williams-Kilburn tube. I was at Manchester University in the late 1990s when they rebuilt a Manchester Baby for the 50th anniversary (I wrote a full speed emulation of the Baby for a classic Mac!) and I think it's not really true to say that the Williams-Kilburn tube wasn't understood as a display at the time, for the very simple reason they wrote an animation program to demonstrate its display capabilities, called Kilburn's Nightmare. Which brings me to the next point: it's not true to say that the CRT was observed by pulling down the metal plate on the memory. Instead, there were at least 4 tubes: one for the program (32 words x 32 bits); one for the accumulator; another for the program counter (yep, a whole tube with just one line); and a final tube, called the monitor, which was simply a CRT that showed a copy of one of the other CRTs, selected by 3 buttons. Which brings me to the last point: on the Williams-Kilburn tube, the brightness of the dot didn't denote 0 or 1, the _length_ of the dot denoted 0 or 1, where a 1 was 3x the length of a 0. All this is described in the SSEM Programmer's Reference Manual: curation.cs.manchester.ac.uk/computer50/www.computer50.org/mark1/prog98/ssemref.html Kilburn's Nightmare can be seen on this screenshot describing a Manchester Baby simulator: davidsharp.com/baby/
I see what you mean. The point I was trying to make was that they were conceived as a memory device, not a display. Yes, they definitely knew about the useful side effect of them being able to display stuff, and made use of it. However, they were still built and conceived of as memory devices. You're right about the line length, I did not explain that part well. It was about pushing out sufficient electrons so that the read beam would not cause an electron splash at a given point of the screen. Brightness was not the best term to use for that, and a bit more detail there would have helped.
Another leftover from the teletype: in LaTeX, the monospace font is referenced by \texttt{} for "text teletype." Since LaTeX is dominant for Computer Science and Math papers, and a monospace font is typically used for program listings, it means most programs published in academia are set in the "teletype font".
It's a really interesting place to visit. The ticketing situation with Bletchley Park and The National Museum of Computing is a bit odd however. If you buy a ticket for Bletchley Park, it does not get you into TNMOC despite them being on the same site, as it used to. Bletchley Park decided it wanted to keep all the ticket revenue, despite most visitors wanting to visit both Bletchley Park and TNMOC.
@@RetroBytesUK Thanks for the tip-off! When my mate is a bit better the intention is to hire a car and make a day of it, although I'd likely want a whole week with this (at least!). I'd be happy to make whatever donation I can.
We used to correct errors on teletype program entry by cutting/splicing the paper tape (if it was an error far up the program) or with the rubout. It was a s*d of a job so we tended to not make many mistakes. Didn't stop me (first program in 1976) from "doing computers" for ever. Still I mean :-P
I somehow wasn't expecting X and Wayland to get brought up here, but I'm glad they were. I've been using Wayland (specifically Sway, a Wayland compositor meant to work like i3wm) on my crappy laptop for years now, and it runs so much smoother than X ever could.
The blinkenlights reminded me of when a place I worked for bought a computer business in Germany. The Germans had added a "feature" to their Unix machines, a small computer with an LED display that communicated with a daemon through an RS-232 port. It displayed the status of the machine (memory usage, CPU load, etc.) with various numeric displays and of course colored blinking LEDs. We tried very hard to be polite as they explained what this goofy thing was and how it made their Unix machines the best in the world. When we the new owners of their company told them to get rid of it because it was a waste of money, they were shocked and offended! They had spent thousands of dollars developing this device which gave instant and vital information about the system to anyone who happened to be in the computer room and couldn't be bothered to type a command to get the same information.
MacOS X does have X11 available, at least for 10.0 to 10.10. I've run many Unix apps which used it on my 2012 MacBook Air. And on my 2008 MBP. It was even downloadable from Apple. It just wasn't the primary display mode. Also, neither of the Unix installs I've been using recently uses Wayland; both still use X11 as primary. I could install it, but the Unix VEnv on the Chromebook already slows the Chromebook down plenty.
35:30 That's not a serial port in the middle of the Apple 1 board: that's a 74154 demultiplexer used for the address decoding system. The jumpers next to it set up the memory map. The actual interface to the video system isn't serial, it's parallel, using the 6820 PIA in the lower left corner of the board, just to the left of the CPU. (The computer side doesn't see much difference between this and a UART, however; it still just checks that video system is ready to receive a byte and then writes the byte to the PIA.) The 6820 is also used for input from the keyboard, which is also a parallel interface. The Apple 1 video system doesn't even reach the level of "glass TTY." A TTY at least could backspace and overstrike, even if it couldn't reverse line feed. The only control character available on the Apple 1 video system is CR, which returns the cursor to the left-hand side of the display and moves it down a line, scrolling the display if necessary. You couldn't backspace, and you couldn't even clear the screen through software (though there was a hardware button to do this). It's also quite slow: due to the use of delay loop memory, you can write a new character to the screen only once every display refresh, or 60 characters per second. That's about the same as a 600 bps serial link, faster than a 300 baud modem but slower than a 1200. Oh, and it's not called "X Windows," but the "X Window System."
Well, that was surprisingly interesting and entertaining. I appreciate that you mixed in some humour and comedy without going overboard, at least for my taste. I look forward to seeing you cover similar topics in the future, if you so choose.
What I'll always remember about Xwindows is the massive set of manuals that documented all the intricacies of it. I used monochrome X terminals in university in the early 90s. They were a big step up over the ASCII terminals they were using until a year or two before that.
It took Walmart up until three years ago to switch from terminal based applications for receiving freight. I used to work on a receiving dock and they were using an android terminal emulator that connected via xterm to the mainframe on site. The program we used would ask for a terminal type and autofilled the field with "vt220". They still use terminal programs to post and research problems with orders. The worst part about it is that the android based programs they use now are worse because they use http requests and are actually 100x slower than an xterm connection, and that delay is on top of all of the "fluff" they added in to make the program look more like a phone application like animations.
The smart system is almost completely phased out now. It was also used for user permissions, like access to the profit and loss app, as well as whether or not an associate was eligible to sign up for benefits. Now that is managed by some shitty app on the Wire and takes way longer to do things than the smart system did.
@quohime1824 I work in a DC which uses even more antiquated UNIX mainframe for receiving and orderfilling. But yeah, moving to html and webapps makes everything so much slower. We used to be able to write macros to do repetitive tasks in the terminal emulators, but now we have to wait for web requests to load the entire webapp, then query a super slow database, then populate the webapp with the database data. All on thin clients or very old android mobile computers that can hardly handle simple transition animations.
Thin clients were meant to be like old terminals. Access a database from multiple different locations. The database it supposed to do all of the heavy calculations and normalize controls between systems. F7 was always APPLY no matter what screen you were on. Now every webapp has a thousand animations and gradients and different teams build different apps and nothing is fast or similar. Power users in the field have been crippled by all of the changes and it's no longer rewarding to fly through screens and fix problems for the unloaders on the dock.
@@rileyjones7231 People use macros quite a lot in FedEx still today as well, in 3270 green screen emulators; it is the only way to get many things done, as it rarely gets any development. You can use Visual Basic scripts in Excel as well to interact with the terminal emulator. Where it gets fun is when whole departments become dependent on a macro, the guy who wrote it left long ago, and there's a screen change and everything collapses because no one knows how it works :) Agree with you all concerning the web shell interfaces, it's the same mainframe behind them, but they work really slowly. They told us for 15 years the mainframe was going away, but it's still here, and will be for a long time. During the period when they were sure it was going away they stopped documenting changes, and there was a big problem with code that no one knew what it did or how to change it. They actually have researchers now who study the old code and document its functionality, sort of like mainframe archaeologists :)
In the early 60s there was another application that started out using the Tek style terminals called PLATO. This was a computer based education project run out of the University of Illinois. I highly recommend Brian Dear's book The Friendly Orange Glow for a comprehensive history of PLATO and especially the evolution of the display technology. In the late 60s they invented the first plasma screens which also used the pixel wiring grid as the video memory. Really interesting stuff...
"However...there's a but. And it's a big but, I cannot lie." I like big buts. Battlestar Galactica?! I remember watching Mormons in Space as a first-run series, though it was only latterly (pun intended) that I came to realize the series arc was a rehash of Joseph's Myths. I thought it was nice they actually credited Tektronics for their terminal displays.
We had Commodore PET computers at college in 1981 but we also had a punched card machine. Part of our course involved writing a program on punched cards. Your stack of cards was then sent away for processing. Two weeks later you would get back a printout with the error code showing that your program had failed. Programming on the PET was much more rewarding.
I had the same problem but was too early for micros. Like you said even a simple program would come back because a comma was missing or similar fault. You would repunch the offending card and send them off for another week just to find a later card in the sequence had an error. Rinse repeat. It could take many weeks to get a simple program to run successfully. I envy those who came later and had access to terminals or micros with their almost instant response.
@@crabby7668 Even in 1981 this system was a museum piece. It was there mainly to teach how computers have developed. At the time I thought it completely pointless, but it was in fact assembly language, which was incredibly important to the home micro revolution at the time. My friend had an Acorn Atom home computer and had learned assembly language on its built-in assembler, so he was a real whiz on the PETs. They also had an LSI-11, a clone of the PDP-11, running VMS I think. I was not interested in that but perhaps I should have been. Had they hooked up the punched card machine to that, perhaps our cards could have been processed a lot sooner. Maybe I'm missing the point of the punched card machine, maybe we were supposed to wait two weeks?
@@wayland7150 Sounds great. My institution hadn't got that far when I was there. We were supposed to be learning Fortran, but with the turnaround time on the punch cards it was very hard to do. I remember my first program was just averaging three numbers, but as I said it took weeks, because your cards came back with an error, which you fixed and sent back. Then it would compile past that point and find another error; rinse, repeat. We didn't even have access to interactive terminals. Imagine doing your programming by post instead of on a terminal or PC, and that will give you an idea of how clumsy it was. I worked in a company with a similar set-up, but the punch machines and computers were dedicated to the job, so it was much quicker as you weren't sharing with everyone else.
34:28 Sinclair's Mark14? Nooo!!!! The MK14 designation for the Science of Cambridge SBC stood for Microcomputer Kit, with 14 major chips, and is correctly pronounced EM KAY fourteen. Watch any video that interviews the developers (Steve Furber, Sophie Wilson etc) for confirmation.
I don't think Sophie or Steve had anything to do with the MK14. Chris Curry did, who started CPU Ltd (and then Acorn) and would subsequently employ Steve and Sophie. Science of Cambridge was owned by Sir Clive, who put Chris Curry in charge of it. The creation of the MK14 was Chris's idea.
@@RetroBytesUK True, my bad. Sophie and Steve weren't the actual designers but they do talk about its development as part of their 'History of Acorn' type of interviews and were the first names to come to mind. As you say, Chris was in charge of SoC, Sir Clive's holding company, and decided to produce the kit. He didn't do the design though, and agreed to buy the original design from a guy (whose name escapes me atm), but when he was trying to do a deal with National Semiconductor for the required chips they suggested he just use their reference design that they used for their own Introkit system. The point is none of the original developers referred to it as the mark 14 and neither did the engineers I spoke to when I personally collected my optional expansion hardware (RAM, IO, Cassette i/f) directly from the SoC offices at 6 Kings Parade. Interestingly the local fire prevention officer would have had a coronary if he could have seen the stairway up to the workshop, which was 50% blocked with .... Acorn System 1 display/sales boxes.
I really really want to get a teletype, no idea where I would put it. Also you never see them come up; somehow Dave (whose one I filmed) manages to find them, but I never have.
@@RetroBytesUK Well I was lucky enough to be in the Science Museum in early 1994, and they had a slightly more modern version behind glass, but it would be constantly hammering out wire news, and on that day Erich Honecker had just resigned as Grand Wizard or whatever the frig he was, which of course triggered the Berlin Wall teardown. And somewhere in this house in London is that very printout, cause there was an attendant who tore off the individual news items and handed them out to small nerds in the room at the time. Ahh the bad old days, I loved them.
Ah man I love the Colossus (1943), it's my third favourite first computer, right after Babbage's Analytical Engine (1837) and the Zuse Z machines (1938, 1940, and 1941)
Babbage never did finish his Analytical Engine, so it remained theoretical until a practical version was built in the 1980s. All the Zuse machines fall into not being Turing complete, or not being finished and/or working. That's why most modern text books go for Colossus as the first, as it was Turing complete, fully completed and working. Flowers was not the first to have the idea, he was the first to get a full version complete and working.
@@RetroBytesUK The Colossus was actually not Turing complete, and the Z3 was finished and could operate as a Turing complete machine (very badly; it had no conditional branching and so could only be counted as Turing complete if it calculated every possible outcome of a given program, which stacks up the compute time very quickly)
Amazing video. I watched it till the end, as with all your videos. I didn't know about X using shared memory; I thought clients still communicated with the server using sockets, but it makes sense from a performance point of view.
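To be precise, the socket is still the baseline protocol; the MIT-SHM extension only kicks in when client and server share a machine. You can check whether your server advertises it; a small sketch (assuming xdpyinfo is installed):
    # list the extensions the running X server supports and look for MIT-SHM
    xdpyinfo | grep -i 'MIT-SHM'
    # on a forwarded or remote display the extension is absent or unusable,
    # and clients quietly fall back to pushing pixels over the socket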
You thanked me for getting to the end? I've got a big music fest coming up this Saturday -- 28 acts in one day -- and needed to fix a lighting board. I couldn't listen to every word while I troubleshot and then soldered a new button onto said board, but this was great. Also my first time seeing your videos. Thank *you*, and subscribed. Cross your fingers it worked!
@@RetroBytesUK half hour main stage, 15 minute side stage. the stages are indoors and 20' away from eachother but still named as such. when you start at 3pm, anything's possible :-)
4:02 A couple notes regarding the Williams Kilburn tubes without being too pedantic. The 0 was implemented as a dot and the 1 was a dash. In that way they could recover a clock from the signal. That leads to all the little dots present, and some of them "brighter" because there were the ones, like 2 dots together. And you cannot really open the lid to see the dots, because the lack of refresh would immediately kill the memory. So what they did for practical reasons was to simply place a second independent CRT in parallel, without a lid, to actually see the dots.
Slight correction, SGI did use X-Windows, they just didn't use the Common Desktop Environment which was a layer on top of X11 meant to standardise the look and feel of the UNIX desktop, since (like text shells before it) X11 was "implementation, not policy" and allowed everyone to write their own GUI, so they did - before standardising on CDE. That is a weakness/strength of the UNIX/Linux/BSD desktop that persists today even in Wayland. Yes, it's a strength as well as a weakness. The strength is that nobody has to just settle for one desktop environment, the weakness is that you have to relearn how to use the computer if you move DEs. But honestly, although they all make different design decisions, and which is better is a matter of taste, relearning a GUI isn't *that* hard, especially if you're not one of these people who insists on using the keyboard for navigation when you could just use the mouse. Anyway, an example of a company that *did* use a UNIX windowing system that wasn't X11 was Sun Microsystems with NeWS, which was both network transparent (like X11) and, if memory serves, also used Display Postscript like NeXTSTEP and MacOS. But even they transitioned to X11 pretty sharpish, and developed the Open Look toolkit which mimicked the look and feel of NeWS on X11, before also switching to CDE. I wasn't a computer science student but I did use Solaris on the university computer labs when I studied in Germany in the late 1990s, and by then the Sun workstations (SparcStation 20s IIRC) were using the famous blue-and-pink theme of Solaris CDE. I think Apollo (which ended up being swallowed by Hewlett Packard long before they swallowed Compaq, which by then had swallowed DEC) may have also used their own proprietary windowing system. Their UNIX-like clustering DomainOS was definitely proprietary. Other than that (and apologies for the ramble), it's a great video! 0:02
X11 didn’t come along until about 1987. Prior to that time, the Unix workstation vendors were all doing their own thing. For example, SGI had something they called “MEX”. Sun’s NeWS predated Display PostScript, so they had to come up with their own PostScript extensions for interactive use.
My university primarily had Sun SPARC Stations that used XWindows on UNIX. And there was nothing stopping us opening a terminal to an _Ultra_ SPARC station and connecting an XClient back to our local server on the slow SPARC Station over the secure terminal connection. I memorized the IP addresses of all the fastest UltraSPARCs on campus.
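Depending on the era, that was either ssh -X style forwarding or the older manual method; a sketch of the classic hand-rolled way (hostnames are invented, and xhost-based access control is famously insecure):
    # on the slow local SPARCstation: let the fast machine draw on this display
    xhost +ultra1.campus.example
    # on the UltraSPARC, after logging in: point X clients back at the local server
    DISPLAY=sparc20.campus.example:0; export DISPLAY
    xterm &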
Huge fan of your channel. Your Sun Microsystems videos make my nostalgia nerve tickle. My first real corporate job was all Sun. Could you do a deep dive on the Bendix G15? Way before my time, yours too, but it is a crazy vacuum tube machine! What you'd think is the hard drive is actually the memory. It also controls the timing. It's the culmination of what our granddads would create and call a computer.
Also, funny story about how I almost lost that job, sometime in 2000. Ameritrade, on a Friday, near the end of the day. The trading server went down. It cost over a million dollars in the aftermath. Some dumb junior engineer did an init 6 on it while logged in as root. He thought it was his local machine. I learned a valuable lesson that day: always do a uname -a before doing something you could regret. That was a Sun E10k, 4 racks of the things. Our SSE tried to kill init, but it was too late. I remember as I told my boss what I did mid-shutdown, the lights flickered. It was the major power draw for the whole campus. After that, probably 8pm, my boss pulled me into a 1 on 1. He said: you're not fired, relax. Every engineer does this once. If you didn't learn this time, I'll make sure no one hires you again. At the time there were only 3 companies in our area that I was qualified for. Even when managing VMs in my own home: uname -a every time. It's muscle memory at this point.
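The habit is easy to make mechanical; a minimal sketch (the hostname is hypothetical):
    # confirm which machine this shell is really on before doing anything drastic
    uname -a
    # or wrap the scary command so it refuses to run on the wrong host
    if [ "$(hostname)" = "my-workstation" ]; then
        sudo reboot
    else
        echo "refusing: this shell is on $(hostname)"
    fi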
Back when I first started in IT, there was a shortage of newer terminals.. something about the factory switching production from a discrete logic design to a microprocessor based design. Anyway, they dug up all these ancient units from anyplace that had any. I had to get enough of them working so the programmers could do their thing. These units used the same sort of delay lines for the screen memory. I never noticed them being sensitive to physical shock. Another odd thing about these relics was that the character generator was a board filled with individual diodes for the bits.
Great video; I enjoyed it immensely. I went from punched card I/O (no printing) on an IBM 1620 through re-wiring an IBM accounting machine to make it print at one line per second, booting a PDP-6 with 36 bit-switches to re-writing the IBM 1401 bootstrap loader (in one card), TTY terminals for APL on an ICL 1903A, a TV and cassette player for the Radio Shack MC-10, and now am struggling to get one of my four Windows laptops in working order. I still have my MC-10. So you can imagine how much I enjoyed watching the video. At Normal Speed! A suggestion: no background music and no background images. For example, from 35:13 through 49:37 I found it hard to focus on the image and the narration. (Well, I ***am*** an old man!) Cheers, and Thanks again.
Loved the detail and the pace of the video! If I can give some feedback, I would avoid the static noise transition; it's very aggressive and loud. Keep up the great content!!
interesting video. One area about modern LCD's and displays that you could have expanded on that does have to do with how the computer actually generates the image is to go into the recent development of things like G-Sync, VRR (Variable Refresh Rate) modes that are appearing on recent gaming monitors and TVs and explain what issue they are trying to solve. That does require some sophisticated communication between the PC and monitor including special chipsets on the monitor itself.
You're right, G-Sync and VRR are very interesting technologies. At some point I may do something on how computers and monitors communicate and how that's changed.
Thanks for this channel, I enjoyed it very much. I used to work on a PDP-11/34 with the RSX-11M system, with 8 VT100 terminals, one VT220 and 12 LA36 printers: operating, maintenance, replacing cards. It was a great computer, and still is. I also worked on the Commodore 16 and 64, the Amstrad 512 and 1024, and NEC and Olivetti computers.
By the way, the part about graphics terminals reminded me of The Cuckoo’s Egg by Cliff Stoll. You may already have read the book, but if you’ve only seen the PBS documentary based on it, that only covers about 10% of the book. I love how he complains about his BSD-loving and VMS-loving coworkers fighting with each other about which was better. And there’s a whole segment discussing getting a new graphing terminal and spending days trying new visualisation programs for his physics students’ data, or something like that. It’s really immersive and fun if you’re already accustomed to reading terminal outputs, although he certainly does his best to narrate and describe what some things mean. But if there’s no learning curve, you can jump right in and feel (secondhand) nostalgia. Also, if you weren’t familiar with it, part of how they tracked down the hacker was by realising he used ‘ls’ flags which weren’t common in their region, but were in other ones! Something they probably wouldn’t have noticed if they weren’t immersed in those platform supremacy arguments 😆
Excellent and fairly comprehensive. Only a couple things I think you passed over. (1) Things like the dot matrix DECwriter which were essentially high speed teletypes that got rid of the spinning ball head printing element and replaced it with the dot matrix head. Much quieter and faster. It had the advantage of still allowing a printed record of your session. The other thing is the fascinating evolution of accelerated graphics hardware and the career of Jay Miner. He developed some of the earliest raster acceleration technology for the Atari 800 series of microcomputers. It allowed you to have hardware sprites and you could switch color mapping, graphics modes and memory mapping on each new raster line. This evolved into the first hardware bit blitter (graphics mover) invented by Jim Clark, the founder of Silicon Graphics. Jay Miner's next computer, the Amiga, also had a hardware bit blitter as well as all of the goodies created for the Atari 800. This wonderful combination of raster-line remapping, hardware bit blitter and sprites enabled the creation of the Amiga's Workbench GUI operating environment. The Amiga was the first personal computer with all of those features, stereo digital audio and a full multi-tasking OS with standardized graphics libraries -- making it way ahead of its time. Unfortunately Commodore mismanaged that whole thing and we had to wait for Linux to give us an alternative to the lesser PC and Mac environments.
One last thing, I used to program real-time graphics on the Tektronix 4010 and 4012 (and I think 4016?). It had storage (write-through) mode where what you wrote to the display, stayed on screen whether it was text or graphics. There was also a mode that used a lower beam strength that required you to refresh the image and thus enabled real-time vector displays. The down side was that it was driven over a serial line which meant that you might not be able to refresh fast enough, or if someone on another session was hogging all of the serial I/O or CPU your real-time display would stutter and die. It was tough to find a microcomputer fast and efficient enough to reproduce the video games of the day like Atari Asteroids, Tempest etc.
Was there ever such a thing as a wax record memory, where something sorta like a vinyl disc made of easily meltable soft material, would have grooves or divots mechanically carved into it, and read sorta like a vinyl record is read, and for erasing a little pointy hot thing, like the tip of soldering iron, would go over where it needs to be reset into a blank state to melt the region into a liquid that would spontaneously get back into flat shape?
I remember using telex terminals with paper tape readers & writers. We always coiled our tapes in a Figure-8 as it wouldn't jam or rip the tape. And if you typed fast you could type faster than the hole punch could punch. I also remember the audio cassette storage and those were valuable. Eventually i worked for CDP (the first IBM PC clone) and wrote software including SCSI device drivers used by anyone who used really massive hard drives. We were in the ANSI SCSI Standards Committee. Ultimately it became other things including the CAN network used in all modern cars (including the Tesla). As for display technology I tried to get Intel to separate video refresh from their CPU as an effort to speed up CPU processing. Somehow Intel never really understood this and i went elsewhere. Those were the days. We made History.
Man, I love that 40's and 50's music you put into your videos to give them the atmosphere of old & retro. Your videos are a great way of learning what they don't teach you at university when studying computer or electronic engineering these days. I am a big fan of yours!
I owned a couple of NEC Spinwriter teleprinters as a teenager. I hooked one up to an old Hayes 300 smartmodem and was able to dial in to my local chat BBS. It worked, but you could only chat for as long as you had more tractor feed paper.
My introduction to computers was in 69-70. Our math teacher had a teletype terminal installed in his classroom and we could write BASIC programs that we could run on a timeshared mainframe someplace downtown. Great fun. From there to the Computer Science program at Northern Arizona University in 75. More teletypes with paper tape punches. Turn your program in and come back a few hours later to see if it ran. Nope, rinse and repeat. Some assignments required you to create a punch card deck and turn those in. Back to school in 82 and using a VAX for some courses, but back to teletypes and paper tape for COBOL programs running on another time shared computer. The advent of the PC was a giant leap. To be able to code and run your programs in real time was wonderful. Creating programs in BASIC and Turbo Pascal. Coding COBOL programs and running/testing in real time on our local IBM 370. We have come a LONG way
27:38 Back in the late 1980s, I realized that 3270 block mode terminals + CICS was effectively client-server computing. 37:27 The VT-100 did it because punch cards are 80 columns.
XTerm still emulates Tektronix storage CRT terminals. Ctrl+middle click on the terminal and select 'Show Tek window'.
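If you want to try it, a couple of ways in, assuming a stock xterm build with Tek support compiled in:

    xterm -t &      # start xterm directly in Tektronix 4014 mode
    # or from a running xterm: hold Ctrl, press the middle mouse button,
    # and pick 'Show Tek Window' / 'Switch to Tek Mode' from the menu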
It can also be built with an option for DEC ReGIS graphics. Much more advanced than crummy Tektronix stuff.
@@lawrencedoliveiro9104 And sixel too. Just switch it to vt340 mode.
@@fhunter1testfyi quite a few terminal emulators do or did support sixel. It's seen a resurgence recently too with the popularity of CLI workflows.
[1] no one uses Xterm anymore [2] no one really used Tektronix mode [3] I used a Real Tektronix [4] with APL charset
@@RootBoyJim [1] true, but it is still available in pretty much all Linux distros [2] it was rarely used because there was not much software using Tektronix graphics mode on Unix [3] my deepest condolences 8-)
When my Grandma worked for IBM in the 60’s in the programming department, debugging a program meant getting a printout of a memory dump from the IBM 1401 and going line by line to see what went wrong and then re-writing the program using her chart and then sending it to a secretary who would send it back to the punch card department and then waiting until your punchcard would be tested on one of the on-site machines. Wash, rinse, repeat.
In the summer job I had at the end of my first year, I tried sending my programs to the data-entry clerks to be punched on to cards, but they had a lot of trouble with typos like distinguishing “O” from “0” and “I” from “1”. So in the end I gave up and punched the cards myself.
In the USSR, one iteration of that cycle was called "approaching the machine". You could be fined for debugging a program with too many approaches.
My father used to work with 1401s. I want to say at his high school but I don't recall. I had a brown three ring binder with some of his punch cards and reference material. Pretty sure it is either in my basement or I gave it to my brother.
@@themax4677 my Grandmother was also trained on the 360 she told me she would much rather program on the 360 because there was less wasted time. She said it could do more and it wasn’t a “glorified calculator”. The 360 had more ram which meant it could as she said “multi task” where the 1401 was stuck doing things one at a time due to its limited ram. When she migrated to the 360, she spent a lot of time developing a program that would literally convert old 1401 code to something the 360 could understand and run.
yeah and people have nostalgia for this period. smh. we left these technologies behind because they were worth leaving behind.
Serial terminals enjoyed a small resurgence in the mid-90s when people started installing Linux end discovered that they could have a second screen by finding an old terminal for cheap or free. One summer I had one outside on the patio.
DEC micros took it a step further. If you plugged in a terminal, you got a second login. Basically multi-user on one PC.
@@HybridBattery Actually, it's exactly the same. DEC machines connected terminals to a serial power, just like you do on a Linux box. One of the major advantages of Linux over most of the DEC machines is that Linux includes SSH and telnet servers that allow you to make a similar connection over a network, which wasn't a default feature of DEC machines. You could add it, however.
@@evensgrey I’m specifically referring to the Pro 325/350 which predate Linux (1983). They run a stripped down RSX-11M. The serially connected terminal is a totally separate user. It’s not a dual-screen scenario.
@@evensgrey Linus Torvalds was only 13 when I tried this trick, and ssh would not be invented for more than another decade.
Is that REALLY your name?
I still like X windows for the fact I can run a GUI application on one machine and get the GUI displayed on another. I find myself using it every so often to get a GUI application on my phone, which is mainly done "because I can", or more practically to run something with a GUI from a remote server, like an image viewer for example.
I use X11 all the time - it's so practical. I basically keep my work stuff in a VM in my rack and, instead of having to risk carrying data with me, I simply VPN back home. 30Mbps is enough for ok performance. I can run MS Teams in Chromium via X, coding happens in vim, git is there, and I truly enjoy the fact that, exactly because performance is not great for animations and too much screen refresh, I actually am not tempted to get distracted with too much web browsing or multititasking.
It's also a godsend for retrocomputing. I can be using a SGI machine and use a modern browser via X. Or any sort of combo.
I am sad to see X11 slowly dying...
Most X extensions don't work over network.
@@IkarusKommt "Most" ?
This is such a weird idea for a human raised on the iPhone and iPad; it's so weird and simple 🤔
Thank you so much for sharing this, this is brilliant
@@IkarusKommt It works for what I need. Of course, no remote rendering shenanigans and such...
I started using computers in school while using a TTY with a dial-up line. They had a special sound-proof room (kind of like what they use for music practic) that they put the thing in just to keep from driving people crazy.
There was a day when they completely ran out of the rolls of paper, but we were undeterred. I went into the mens washroom and took a roll of hand towels, and we used that in the interim until new supplies arrived.
@@James_Knott Acoustic hoods :)
@@James_Knott We even used them in the machine room; might as well cut down what sound we can.
The day I entered high school in 1970 and was introduced to a hard-wired ASR-33 connected to a PDP-8/I changed my life, like a religious experience.
I remember the era before CRT displays. The first minicomputer I used was a PDP-11 running RSTS with a DECWriter. I also used an Apple II in that era as it had just come out.
The DECwriter was the bomb! In the late 1970s my high school had 2 TTY33s and 1 DECwriter. I hogged that DECwriter as much as I could.
I used the PDP11/03 BA11ME. ODT to boot 8" floppies
I started on a PDP-11/70, using DECwriter terminals and green-bar fan-fold paper that came up from a box underneath, then scrolled off the back of the machine as you used it.
After working for an hour, you might have 40+ ft of paper behind the terminal, and you could tear it off to take home as your session log, to review and work with offline. While you were still logged in, you could also just reach behind the paper terminal, grab 4 ft of paper and stand up to "scroll back" and read code you were working on 10 minutes ago.
For the next 4 years, we had a VAX 11/8650 computer with VT100 and VT220 terminals and the VMS operating system... a really great system at the time!!
What? HP had a CRT display in the 9100A in 1968!
Followed by the HP9845.
I still come across DECWriters from the 1970s that still work even though the ribbons are nearly worn out. It's incredible that something so mechanical and noisy could be so reliable. They simply will not die.
That's why if you want to do something serious under the hood in Linux, you still reach for the terminal. I'm glad that's built in and no external teletype has to be used.
An interesting aspect of this is that most distros also have a number of virtual consoles that run alongside your main X or Wayland session. They run a terminal session in a much less abstracted display mode, but with all the features of whatever shell they're running. I've used them a number of times when my X session becomes unresponsive to restart the session and kill a misbehaving program instead of having to force a shutdown. They're usually accessed with ctrl-alt-Fn button combos, although the number that each distro puts the main display on varies.
@@Hugehead1234 however, on modern displays the text on virtual consoles is so small they're virtually useless unless you (a) know what you're doing well enough to do it blind and (b) have decent enough typing skills that typos aren't going to leave you up a certain creek without a propellant; and apparently many IT people aren't touch typists.
Well, as a desktop OS it's never going to make it as long as people think that putting shortcuts on vim that you can click constitutes a user experience. But as a server using a web-based UI, which has reached a real level of sophistication now, the path is clear for Linux, and it probably doesn't need to bother trying to be a desktop OS.
@@Hugehead1234 Yah, I've done that before when a fullscreen game or something has locked up and won't let me see the desktop. Ctrl-Alt-F2 to get to tty2, log in, open htop, kill the malfunctioning program, exit, then Ctrl-Alt-F7 to go back to my desktop session. ...Or a worse case, hit Ctrl-Alt-Del in one of the tty's to shut down _everything_ and reboot.
@tripplefives no, virtual terminals are not managed by X/Wayland. They are managed by your kernel and the init system. You would have a display without X/Wayland, it would be a VT. Technically yes, when X is running it has to manage the transition over to a TTY should you Ctrl+Alt+F2,F3 etc, since it fully controls the keyboard at that point. Note that switching _from_ a TTY does not require the Ctrl, just Alt+Fx
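A minimal sketch of the same trick from the command line, assuming a typical Linux distro with the kbd tools installed; the font path is illustrative and varies by distro:

    sudo chvt 3        # switch to virtual console 3, same as Ctrl+Alt+F3
    sudo fgconsole     # print which virtual console is currently in the foreground
    setfont /usr/share/consolefonts/Lat2-Terminus32x16.psf.gz   # bigger console font for high-DPI screens; path varies by distro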
Thanks!
When I was in my 4th year of high school I learned the escape sequence to move the cursor around on our video terminals. This was on an HP 2000E timeshare system. I used that to write a game I called Logan, which was similar to Robotron (before Robotron) where bots chased you around a yard and you had to make them run into electric poles. Even running on the primitive video terminals of the day, this became insanely popular and eventually took over all 5 terminals in our lab all day long. The computer instructor decided that couldn't happen and deleted it from the system. Rather than recognizing he had a budding computer game author on his hands, he decided he wanted his lab back. Not his best moment.
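The HP 2000's terminals had their own control codes, but the same cursor-movement trick on a modern ANSI/VT100-style terminal looks roughly like this (a sketch, not the HP sequences):

    printf '\033[2J'         # clear the screen
    printf '\033[10;20H*'    # move the cursor to row 10, column 20 and draw a character
    printf '\033[24;1H'      # park the cursor near the bottom before exiting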
You have to wonder how many promising computer game programming careers were snuffed out that way.
Kinda reminds me of Tetris. Created on work computers, and it ended up being so big it affected productivity.
The game you wrote might have been part of the teaching for the class next year.
It wouldn't have been the first time students were shown an example from a previous year, encouraged by it, and challenged to do better.
@@20chocsaday Yeah as it was I was discouraged from developing games and a valuable example was deleted.
Would love a video on the evolution of the X server: how it went from what it was to what it is today, the motivations behind the migration away from it, and the caveats that process faces, possibly due to the longevity of X being so deeply ingrained in all the graphics libraries and applications written to use it. I'm using Linux Mint, which as far as I can tell still uses X.Org instead of Wayland.
To this day (mid 2023), only KDE Plasma, GNOME, and some tiling window managers like Sway and Hyprland support Wayland. MATE and Xfce are working on it. Linux Mint ships with either Cinnamon (by default), Xfce or MATE.
@@MasterGeekMX The Cinnamon devs said that they weren't planning on it any time soon. I imagine they're going to be forced to sooner or later. Wayland is already daily-drivable for most people with little configuration despite the bugs and driver issues, and with major programs like Wine and web browsers slowly implementing Wayland support, X11's dominance feels like it's going to end any day now. For now, though, I can understand why they wouldn't want to commit to porting the entire desktop environment when they could focus on continuing to improve what they already have.
One motivation in the move towards Wayland is security. This is why it often seems limiting in features, or why you can't do something in Wayland that you could do in X. It's not that the devs want to limit features; rather, some features cannot be implemented in a secure manner. Many changes in GNOME also seem motivated by security. For example, GNOME will not let any other application take screenshots; GNOME will prompt the user to share a screenshot with another application if that application requested it. Imagine a malicious application taking screenshots without your knowledge. Now if a malicious application did that, GNOME would ask if you want to share the screenshot with it, giving you the option to prevent it and a clue that you have a problem. In Wayland you cannot reparent windows. A useful feature, but one that can be used maliciously too. I ran across this because I wanted to reparent some windows for an embedded GUI application. Unfortunately in my case the thing I wanted to reparent only works with Wayland, so I can't even drop back to X; I simply cannot do what I want. My workaround was to control the placement of the other window, but it seems you can't do that in Wayland either. It seems writing my own Wayland compositor would work, but that is beyond my skill set.
The history of X11 is easy: first there was "W", which was slow. Then came X1 to X10, which were shit and not upward compatible. And then came X11R0 to X11R4, which were usable but full of caveats; I started with R4. From X11R4 to R7.7 we mostly got only bugfixes. End of history. en.wikipedia.org/wiki/X_Window_System#Release_history
@@ulbuilder IMO Wayland should add a lot of those "insecure" features but lock them down heavily so that only trusted apps like the desktop environment or other explicitly authorized utilities can use them. As it is though, however good Wayland is, it's super limiting. For instance, let's say something basic, like you want to rebind some keys for an application. Great, that should be super easy, barely an inconvenience! Except the application doesn't let you rebind things. Well, easy solution, just make a macro so that when you hit one key, your OS tells the application you did something else. For instance you could rebind a useless "menu" key to instead perform some keyboard shortcut to do what you want! Then your DE is basically able to rebind the key for you, whether the application likes it or not! Simple problem and a simply elegant solution! Except no DE does this natively, not even KDE with its otherwise impressive shortcuts, and on Wayland that requires things you can't readily do (like detecting which window you have focused).
love your content and how you break down things to make it easy to understand
The mainframe at Caulfield Institute of Technology in 1975 was the ICL 1904A (?). It used teletypes but had a handful of vacuum tube terminals. I entered programs in BASIC and got output on a TTY or the VDU, which didn't use any paper. The people doing EDP used FORTRAN on cards; there were these big card readers which started up with a whine.
You tackled some difficult technologies very well there thank you!
This video is longer than what you normally put out. I have to say, though, I thoroughly enjoyed it. It's more in-depth than you normally go. I also enjoyed the original BSG reference...
Keep it up!!!!
My first full-time job was running a computer graphics lab using Vision Control's Conjure graphics system. It was an Australian system that was so rare and expensive that there is very little information about it on the web. We used it for generating SCODL files to feed to a Matrix film recorder, which used a tiny and perfectly flat 4K screen to expose film one pixel at a time, via separate RGB-filtered passes, so it took about 15 minutes to image a single 24-bit colour 4K frame.
Awesome work as always retrobytes, I’d love to see a follow on from this going into the development of pc graphics standards, CGA, VGA, SVGA, etc. (I’m sure there’s more). Many thanks
Was at the national museum last year. Was a blast. Not many visitors, super knowledgeable staff eager to show running machines and able to explain things to less knowledgeable people like me. Unfortunately it is lacking some of the more recent history.
Would deffo recommend. Seems to be a little underfunded though, as it doesn't get government funding.
Buy a coffee and a souvenir from the store inside ❤
Where is it located?
@@wilfredpayne433 Bletchley Park, Milton Keynes, MK3 6EB.
United Kingdom.
This was fun. I worked at DEC from 1981-1985, and during that time I worked on the graphics firmware for the DEC Professional 350 and later was one of the designers of what became the VT340 (the original prototype was much more capable, but too expensive to produce).
The Tek 4010, 4010A and 4014 were storage tube vector displays. Vectors could be drawn at low intensity, in which case they would fade unless refreshed, or at high intensity, in which case they would remain displayed until the screen was cleared, without needing to be refreshed. The tube was cleared not by removing power, but by flooding it with a higher intensity beam, causing the phosphor to release lots of photons and quickly return to a low energy state. The text cursor and cross-hair locator cursor used the low intensity, so they were not retained. If I recall correctly (and I may not), there was a way to erase individual vectors using a higher intensity beam that would overload just the phosphor along the vector, resulting in the phosphor returning to the base state after emitting a lot of photons. In the late 1980s and early 1990s, I was working at a company that produced terminal emulation software for PCs (Persoft, SmartTerm, respectively), and for the VT340 emulation product I wrote a very accurate Tektronix 4010/4014 emulator (the DEC VT240 and VT340 had a Tek emulation mode); though it did not end up in the final product, I originally had an "Easter egg" in the code that would make the screen flash when cleared, had the cursors leave ghosts when left in one place for too long, and a few additional quirks unique to those storage tube terminals.
I also wrote the ANSI, ReGIS and SIXEL parsers for the SmartTerm 240 and 340.
I'm 99% certain Tektronix did not provide a way to erase a vector. Not until the Tektronix 4025 but it wasn't storage tube and plotted about the same speed as a pen plotter.
We had a Teletype machine exactly like that at school in 1980. It had an acoustic coupler modem so it could log into the computer at the local university. We would feed the punch tape back into the machine over and over, making copies to produce confetti.
My school had one too. Although the number of people who knew about the "computer room" was minuscule, and I believe the school couldn't afford the telephone call charges to use it. I think there were a couple of Research Machines computers in the room as well that were used.
Speaking of games on teletype, my dad told me how he used to play lunar lander on a teletype machine and the way it worked just sounded a little painful and very slow.
Teletype lunar lander was a beast. Printed a line with all the flight parameters as quick as it could. At the start of the descent it was nearly "real time". Near the end, you'd make a giant crater before you could correct the engine burn. Don't get me started on "fps" :-D
So? A waste of paper?
@@MalloryCallas That's literally half of what my dad remembered of it. A HUGE waste of paper and ink. But it was the university machine so he and his classmates did not care one bit
A more fun, but potentially dangerous game, was catch the teletype.
@@EngineerOfChaos I remember printing banners was big back in the mid 70's and 80's. Teletypes and dot matrix printers.
Nicely done!
It's funny how X still exists, and I'm still using it. Obviously (I think) I'm using it at the local machine but it's crazy to think how old it is.
X is incredibly old now, but it has kept evolving. It keeps gaining extensions to modernise it; however, it's still compatible with X apps written decades ago.
Most of the X11 developers are concentrating on Wayland now. The sun is very definitely, if slowly, setting on X11.
@@lawrencedoliveiro9104 yep. I'm using KDE on Wayland on Fedora, and unlike a few years ago it's so stable you won't know the difference unless you poke around.
It's still a mess on FreeBSD, however; but then FreeBSD on the desktop is a mess in general. OpenBSD is much better, if slow, and if memory serves there's no Wayland in sight for it.
X11 is the best thing for running programs on a system that is dual-homed on a protected local network, where you don't have access to the console.
You can ssh into the computer and tunnel X back across the network, using the X terminal as the console. I do this all the time, and it saves a great deal of headache.
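A minimal sketch of that, assuming OpenSSH on both ends; the host and user names are placeholders:

    ssh -X user@remote-host    # forward X11 over the SSH connection (use -Y for trusted forwarding)
    xclock &                   # any X client started in this session appears on your local display
    echo $DISPLAY              # sshd sets this up for you, typically something like localhost:10.0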
Certainly brought back memories! My first job was using CP/M on a VT100-compatible terminal. I have particular memories of the Rosy TTY and writing CAD software to use the 4014 as a display. Punch cards, paper tape, TTYs and 4014s were still in use even in the early 80s, so I got a taste of it all. Thanks for a great video!
Such great presentation always
Thanks Autotrope.
Your video was very fascinating. I worked for Visual Technology from 1980 to 1987. I started as an assembler and ended as a lead technician. What fed Visual's growth was their development of hardware emulation of DEC's terminals as well as those of other companies. The Visual V-100, for example, emulated the DEC VT-100 nearly exactly. Going into the setup allowed the user to switch emulation to ADS or Lear Siegler if needed, as well as set other parameters. They had a complete line of terminals, including those made under contract for various companies including Burroughs. These were called SPRs, or special production runs. What also set Visual's terminals apart from the others was the quality. They used Key Tronic ergonomic keyboards and also high-quality CRTs.
In addition to standard ANSI and ASCII terminals, Visual produced a line of graphics terminals that were both standard DEC-compatible terminals and graphics terminals. Their V-102 with the graphics option, an add-on daughter card that sat on deadly pin headers that could rip one's hands apart, handled GTCO and DEC ReGIS graphics. Their V550 line used a special low-persistence CRT to allow for high-resolution graphics, and their V-450 and V-480 series utilized dual Z-80s, with one dedicated to the standard terminal and the other to graphics, the V-480 being the color version.
In 1982, Visual purchased Ontel Corporation. Ontel produced "intelligent" terminals that went beyond a regular terminal. Ontel's products were more like small PCs, complete with hard disks (Phoenix and Hawk drives), floppy drives, and a multitude of add-on cards. The systems had dedicated slots for the main cards as well as additional slots for the add-on cards, such as word-mover-controllers for word processing, special communications cards such as SDLC controllers, and many other interesting cards, including RAM cards covered with static RAM chips.
Shortly after this purchase, Visual entered the personal computer market with their V-1050 CP/M 3.0 (Plus) computer and later their Commuter Computer, a transportable IBM compatible. I had a V-1050 (kicks self for selling it!). It was a great system that came bundled with all kinds of software, including a Z-80 assembler and CBASIC. The system had a unique hardware setup with a 6502 for graphics with a dedicated 32K of video memory. I still have my Commuter, complete with user manuals, and that system is still as operational today as it was when I first got it. I only wish now that I had kept my V-1050.
By the end of the 1980s, Visual was a bit worse for wear and made one more shot at the waning terminal market. Their foray into bit-mapped graphics and X terminals, while successful, was a bit too little and too late. They never had the sales they expected, as that market was terminal due to PCs being able to emulate the very terminals they were producing, through software instead of hardware. They closed their doors in 1991 or 1992.
There are many excellent channels on YT but very few are done at the level this one is. Concise and to the point, perfect narration, witty humour and no worn-out jokes or YouTube clichés. I salute you, Sir!
This was very well done and an interesting and not too dry watch. Really enjoyed it.
Thanks, I was wondering if I had gone too nerd on this particular one.
@@RetroBytesUK Not nerd enough, since there was no mention of RIP lol
The VT100 was the first display I used in a real (paying) job. And I used it for programming for three years.
Same here. I liked mine so much that I didn’t let them give me a VT220 when it became available. By the time I left the company, my 100 was so old that they let me take it with me. 😊
This is incredible stuff. Thank you for taking the time to dive into this history! I've been taking a deep dive into the history of computing for quite a while now and your content definitely scratches that itch.
I'm now a new subscriber, and I look forward to checking your content out. You definitely deserve more recognition!
Excellent video and pleased to see Xterm get some coverage. 👍😀
Absolutely fascinating, so well done. I loved the SGI references, still regret selling mine. I think X gets more grief than it perhaps deserves, there's something to be said about opening the display of a 30 year old system on your modern system, but then I'm far from your typical use case. 🙂
Again a great episode!
Noteworthy is the SAGE computer system (AN/FSQ-7), which predates the PDP-1 (by probably a couple of months). The system had vector display terminals, with their vector lists stored in rotating drums within the display stations.
SAGE was a very interesting machine/project. What I could not find was a trustworthy timeline of which bits of SAGE became operational when. There is a date most sources seem to use, but what I could not determine was whether that date covered all the parts of SAGE (including the vector displays), or just some core components of the system.
Unix 7th Edition refers to typewriters quite a lot. Init in the 7th Edition, for example:
“When init first is executed the console typewriter /dev/console is opened for reading and writing and the shell is invoked immediately. This feature is used to bring up a single-user system. If the shell terminates, init comes up multi-user and the process described below is started.
When init comes up multiuser, it invokes a shell, with input taken from the file /etc/rc. This command file performs housekeeping like removing temporary files, mounting file systems, and starting daemons.”
While things have changed a bit with systemd, essentially the same thing happens today when you boot a modern UNIX/Linux.
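A minimal sketch of the kind of housekeeping that quoted /etc/rc describes, assuming a POSIX shell; the paths and the daemon started here are illustrative, not taken from V7:

    #!/bin/sh
    # Illustrative /etc/rc-style start-up housekeeping (not the real V7 file)
    rm -rf /tmp/*          # remove temporary files
    mount -a               # mount the file systems listed in the fstab
    /usr/sbin/cron &       # start a daemon or two (path varies by system)
    echo "multi-user housekeeping complete"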
I was expecting this to be about display hardware like CRTs, like what you explained at the end, but what this turned out to be was even more interesting, for the reasons you listed. Subscribed!
The IBM world was also a lot more wide than 3270 or SNA; there was also Twinax for the minis (S/36 - S/38 - AS/400 etc). Both 3270 and 5250 terminals were block mode instead of character mode, and they effectively worked like hardware-based HTML forms. You'd send a display list to the terminal with input and output records, and you would get the records back when the user pressed a key that raised an interrupt (i.e. the F keys, submit, etc). Stuff like arrow keys and a lot of line/screen editing was done locally on the terminal.
Heh! You are describing how CICS works there, and believe it or not there are still tons of CICS-based applications in use today.
Block-mode terminals were designed very much for computer efficiency, not user efficiency. They were a typically IBM way of doing “interaction”.
Meanwhile, DEC had terminals where an interrupt was sent to the CPU for every key you pressed. Anathema to an expensive mainframe, but a good fit for user-friendly interactive computing on a DEC mini.
@@RogerioPereiradaSilva77 I have my own personal pet mainframe - I know ;-). CHUNGUS is a z14-ZR1 that lives in my garage. There's also BACKPAIN, which is a P8 running IBM i
@@YvanJanssens That's cool... What OS are you running on the z14?
Block mode 3270 and 5250 IBM terminals make a lot of sense. The upstream hardware was only interrupted once per screen full of input rather than on every character. There are far more applications than you might think that still use these protocols via terminal emulators. Most Costco stores have a workstation with a 5250 emulator interface for inventory and other stuff that customers can see, usually near the food court and or restrooms.
A few additional notes (and corrections) on the subject of random access displays (as opposed to raster CRTs):
What all these have in common is that you address a random point and activate it. In its most basic implementation, this is a "point plotting" display or "X/Y display". This just displays a dot at the coordinates in question, maybe at a selected brightness/intensity. This is also what's found on the PDP-1. Notably, the dots drawn are discontinuous, it's just a dot and the next dot. Everything else, e.g., drawing a line or refreshing what's on the screen, is left to the software in the computer. Otherwise, the dots just fade away (these displays typically feature a slow phosphor to stabilize the image), to be replaced by what's drawn next. (Therefore, these displays are also sometimes called "animated displays" or "painted displays".) Technically, it's an elaborate oscilloscope with a digital input, typically using the kind of CRT that had been developed for RADAR. (Remember the slow phosphor? This is a feature you want to have for both applications, as is a high display resolution.) Most notably, there is no memory or register whatsoever in the display, besides the buffers for the current display coordinates.
(Given the quite astronomical costs of core memory for what would be required for a screen buffer, as well as the rather slow speed of this memory, we can probably see why this was a typical setup for early computer displays. On the other hand, as resolution wasn't limited by any RAM, these displays also had quite a high resolution, in the case of the PDP-1 1024 x 1024, and there was even an option for a small, high-density 4096 x 4096 display with a display area of just 3" x 3". We'll see later why that had to be this small.)
The next step is adding some memory and processing power in order to load off the refresh cycle to a dedicated piece of hardware. These were typically cabinet sized, just like the computers they were connected to. Now, you could have also vector commands, while the display hardware was still just drawing dots. Also, this was quite a complex and costly piece of hardware.
On the other hand, there were actual vector displays, which provided continuous activation, while the beam swept over the screen, from one location to the next one. So, instead of discrete dots, we get a series of lines connecting these dots. In order to do so, there had to be some memory, as well, where we can store a list of coordinates that should be visited in turn, as the image is redrawn on the screen. This is a true vector display, and it comes a bit later in history (because of memory technology).
There were even variants, where a sub-circuitry would add the required modulations for drawing characters at a given display location, providing a much faster way to draw text, than the display would actually allow for. Here, speed translates to the amount of text that can be displayed at once, without the display starting to flicker, as the list of display coordinates to visit in a redraw cycle became too long to keep up with the kind of sustain the phosphor of the CRT could provide. On the downside, these characters were often crude and not that great to read.
A limiting factor for all these random access displays was image stability. While the beam of a raster display travels just a minimal amount between two display locations, and this in a continuous motion, the beam of a random access display has to address various locations, which may be anywhere on the display area of the screen (as implied by "random access"). Meaning, it may have to travel quite a bit at high speed, which is prone to any kind of overshoot and undershoot as the deflection coils are energized to the required level to address a given location. But, if we want a stable and sharp image, the location illuminated as we refresh the screen should be exactly where we illuminated it the cycle before. In the case of a point plotting display, where we are displaying discrete dots, we can address this by adding some time between moving the beam to a location and activating it to a level where we excite the phosphor to a visible level. Here, in terms of the precision and the resolutions at which we can achieve a stable image, the simple point plotting displays really shine, while this also puts a limit on the number of dots we can put on the screen without too much flicker. (This also introduces an additional constraint of display size versus precision and stability. Remember the small high-resolution display of the PDP-1?) With vector displays, not so much, as we can't just stop before the next location. Which is also why there are often blurry edges, especially with the more cost-effective variety.
Last, but not least, there were also memory display tubes, where the CRT "remembers" where it had been activated and redraws this on its own, without any further input. As this is determined by the phosphor, there is no need to store these locations, and it's quite precise. But, as in the video, this is a story of its own, and this comment is already quite long as it is…
Also, thanks for a great video! I always wanted some coffee table book on the subject.
Thanks! I've never heard of this technology before. Of course the first thing I thought was how do you refresh this... it's pretty much CRT DRAM.
Thanks for making a really interesting video. It's amazing how many throwbacks we have still in computing that potentially predate the computer.
Remember playing Star Trek on the teletype back in the early 80’s. Height of fun at the time !
Who needs graphics !
This was a surprisingly interesting video, awesome job! Thank you. I really didn't think there was so much to this seemingly simplistic subject that I was completely unaware of. Very enthralling.
28:26 DEC’s terminals actually implemented a superset of the ANSI-standard escape sequences. The ones with the question mark after the CSI are DEC-specific ones. They also became part of the _de-facto_ standard.
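A small illustration of that private-parameter convention, assuming an xterm-style terminal; these two are the DEC private "text cursor enable" modes:

    printf '\033[?25l'    # CSI ? 25 l : DEC private mode, hide the cursor
    sleep 2
    printf '\033[?25h'    # CSI ? 25 h : show the cursor again
    printf '\033[2J'      # for contrast, a plain ANSI sequence (erase display) with no '?'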
EDSAC had bitmap displays in 1949, the SAGE early warning system had displays with light pens (guns) in the mid 50's... Tektronix in the 1960's... Evans and Sutherland in the 1970's...
Without looking it up, I'd guess the SAGE displays were much more CRTs driven by analog radar circuitry than actual computer displays. It would have been easier, probably. Light pens don't really need anything from a computer to get an X-Y coordinate-as-a-voltage from a scanning CRT.
18:45 On the Windows side it still echoes in text files.
If you create a text file on Linux or another UNIX-like system, the end of a line will be marked with a newline character.
While on Windows it will be a newline character following a "carriage return" character.
As you might guess, this character tells the (in this case non-existent) teleprinter to return the carriage to the leftmost position, before the "newline" character tells it to go down to the next line.
The HTTP and telnet protocols also use the carriage return + newline scheme.
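A quick way to see the difference, sketched for a POSIX shell; the file names are just examples:

    printf 'unix line\n'  > unix.txt    # LF only
    printf 'dos line\r\n' > dos.txt     # CR followed by LF, the teleprinter-style pair
    od -c unix.txt                      # shows ... \n
    od -c dos.txt                       # shows ... \r \n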
IIRC, the vt100 could not position the cursor on the screen. The text that went to the screen simply advanced line after line. It was only with the vt102, that you could jump around the screen to place the text where you wanted.
This is why people were still using line by line editors like ed, with vt100. Only with the vt102 could you emulate something like a full screen text editor.
I actually had both a VT100 and a VT102 that I used through the serial ports of an early 90s Linux computer for additional live terminals, years after regular CGA and better displays, PS/2 keyboards and the like were readily available.
So I had the main display and keyboard hooked up to the Linux box, but also the vt100 and vt102 hooked up so that other people could work on it at the same time.
I need to correct myself. Now that I'm thinking about it, what I had was a TTY dumb terminal for one port, and then either a vt100 or vt102 for the other.
You were right that the VT100 had full-screen text positioning, like the VT102. Other than that, my uninvited reminiscence is accurate.
I would, for example, use tail -f to have a constant display of a log on the TTY display, and have top running constantly on the VT102. I could keep my eye on them without having to switch away on my main display. Note that I didn't run X on that Linux box, which was the primary server for the ISP I owned.
I usually say "ah, that's what that is for" in these videos; this time it was ANSI.SYS. Like a 4D light bulb moment.
The fact that you used Django Reinhardt and Stephane Grappelli's Minor Swing in this video makes it even better. Impeccable music taste.
38:25 This was based on an actual standard called “GKS”. I think the “GSX” name denoted a superset.
The problem with GKS was, it was never designed for highly interactive graphics, with things changing on the screen all the time. It was more oriented towards CAD-style applications. You wouldn’t want to use it for games, for example.
The Williams-Kilburn tube: I was at Manchester University in the late 1990s when they rebuilt a Manchester Baby for the 50th anniversary (I wrote a full-speed emulation of the Baby for a classic Mac!) and I think it's not really true to say that the Williams-Kilburn tube wasn't understood as a display at the time, for the very simple reason that they wrote an animation program to demonstrate its display capabilities, called Kilburn's Nightmare.
Which brings me to the next point: it's not true to say that the CRT was observed by pulling down the metal plate on the memory. Instead, there were at least 4 tubes: one for the program (32 words x 32 bits); one for the accumulator; another for the program counter (yep, a whole tube with just one line); and a final tube, called the monitor, which was simply a CRT that showed a copy of one of the other CRTs, selected by 3 buttons.
Which brings me to the last point: on the Williams-Kilburn tube, the brightness of the dot didn't denote 0 or 1, the _length_ of the dot denoted 0 or 1, where a 1 was 3x the length of a 0.
All this is described in the SSEM Programmer's Reference Manual:
curation.cs.manchester.ac.uk/computer50/www.computer50.org/mark1/prog98/ssemref.html
Kilburn's Nightmare can be seen on this screenshot describing a Manchester Baby simulator:
davidsharp.com/baby/
I see what you mean. The point I was trying to make was that they were conceived as a memory device, not a display. Yes, they definitely knew about the useful side effect of them being able to display stuff and made use of it. However, they were still built and conceived of as memory devices.
You're right about the line length, I did not explain that part well. It was about pushing out sufficient electrons so that the read beam would not cause an electron splash at a given point on the screen. Brightness was not the best term to use for that, and a bit more detail there would have helped.
Another leftover from the teletype: in LaTeX, the monospace font is referenced by \texttt{} for "text teletype." Since LaTeX is dominant for Computer Science and Math papers, and mono space font is typically used for programs, it means most programs published in academia are using the "teletype font" for their programs.
Real nice video - glad you touched on screen tech. at the end there, and yeah, the focus you chose was very appropriate B)
Fascinating video. I MUST get to Bletchley Park at some time to have a good look around! Thanks! 👍
It's a really interesting place to visit. The ticketing situation with Bletchley Park and The National Museum of Computing is a bit odd, however. If you buy a ticket for Bletchley Park, it does not get you into TNMOC despite them being on the same site, as it used to. Bletchley Park decided it wanted to keep all the ticket revenue, despite most visitors wanting to visit both Bletchley Park and TNMOC.
@@RetroBytesUK Thanks for the tip-off! When my mate is a bit better the intention is to hire a car and make a day of it, although I'd likely want a whole week with this (at least!). I'd be happy to make whatever donation I can.
We used to correct errors on teletype program entry by cutting/splicing the paper tape (if it was an error far up the program) or with the rubout.
It was a s*d of a job so we tended to not make many mistakes.
Didn't stop me (first program in 1976) from "doing computers" forever. Still am, I mean :-P
I somehow wasn't expecting X and Wayland to get brought up here, but I'm glad they were.
I've been using Wayland (specifically Sway, a Wayland compositor meant to work like i3wm) on my crappy laptop for years now, and it runs so much smoother than X ever could.
so nicely done! I hope to use this video as a resource (with credit of course) in some of my documentaries! Keep up the great work!
The blinkenlights reminded me of when a place I worked for bought a computer business in Germany. The Germans had added a "feature" to their Unix machines, a small computer with an LED display that communicated with a daemon through an RS-232 port. It displayed the status of the machine (memory usage, CPU load, etc.) with various numeric displays and of course colored blinking LEDs. We tried very hard to be polite as they explained what this goofy thing was and how it made their Unix machines the best in the world.
When we, the new owners of their company, told them to get rid of it because it was a waste of money, they were shocked and offended! They had spent thousands of dollars developing this device, which gave instant and vital information about the system to anyone who happened to be in the computer room and couldn't be bothered to type a command to get the same information.
Mac OS X does have X11 available, at least for 10.0 to 10.10. I've run many Unix apps which used it on my 2012 MacBook Air, and on my 2008 MBP. It was even downloadable from Apple. It just wasn't the primary display mode. Also, neither of the Unix installs I've been using recently uses Wayland; both still use X11 primarily. I could install it, but the Unix VEnv on the Chromebook already slows the Chromebook down plenty.
35:30 That's not a serial port in the middle of the Apple 1 board: that's a 74154 demultiplexer used for the address decoding system. The jumpers next to it set up the memory map.
The actual interface to the video system isn't serial, it's parallel, using the 6820 PIA in the lower left corner of the board, just to the left of the CPU. (The computer side doesn't see much difference between this and a UART, however; it still just checks that video system is ready to receive a byte and then writes the byte to the PIA.) The 6820 is also used for input from the keyboard, which is also a parallel interface.
The Apple 1 video system doesn't even reach the level of "glass TTY." A TTY at least could backspace and overstrike, even if it couldn't reverse line feed. The only control character available on the Apple 1 video system is CR, which returns the cursor to the left-hand side of the display and moves it down a line, scrolling the display if necessary. You couldn't backspace, and you couldn't even clear the screen through software (though there was a hardware button to do this). It's also quite slow: due to the use of delay loop memory, you can write a new character to the screen only once every display refresh, or 60 characters per second. That's about the same as a 600 bps serial link, faster than a 300 baud modem but slower than a 1200.
Oh, and it's not called "X Windows," but the "X Window System."
Well, that was surprisingly interesting and entertaining. I appreciate that you mixed in some humour and comedy without going overboard, at least for my taste. I look forward to seeing you cover similar topics in the future, if you so choose.
What I'll always remember about Xwindows is the massive set of manuals that documented all the intricacies of it. I used monochrome X terminals in university in the early 90s. They were a big step up over the ASCII terminals they were using until a year or two before that.
Yes Peri, that's what PCB stands for - love it!
I used to repair Teletype ASR 33s and ASR 35s. I also troubleshot PDP-11s down to the chip level, including the arithmetic unit.
It took Walmart up until three years ago to switch from terminal-based applications for receiving freight. I used to work on a receiving dock, and they were using an Android terminal emulator that connected via xterm to the mainframe on site. The program we used would ask for a terminal type and autofilled the field with "vt220". They still use terminal programs to post and research problems with orders. The worst part about it is that the Android-based programs they use now are worse, because they use HTTP requests and are actually 100x slower than an xterm connection, and that delay is on top of all of the "fluff" they added in to make the program look more like a phone application, like animations.
The smart system is almost completely phased out now. It was also used for user permissions, like access to the profit and loss app, as well as whether or not an associate was eligible to sign up for benefits. Now that is managed by some shitty app on the Wire and takes way longer to do things than the smart system did.
@quohime1824 I work in a DC which uses an even more antiquated UNIX mainframe for receiving and order filling. But yeah, moving to HTML and webapps makes everything so much slower.
We used to be able to write macros to do repetitive tasks in the terminal emulators, but now we have to wait for web requests to load the entire webapp, then query a super slow database, then populate the webapp with the database data. All on thin clients or very old android mobile computers that can hardly handle simple transition animations.
Thin clients were meant to be like old terminals. Access a database from multiple different locations. The database is supposed to do all of the heavy calculations and normalize controls between systems. F7 was always APPLY, no matter what screen you were on. Now every webapp has a thousand animations and gradients, different teams build different apps, and nothing is fast or consistent. Power users in the field have been crippled by all of the changes, and it's no longer rewarding to fly through screens and fix problems for the unloaders on the dock.
@@rileyjones7231 People still use macros quite a lot at FedEx today as well, in 3270 green-screen emulators; it is the only way to get many things done, as the system rarely gets any development. You can use Visual Basic scripts in Excel as well to interact with the terminal emulator. Where it gets fun is when a whole department becomes dependent on a macro, the guy who wrote it left long ago, and there's a screen change and everything collapses because no one knows how it works :)
Agree with you all concerning the web shell interfaces; it's the same mainframe behind them, but they work really slowly. They told us for 15 years the mainframe was going away, but it's still here, and will be for a long time. During the period when they were sure it was going away they stopped documenting changes, and there was a big problem with code that no one knew what it did or how to change it. They actually have researchers now who study the old code and document its functionality, sort of like mainframe archaeologists :)
In the early 60s there was another application that started out using the Tek style terminals called PLATO. This was a computer based education project run out of the University of Illinois. I highly recommend Brian Dear's book The Friendly Orange Glow for a comprehensive history of PLATO and especially the evolution of the display technology. In the late 60s they invented the first plasma screens which also used the pixel wiring grid as the video memory. Really interesting stuff...
Thank God you mentioned Wayland at the end 🙏I was almost worried for a hot minute that you wouldnt.
Great video!
"However...there's a but. And it's a big but, I cannot lie."
I like big buts.
Battlestar Galactica?! I remember watching Mormons in Space as a first-run series, though it was only latterly (pun intended) that I came to realize the series arc was a rehash of Joseph's myths. I thought it was nice they actually credited Tektronix for their terminal displays.
We had Commodore PET computers at college in 1981, but we also had a punched card machine. Part of our course involved writing a program on punched cards. Your stack of cards was then sent away for processing. Two weeks later you would get back a printout with the error code showing that your program had failed. Programming on the PET was much more rewarding.
I had the same problem but was too early for micros. Like you said even a simple program would come back because a comma was missing or similar fault. You would repunch the offending card and send them off for another week just to find a later card in the sequence had an error. Rinse repeat. It could take many weeks to get a simple program to run successfully. I envy those who came later and had access to terminals or micros with their almost instant response.
@@crabby7668 Even in 1981 this system was a museum piece. It was there mainly to teach how computers had developed. At the time I thought it completely pointless, but it was in fact assembly language, which was incredibly important to the home micro revolution at the time. My friend had an Acorn Atom home computer and had learned assembly language on its built-in assembler, so he was a real whiz on the PETs. They also had an LSI-11, a clone of the PDP-11, running VMS I think. I was not interested in that, but perhaps I should have been. Had they hooked up the punched card machine to that, perhaps our cards could have been processed a lot sooner. Maybe I'm missing the point of the punched card machine; maybe we were supposed to wait two weeks?
@@wayland7150 Sounds great. My institution hadn't got that far when I was there. We were supposed to be learning FORTRAN, but with the turnaround time on the punch cards it was very hard to do. I remember my first program was just averaging three numbers, but as said it took weeks, because your cards had an error when they returned, which you fixed and sent back. Then it would compile past that point and find another error; rinse, repeat. We didn't even have access to interactive terminals. Imagine doing your programming by post instead of on a terminal or PC, and that will give you an idea of how clumsy it was.
I worked in a company with a similar setup, but the punch machines and computers were dedicated to the job, so it was much quicker as you weren't sharing with everyone else.
34:28 Sinclair's Mark14? Nooo!!!! The MK14 designation for the Science of Cambridge SBC stood for Microcomputer Kit, with 14 major chips, and is correctly pronounced EM KAY fourteen. Watch any video that interviews the developers (Steve Furber, Sophie Wilson etc) for confirmation.
I don't think Sophie or Steve had anything to do with the MK14; Chris Curry did, who started CPU Ltd (and then Acorn) and would subsequently employ Steve and Sophie. SBC was owned by Sir Clive, who put Chris Curry in charge of it. The creation of the MK14 was Chris's idea.
@@RetroBytesUK True, my bad. Sophie and Steve weren't the actual designers, but they do talk about its development as part of their 'History of Acorn' type of interviews and were the first names to come to mind. As you say, Chris was in charge of SoC, Sir Clive's holding company, and decided to produce the kit. He didn't do the design though, and agreed to buy the original design from a guy (whose name escapes me atm), but when he was trying to do a deal with National Semiconductor for the required chips they suggested he just use their reference design that they used for their own Introkit system. The point is none of the original developers referred to it as the Mark 14, and neither did the engineers I spoke to when I personally collected my optional expansion hardware (RAM, IO, cassette i/f) directly from the SoC offices at 6 Kings Parade. Interestingly, the local fire prevention officer would have had a coronary if he could have seen the stairway up to the workshop, which was 50% blocked with .... Acorn System 1 display/sales boxes.
In the words of the man himself ... th-cam.com/video/awlqzippsSc/w-d-xo.html Ian Williamson - My life and the MK14 - Science of Cambridge
Love these videos. Takes me back to my DEC days as a field tech.
Glorified typewriter, yes. But man, it transmitted some major world history. Battles were won or lost on its clackety-clack. Love your videos dude.
I really, really want to get a teletype; no idea where I would put it. Also you never see them come up; somehow Dave (whose one I filmed) manages to find them, but I never have.
@@RetroBytesUK Well I was lucky enough to be in the Science Museum in early 1994, and they had a slightly more modern version behind glass, but it would be constantly hammering out wire news, and on that day Erich Honecker had just resigned as Grand Wizard or whatever the frig he was, which of course triggered the Berlin Wall teardown. And somewhere in this house in London is that very printout, 'cause there was an attendant who tore off the individual news items and handed them out to small nerds in the room at the time. Ahh the bad old days, I loved them.
Ah man, I love the Colossus (1943), it's my third favourite first computer, right after Babbage's Analytical Engine (1837) and the Zuse Z machines (1938, 1940, and 1941).
Babbage never did finish his Analytical Engine, so it remained theoretical until a practical version was built in the 1980s. All the Zuse machines fall into not being Turing complete, or not finished and/or working. That's why most modern textbooks go for Colossus as the first, as it was Turing complete, fully completed and working. Flowers was not the first to have the idea; he was the first to get a full version complete and working.
@@RetroBytesUK The Colossus was actually not Turing complete, and the Z3 was finished and could operate as a Turing complete machine (very badly; it had no conditional branching and so could only be counted as Turing complete if it calculated every possible outcome of a given program, which does stack up the compute time very quickly).
No love for the beastie at 1:00? The LEO, created by the Lyons Tea-Shop Company?
Zuse machines were programmable calculators, not computers.
Amazing video. I watched it till the end, as with all your videos. I didn't know about X using shared memory; I thought clients still communicated with the server using sockets, but it makes sense from a performance point of view.
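For anyone curious what that shared-memory path actually looks like, below is a minimal C sketch of the MIT-SHM (XShm) extension: the client puts its pixel data in a System V shared-memory segment that the local X server also attaches to, so XShmPutImage sends only a small request over the socket instead of copying every pixel. This is just an illustrative sketch under my own assumptions, not something taken from the video; it only works when client and server are on the same machine, error handling is omitted, and it builds with something like: cc shm_sketch.c -lX11 -lXext

/* Minimal sketch: draw via MIT-SHM shared memory instead of sending pixels
   over the X socket. Assumes client and server are on the same host. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);            /* local server only */
    if (!dpy || !XShmQueryExtension(dpy)) {
        fprintf(stderr, "no display, or MIT-SHM not available\n");
        return 1;
    }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 320, 240,
                                     0, BlackPixel(dpy, scr), BlackPixel(dpy, scr));
    XMapWindow(dpy, win);                         /* a real client would wait for Expose */
    GC gc = XCreateGC(dpy, win, 0, NULL);

    /* Create an XImage whose pixel buffer lives in a SysV shared-memory segment. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, 320, 240);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);                    /* server attaches the same segment */

    /* Scribble a test pattern straight into shared memory, then ask the
       server to show it -- no pixel data crosses the socket. */
    for (int i = 0; i < img->bytes_per_line * img->height; i++)
        img->data[i] = (char)(i & 0xff);
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, 320, 240, False);
    XSync(dpy, False);
    sleep(2);                                     /* keep the window up briefly */

    /* Tear down in the order the MIT-SHM docs recommend. */
    XShmDetach(dpy, &shminfo);
    XDestroyImage(img);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    XCloseDisplay(dpy);
    return 0;
}

Remote X clients can't use this, of course; they fall back to the ordinary socket protocol, which is why the extension only helps in the local case.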
You thanked me for getting to the end? I've got a big music fest coming up this Saturday -- 28 acts in one day -- and needed to fix a lighting board. I couldn't listen to every word while I troubleshot and then soldered a new button onto said board, but this was great. Also my first time seeing your videos. Thank *you*, and subscribed. Cross your fingers it worked!
That's a lot of acts in 1 day, your changeover times must be tiny.
@@RetroBytesUK Half an hour on the main stage, 15 minutes on the side stage. The stages are indoors and 20' away from each other but still named as such. When you start at 3pm, anything's possible :-)
Hehehehe, btw, if you happen to be in Anchorage, Alaska, stop by Van's Dive Bar this Saturday!
This video was beyond great… love this kind of content
4:02 A couple of notes regarding the Williams-Kilburn tubes, without being too pedantic. The 0 was implemented as a dot and the 1 was a dash. In that way they could recover a clock from the signal. That leads to all the little dots present, some of them "brighter" because those were the ones, like 2 dots together. And you cannot really open the lid to see the dots, because the lack of refresh would immediately kill the memory. So what they did for practical reasons was to simply place a second independent CRT in parallel, without a lid, to actually see the dots.
Slight correction, SGI did use X-Windows, they just didn't use the Common Desktop Environment which was a layer on top of X11 meant to standardise the look and feel of the UNIX desktop, since (like text shells before it) X11 was "implementation, not policy" and allowed everyone to write their own GUI, so they did - before standardising on CDE. That is a weakness/strength of the UNIX/Linux/BSD desktop that persists today even in Wayland.
Yes, it's a strength as well as a weakness. The strength is that nobody has to just settle for one desktop environment, the weakness is that you have to relearn how to use the computer if you move DEs. But honestly, although they all make different design decisions, and which is better is a matter of taste, relearning a GUI isn't *that* hard, especially if you're not one of these people who insists on using the keyboard for navigation when you could just use the mouse.
Anyway, an example of a company that *did* use a UNIX windowing system that wasn't X11 was Sun Microsystems with NeWS, which was both network transparent (like X11) and, if memory serves, also used Display Postscript like NeXTSTEP and MacOS. But even they transitioned to X11 pretty sharpish, and developed the Open Look toolkit which mimicked the look and feel of NeWS on X11, before also switching to CDE. I wasn't a computer science student but I did use Solaris on the university computer labs when I studied in Germany in the late 1990s, and by then the Sun workstations (SparcStation 20s IIRC) were using the famous blue-and-pink theme of Solaris CDE.
I think Apollo (which ended up being swallowed by Hewlett Packard long before they swallowed Compaq, which by then had swallowed DEC) may have also used their own proprietary windowing system. Their UNIX-like clustering DomainOS was definitely proprietary.
Other than that (and apologies for the ramble), it's a great video! 0:02
X11 didn’t come along until about 1987. Prior to that time, the Unix workstation vendors were all doing their own thing. For example, SGI had something they called “MEX”.
Sun’s NeWS predated Display PostScript, so they had to come up with their own PostScript extensions for interactive use.
@@lawrencedoliveiro9104 so we were both half right - thanks for the correction.
16:30 Yep, that was *MY* first computer interface, mid-late 1970's. And yes, I remember just how loud the Teletype machines could be.
My university primarily had Sun SPARCstations that used X Windows on UNIX.
And there was nothing stopping us opening a terminal to an _UltraSPARC_ station and connecting an X client back to our local X server on the slow SPARCstation over the secure terminal connection.
I memorized the IP addresses of all the fastest UltraSPARCs on campus.
I used to remotely log in to my uni's few UltraSPARCs from the much older SPARCstations to avoid having to queue to get on them physically.
@@RetroBytesUK Good times. I took CompSci at Brunel, you?
@@MostlyPennyCat Same but at Sheffield.
The first interactive computer display that I know of is the DSKY on the Apollo Guidance Computer, the first real-time computer.
Huge fan of your channel. Your Sun Microsystems videos make my nostalgia nerve tickle. My first real corporate job was all Sun. Could you do a deep dive on the Bendix G-15? Way before my time, yours too, but it is a crazy vacuum tube machine! What you'd think is the hard drive is the memory. It also controls the timing. It's the culmination of what our granddads would create and call a computer.
Also, funny story about how I almost lost that job, in 2000 sometime. Ameritrade on a Friday, near the end of day. The trading server went down. It cost over a million dollars in the aftermath. Some dumb junior engineer did an init 6 on it while logged in as root. He thought it was his local machine. I learned a valuable lesson that day: always do a uname -a before doing something you could regret. That was a Sun E10k, 4 racks of the things. Our SSE tried to kill init, but it was too late. I remember as I told my boss what I did mid-shutdown, the lights flickered. It was the major power draw for the whole campus. After that, probably 8pm, my boss pulled me into a 1 on 1. He said: you're not fired, relax. Every engineer does this once. If you didn't learn this time, I'll make sure no one hires you again. At the time there were only 3 companies in our area that I was qualified for. Even when managing VMs in my own home: uname -a every time. It's muscle memory at this point.
I had no idea magnetostrictive delay lines were used for video memory. This was a really enjoyable video!
Back when I first started in IT, there was a shortage of newer terminals.. something about the factory switching production from a discrete logic design to a microprocessor based design. Anyway, they dug up all these ancient units from anyplace that had any. I had to get enough of them working so the programmers could do their thing. These units used the same sort of delay lines for the screen memory. I never noticed them being sensitive to physical shock. Another odd thing about these relics was that the character generator was a board filled with individual diodes for the bits.
Great video; I enjoyed it immensely. I went from punched card I/O (no printing) on an IBM 1620 through re-wiring an IBM accounting machine to make it print at one line per second, booting a PDP-6 with 36 bit-switches to re-writing the IBM 1401 bootstrap loader (in one card), TTY terminals for APL on an ICL 1903A, a TV and cassette player for the Radio Shack MC-10, and now am struggling to get one of my four Windows laptops in working order. I still have my MC-10.
So you can imagine how much I enjoyed watching the video. At Normal Speed!
A suggestion: no background music and no background images. For example, from 35:13 through 49:37 I found it hard to focus on the image and the narration. (Well, I ***am*** an old man!)
Cheers, and Thanks again.
brilliant video!! This is better than anything on TV!! Thank you!!!
Loved the detail and the pace of the video! If I can give some feedback, I would avoid the static noise transition; it's very aggressive and loud. Keep up the great content!!
What's with that weird watermark/book cover at the lower right of the computer facade being overlaid on screen at 9:15?
I honestly most enjoy the videos where Mr. Retrobytes places that deep pause in the middle of PCB-Way.
Interesting video. One area about modern LCDs and displays that you could have expanded on, which does relate to how the computer actually generates the image, is the recent development of things like G-Sync and VRR (Variable Refresh Rate) modes that are appearing on recent gaming monitors and TVs, and the issue they are trying to solve. That does require some sophisticated communication between the PC and monitor, including special chipsets on the monitor itself.
You're right, G-Sync and VRR are very interesting technologies; at some point I may do something on how computers and monitors communicate and how that's changed.
The power driven keyboards on teletypes were always interesting to type on. Power assisted keying!
Senior Linux engineer here; today I learned what TTY stands for, ha.
Thanks for this channel, I enjoyed it very much. I used to work on a PDP-11/34 with the RSX-11M system. I worked on 8 VT100 terminals, one VT220, and 12 LA36 line printers: operating them, maintenance, replacing cards. It was a great computer, and still is to this day. I also worked on the Commodore 16 and 64, the Amstrad 512 and 1024, and NEC and Olivetti computers.
By the way, the part about graphics terminals reminded me of The Cuckoo's Egg by Cliff Stoll. You may already have read the book, but if you've only seen the PBS documentary based on it, note that it covers only about 10% of the book.
I love how he complains about his BSD-loving and VMS-loving coworkers fighting with each other about which was better. And there’s a whole segment discussing getting a new graphing terminal and spending days trying new visualisation programs for his physics students’ data, or something like that.
It’s really immersive and fun if you’re already accustomed to reading terminal outputs, although he certainly does his best to narrate and describe what some things mean. But if there’s no learning curve, you can jump right in and feel (secondhand) nostalgia.
Also, if you weren't familiar with it, part of how they tracked down the hacker was by realising he used 'ls' flags which weren't common in their region, but were in other ones! Something they probably wouldn't have noticed if they weren't immersed in those platform supremacy arguments 😆
Excellent and fairly comprehensive. Only a couple of things I think you passed over. (1) Things like the dot matrix DECwriter, which was essentially a high-speed teletype that got rid of the spinning ball head printing element and replaced it with a dot matrix head. Much quieter and faster. It had the advantage of still allowing a printed record of your session. (2) The fascinating evolution of accelerated graphics hardware and the career of Jay Miner. He developed some of the earliest raster acceleration technology for the Atari 800 series of microcomputers. It allowed you to have hardware sprites, and you could switch color mapping, graphics modes and memory mapping on each new raster line. This evolved into the first hardware bit blitter (graphics mover) invented by Jim Clark, the founder of Silicon Graphics. Jay Miner's next computer, the Amiga, also had a hardware bit blitter as well as all of the goodies created for the Atari 800. This wonderful combination of raster-line remapping, hardware bit blitter and sprites enabled the creation of the Amiga's Workbench GUI operating environment. The Amiga was the first personal computer with all of those features, stereo digital audio and a full multi-tasking OS with standardized graphics libraries -- making it way ahead of its time. Unfortunately Commodore mismanaged that whole thing and we had to wait for Linux to give us an alternative to the lesser PC and Mac environments.
One last thing: I used to program real-time graphics on the Tektronix 4010 and 4012 (and I think 4016?). It had a storage (write-through) mode where what you wrote to the display stayed on screen, whether it was text or graphics. There was also a mode that used a lower beam strength that required you to refresh the image and thus enabled real-time vector displays. The downside was that it was driven over a serial line, which meant that you might not be able to refresh fast enough, or if someone on another session was hogging all of the serial I/O or CPU, your real-time display would stutter and die. It was tough to find a microcomputer fast and efficient enough to reproduce the video games of the day like Atari Asteroids, Tempest etc.
Was there ever such a thing as a wax record memory, where something sorta like a vinyl disc made of easily meltable soft material, would have grooves or divots mechanically carved into it, and read sorta like a vinyl record is read, and for erasing a little pointy hot thing, like the tip of soldering iron, would go over where it needs to be reset into a blank state to melt the region into a liquid that would spontaneously get back into flat shape?
Just noticed, is the PDP vector display repurposed as the display used on Space Lab Regula 1 in ST:II??
I remember using telex terminals with paper tape readers & writers. We always coiled our tapes in a Figure-8 as it wouldn't jam or rip the tape.
And if you typed fast you could type faster than the hole punch could punch.
I also remember the audio cassette storage and those were valuable.
Eventually I worked for CDP (the first IBM PC clone maker) and wrote software, including SCSI device drivers used by anyone who used really massive hard drives. We were on the ANSI SCSI Standards Committee.
Ultimately it became other things including the CAN network used in all modern cars (including the Tesla).
As for display technology, I tried to get Intel to separate video refresh from their CPU in an effort to speed up CPU processing. Somehow Intel never really understood this and I went elsewhere.
Those were the days.
We made History.
Man, I love that 40's and 50's music you put into your videos to give them the atmosphere of old & retro. Your videos are a great way of learning what they don't teach you at university when studying computer or electronic engineering these days. I am a big fan of yours!
The music is more like 1920s era.
I owned a couple of NEC Spinwriter teleprinters as a teenager. I hooked one up to an old Hayes 300 smartmodem and was able to dial in to my local chat BBS. It worked, but you could only chat for as long as you had more tractor feed paper.
My introduction to computers was in 69-70. Our math teacher had a teletype terminal installed in his classroom and we could write BASIC programs that we could run on a timeshared mainframe someplace downtown. Great fun. From there to the Computer Science program at Northern Arizona University in 75. More teletypes with paper tape punches. Turn your program in and come back a few hours later to see if it ran. Nope, rinse and repeat. Some assignments required you to create a punch card deck and turn those in. Back to school in 82 and using a VAX for some courses, but back to teletypes and paper tape for COBOL programs running on another time shared computer. The advent of the PC was a giant leap. To be able to code and run your programs in real time was wonderful. Creating programs in BASIC and Turbo Pascal. Coding COBOL programs and running/testing in real time on our local IBM 370. We have come a LONG way
Really enjoyed this, subscribed.
27:38 Back in the late 1980s, I realized that 3270 block mode terminals + CICS was effectively client-server computing.
37:27 The VT-100 did it because punch cards are 80 columns.
Thank you for sharing. For some reason, as a 45 year old, I am finally digging deep into how computers really work.