I worked at Sun for 14 years then was assimilated by Oracle for 7 more years. Still worked on Sun stuff even though it was now Oracle. Then Oracle laid everyone off. So I still have fond memories of Sun. Nothing good to say about Oracle.
Yeah, I felt the same way when DEC was bought by Compaq. I didn't work directly for DEC but for one of their contractors. Just like with Sun, pretty much all the stuff that made DEC cool was killed almost right away.
Does anyone not named Larry Ellison ever have anything good to say about Oracle? Someone once referred to Larry Ellison as “Leisure Suit Larry.” The joke must be at least 5 years old at this point, and still hilarious.
"I worked at Sun for 14 years then was assimilated by Oracle for 7 more years." Do you have any theories on why SPARC didn't become the All American competitior to the plucky little Bri'ish ARM?
Back in the day I had a SparcStation 10, later with 4 Ross HyperSPARC CPUs and 128 Meg of RAM. We were running a manufacturing application developed by a company in Toronto by the name of Cherniak Software. They had ported it to the SparcStation at our behest from a Harris HCX-7/9 platform (of CCI Tahoe CPU / 4.3 BSD Tahoe fame). When they originally ported it to the Harris (they were at the time an Olivetti shop, but we wanted Harris), it was the fastest thing they had ever run their code on. When they later ported it to the SPARC architecture, they were absolutely speechless. When we upgraded to HyperSPARC, they just stopped trying to come up with adjectives to describe how fast it was. Your video brings back memories. The very brilliant software engineer (I always joked that he considered a compiler merely a convenient way to generate object code - not a necessary one) who designed the database tools we used passed away last year, and the founder of the company the year before. I still use the tools they developed in another application today.
SPARC was the only machine used to develop ASICs back in the day. Used them for designing F22 EW and commercial graphics chips. We used Sun machines and some clones. Years ago Sun open sourced the SPARC processor: en.wikipedia.org/wiki/OpenSPARC. One of the only companies to open source something as amazing as a processor, even if older technology. Something I wish Intel, AMD, and Nvidia would do. If something is 20+ years old, open source it for enthusiasts. You can build SPARCs for FPGAs nowadays.
Growing up, Sun, and therefore SPARC, was very much everywhere in our house, as my father worked there as an engineer (having previously worked at Burroughs), and whilst I never really used SPARC outside of tinkering myself, there were always a few SPARC systems running, which for an inquisitive kid obsessed with computers was great. Mostly it was whatever had been deemed broken when removed from one of the major London banks during a repair, upgrade or replacement. The rules then were that as long as it was never sold or passed on, it was fine if you could fix it. Generally my father was rather good at fixing most things, so there was an enterprise server running in the garage for years - we never needed any extra heating in there! Plus various types of SparcStation around the house. My father moved into management at Sun later on and was made redundant by Oracle 2 years ahead of retirement, but apparently he negotiated the package to cover his pension so he didn't lose out. This was a great video and I look forward to the Sun video at some future date.
Breathtaking content. I provided tech support for Autodesk AutoCAD on IBM AIX, HP-UX, SGI Irix, DEC VAX and Alpha, and various Sun workstations back in the day (mid-1990s). All of these machines were within a few yards/meters of my reach. Good times.
That SS10 you have is actually an SS10SX, an incredibly rare machine, much rarer than the Hypersparcs you have in it. I'd very much appreciate if you could take some high res pics of the motherboard and post them online because there are almost no good pics of that machine.
@@DVRC The SX frame buffer circuitry is built onto the main board; just add a VSIMM (also frighteningly rare and pricey these days) and you have very high-end graphics capabilities... for the era
@@DVRC It uses the same chipset as the SS20, but it came out before, so maybe it's better to say it's the other way around. The MBus only runs at 40MHz, there are no ISDN ports, and the audio codec is built in so there's no need for a speaker box. It uses different VSIMM and aux video boards from the SS20; a VSIMM from the SS20 may work in it, but the hardware is so rare I wouldn't risk it. Spec wise the SS20 is a better machine and much easier to find and get parts for. The only real advantage the SS10SX has is that it uses an SS10 case, which I happen to think looks better.
13:35 NeWS, the “Networked (Extensible?) Windowing System”. This was based on PostScript, before there was such a thing as Display PostScript. So Sun added its own extensions for on-screen drawing and interaction handling. And then abandoned it all when X11 came along. Some people really liked the idea of being able to load small pieces of autonomous code into the display server that could handle low-level user interactions without bothering the main processes running on the CPU. Done right, this could certainly make the system feel more responsive, particularly over slow networks or remote links. The problem (as with the whole issue of multithreading that became popular in the 1990s) was that synchronizing all these independent threads of control was very hard to get right, and was extremely prone to hard-to-reproduce timing-related bugs. “Knock-knock.” “Race condition.” “Who’s there?”
This was the GUI for SGI Personal Iris, before also being replaced by X11. I preferred NeWS. Not just because the Motif WM was boring and clunky to look at, but it ran at reduced performance compared to NeWS on a given system. X11 made previously useful 4D20/4D25 configurations frustratingly worse to use and slower.
@@SeanCC It is true that, back in the 1990s, platforms that integrated the GUI into the OS kernel had a performance advantage. But that hasn’t been the case for a while: nowadays it is Linux+X11 that is more responsive and less resource-hungry than Microsoft’s or Apple’s platforms.
@@lawrencedoliveiro9104 nowadays I think it's more about hardware overcoming/overcompensating for low performance code. I've been using Windows, Linux and Apple for over twenty years in the meantime and there's still so, so much that Linux+X has to catch up to. But it's better now than it was in the early '00s for sure.
Before the NeWS GUI, Sun had SunTools and SunView. X11 offered a key advantage in sharing server resources over a network on X terminals, so Sun came out with something of its own, then eventually supported both. X11 was supported under VMS and on PCs too.
27:51 Worth noting that, while Alpha was 64-bit-native from the beginning, it did have some 32-bit compatibility in the form of “TASO” mode (“Truncated Address-Space Option”). This used only 32-bit addresses, and was used to run Windows NT, which if you remember, in spite of its much-vaunted portability across different processor architectures, was still only 32-bit at the time.
OpenVMS/Alpha even kept existing ABIs in 32-bit compatibility, including the default C integer types (int = long = 32 bits, long long = 64 bits), leaving most things running in the traditional 32-bit VAX virtual address space regions P0/P1 + S0/S1, sign-extended to the far ends of the greater 64-bit VA space, with a special 64-bit "Very Large Memory" extension to get access to the 64-bit VA regions filling the gap in the middle. That made porting from OpenVMS/VAX to Alpha almost trivial, with all data layouts being identical, but it did not age well when data set sizes for software that had "just worked" for 10-20 years, seamlessly through mixed VAX/Alpha cluster migrations, started approaching the hard-stop 1GB limit of P0 space. Fun times.
Also, the Patterson and Hennessy Computer Organization and Design textbook is still possibly the single best textbook (in terms of going from zero to nearly an expert) on any topic I've ever read. Hugely influential.
Thank you for the very informative video. I worked at Ross Technology from 1989 to 1996 and lived through a piece of what you covered. While Cypress Semiconductor was the original owner of Ross and the initial primary silicon supplier, Cypress sold its interest to Fujitsu during a time when Cypress' overall business was struggling (thanks in part to a difficult technology transition in Cypress' wafer fabs). I lost a lot of sleep in those years :). When Ross stumbled at the SPARC V8 to V9 transition, Fujitsu bought the remaining IP. Fujitsu had also been the primary backer of Solbourne Computer and Amdahl. I wonder what the world of computers would look like if Fujitsu had acquired Sun instead of Oracle doing so.
I think the world would have been a much more interesting place if they had. I wonder if Fujitsu would have stepped in if the timing had not been around a number of large accounting scandals in Japan at the time. That's the sort of thing that could have made access to capital more difficult, and reduced their willingness to take on more debt.
I've good memories of the SPARC architecture from before I started working for Sun. I was tasked with designing the hardware for a VMEbus FDDI card. Needless to say we had every processor vendor imaginable pitching up to offer their latest and greatest, all massively power hungry, stupidly expensive, and threatening to make the whole board a non-starter just based on cost. Then Fujitsu rocked up with their SPARClite: low power (comparatively) and less than a twentieth of the price (just 25 pounds in small volumes) - it was intended to go into laser printers as the rendering engine. Some of the money and space we saved by using that CPU was traded for giving it some dedicated fast SRAM to work in and FPGAs to do DMA and run the networking checksums. And that little 25MHz CPU with its assists could work its way through socket buffers and saturate the backplane and the network. We had a bake-off against Cray, and won.
Also @12:43 - Motorola didn't start down the PowerPC path initially. They created the 88k architecture. The 88k was a bit of an alien architecture because it was a Harvard Architecture (code and data were in separate address spaces). Fun fact: the PowerPC bus architecture is based on the MC88110 bus architecture.
Motorola didn't even want to go PowerPC so I was told, as they couldn't control the instruction set (which was derived from one made by IBM). But the green being thrown around got them to change their mind.
Love this channel so much! Always hits my retro computer fix without skimping on the lore and technical aspects. While most channels just share software clips and nostalgia, this channel is a beacon in the dark!
You're right. It reminds me of things I should have known more about long ago. That mention of windowed registers partly answered that nagging question of what is in a given memory location. The idea would also have been good for that old pocket calculator of mine with memories A to F - I would have made more use of them. The problem of knowing what is in a location involves having memory to track it, at a time when memory was expensive.
When I went to Brunel University in September of 1997 to study Computer Science we had access to two types of computer: 1) 486s running Windows for Workgroups 3.1, and 2) Sun SPARCs and UltraSPARCs. If we were stuck on a SPARC, we would ssh or rsh into an UltraSPARC and use it to do our compiling as it was much faster. We would even attach an X server to it for a faster desktop.
Fantastic video. I have a real soft spot for SPARC since designing a hobbyist system on an FPGA based around the Leon3 SPARC soft CPU, with a video processor and other peripherals I designed for it. I had it running Doom about a decade ago - it was amazing to have such a relatively powerful system to play with. It was really generous of Sun to release the details of the SPARC architecture as open source. It is an interesting architecture to learn about, with the register windows and things like that, and the Leon3 project is very cool; I'd recommend it to anyone looking to learn about designing their own FPGA system on a chip.
The most enjoyable video I've watched in ages. Worked with Sun from 1995 to 2010 and still have a soft spot for them. It's still Sun ZFS to me also (another of their fantastic innovations), I just can't call it Oracle to this day! Thanks for the video.
I have two SPARCclassics, a SPARCstation IPX, and an Enterprise 450. I had a lot of fun when I got them and they helped out in my career as a System Administrator (HP-UX, Sun Solaris, Linux). Thanks for the video, it brings back great memories.
I taught computer systems engineering in Australia in the 90s and I remember visiting a company in Melbourne called GCS who had designed their own SPARC workstation. It was the work of a Masters student at Monash University. We bought one for my department. We had Mentor Graphics CAE software at the time, including a VHDL-to-logic synthesis program. We had undergrads designing CPUs/microcontrollers and implementing them on Altera FPGAs. It was a great time to be involved in digital design.
Thanks for this video! I love old Unix machines and have both SPARC and UltraSPARC in my collection so this history was very interesting to me. Looking forward to that Sun history video whenever you get around to it!
I used SPARCs in college and grad school, up to a 150MHz UltraSPARC with 128MB of RAM in 1997. Everyone thought RISC and Sun were the leading edge of computing. We had no idea what was happening in the x86 world or by what means it would come to dominate in performance (not just cost/performance, pure performance) in just a few short years. 😅
Seeing the little metal tabs on the MBus cards sure brought back memories. One of my favorite things about the Sun workstations is the OpenBoot PROM bootloader. You could do so much in there to netboot, load from CD-ROM, or boot from external SCSI. It made maintaining the systems so much easier.
Excellent content. I love how you've been able to join the dots in my memory of computing history, in less than an hour of entertaining and informative content. You're a natural - plus have put a lot of effort in. I wish you all the best. I can imagine that once you're spotted by academia, these videos will be playing in colleges and universities all over the planet. Great content to introduce these technical details without baffling the audience through lack of context. Bravo 👏👏👏
Nice video. When I was in grad school in Japan in the early 1990s we had a large number of Toshiba SPARC LT AS1000 laptops in the student labs. I used PVM (Parallel Virtual Machine) to turn them all into a fairly powerful distributed parallel supercomputer with quite good performance results.
When I first got into IT you would still see some old Sun SparcStations set up in various server rooms. Apparently people really liked using them as terminals. At one place, I think it was Brocade or Hammerhead, they were using the Sun workstation just to translate a particular protocol from one type to another. All I know is that it was the last thing we took down and the first thing we plugged in when we redid their server room. What cracked me up was the Sun pizza box workstations that had the CD-ROM drive mounted on the side, not the front. So you had to keep that side of your desk clear to allow it to open!
Fascinating as always - and it seems like you've calmed down the stock footage visuals, so thanks for that :) SPARC was always my favourite of the 90s workstation processors, partly because of the architecture, but mostly because it had the coolest logo.
When it comes to SPARC, I have a bit of obscure history that I am 100% sure none of you have ever heard of before. An Iranian company named Parse Semiconductor actually made a bunch of small, embedded-oriented SPARC CPUs in Iran. Yes, Iran, the country in the Middle East. This obscure bit of history is almost completely lost nowadays lol.
SPARC was beaten by ARM to the idea of making instructions conditional. On ARM, execution of every instruction can be made conditional on the processor status flags set by a previous instruction. Two other innovations are that an instruction can include a register shift or rotate by an arbitrary amount, and the SWI instruction for implementing software interrupts.
@@lawrencedoliveiro9104 Yes, I should have mentioned that conditional execution was mostly removed for 64-bit ARM. The reasons for this are quite long, and there is much debate about it, which is itself very interesting. When I first read about the original 32-bit instruction set back in the late 1980s, conditional execution really caught my eye, along with the so-called "barrel-shifter" and SVC or SWI instruction, which were elegant innovations at the time. Contemporary CISC ISAs were becoming very ugly by comparison.
Very enjoyable and brought back some great memories from the 90s, especially the first time I used a 64-way SPARC box. One thing about RISC vs CISC is compiler technology and the limited amount of memory and CPU power of development systems. Given unlimited memory and CPU power, a compiler and optimiser can consider many more options for code generation and take advantage of a complex instruction set (see Itanium). But back then compilers were limited by the dev systems they ran on and much library code consisted of hand-optimised assembler for performance. Today, very few people can consistently write better assembler than a free C compiler.
I couldn't afford a Sparc based workstation in their heyday but I did use a Sparc based machine at work. It was a Meiko CS2 which I guess most people will be unfamiliar with. The CS2 was a massively parallel supercomputer (although ours wasn't so massive). It was based around SuperSPARC (and later hyperSPARC) processors along with optional Fujitsu μVP vector processors (we didn't have any of those). It was the successor to the Meiko Computing Surface which used Transputers and i860s. As the wikipedia page documents Meiko also produced a FPU for early SPARCStations and licensed the design to Sun for their MicroSPARC designs.
heh. when I was a student, designing a RISC processor from the ground up was one of our projects. great experience, and nicely within the scope of 'can be done in a semester or two, simply enough to complete, complete enough to run something'
Just thanks - before this, whenever SPARC was mentioned I thought it was Sun's Unix variant; it never occurred to me that it was an actual CPU. Great video, you obviously put a great deal of effort into your research. Also, I had to pause the video to laugh after "You have access to a great deal of electricity".
SHOUTcast was initially compiled for Solaris SPARC. I ran an internet radio station for a long time; the drum'n'bass channel was the 2nd biggest on the net, with BassDrive being larger. I briefly 'worked' remotely for Nullsoft on SHOUTcast right during the acquisition, and while that meant my 'work' hit a brick wall, I did get some relay servers from AOL that kept us running with anywhere from 200-800Mbps constant bandwidth (up to hundreds of TB/month) until AOL wanted to get the rest of the SPARC servers out of their racks in like 2009. Actually, a ton of popular internet radio stations had the same relay servers; a huge chunk of internet radio pre-2009ish all went through a Solaris box. So yes, all SHOUTcast development, the precursor to a ton of internet streaming, was initially developed for and compiled on Solaris SPARC boxen. Maybe YouTube wouldn't be what it is today without SHOUTcast, and by extension Solaris & SPARC!
This brings back lots of memories - I think it was the first RISC that I had ever looked at. I had kind of lost touch with where this all ended up, however. Kind of sad, I guess. Ironically RISC is now the domain of single board computers, embedded systems and smartphones.
Good vibe. The creator definitely has talent and knowledge. I will watch more. However, a lot of repeated slow video scrolling of the equipment seems wasteful. Would prefer a once over and perhaps extra image shots from a variety of angles to see more of the whole. While I don't look forward to committing to lengthy videos, I'm there to the end. Maybe a more concise delivery could improve metrics.
I remember going on a field trip in 1999 for my computer programming class to an elevator company's test facility and seeing an UltraSPARC server. It had 2 GB of RAM, which blew my mind at the time.
Thanks !! I worked at a place that put their own software packages onto Sun platforms. Sun-2, Sun-3, and Sun-4. Oh those SPARC machines were enjoyable to use all around.
Might have added a mention for Tadpole, who were the only company who built SPARC-based Solaris laptops, as far as I recall. (OK, there was the microSPARC-based Sun Voyager, but calling that a laptop is quite a stretch..) Also, while the Ross CPUs won against the Fujitsu microSPARC CPUs in terms of raw clock speed, their rather slow off-die cache hamstrung quite a few applications, especially databases (like Oracle, who were none too pleased), where a single 200MHz Ross CPU had trouble competing against the 125MHz microSPARC CPUs. Also, oh my god, the insane power those Rosses burnt. While you could theoretically put four CPUs (2 MBus cards of 2 CPUs each) into a SparcStation 20, it wasn't officially supported as it could overload the power supply (the later SS10 compensated for that by adding 30W to the PSU), and you definitely wanted to add two more fans into the case to get the heat out, because the two fans inside the PSU were woefully underpowered for that task.
I worked on those hyperSPARCs at Ross. We wanted to put impingement fans on the CPU cards but Sun said absolutely not :). There was very little space allocated for the four CPUs. The competing TI CPUs never got over two per box as I recall. Power density was a huge problem for sure. We were also one of the first CPU makers to integrate four dies inside one package: hyperSPARC had a CPU die, two cache memories and a memory/bus controller die all inside one PGA package. That is now a common approach, but it was bleeding edge in the early 1990s. Fans attached directly to the CPU heatsink became commonplace not long after we were told no :(.
@@johnhorner5711 As a 3rd party, I'm not sure how much say Sun would've actually had in fan vs. no fan. However, there wasn't much space for a fan. The clones we used (Tatung?) had space for one, but we never needed to add any. I don't recall if we ever had any quads.
I used a Tadpole laptop running Solaris, as a mobile demonstration system for a networked backup system in the late 90's. Expensive and very solidly built.
Back in the mid-90's I was at MCI - and our shop was all about SPARCstation 5s (oddball 85 MHz CPU), SPARCstation 20s, Sun Enterprise E3000s/3500s, E5500s, E6500s, E10Ks, and Sun UltraSPARC 140 workstations. Good times.
Thanks for this trip of memories of my early computing career. My first exposure to mainstream UNIX systems was on 68000 based NCR tower, then SPARC based ICL. Moved onto Sun when ICL messed up their channel strategy. Worked on many server deployments.
I once owned a SparcStation 10, in 2001 or so, but none of the monitors I bought for it worked properly. A replacement video card was prohibitively expensive, so with a heavy heart I eventually tossed the lot. It had two Ross Technology HyperSPARC processors inside, and once I opened up the case I was gobsmacked by the attention to noise reduction - everything inside just oozed quality design.
One of my home workstations was a SPARCstation 5 with a Sun monitor in 1996 (running Solaris 2.5.1). It was new and unused (still in the box), something some father had bought for his son (or, I bet, just got thrown in when buying some $$$$$ servers), but they found it confusing since it wasn't a PC. When I bought it, I was already a Unix administrator, so I loved it. At work, they gave me a Sun Ultra 2 Creator workstation with dual CPUs, a high-end video card and a Sun monitor - $50K CDN new, as Sun stuff was crazy expensive. I'm still working with Sun/Oracle OSes on Sun/Oracle hardware to this day. Mostly T8 servers running Oracle VM and lots and lots of LDOMs, all on 11.4 (note, 11.4 will be supported until at least 2034).
29:33 IBM Power (or at least PowerPC) used a similar idea. If you looked at the original 32-bit PowerPC spec, it was clear this was operating as a cut-down 64-bit processor, and making it a fully 64-bit instruction set was just a matter of filling in some gaps. Motorola did a similar thing with the original 68000 processor which, while being notionally 16-bit, was effectively a cut-down 32-bit processor. And again, the evolution to the fully 32-bit 68020 instruction set was little more than a matter of filling in some gaps.
Yeah, the variable-length instructions on x86 and 68k are quite a good idea. Now if only x86 didn't have prefixes… The SuperFX copied that prefix idea for some reason.
Used to work for ICL/Fujitsu many years ago. We ran SPARCstations for some public sector stuff until 2004. Fujitsu, ICL and Amdahl were all part of the same group (Fujitsu UK). Love seeing the old stuff again.
I have been an interested observer of the computer revolution for over 40 years. I have particular interest in the comparison and contrast of user and owner experiences relating to when they were good vs. when they were frustrating. Now nearing the tender age of 60, I am interested in the scientific study of the history of human and computer interaction, to see how it might be applied to raising computer customer expectations, to reduce instances of profit made off of people's ignorance, and try to increase instances of product makers doing a good job of solving specific problems. Some kind of coherent and interesting presentation of computer history, with emphasis on how things could have been done to improve owner and user experiences, might prove useful in giving current buyers of digital tech better intelligence, and then subsequently hold product makers to a higher standard for what they provide.
Loved your dig at government-contract IT projects; we have quite a few of those horribly scandalous projects in Norway as well. But then again, doesn't every country struggle with that...
I love Sun Microsystems kit; it was my first real server and workstation equipment and my first Unix environment. I was introduced to it at my university, thanks to a friendly administrator. We had a Sun UltraSPARC server, SPARC workstations and Sun Ray terminals. Those were great years, the end of the '90s and the beginning of the '00s.
Fantastic. I would love to see more RISC coverage and some performance comparisons, and what the industry's opinions were. All of them had their own bespoke workstations and bespoke flavors of Unix; I can't imagine anyone having more than one type of RISC workstation. I have heard the Alpha was the fastest, but by how much? And after DEC stopped, when did SPARC and MIPS catch up?
Looking forward to the Sun video! It would also be neat to see one comparing the different RISC architectures. How similar are they to one another? What are the strong points of each RISC architecture and why do so many different ones persist in a market that has otherwise seen so many architectures and platforms die out?
I was the proud owner of a decommissioned SS690MP that was a dual-processor MBus machine. It was a headless device, so since the deskside chassis had 3 MBus slots I took the mainboard out of a 3/60, which only took power from the MBus, and slotted it into the 3rd slot as a monitor system. It was a power-hungry beast and VERY HEAVY to move. About half the size of a dorm fridge and as heavy as at least 3 of them. What I wouldn't give to have those sixteen 16MB parity 30-pin SIMMs nowadays.
I worked at a Motorola fab in the late ‘90s, when the 68000 gave way to the PowerPC. The 68000 series was a popular CPU in its day. The 68040 running at 33 MHz and above, and the 68060 were really great CPUs. But RISC was taking over. The variety of RISC CPUs being made in the ‘90s was great to watch happen. Especially when the Wintel platform dominated sales. But neither Windows nor Intel were the cutting edge. All the RISC CPUs were fascinating to me.
You deserve way more people watching your channel. I like your content since I'm a history nerd and a tech nerd - most of the channels I watch are video game, history and tech channels, and yours is a great one.
How instructions access data is actually far more important to the definition of RISC vs CISC than the number of instructions. There are plenty of RISC ISAs out there with very large instruction counts, and also some very simple CISCs that have a small number of instructions, but it's whether or not the CPU is strictly load/store that really determines the classification.
I worked with Sun SPARC stuff. The fact I'd always tell people who asked what I did but had no idea who Sun Microsystems was, is that Sun Microsystems powered eBay and their machines were used to make the Toy Story film. In 2003 or whenever, Sun relocated all their manufacturing from the UK to Thailand, so that was the end of that.
Solaris was a bag o' joy ... ICK! SGI IRIX was a joy: the dynamic kernel, the extendable FS across a RAID or JBOD, adding a FS or disk or tty during runtime... I loved the IRIX flavor, but I also love rocky road ice cream, so maybe it's a preference.
15:55 Interesting point about the multiply and divide. One of the key things that RISC designers were doing was analyzing instruction traces of real-world programs, to see how frequently particular instructions were actually being used. And it seemed they were finding that integer multiplies and divides were not really that common. I think the original SPARC had a “multiply-step” or “divide-step” instruction, something like that--you had to use several of them to perform a complete operation. I think later they realized that, just maybe, instruction traces were not telling them the whole story. Previously, the legendary Seymour Cray had floating-point operations in his legendary Cray-1 and successor machines, but he left out the divide instruction, because it would have taken too many steps and slowed down the execution pipeline. Instead, you had to multiply by the reciprocal of the divisor. There was a special instruction for computing the reciprocal, but it used some kind of successive-approximation technique, so you had to execute it twice to get full accuracy. One could argue this was a form of RISC--breaking down a complex operation into separate simpler instructions, to keep the hardware simple--and fast.
I can see the logic in that; it's like how you don't really have separate subtract circuitry, you just compute the two's-complement negative number to shove into the normal adder circuit. Why divide when you can multiply by one-over-the-divisor? Same kind of angle. (Though as you say, deriving the reciprocal was more involved than deriving the two's complement.)
Excellent video. I commented inline before you brought up Amdahl and Ross. Oddly, back in the '80s, Sun was next door or very close to Ampex on El Camino Real... Ampex, the audio/video tape maker invested in by Bing Crosby... one technology going out beside a new one.
@4:30: it is at this point that I realized that the CEO of a company I worked at in the late 90s was the second author of the RISC paper (and that he worked at bell labs). Wow. Neat.
Always look forward to these videos. Can't wait for the next one. I know it takes a considerable amount of time to research, write, shoot and edit though.
I was a project manager for building fault tolerant versions of the Sun Sparc systems, enjoyed my time at Sun, albeit brief. There were some good people there that lost their livelihoods when Oracle fired the entire Campus.
@@RetroBytesUK I mean, if you just look at how they operate and what they create, this seems to be completely accurate. Is banal evil still banal if you're in the business of doing evil? Or is the statement of it being banal simply redundant?
Still use SPARC today! The M8 CPU is a freakin beast. Great chip. Intel and AMD are finally approaching what M8 was doing 5+ years ago. Just a shame what Oracle has done to their entire hardware biz. SPARC was on such a great path …. M8 is a pretty incredible swan song.
@30:04: In order to make 32-bit instructions work correctly in 64-bit registers, 2's-complement negative values have to be sign-extended. This means that if bit 31 is 1, then all bits from 32-63 must also be 1; if bit 31 is 0, then the 0 is extended through bits 32 to 63. BTW, RISC-V also took this approach for the differences between its 32-bit and 64-bit ISAs. There are separate instructions for 64-bit loads and stores.
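The rule described above is simple to model. A small Python sketch (illustrative only, not any ISA's normative wording):

```python
def sign_extend_32_to_64(v):
    # Replicate bit 31 into bits 32..63: a negative 32-bit value
    # becomes the same negative value in a 64-bit register.
    v &= 0xFFFFFFFF
    if v & 0x80000000:                  # bit 31 set -> fill upper half with 1s
        return v | 0xFFFFFFFF00000000
    return v                            # bit 31 clear -> upper half stays 0
```

A 32-bit -1 (0xFFFFFFFF) becomes a 64-bit -1 (0xFFFFFFFFFFFFFFFF), while 0x7FFFFFFF is left unchanged.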
They did, it was quite a big Sun shop, although I was there in 1996. The labs were mostly just X servers. I saw them eventually get replaced with SGIs and Linux boxes, with the SGIs also eventually getting replaced by Linux boxes and a few Macs.
@@digitalarchaeologist5102 W was there in 91-94 when the sun stuff was in its heyday. Still remember the 40 sun SLC workstations with optical mice needing optical mouse mats (stuck to the table) Although I think the SLC were 68020 based so not sparc
@@marksterling8286 Ah yes, the reflective-looking mouse mats. I also remember the yellow thicknet that ran everywhere, and how often people seemed to kick out the transceivers.
Don't forget LEON, a range of SPARC architecture processors developed for ESA for use in spacecraft. Unusually for ESA work, the LEON2 and LEON3 designs were even released as open source under the GPL.
Started a new job in 2000. Got shown to my desk. One end had a SparcStation 5 with a 20" Sun monitor, the other a higher end Dell PC with a 20" Sony monitor. Also worked with DEC Alpha, SGI, HP, IBM, and many others. We had clients for so many OSes it was crazy (and servers for quite a few). In multiple word widths for some: 32 and 64 for RISC; O32, N32 and N64 for SGI MIPS; and more. We had clients for systems and OSes I'd never even heard of, like Sequent's Dynix and Dynix/PTX. Mostly Unix-like systems, but also Windows and a few completely different ones.
I have a soft spot for SPARC. Really one of the best processors I have ever worked with, and it is open source. Sadly, it never gained traction outside of a few niche applications. I have a SPARCstation 10 in the closet that I used as my network router and firewall for years before gigabit Ethernet arrived.
This is no longer just SPARC, nor Sun; it's an introduction to all the (less) common CPU archs! Everything from M68K to POWER to MIPS to SPARC, all in the first 10 minutes! So more CPU archs please! M68K! POWER! MIPS!
Back in my CS classes, we learned about this wonderfully elegant concept of the “stack machine”, which was so easy to generate code for, that you could use it to implement a working compiler as part of an undergraduate programming-language course. Unfortunately, in the real world, its performance was pretty hopeless. And RISC was the complete opposite, having lots of registers--along with the complexity of managing them--to achieve much greater performance. Trouble is, writing assembly language for such an architecture was no fun at all.
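A minimal sketch of why stack machines are so easy to generate code for: the compiler just emits postfix, with no register allocation at all. Entirely illustrative Python:

```python
def run(program):
    # Tiny RPN stack machine: integers are pushed, operators pop
    # two operands and push the result.
    stack = []
    for op in program:
        if isinstance(op, int):
            stack.append(op)                             # PUSH literal
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "+" else a * b)  # ADD or MUL
    return stack.pop()

# (2 + 3) * 4 in postfix form: no registers to manage
result = run([2, 3, "+", 4, "*"])
```

The downside is also visible: every intermediate value takes a round trip through the stack, which is exactly what made the real hardware versions slow.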
Very interesting. RISC took advantage of a time when reducing the opcode ROM let you do significantly more in other areas; on modern chips complex decode takes up very little area, so there's less to choose between the approaches. Alpha used traps to emulate compound instructions; there were a fair number for UNIX but almost twice as many for VMS. When the Pentium came out, its relatively low cost for the performance took the wind out of the UNIX workstation market, and the legal battle between the UNIX groups that tied up BSDI gave WinNT an opportunity, despite poor drivers and awful reliability back then.
SPARCstations was where I got my start with UNIX in 1993 and why I've been a Linux dev since 1994. Linux never made me a rich man but I sleep well at night, having been able to avoid Microsoft products for 30 years. I work with Microsoft tools around the edges of my job, and they are all terrible with the sole exception of Excel which is acceptable.
Scientific Atlanta cable boxes used to have SPARC CPUs in them. Also Intel's (successful) high end embedded CPU, the i960 had register windows. As did Altera's not-as-successful NIOS 1 soft-core CPU.
Minor correction: RISC does not always mean fewer instructions in a given instruction set compared to CISC. There have been RISC chips with more instructions than comparable CISC chips. As one of my profs explained, it's not _Reduced Instruction Set_ Computing, but rather _Reduced Instruction_ Set Computing. The point is that instructions are typically smaller in terms of length, and tend to be uniform (CISC instructions are more likely to vary in length). This leads to (as mentioned in the video) a simplified CPU architecture, e.g. greatly simplified decoding logic (no need to handle multiple addressing modes!) and easier pipelining.

Other ways a RISC architecture simplifies things can include a lack of hardware checks for instruction completion; this really goes more to simplifying pipelining, where a subsequent instruction in a pipeline may depend upon the results of an instruction further along in the pipeline. Pipelined CISC architectures typically include circuitry that checks for dependencies, and will stall the pipeline until a dependency is resolved, while RISC simply omits it.

The general driving philosophy that divides the two worlds is a view of what the compiler should do. In CISC models, the processor does comparatively a lot, allowing for a simpler compiler and more straightforward assembly. Also, because the CISC model typically has multiple addressing modes and instructions that just "do more", it makes for relatively compact code. RISC, in contrast, relies upon the compiler to resolve hardware quirks and present code in an order that will work correctly, e.g. inserting NOPs where a pipeline delay is needed. Compilers for RISC architectures typically need to be designed with a deeper knowledge of how the processor operates. So RISC compilers are, as a rule, trickier to write, because the hardware does less.
The benefit, as the video mentioned, is simplified hardware that's easier to design, can run faster, consumes less power, and is potentially cheaper to make and debug. So why choose one over the other? Well, back in the early days of computing, memory and storage were very expensive, while the CPU was comparatively cheap. It was more cost-efficient to design a processor that could do more, which yielded significantly more compact code and a simpler compiler (which also didn't take up much storage space). As the video noted, one CISC instruction could do the work of four RISC instructions, and when memory and storage space were at a premium, that savings meant a lot. Sure, the RISC approach might execute faster, but that advantage was easily outweighed by the astronomical cost of the additional storage needed to hold the larger code for any useful program - not viable in the personal computing market. CISC was the logical choice. However, the tables have turned: in modern systems, RAM and storage space are plentiful and cheap, while speed and power consumption are much greater concerns. So RISC now has the advantage.
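The code-density trade-off can be made concrete with a toy model. This Python sketch (entirely illustrative, not any real ISA) shows the same source statement, mem[a] += mem[b], at CISC and RISC granularity:

```python
mem = {"a": 10, "b": 32}
regs = {}

def add_mem_mem(dst, src):
    # CISC style: one memory-to-memory instruction does it all
    # (compact code, complex decode logic in hardware).
    mem[dst] += mem[src]

def risc_add(dst, src):
    # RISC style: the compiler emits explicit steps, each trivial
    # for the hardware to decode and pipeline.
    regs["r1"] = mem[dst]                  # LOAD
    regs["r2"] = mem[src]                  # LOAD
    regs["r3"] = regs["r1"] + regs["r2"]   # ADD (registers only)
    mem[dst] = regs["r3"]                  # STORE
```

Four simple RISC steps versus one memory-to-memory CISC instruction is exactly the kind of 4:1 ratio that made CISC attractive while memory was the expensive part of the system.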
An 8086 only needs about 15% of its die for decoding (see die photos); the ALU and register file are larger. MIPS has address register + offset as its addressing mode, and relative jumps, like almost all CPUs. So you mean those PUSH and POP instructions? STOSW? ARM got a stack pointer. If you allow interrupt priorities, you need an SP. I really don't see why we can't have an SP, like the PC, with its own adder or at least a connection to the ALU. SPARC overdid it a bit with the sliding window; the hardware could not support more than 32 fast registers. Maybe if they had pushed to market while 16-bit instructions were still okay? While we don't really want flags, lots of high-level programming languages need to be able to throw exceptions, like carry, overflow, division by zero. Need a stack for this! Ah, memory indirect. Did the x86 have this? I never used it. I thought that was a 6502 thing with its zero page.
I was a sysadmin at a large regional ISP in the late 90s and the core systems were Tatung SS20 clones (most 2x 85MHz IIRC) running carefully tuned SunOS 4.1.4. Main storage was provided by a NetApp filer. Network backbone was 100Mbit FDDI. When the first ultrasparc showed up, it took a while before it was able to handle any duties at the same performance level as its predecessors. We were NOT originally fans of Solaris although it did improve with time.
A couple of other honourable mentions spring to mind. Cray used SPARC in a number of its supercomputer products, and I think it was Intergraph that was planning on releasing SPARC-based workstations running Windows NT, during the craze when NT was trying to run on anything RISC as well as x86. I don't recall if they ever did ship SPARC workstations running some UNIX flavour in the end, but apparently NT was ported to SPARC yet never escaped into the wild.
@@RetroBytesUK Interesting that it was fully functional. Maybe one day it might turn up somewhere. I love this kind of thing for the sheer novelty of it, and for how it hints at how differently things might have turned out.
I worked at Sun for 14 years then was assimilated by Oracle for 7 more years. Still worked on Sun stuff even though it was now Oracle. Then Oracle laid everyone off. So I still have fond memories of Sun. Nothing good to say about Oracle.
Yeah, I felt the same way when DEC was bought by Compaq. I didn't work directly for DEC but for one of their contractors.
Just like with Sun, pretty much all the stuff that made DEC cool was killed almost right away.
Does anyone not named Larry Ellison ever have anything good to say about Oracle?
Someone once referred to Larry Ellison as “Leisure Suit Larry.” The joke must be at least 5 years old at this point, and still hilarious.
Like Tandem, I managed their Unix Software Lab in Austin. Had lots of Suns and SGI.
Oracle. One Rich Asshole Called Larry Ellison.
"I worked at Sun for 14 years then was assimilated by Oracle for 7 more years."
Do you have any theories on why SPARC didn't become the All American competitor to the plucky little Bri'ish ARM?
Back in the day I had a SparcStation 10, later with 4 Ross HyperSPARC CPUs and 128 Meg of RAM. We were running a manufacturing application developed by a company in Toronto by the name of Cherniak Software. They had ported it to the SparcStation at our behest from a Harris HCX-7/9 platform (of CCI Tahoe CPU / 4.3 BSD Tahoe fame). When they originally ported it to the Harris (they were at the time an Olivetti shop, but we wanted Harris), it was the fastest thing they had ever run their code on. When they later ported it to the Sparc architecture, they were absolutely speechless. When we upgraded to HyperSPARC, they just stopped trying to come up with adjectives to describe how fast it was. Your video brings back memories. The very brilliant software engineer (I always joked that he considered a compiler merely a convenient way to generate object code - not a necessary one) who designed the database tools we used passed away last year, and the founder of the company the year before. I still use the tools they developed in another application today.
SPARC was the only machine used to develop ASICs back in the day. Used them for designing F-22 EW and commercial graphics chips. We used Sun machines and some clones. Years ago Sun open sourced the SPARC processor: en.wikipedia.org/wiki/OpenSPARC. One of the only companies to open source something as amazing as a processor, even if older technology. Something I wish Intel, AMD, and Nvidia would do: if something is 20+ years old, open source it for enthusiasts. You can build SPARCs for FPGAs nowadays.
IBM "open sourced" the POWER architecture by the way.
Just casually drop that you designed F22 EW hardware with no more detail 😅 that’s a story son
@@mattmurphy7030 they might not be able to say anything else
Growing up, Sun and therefore SPARC were very much everywhere in our house, as my father worked there as an engineer (having previously worked at Burroughs). Whilst I never really used SPARC outside of tinkering myself, there were always a few SPARC systems running, which, as an inquisitive kid obsessed with computers, was great. Mostly it was kit deemed broken when removed from one of the major London banks during a repair, upgrade or replacement; the rule then was that as long as it was never sold or passed on, it was fine to keep if you could fix it. My father was rather good at fixing most things, so there was an enterprise server running in the garage for years (never needed any extra heating in there!), plus various types of SPARCstation around the house.
My father moved into management at Sun, and later on was made redundant by Oracle two years ahead of retirement, but apparently he negotiated the package to cover his pension so he didn't lose out.
This was a great video and I look forward to the Sun video at some future date.
Breathtaking content.
I provided tech support for Autodesk AutoCAD on IBM AIX, HP-UX, SGI Irix, DEC Vax and Alpha, and various Sun workstations back in the day (mid-1990s).
All of these machines were a few yards/meters within my reach.
Good times.
That SS10 you have is actually an SS10SX, an incredibly rare machine, much rarer than the Hypersparcs you have in it. I'd very much appreciate if you could take some high res pics of the motherboard and post them online because there are almost no good pics of that machine.
Next time I have it out for filming I will.
Good eye there. I'd forgotten how rare they are with how they're sort of a half way point between the 10 and the 20.
I didn't know about the SS 10SX. What's the difference with a SS 10?
@@DVRC sx frame buffer circuitry built onto the main board, just add a vsimm (also frighteningly rare and pricey these days) and you have very high end graphics capabilities...for the era
@@DVRC It uses the same chipset as the SS20, but it came out before so maybe better to say it's the other way around. Mbus only runs at 40Mhz, no ISDN ports, audio codec built in so no need for a speaker box. It uses different VSIMM and aux video boards from the SS20, VSIMM from the SS20 may work in it but the hardware is so rare I wouldn't risk it. Spec wise the SS20 is a better machine and much easier find and get parts for. Only real advantage the SS10SX has is it uses an SS10 case which I happen to think looks better.
13:35 NeWS, the "Network extensible Window System". This was based on PostScript, before there was such a thing as Display PostScript, so Sun added its own extensions for on-screen drawing and interaction handling. And then abandoned it all when X11 came along.
Some people really liked the idea of being able to load small pieces of autonomous code into the display server that could handle low-level user interactions without bothering the main processes running on the CPU. Done right, this could certainly make the system feel more responsive, particularly over slow networks or remote links. The problem (as with the whole issue of multithreading that became popular in the 1990s) was that synchronizing all these independent threads of control was very hard to get right, and was extremely prone to hard-to-reproduce timing-related bugs.
“Knock-knock.”
“Race condition.”
“Who’s there?”
This was the GUI for SGI Personal Iris, before also being replaced by X11. I preferred NeWS. Not just because the Motif WM was boring and clunky to look at, but it ran at reduced performance compared to NeWS on a given system. X11 made previously useful 4D20/4D25 configurations frustratingly worse to use and slower.
@@SeanCC It is true that, back in the 1990s, platforms that integrated the GUI into the OS kernel had a performance advantage. But that hasn’t been the case for a while: nowadays it is Linux+X11 that is more responsive and less resource-hungry than Microsoft’s or Apple’s platforms.
@@lawrencedoliveiro9104 nowadays I think it's more about hardware overcoming/overcompensating for low performance code. I've been using Windows, Linux and Apple for over twenty years in the meantime and there's still so, so much that Linux+X has to catch up to. But it's better now than it was in the early '00s for sure.
Before the NeWS GUI Sun had Sun Tools, and Sunview. X11 offered a key advantage of sharing server resources over a network on X terminals so Sun came out with something, then eventually supported both.
X11 was supported under VMS & PCs too.
I loled at your knock knock joke
27:51 Worth noting that, while Alpha was 64-bit-native from the beginning, it did have some 32-bit compatibility in the form of “TASO” mode (“Truncated Address-Space Option”). This used only 32-bit addresses, and was used to run Windows NT, which if you remember, in spite of its much-vaunted portability across different processor architectures, was still only 32-bit at the time.
It also was very useful for its emulation of 32-bit platforms, like x86 under NT and Linux, and VAX under VMS.
OpenVMS/Alpha even kept existing ABIs in 32-bit compatibility, including the default C integer types (int = long = 32 bits, long long = 64 bits), leaving most things running in the traditional 32-bit VAX virtual address space regions P0/P1 + S0/S1, sign-extended to the far ends of the greater 64-bit VA space, with a special 64-bit "Very Large Memory" extension to access the 64-bit VA regions filling the gap in the middle. That made porting from OpenVMS/VAX to Alpha almost trivial, with all data layouts being identical, but it did not age well, when data set sizes for software that had "just worked" for 10-20 years, seamlessly through mixed VAX/Alpha cluster migrations, started approaching the hard-stop 1GB limit of P0 space. Fun times.
Also, the Patterson and Hennessy Computer Organization and Design textbook is still possibly the single best textbook (in terms of going from zero to nearly an expert) on any topic I've ever read. Hugely influential.
Thank you for the very informative video. I worked at Ross Technology from 1989 to 1996 and lived through a piece of what you covered. While Cypress Semiconductor was the original owner of Ross and initial primary silicon supplier, Cypress sold its interest to Fujitsu during a time when Cypress' overall business was struggling (thanks in part to a difficult technology transition in Cypress' wafer fabs). I lost a lot of sleep in those years :). When Ross stumbled at the Sparc 8 to Sparc 9 transition Fujitsu bought the remaining IP. Fujitsu had also been the primary backer of Solbourne computers and Amdahl. I wonder what the world of computers would look like if Fujitsu had acquired Sun instead of Oracle doing so.
I think the world would have been a much more interesting place if they had. I wonder if Fujitsu would have stepped in had the timing not coincided with a number of large accounting scandals in Japan at the time. That's the sort of thing that could have made access to capital more difficult, and reduced their willingness to take on more debt.
I've good memories of the SPARC architecture from before I started working for Sun. I was tasked with designing the hardware for a VMEbus FDDI card. Needless to say, we had every processor vendor imaginable pitching up to offer their latest and greatest, all massively power hungry, stupidly expensive, and threatening to make the whole board a non-starter just based on cost. Then Fujitsu rocked up with their SPARClite: low power (comparatively) and less than a twentieth of the price (just 25 pounds in small volumes). It was intended to go into laser printers as the rendering engine. Some of the money and space we saved by using that CPU was traded for giving it some dedicated fast SRAM to work in and FPGAs to do DMA and run the networking checksums. And that little 25MHz CPU with its assist could work its way through socket buffers and saturate the backplane and the network. We had a bake-off against Cray, and won.
Also @12:43 - Motorola didn't start down the PowerPC path initially. They created the 88k architecture. The 88k was a bit of an alien architecture because it was a Harvard Architecture (code and data were in separate address spaces). Fun fact: the PowerPC bus architecture is based on the MC88110 bus architecture.
Motorola didn't even want to go PowerPC so I was told, as they couldn't control the instruction set (which was derived from one made by IBM). But the green being thrown around got them to change their mind.
Noticed that mistake too.
PowerPC was all IBM, and it was the fastest thing at the time!
Love this channel so much! Always hits my retro computer fix without skimping on the lore and technical aspects. While most channels just share software clips and nostalgia, this channel is a beacon in the dark!
I love the historical context. Also, as an American, I can't get enough of the across-the-pond accent.
You're right. It reminds me of things I should have known more about long ago.
That mention of windowed registers partly solved that nagging feeling of not knowing what is in which memory location. It would also have been good for that old pocket calculator of mine with memories A to F; I would have made more use of them.
The problem of knowing what is in a location involves having memory to do it at a time when memory was expensive.
When I went to Brunel University in September of 1997 to study Computer Science we had access to two types of computer.
1) 486s running Windows 3.1 for Workgroups
2) Sun SPARCs and UltraSPARCs
If we were stuck on a SPARC, we would ssh or rsh into an UltraSPARC and use it to do our compiling, as it was much faster. We would even attach an X server to it for a faster desktop.
Fantastic video. I have a real soft spot for SPARC since designing a hobbyist system on an FPGA based around the Leon3 SPARC soft CPU, with a video processor and other peripherals I designed for it. I had it running Doom about a decade ago. It was amazing to have such a relatively powerful system to play with. It was really amazing of Sun to release the details of the SPARC architecture as open source. It is an interesting architecture to learn about, with the register windows and things like that, and the Leon3 project is very cool; I would recommend it to anyone looking to learn about designing their own FPGA system on a chip.
ah yes i forgot about this one too ! LEON made SPARC fly to the stars !!!
The most enjoyable video I've watched in ages. Worked with Sun from 1995 to 2010 and still have a soft spot for them. It's still Sun ZFS to me also (another of their fantastic innovations), I just can't call it Oracle to this day! Thanks for the video.
I have two SPARCclassics, a SPARCstation IPX, and an Enterprise 450. I had a lot of fun when I got them, and they helped out in my career as a System Administrator (HP-UX, Sun Solaris, Linux). Thanks for the video, brings back great memories.
I taught computer systems engineering in Australia in the 90s, and I remember visiting a company in Melbourne called GCS who had designed their own SPARC workstation. It was the work of a Masters student at Monash University. We bought one for my department.
We had Mentor Graphics CAE software at the time including a VHDL to logic synth program. We had undergrads designing CPUs/microcontrollers and implementing on Altera FPGAs. It was a great time to be involved in digital design.
Thanks for this video! I love old Unix machines and have both SPARC and UltraSPARC in my collection so this history was very interesting to me. Looking forward to that Sun history video whenever you get around to it!
I used SPARCs in college and grad school, up to a 150MHz UltraSPARC with 128MB of RAM in 1997. Everyone thought RISC and Sun were the leading edge of computing. We had no idea what was happening in the x86 world or by what means it would dominate in performance (not just cost/performance, pure performance) in just a few short years. 😅
Seeing the little metal tabs on the MBus cards sure brought back memories. One of my favorite things about the Sun workstations is the OpenBoot PROM bootloader. You could do so much in there to netboot, load from CD-ROM, or boot from external SCSI. It made maintaining the systems so much easier.
LOVE SPARC! I had a sparcstation 5 for $200 when they were just 8 years old, currently spending a lot more than that to restore an SS20.
Excellent content.
I love how you've been able to join the dots in my memory of computing history, in less than an hour of entertaining and informative content.
You're a natural - plus have put a lot of effort in. I wish you all the best. I can imagine that once you're spotted by academia, these videos will be playing in colleges and universities all over the planet.
Great content to introduce these technical details without baffling the audience through lack of context.
Bravo 👏👏👏
Nice video. When I was in grad school in Japan in the early 1990s we had a large number of Toshiba SPARC LT AS1000 laptops in the student labs. I used PVM (Parallel Virtual Machine) to turn them all into a fairly powerful distributed parallel supercomputer with quite good performance results.
Many thx for that ... after my microVax the machine I miss most is my Sparc workstation. Great days.
When I first got into IT you would still see some old Sun Sparcstations set up in various server rooms. Apparently people really liked using them for terminals. At one place, I think it was Brocade or Hammerhead, they were using the Sun workstation just to translate a particular protocol from one type to another. All I know was that it was the last thing we took down and the first thing we plugged in when we redid their server room.
What cracked me up was the Sun pizza-box workstations that had the CD-ROM drive mounted on the side, not the front.
So you had to have that side of your desk clear to allow it to open!
Fascinating as always - and it seems like you've calmed down the stock footage visuals, so thanks for that :)
SPARC was always my favourite of the 90s workstation processors, partly because of the architecture, but mostly because it had the coolest logo.
The SPARC architecture... or SPARCitecture, if you will. I'm sure you won't, but there it is.
Your "sidebar stack" reminds me of SPARC register windows. ;-)
When it comes to SPARC, I have a bit of obscure history that I am 100% sure none of you have ever heard before. An Iranian company named Parse Semiconductor actually made a bunch of small, embedded-oriented SPARC CPUs in Iran. Yes, Iran, the country in the Middle East. This obscure bit of history is almost completely lost nowadays lol.
Fascinating!
SPARC was beaten by ARM to the idea of making instructions conditional: on ARM, every instruction can be made conditional on the processor status flags set by a previous instruction. Two other innovations are that every data-processing instruction can include a register shift or rotate by an arbitrary amount, and the SWI instruction for implementing software interrupts.
Interesting that the idea of conditional instructions has been dropped in 64-bit ARM.
@@lawrencedoliveiro9104 Yes, I should have mentioned that conditional execution was mostly removed for 64-bit ARM. The reasons for this are quite long, and there is much debate about it, which is itself very interesting. When I first read about the original 32-bit instruction set back in the late 1980s, conditional execution really caught my eye, along with the so-called "barrel-shifter" and SVC or SWI instruction, which were elegant innovations at the time. Contemporary CISC ISAs were becoming very ugly by comparison.
@@lawrencedoliveiro9104 It creates a dependency which makes superscalar out-of-order execution scheduling difficult.
@@lawrencedoliveiro9104 Superscalar out-of-order architectures don't like long wires touching some status bit somewhere remote...
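Scheduling cost aside, what predication (or a conditional-move instruction) buys is branch-free selection. A rough Python sketch of the pattern (widths and names are illustrative, not any real ISA):

```python
def select(cond, a, b, width=32):
    # Branch-free select: build an all-ones or all-zeros mask from the
    # condition, then merge. The data path never takes a branch.
    full = (1 << width) - 1
    mask = -int(bool(cond)) & full      # all ones if cond, else zero
    return (a & mask) | (b & ~mask & full)
```

select(True, 5, 9) returns 5 and select(False, 5, 9) returns 9, with no conditional jump for a branch predictor to guess at.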
Very enjoyable and brought back some great memories from the 90s, especially the first time I used a 64-way SPARC box. One thing about RISC vs CISC is compiler technology and the limited amount of memory and CPU power of development systems. Given unlimited memory and CPU power, a compiler and optimiser can consider many more options for code generation and take advantage of a complex instruction set (see Itanium). But back then compilers were limited by the dev systems they ran on and much library code consisted of hand-optimised assembler for performance. Today, very few people can consistently write better assembler than a free C compiler.
I couldn't afford a Sparc based workstation in their heyday but I did use a Sparc based machine at work. It was a Meiko CS2 which I guess most people will be unfamiliar with. The CS2 was a massively parallel supercomputer (although ours wasn't so massive). It was based around SuperSPARC (and later hyperSPARC) processors along with optional Fujitsu μVP vector processors (we didn't have any of those). It was the successor to the Meiko Computing Surface which used Transputers and i860s.
As the wikipedia page documents Meiko also produced a FPU for early SPARCStations and licensed the design to Sun for their MicroSPARC designs.
heh. when I was a student, designing a RISC processor from the ground up was one of our projects. great experience, and nicely within the scope of 'can be done in a semester or two, simply enough to complete, complete enough to run something'
Just thanks. Before this, when SPARC was mentioned I thought it was a Sun Unix variant; it never occurred to me that it was an actual CPU. Great video, you obviously put a great deal of effort into your research. Also, I had to pause the video to laugh after "You have access to a great deal of electricity".
SHOUTcast was initially compiled for Solaris Sparc.
I ran an internet radio station for a long time; the drum'n'bass channel was the 2nd biggest on the net, with BassDrive being larger. I briefly 'worked' remotely for Nullsoft on SHOUTcast right during the acquisition. While that meant my 'work' hit a brick wall, I did get some relay servers from AOL that kept us running with anywhere from 200-800Mbps of constant bandwidth (up to hundreds of TB/month), until AOL wanted to get the rest of the SPARC servers out of their racks in like 2009. Actually, a ton of popular internet radio stations had the same relay servers; a huge chunk of internet radio pre-2009ish all went through a Solaris box.
So yes, all SHOUTcast development, the precursor to a ton of internet streaming, was initially developed for and compiled on Solaris SPARC boxen. Maybe YouTube wouldn't be what it is today without SHOUTcast, and by extension Solaris & SPARC!
I appreciate the visual illustration of a kettle of fish, what a nutty expression that is!
Thanks a lot for this detailed dive! This reminded me to include a conditional move instruction in my hobby CPU 😂
This brings back lots of memories - I think it was the first RISC that I had ever looked at. I had kind of lost touch with where this all ended up, however. Kind of sad, I guess. Ironically RISC is now the domain of single board computers, embedded systems and smartphones.
Very nice history of SPARC! I enjoyed it. Sun will always have a special place in my heart even though I came in near the end of its life.
I remember when we got our first Fuji Tahoe systems at SunSoft. We started to test Solaris 7 development builds for their sun4us cpus.
How does this channel only have 54k subscribers. This is really good content.
Good vibe. The creator definitely has talent and knowledge. I will watch more. However, a lot of repeated slow video scrolling of the equipment seems wasteful. Would prefer a once over and perhaps extra image shots from a variety of angles to see more of the whole. While I don't look forward to committing to lengthy videos, I'm there to the end. Maybe a more concise delivery could improve metrics.
ARM also had conditional instructions, which everyone apparently ditched for doubling up instructions with Thumb instead.
Also dumped in AArch64.
I remember going on a field trip in 1999 for my computer programming class to an elevator company's test facility and seeing an UltraSPARC server. It had 2 GB of RAM, which blew my mind at the time.
Thanks !! I worked at a place that put their own software packages onto Sun platforms. Sun-2, Sun-3, and Sun-4. Oh those SPARC machines were enjoyable to use all around.
Great stuff. I look forward to your future Sun video, where I will hopefully learn more about my SparcStation with "PC" expansion board 😸.
Thanks, I always wanted a video explaining about the SPARC processors, they feel "underrated" to me.
Might have added a mention for Tadpole, who were the only company who built SPARC-based Solaris laptops, as far as I recall. (OK, there was the microSPARC-based Sun Voyager, but calling that a laptop is quite a stretch..)
Also, while the Ross CPUs won against the Fujitsu microSPARC CPUs in terms of raw clock speed, their rather slow off-die cache hamstrung quite a few applications, especially databases (like Oracle, who were none too pleased), where a single 200 MHz Ross CPU had trouble competing against the 125 MHz microSPARC CPUs. And oh my god, the insane power those Rosses burnt. While you could theoretically put four CPUs (2 MBus cards of 2 CPUs each) into a SparcStation 20, it wasn't officially supported as it could overload the power supply (later SS10s compensated for that by adding 30W to the PSU), and you definitely wanted to add two more fans into the case to get the heat out, because the two fans inside the PSU were woefully underpowered for that task.
Heat really is an issue with those hyperSPARCs; I have 4 in that SS10. In summer I can't run it with the lid on.
I have one of those "ultrabooks" (not the IBM laptop.) I would call it more of a small lug-able than a laptop. They are quite the slab.
I worked on those hyperSPARCs at Ross. We wanted to put impingement fans on the CPU cards but Sun said absolutely not :). There was very little space allocated for the four CPUs. The competing TI CPUs never got over two per box, as I recall. Power density was a huge problem for sure. We were also one of the first CPU makers to integrate four dies inside one package. hyperSPARC had a CPU die, two cache memories and a memory/bus controller die all inside one PGA package. That is now a common approach, but it was bleeding edge in the early 1990s. Fans attached directly to the CPU heatsink became commonplace not long after we were told no :(.
@@johnhorner5711 As a 3rd party, I'm not sure how much say Sun would've actually had in fan vs. no fan. However, there wasn't much space for a fan. The clones we used (Tatung?) had space for a fan, but we never needed to add any. I don't recall if we ever had any quads.
I used a Tadpole laptop running Solaris, as a mobile demonstration system for a networked backup system in the late 90's. Expensive and very solidly built.
Back in the mid-90s I was at MCI - and our shop was all about SPARCstation 5s (oddball 85 MHz CPU), SPARCstation 20s, Sun Enterprise E3000s/E3500s, E5500s, E6500s, E10Ks, and Sun UltraSPARC 140 workstations. Good times.
Thanks for this trip of memories of my early computing career. My first exposure to mainstream UNIX systems was on 68000 based NCR tower, then SPARC based ICL. Moved onto Sun when ICL messed up their channel strategy. Worked on many server deployments.
I once owned a SparcStation 10, in 2001 or so, but none of the monitors I bought for it worked properly. A replacement of the video card was prohibitively expensive, so with heavy heart I eventually tossed the lot. It had two Ross Technologies HyperSparc processors inside and I was gobsmacked to see the attention to noise reduction and everything inside just oozed quality design once I opened up the case.
One of my home workstations was a Sparc 5 with a Sun monitor in 1996 (with Solaris 2.5.1). It was new and unused (still in the box); some father had bought it for his son (or, I bet, just got it included through buying some $$$$$ servers), but they found it confusing since it wasn't a PC. When I bought it, I was already a Unix administrator, so I loved it. At work, they gave me a Sun Ultra 2 Creator workstation with dual CPUs, a high-end video card and a Sun monitor that was $50K CDN new... as Sun stuff was crazy expensive. I'm still working with Sun/Oracle OS on Sun/Oracle hardware to this day. Mostly T8 servers running Oracle VM, and lots and lots of LDOMs, all on 11.4 (note, 11.4 will be going until at least 2034).
29:33 IBM Power (or at least PowerPC) used a similar idea. If you looked at the original 32-bit PowerPC spec, it was clear this was operating as a cut-down 64-bit processor, and making it a fully 64-bit instruction set was just a matter of filling in some gaps.
Motorola did a similar thing with the original 68000 processor which, while being notionally 16-bit, was effectively a cut-down 32-bit processor. And again, the evolution to the fully 32-bit 68020 instruction set was little more than a matter of filling in some gaps.
Yeah, the variable length of instructions on x86 and 68k is quite a good idea. Now if only x86 didn't have prefixes… SuperFX copied that prefix thing for some reason.
Unfortunately, the evolution of x86 to 32-bit was not so smooth.
Used to work for ICL /Fujitsu many years ago. We ran SPARCStations for some public sector stuff until 2004. Fujitsu, ICL and Amdahl were all part of the same group (Fujitsu UK). Love seeing the old stuff again.
I used to work for Technology PLC which was owned by ICL, before ICL fully became part of Fujitsu.
Great video! You made dry material fascinating to watch. I appreciate the depth of information you covered.
Never knew about Sparc but i did hear about RISC , awesome to hear about this unique cpu
It was 1990 when I installed the first 1 GB disk into my SPARC workstation. My colleagues and I were fascinated by that "unlimited" storage space!
I have been an interested observer of the computer revolution for over 40 years. I have particular interest in the comparison and contrast of user and owner experiences relating to when they were good vs. when they were frustrating. Now nearing the tender age of 60, I am interested in the scientific study of the history of human and computer interaction, to see how it might be applied to raising computer customer expectations, to reduce instances of profit made off of people's ignorance, and try to increase instances of product makers doing a good job of solving specific problems. Some kind of coherent and interesting presentation of computer history, with emphasis on how things could have been done to improve owner and user experiences, might prove useful in giving current buyers of digital tech better intelligence, and then subsequently hold product makers to a higher standard for what they provide.
Loved your jab at the government IT contract projects; we have quite a few of those horribly scandalous projects in Norway as well. But then again, doesn't every country struggle with that...
RetroBytes - The History of -- * instant upvote! *
I love Sun Microsystem solutions, it was my 1st real server and workstation equipment and Unix env. I was introduced at my university, thanx to friendly administrator at my Uni. We had Sun UltraSparc server and Sparc workstations and Ray terminals. That was great years, end of '90 and begining of '00.
Fantastic. I would love to see more RISC and some performance comparisons, and what the industry's opinions were. All of them had their own bespoke workstations and bespoke flavors of Unix; I can't imagine anyone having more than one type of RISC workstation.
I have heard the Alpha was the fastest, but by how much? And after DEC stopped, when did SPARC and MIPS catch up?
Looking forward to the Sun video! It would also be neat to see one comparing the different RISC architectures. How similar are they to one another? What are the strong points of each RISC architecture and why do so many different ones persist in a market that has otherwise seen so many architectures and platforms die out?
I was the proud owner of a decommissioned SS690MP that was dual-proc MBus. It was a headless device, so since the deskside chassis had 3 MBus slots, I took the mainboard out of a 3/60, which only took power from the MBus, and slotted it into the 3rd slot as a monitor system. It was a power-hungry beast and VERY HEAVY to move. About half the size of a dorm fridge and as heavy as at least 3 of them. What I wouldn't give to have the (16) 16MB parity 30-pin SIMMs nowadays.
@17:07 Isn't that Ross Geller (David Schwimmer) from friends? If it isn't, then Dr Roger D Ross must be his twin!
I love to watch Sun stories... like Solaris, Sun SPARC, Java, etc. They are robust products.
I worked at a Motorola fab in the late ‘90s, when the 68000 gave way to the PowerPC. The 68000 series was a popular CPU in its day. The 68040 running at 33 MHz and above, and the 68060 were really great CPUs. But RISC was taking over. The variety of RISC CPUs being made in the ‘90s was great to watch happen. Especially when the Wintel platform dominated sales. But neither Windows nor Intel were the cutting edge. All the RISC CPUs were fascinating to me.
Really enjoyed this 42 minutes. Quality (and nostalgic) content
You deserve way more people watching your channel. I like your content since I'm a history nerd and a tech nerd; most of the channels I watch are video game, history and tech channels. You're a great channel.
How you access data with instructions is actually far more impactful to the definition of RISC vs CISC than the number of instructions. There are plenty of RISC ISAs out there with very large instruction counts, and also some very simple CISCs that have a small number of instructions, but it's whether or not the CPU is strictly load/store that really determines the classification.
I worked with Sun SPARC stuff. The fact I'd always tell people who asked what I did but had no idea who Sun Microsystems was: Sun machines were powering eBay and were used to make the Toy Story film. In 2003 or whenever, Sun relocated all their manufacturing from the UK to Thailand, so that was the end of that.
Solaris was a bag o' joy... ICK! SGI IRIX was a joy: the dynamic kernel, the extendable FS across a RAID or JBOD, adding a FS or disk or tty during runtime... I loved the IRIX flavor, but I also love rocky road ice cream, so maybe it's a preference.
15:55 Interesting point about the multiply and divide. One of the key things that RISC designers were doing was analyzing instruction traces on real-world programs, to see how frequently particular instructions were actually being used. And it seemed they were finding that integer multiplies and divides were not really that common. I think the original SPARC had a “multiply-step” or “divide-step” instruction, something like that--you had to use several of them to perform a complete operation.
I think later they realized that, just maybe, instruction traces were not telling them the whole story.
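The flavour of the multiply-step approach can be sketched as a shift-and-add loop, where each pass of the loop body stands in for roughly one step instruction (purely an illustration: the real SPARC MULScc worked through the Y register and condition codes, which this toy version ignores):

```python
def mulstep_multiply(a, b, steps=32):
    """Build a full multiply out of simple steps: each iteration
    adds the multiplicand if the low bit of b is set, then shifts.
    No dedicated multiplier circuit needed, just an adder and shifters."""
    acc = 0
    for _ in range(steps):   # software issues one step per bit
        if b & 1:
            acc += a
        a <<= 1
        b >>= 1
    return acc

print(mulstep_multiply(6, 7))  # 42
```

This is why a full 32-bit multiply cost dozens of instructions on early SPARC: the loop above is what the compiler (or a library routine) had to emit.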
Previously, the legendary Seymour Cray had floating-point operations in his Cray-1 and successor machines, but he left out the divide instruction, because it would have taken too many steps and slowed down the execution pipeline. Instead, you had to multiply by the reciprocal of the divisor. There was a special instruction for computing the reciprocal, but it used some kind of successive-approximation technique, so you had to execute it twice to get full accuracy.
One could argue this was a form of RISC--breaking down a complex operation into separate simpler instructions, to keep the hardware simple--and fast.
I can see the logic in that; it's like how you don't really have separate subtract circuitry, you just compute the two's-complement negative number to shove into the normal adder circuit. Why divide when you can multiply by one over the divisor? Same kind of angle. (Though, as you say, deriving the reciprocal was more involved than deriving the two's complement.)
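Both tricks are easy to show in miniature: subtraction done with nothing but the adder plus a two's-complement negate, and division done as a multiply by an approximate reciprocal sharpened with Newton-Raphson refinement (the starting guess and number of passes here are illustrative, not Cray's actual scheme):

```python
MASK32 = 0xFFFFFFFF

def add32(a, b):
    """The only arithmetic circuit we assume: a 32-bit adder."""
    return (a + b) & MASK32

def sub32(a, b):
    """Subtract by reusing the adder: a - b = a + (~b + 1)."""
    return add32(a, add32(~b & MASK32, 1))

def divide(a, b, r):
    """Divide a by b via an approximate reciprocal r of the divisor,
    refined with Newton-Raphson: r' = r * (2 - b*r)."""
    r = r * (2.0 - b * r)   # first refinement pass
    r = r * (2.0 - b * r)   # second pass tightens the result
    return a * r

print(sub32(10, 3))  # 7
```

Each Newton-Raphson pass roughly doubles the number of correct digits, which is why running the reciprocal step twice was enough for full accuracy.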
Excellent video. I commented inline before you brought up Amdahl and Ross. Oddly, back in the '80s, Sun was next door or very close to Ampex on El Camino Real... Ampex, the audio/video tape maker invested in by Bing Crosby... one technology going out beside a new one.
Thanks so much @RetroBytes! I've been looking forward to this video for weeks. P.S. Are you going to do another celebration for 65,535 subs?
Yes I am!
@4:30: it is at this point that I realized that the CEO of a company I worked at in the late 90s was the second author of the RISC paper (and that he worked at bell labs). Wow. Neat.
Always look forward to these videos. Can't wait for the next one. I know it takes a considerable amount of time to research, write, shoot and edit though.
1:28 - Huh?! I've only got the beard since about an hour ago - how could you _possibly_ know that when making the video?! You must be a _true genius!_
😆
I was a project manager for building fault tolerant versions of the Sun Sparc systems, enjoyed my time at Sun, albeit brief. There were some good people there that lost their livelihoods when Oracle fired the entire Campus.
Why is it whenever I hear a story involving Oracle, they're always making things worse for everyone around them?
I believe thats the primary business, the database thing is just a side line to help fund making things worse.
They succeeded in sabotaging every single open-source project they inherited from Sun.
@@RetroBytesUK I mean, if you just look at how they operate and what they create, this seems to be completely accurate.
Is banal evil still banal if you're in the business of doing evil? Or is the statement of it being banal simply redundant?
I have zero love for Oracle :(.
Oracle never had customers, just hostages. 🙂
Still use SPARC today! The M8 CPU is a freakin beast. Great chip. Intel and AMD are finally approaching what M8 was doing 5+ years ago. Just a shame what Oracle has done to their entire hardware biz. SPARC was on such a great path …. M8 is a pretty incredible swan song.
@30:04: In order to make 32-bit instructions work correctly in 64-bit registers, the two's-complement negative values have to be sign-extended. This means that if bit 31 is 1, then all bits from 32-63 also have to be 1. If bit 31 is 0, then the 0 is extended through bits 32 to 63.
BTW, RISC-V also took this approach with the differences between its 32-bit and 64-bit ISAs. There are instructions to load and store 64 bits.
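The sign-extension rule described above can be sketched directly (the function name is just illustrative, this is not any particular ISA's pseudocode):

```python
def sext32to64(x):
    """Sign-extend a 32-bit two's-complement value to 64 bits:
    copy bit 31 into bits 32-63."""
    x &= 0xFFFFFFFF
    if x & 0x80000000:            # bit 31 set: fill the top half with 1s
        x |= 0xFFFFFFFF00000000
    return x

print(hex(sext32to64(0xFFFFFFFE)))  # 0xfffffffffffffffe (-2 stays -2)
print(hex(sext32to64(0x00000005)))  # 0x5
```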
12:14 Oh no, it's upside down.
Great video, thank you. I was at Teesside university in 1991 they had a lot of sparc systems.
They did, it was quite a big Sun shop, although I was there in 1996. The labs were mostly just X servers. I saw them eventually get replaced with SGIs and Linux boxes, with the SGIs also eventually getting replaced by Linux boxes and a few Macs.
@@digitalarchaeologist5102 I was there in 91-94 when the Sun stuff was in its heyday. I still remember the 40 Sun SLC workstations with optical mice needing optical mouse mats (stuck to the table).
Although I think the SLCs were 68020-based, so not SPARC.
@@marksterling8286 Ah yes, the reflective-looking mouse mats. I also remember the yellow thicknet that ran everywhere and how often people seemed to kick out the transceivers.
@@marksterling8286 The SLC had a sun4c processor.
Don't forget LEON, a range of SPARC architecture processors made by ESA for use in spacecraft. Sadly not open source though, like pretty much all ESA stuff.
Not just spacecraft - LEON design was open sourced and cloned a lot, I have an old Netgear NAS at home with a LEON CPU.
I think LEON is an implementation of open sparc.
@@RetroBytesUK and it's LGPL'd, IIRC.
Started a new job in 2000. Got shown to my desk. One end had a SparcStation 5 with a 20" Sun monitor, the other a higher end Dell PC with a 20" Sony monitor. Also worked with DEC Alpha, SGI, HP, IBM, etc, and many others. We had clients for so many OSes it was crazy (and servers for quite a few). In multiple word widths for some. 32 and 64 for RISC, O32, N32 and N64 for SGI MIPS, and more. We had clients for systems and OSes I'd never even heard of, like Sequent's Dynix and Dynix/PTX. Mostly Unix like systems, but also Windows and few completely different ones.
I have a soft spot for SPARC. Really, one of the best processors I have ever worked with. And it is open source. Sadly, it never gained traction outside of a few niche applications.
Have a SparcStation 10 in the closet I used as my network's router and firewall for years before gigabit Ethernet arrived.
This is no longer sparc, nor sun, it's the introduction to all the (less) common CPU archs!
Everything from M68K to Power to MIPS to Sparc, all in the first 10min!
So more CPU archs please! M68K! Power! MIPS!
PowerPC and MIPS Definitely!
Back in my CS classes, we learned about this wonderfully elegant concept of the “stack machine”, which was so easy to generate code for, that you could use it to implement a working compiler as part of an undergraduate programming-language course. Unfortunately, in the real world, its performance was pretty hopeless.
And RISC was the complete opposite, having lots of registers--along with the complexity of managing them--to achieve much greater performance. Trouble is, writing assembly language for such an architecture was no fun at all.
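For what it's worth, the "easy to generate code for" part is real: compiling an expression for a stack machine is just emitting it in postfix order, and the whole evaluator fits in a dozen lines (a toy sketch, not any particular real machine):

```python
def eval_stack_code(tokens):
    """Tiny stack machine: operands push themselves, operators pop
    two values and push the result. No register allocation anywhere."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # pop order matters for '-'
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# (3 + 4) * 2 compiles straight to postfix: "3 4 + 2 *"
print(eval_stack_code("3 4 + 2 *".split()))  # 14
```

The catch, as the comment says, is performance: every intermediate value goes through memory-like stack traffic instead of sitting in one of a RISC machine's many registers.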
Correction: as powerful as a Vauxhall Nova that has just done a ram-raid on the Woolies pick-a-mix.
😆
Sounds like something Clarkson would do.
@@RachaelSA ram-raid a Woolies for the pick-a-mix in a nova?......Yeah, i could see that xD
Very interesting, RISC was taking advantage of a time where reducing the op code ROM let you do significantly more in other areas.
On modern chips complex decode takes up very little area so there's less to choose between approaches.
Alpha used traps to emulate compound instructions, there were a fair number for UNIX but almost twice as many for VMS.
When the Pentium came out, its relatively low cost for the performance took the wind out of the UNIX workstation market; the legal battle between the UNIX groups and the tying up of BSDI gave WinNT an opportunity, despite poor drivers and awful reliability back then.
SPARCstations was where I got my start with UNIX in 1993 and why I've been a Linux dev since 1994. Linux never made me a rich man but I sleep well at night, having been able to avoid Microsoft products for 30 years. I work with Microsoft tools around the edges of my job, and they are all terrible with the sole exception of Excel which is acceptable.
Scientific Atlanta cable boxes used to have SPARC CPUs in them. Also, Intel's (successful) high-end embedded CPU, the i960, had register windows. As did Altera's not-as-successful NIOS 1 soft-core CPU.
As usual, awesome video, great research, fantastic presentation.
Very interesting. I love processor architecture science, especially modern alternatives to x86. Thank you.
Well actually.... Great video as always mate :-)
Minor correction: RISC does not always mean fewer instructions in a given instruction set compared to CISC. There have been RISC chips that had more instructions than comparable CISC chips. As one of my profs explained, it's not _Reduced Instruction Set_ Computing, but rather _Reduced Instruction_ Set Computing. The point is that instructions are typically smaller in terms of length, and tend to be uniform (CISC instructions are more likely to vary in length). This leads to (as mentioned in the video) simplified CPU architecture, e.g. a greatly simplified decoding logic (no need to handle multiple addressing modes!) and easier pipelining. Other ways that a RISC architecture simplifies things can include a lack of hardware checks for instruction completion; this really goes more to simplifying pipelining, where a subsequent instruction in a pipeline may depend upon the results of an instruction further along in the pipeline. Pipelined CISC architectures typically include circuitry that checks for dependencies, and will stall the pipeline until a dependency is resolved, while RISC simply omits it.
The general driving philosophy that divides the two worlds is a view of what the compiler should do. In CISC models, the processor does comparatively a lot, allowing for a simpler compiler and more straightforward assembly. Also, because the CISC model typically has multiple addressing modes and instructions that just "do more", it makes for relatively compact code. RISC, in contrast, relies upon a compiler to resolve hardware quirks and present code in an order that will work correctly, e.g. inserting NOPs where a pipeline delay is needed. Compilers for RISC architectures typically need to be designed with a deeper knowledge of how the processor operates. So, RISC compilers are trickier to program for, as a rule, because the hardware does less. The benefit, as the video mentioned, is a simplified hardware that's easier to design, can run faster, consumes less power, and potentially is cheaper to make and debug.
So why choose one vs. the other? Well, back in the early days of computing, memory and storage were very expensive, while the CPU was comparatively cheap. It was more cost efficient to design a processor that could do more, which would yield significantly more compact code, and a simplified compiler (which also didn't take up much storage space). As the video noted, one CISC instruction could do the task of four RISC instructions, and when memory and storage space was at a premium, that savings meant a lot. Sure, the RISC approach may execute faster, but that advantage would have been easily outweighed by the astronomical cost of the additional storage needed to accommodate the larger code to store any sort of useful program - not viable in the personal computing market. CISC was the logical choice. However, tables have turned: in modern systems, RAM and storage space are plentiful and cheap, while speed and power consumption are much greater concerns. So, RISC now has the advantage.
An 8086 only needs 15% of its die for decoding (see photos). The ALU and register file are larger.
MIPS has address register + offset as an addressing mode. And relative jumps. Like almost all CPUs. So you mean those PUSH and POP instructions? STOSW? ARM got a stack pointer. If you allow interrupt priority, you need an SP. I really don't see why we can't have an SP, like the PC, with its own adder or at least a connection to the ALU. SPARC overdid it a bit with the sliding window. Hardware could not support more than 32 fast registers. Maybe if they had pushed to market while 16-bit instructions were still okay?
While we don’t really want flags, lots of high level programming languages need to be able to throw exceptions, like carry, overflow, division by zero. Need a stack for this!
Ah memory indirect. Did the x86 have this? I never used it. I thought that was a 6502 thing with its zero page.
I don't think Oracle ever gave a hoot about Sun's hardware; they were in it for MySQL, which Sun had bought.
Oracle always wanted a dedicated database server design.
YAY! I have been waiting so long for another RETROBYTES video!
I was a sysadmin at a large regional ISP in the late 90s and the core systems were Tatung SS20 clones (most 2x 85MHz IIRC) running carefully tuned SunOS 4.1.4. Main storage was provided by a NetApp filer. Network backbone was 100Mbit FDDI. When the first ultrasparc showed up, it took a while before it was able to handle any duties at the same performance level as its predecessors. We were NOT originally fans of Solaris although it did improve with time.
A couple of other honourable mentions spring to mind. Cray used SPARC in a number of its supercomputer products, and I remember... I think it was Intergraph that was planning on releasing SPARC-based workstations running Windows NT, during the craze when NT was trying to run on anything RISC + x86. I don't recall if they ever did make SPARC workstations running some UNIX flavour in the end, but apparently NT was ported to SPARC yet never escaped into the wild.
Apparently they had a completely working port for SPARC; it's just, as you said, it never made it to customers.
@@RetroBytesUK Interesting, that it was fully functional. Maybe one day it might turn up somewhere. I love this kind of thing for the sheer novelty of it, which alludes to how different things may have turned out.
@@digitalarchaeologist5102 I'd love to know which sparc based machine they ran it on for all their testing.
Love the "segway" gag.