I remember working on the N64 back in '99. I was a junior developer working on South Park Rally - well, originally it was a port of Deth Karz (from Melbourne House) to the N64, which got canned, and then the game ended up quite different. The 4K texture cache was painful, but you just had to be smart about how you rendered. R4K assembly was pretty fun though.
@@crashoveride879 Tech-wise I thought it was great; from a gameplay aspect, not as good as it could have been. I really wish we had gotten to finish Dethkarz. Would have loved to have seen it on the N64.
@referral madness These days, compilers are pretty good at generating smart code, however from a learning perspective, I think it's still worth learning. If you plan on writing code for older machines then it is a great skill to have.
Nitpick: a five-stage pipeline doesn't mean that the MIPS chip can execute five instructions at the same time, exactly. It means that it can have up to five instructions in progress at any one time, with each at a different stage of the pipeline. Only one instruction starts and/or finishes per clock cycle. That's a pretty important distinction: a chip that can start, run, and complete five totally concurrent instructions at once would have been much more impressive and powerful (and expensive) than just a single-threaded chip with a five-stage pipeline.
That also created a really annoying pipeline problem where you couldn't use a register value until 1 clock cycle after a load instruction; if you tried, you'd get the old value for that one cycle. This was difficult to debug.
Winston/Brian: Happy you guys wrote that up. When I heard that... I was like, 5 at a time? Hell no. That would have been a lot of money in 1996. Thanks guys!
Great comment. For some perspective, the original Pentium had a 5-stage pipeline, so this was not a particularly novel thing for the time. Later Intel CPUs in the '90s and early 2000s ranged from 6 stages all the way up to, I think, 20 on the Pentium 4.
@@cdoublejj There are many reasons the emulations are difficult to get accurate. Timing is the main one: some games expect timing to be pretty much exact. Some titles generate their display lists as the GPU is executing them, doing this dynamically under the assumption that they have X clock cycles in which to do their calculation. Emulators generally don't take this into account, and these issues are VERY HARD to track down. Another example is some oddities of the MIPS processor. For example (in rough MIPS-style syntax):
li    $t0, 0
sw    $t0, 0($a1)
nop
li    $t0, 0x80
lw    $t0, 0($a1)
addiu $t2, $t0, 1
What value will $t2 have? You would expect to see 1. But no, you'd get 0x81 (I haven't tried this - I'm trying to recall the actual 'bug'). If there was a nop after the lw, everything would be OK. If you single-stepped this in a debugger you would get what you expect, but when it runs you wouldn't (EXCEPT if an interrupt happened between the lw and the add).
That generation of early 3D consoles was so interesting for the different approaches to the same problems. The N64 took the approach of cutting down professional grade hardware to consumer level prices, leading to some difficult limitations particularly in terms of memory. By comparison the Saturn built much more on the previous generation of bespoke sprite/tile warping technology while the PlayStation implemented a lot of basics of modern 3D hardware from the ground up, but left out all the more taxing features.
To me it's not exactly the hardware that was important about that generation. Out of all those game systems, the N64 was the only one that had a clear vision of how interacting in a 3D space should be achieved. The N64 established the base for how all game consoles worked after that.
Yeah, it is. I get what he's saying and he's right to a large degree. They got 3D games, environments and movement, even cameras right while most others were really struggling. I mean, compare Tomb Raider to Mario 64. It's a fair comparison, yet TR was so clunky in comparison. That said, it's unfair to give them ALL of the credit for the way modern 3D games were made thereafter. I think the Xbox 360, and particularly Gears of War, rewrote the rules for large third-person characters that usually handled slowly and terribly, now moving like a cat, and built a solid base for gameplay with the first really good cover-based shooter system. So everyone gets to move things on a bit at a time, but I get your point.
Mario 64 did establish how a good 3D camera should behave. Spyro, which took that example very seriously, became really good and fun, while other PlayStation games (mentioned in one of the AVGN episodes) that didn't ended up being criticized for how the player has to fight the game's camera.
@@funky7chunky Everyone adds little bits. Max Payne gave us the slow-down-to-shoot mechanic that's now a crutch in console shooters, like Red Dead. But Nintendo basically gave the industry a place to start from, which is huge. It's like Apple: like it or not, they invented the modern smartphone. (I was a Nokia Symbian user too, and Windows Mobile, and BB - put your goddamn pitchforks down, fanboys.)
Oh my god, if you built an N64 cartridge with Bluetooth and Wi-Fi receivers in it, you could basically have a cartridge OS which is totally untraceable.
To be fair, I think the title is a bit misleading. It's not that the N64 was hard to develop for PER SE. However, it was badly designed in terms of specs, and working around those bottlenecks is what made development hard. You might think that's ultimately the same thing, but it is not: remember that other console MVG reported on that was hard to develop for? It had the inverse problem: plenty of power and no bottlenecks, BUT the code to use that power was complex and ugly. So, it's not really the same thing - quite the opposite.
@@lyxar777 I wouldn't say "badly" designed, so much as quirkily designed. As illustrated here, the design allowed the system to do some things that other competing consoles couldn't; it just wasn't necessarily super intuitive to work with.
The N64's specs are not its issue per se; it is the most powerful console of its generation by a long shot. The primary issue is limited cart space: developers always had to wrestle with this and the constant need for compression and decompression. Such a waste. A CD drive would have solved all those problems. Also, Factor 5 went into detail on how limiting Nintendo's generic SGI code was, and the only way to actually push the system was to write custom microcode, which alas Nintendo did not authorise except for a few games.
If you developed Nintendo 64 games back in the day, let me know about your experiences (feel free to email me). A few points:
- I didn't talk about DMA in this video, as the main focus was on the graphics side. However, if you want to understand how DMA worked on the N64, check out Rodrigo Copetti's blog (link in description). tl;dr - the implementation is NOT good
- At 1:54 I meant to say 16 and 8 kilobytes instead of bits
- Stay safe!
Unrelated: I love how you have everything so neat in the background, it makes your videos stand out.. I can tell you have the same care when developing these videos..
N64 was the first console I developed for. I wrote a lot of the game engine (rendering and character control) for Earthworm Jim 3D (I know it was not a particularly good game - I'm so very, VERY sorry). I have fond memories of using the hardware - and don't remember it being particularly bothersome. It was a struggle early on with the documentation, but once I got a basic textured triangle on the screen, in my memory it went smoothly (from a technical point of view). But it was a long time ago, and I wouldn't say we really pushed it.

I hadn't programmed any 16-bit consoles, so perhaps it helped not having that baggage. And the display lists felt familiar from having worked with DirectX (DX3? DX5??), albeit the N64 had a more OpenGL-like syntax. I definitely remember fill rate and texture sizes being an issue. Pretty sure we kept everything 32x32 or lower.

We started the game quite early on in the lifecycle of the console, but the project lurched in a few different directions, so by the time it was released the engine was older than other titles that were hitting the market at the same time. The chap who worked on the camera and collision had more issues - I remember the collision, in particular, was difficult to optimise - and the camera was hamstrung by the fact that development of the levels and the camera happened in parallel - the camera should have been nailed down first - but that was a project management thing, and nothing to do with the hardware.

The big positive I remember was the speed of the CPU and the fact it had a built-in FPU. We did a lot of prototyping on PC and converting that code to the N64 was easy. I do remember the Partner64 debugging system being unreliable - both the hardware and software were temperamental. That slowed things down, especially during debugging phases.

All in all, I really enjoyed developing for the N64, and really like the fact it has a great catalogue of games, even if the one I worked on isn't :)
In their defence, nobody knew how to design a viable real-time polygonal/3D rendering system at the time, especially not in the consumer space. It was a completely new paradigm. Still mind-boggling that Silicon Graphics could design a chip with such a laughably small texture cache, even for that time, when the system was so dependent on it. It was so far ahead in everything else.
It's not really laughable. 4KB per texture is fine for that era. PS1 had the same. The idea for N64 was tiling and clamping multiple textures across surfaces, making use of texture detailing etc. Unfortunately due to tiny carts and time, in most cases devs simply stretched a single small texture across the surface.
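Rough numbers on that 4KB TMEM budget (just arithmetic, ignoring palette and mipmap overhead): a 32x32 texture at 16 bits per texel is 32 x 32 x 2 = 2,048 bytes, and a 64x64 texture at 4 bits per texel is 64 x 64 / 2 = 2,048 bytes, so both fit with room to spare. At 16 bits per texel the hard ceiling is 4,096 / 2 = 2,048 texels, roughly a 45x45 tile. In other words, the hardware really was built around tiling lots of small textures rather than stretching one big one.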
They didn't have CD rom for good rez textures anyway. Less textures, cheaper cartridge to produce. Saturn and Sega32X were promoting games like Virtua Fighter at the time which had like no textures at all.
@@zerazara The N64 was designed with a superfloppy in mind. The N64DD was supposed to release much earlier and effectively be _the_ N64, and AFAIK wasn't even meant to be a peripheral, but it was postponed because of quality issues and eventually effectively cancelled. But you're right, textures weren't much of a consideration at the time; flat surfaces were believed to be what they had to settle for in most cases, that was the paradigm. I had forgotten that.
It’s because they basically just copy and pasted their workstation rendering architecture, designed for frame-by-frame rendering of high resolution movie scenes, and sold it to Nintendo as a game architecture. Why Nintendo even bought it though…
That's not true. Nobody knew how to design a capable 3D graphics rendering system with a unified memory bus. The geometry transformation, texture mapping, and framebuffer reads/writes are all memory intensive - and they interfere with one another when they are all on the same bus. Nintendo could have added a small VRAM chip like the PlayStation did, but that would drive up the cost, and their custom hardware was already expensive.
Tell me if I'm wrong here, but wouldn't an external cache have the same problems as the RAM? The cache was so fast because it was built into the chip that needed it. If you added an external cache, it would lack the needed performance, wouldn't it?
@@g.o.a.t.y.g.u.y Yes and no, I think. The bottleneck tends to be the bus (especially if it's shared), then the RAM access method, then the RAM speed. Wire length is way down the list compared to those, although it does affect how quickly the signal becomes stable. I'm no expert, but that's how I understand it.
Ummm, they did have an external cache: the game cartridge. That's one of the things talked about in the video. Also, a special separate port and bus for an external cache would just add to the system cost, so at that point you might as well just add more cache to the chip instead.
I think it would've been awesome if Nintendo, instead of putting every single game on a cartridge, had put the games on a CD and put a cache the size of an N64 cartridge inside the console. This would've driven up the price of the console by a bit, but it would've given developers more cache space to work with and allowed them to also gain the benefits of a CD, with its cheaper cost and higher storage capacity.
A CD would also have caused loading times and would be more vulnerable in kids' hands. They actually planned the 64DD as an extension, but it was never successful. With every new console generation there are people complaining about limitations - despite having way more power at hand than with the previous system.
I liked looking at the official dev docs. They treated the developers as if this was the first time they had seen 3D - which was true for some at the time. Nintendo explained all the super basic stuff, like what a polygon is. I find that funny, because today how could you work for a AAA studio and not know what a polygon is? But this was the mid-'90s and Nintendo's first fully 3D system, so it makes sense.
Employing people straight out of school without the first clue about anything and training them from the bottom up is a standard Asian hiring practice. Tetsuya Iida, the Sony programmer responsible for the PS2 backward compatibility layer: "I had no real electrical engineering skills to speak of whatsoever and I didn't even know how to boot up Windows 3.1, let alone how to write any programs. Even now, I can't help but wonder what people at Sony saw in me when they decided to hire me. Luckily, I got the chance to learn how to program computers thanks to a training program that the company ran."
@@rasz Yeah, my guess is also that the job pool of people who knew how to do these things was basically very small, so they had no choice but to hire anyone and train them. I have nothing to back that up though.
Nintendo doesn't make sense here, because back in the early '90s, with the Namco System 21 and the Sega Model 1, you had 3D polygon games running at 60fps. Of course you had a lot of DSPs and other hardware making that happen, but all developers knew what a polygon was by 1995-96!!
I remember the texture filtering in Ocarina of Time really blew my mind back in the day, even though it was low res the textures looked really smooth compared to the PlayStation where you would always see each jagged square pixel in games like FF8.
@@ThreeDaysOfDan It's like enlarging a photo in Microsoft Paint. Sure, a 240p image can _technically_ be resized to 4K, but it ends up looking even worse because of all the guesswork required to do so.
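For anyone curious what the "smoothing" actually is: it's just interpolation between neighbouring texels. Standard bilinear filtering looks like this (the N64 hardware actually uses a cheaper three-sample approximation, but the idea is the same; fx and fy are the fractional position between the four nearest texels T00, T10, T01, T11):
color = (1-fx)*(1-fy)*T00 + fx*(1-fy)*T10 + (1-fx)*fy*T01 + fx*fy*T11
Nearest-neighbour sampling (the raw PS1 look) just picks the single closest texel instead, which is why every source pixel shows up as a hard-edged square.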
In case anyone wondered, $199 in 1996 equates to around $384.77 today. So when I think back to my mum saying no, it's too much money for one, it makes sense to me now. I didn't show her I was gutted; I knew she struggled when we were younger. I kinda feel bad for asking now.
Same, any console I don't have that's $80-100 on OfferUp I'll snatch up. I'm 20 and I've been collecting consoles and games that I never got to play as a kid. I had a single mother, so I never had a game console as a kid.
I swear I saw an ad for the N64 for around $600 that Xmas and just knew I wouldn't be getting it anytime soon. But after that winter, the price went down to the real MSRP of $199. Demand legit drove the price of that thing up way more than what this video says the price was.
@@billybones6463 I'm guessing he found a random retail price from an outlet at the time. Not sure if the US was big on bundles, but most N64s here had a game, joypad or Rumble Pak thrown in for a bit extra. I wonder if he quoted one of those prices. Even at retail it was a big ask for a kid in a family of 4 siblings. I never got my hopes up. But don't get me wrong, I didn't miss out and was grateful for what I did have, mind you.
1:18 : I clean, restore, and RGB mod a lot of these systems, and every time I open one up, I look at, enjoy, and appreciate the layout and design of the motherboard. The few chips are packaged so nicely on the board, connected by unique curved traces. It's such a beautiful layout, direct to the point with almost nothing wasted, especially space! I don't favor the system or its games that much (I've played very few games and since I didn't have one growing up I don't have any nostalgia for it) but I absolutely respect the engineers who created the final board layouts. It's a shame whoever picked out the plastics for the case chose a mix that got so brittle with age. Advice for anyone taking a console apart: when reassembling a screw that goes into plastic, rotate it counter-clockwise until you feel it "click" and sink down into the threads. Then tighten. This plastic is very brittle and it's super important to NOT cross-thread through it lest you break off posts. The plastic's only saving grace is it responds VERY well to standard cyanoacrylate super glue. As long as the plastic doesn't shatter into pieces but breaks off cleanly, chances are a superglued piece will bond stronger than the now 20+ years old plastic.
Yep, it's indeed rare to see a PCB with such a smooth design; the electronic engineers wanted to make it like that, and I approve. It's a nice change from the usual angular motherboard traces.
It's a cool layout, but why they never supported RGB out of the box is beyond me. A real step back from the SNES. Thankfully they went back to it on the GameCube. It's especially weird for Europe, because here the SCART connector was very widespread and a majority of TVs supported it. I remember playing the PlayStation over RGB 20 years ago and I didn't even realise what that meant at the time.
@@thegearknob7161 Yeah, the SNES was fully SCART compliant. But you can mod the N64 to have the RGB out, since the signal out of the ICs is RGB + sync to begin with. Wire it directly to an SVGA connector and add a jack or RCA connector for audio output.
@@Diamond_Tiara That only works on certain NTSC versions. On a PAL console at least it requires quite an expensive FPGA based board be soldered on to the DAC. There's no excuse for that. RGB Scart was as standard in PAL countries in the 1990s as HDMI is today. Nintendo knew this. They did a version for France with native RGB support but most of them are composite only, for reasons that make absolutely no sense to me. I can't see any cost reduction in not sending out a signal that's already been generated internally through a connector that already has the pins for it. Officially the PAL N64 doesn't even support Svideo either, but there is a way to modify a NTSC svideo cable to get it to work. So at least there's that.
So, the Stop N' Swap feature from Banjo-Kazooie was actually the result of Rare trying to take advantage of the Ultra 64's high memory latency, since it allowed you to power off the console and swap game cartridges while still keeping data in RAM for a short time. Which means that Nintendo accidentally removed a feature from an N64 game by upgrading the N64! EDIT: It has been pointed out to me that memory retention after power off and latency are not the same thing
Stop N' Swop was originally intended to be done with the console turned on, much like how multi disc Playstation games can change discs during gameplay. Rare was quickly dissuaded from implementing it, as Nintendo was not sure how safe it would be to hot swap memory that's being used.
@@somebonehead That may have worked on an empty memory card, but probably there was too much data to be passing along for even compression to work well in a memory card transfer. Mem card size is also something that neutered these consoles. It was only slightly better/more reliable than using the password mechanic.
They shared exactly one byte of data; you can read up on that in the recent Rare Gamer interview with Paul Machacek. Size wasn't an issue. I wondered why they didn't use the memory card as well. I assume they just really liked the idea of data retention in RAM and went with it...
Hm, this really makes me wonder what N64 games would have been like if at least two of these things were addressed, even if the processors were at lower speeds.
@randomguy8196 I understand the risks for publishers, but as an end user cartridges were superior to CDs in every way. Not only could they load new information in quickly but, like you said, FMV was nigh impossible. In-game cutscenes on N64 games could flow seamlessly into and out of the action, and plenty of spoken dialogue and other sound effects could be quickly swapped in and out during gameplay with little delay. I have never once heard a PlayStation game pull off dynamic, in-game dialogue as effectively as Star Fox 64, Rogue Squadron, Battle for Naboo, Conker's Bad Fur Day, or Perfect Dark. CDs may have been great for publishers, but they always sucked for players.

Level design always suffered heavily on CD games as well, in an attempt to cut down on frequent map switching. Banjo-Kazooie and Banjo-Tooie would have had severely simplified level design had they been designed for a CD system. Cheap, massive amounts of storage space is one of the big reasons games have become so damn boring and repetitive. Why bother spending months and months making real, genuine game content when you can simply add in copious amounts of stuff? That is part of the reason games revolve around a billion pieces of loot to collect, cosmetics to purchase, etc. It is cheap and easy and gives the illusion of content and countless hours of gameplay. There are NES games that take an hour or so to complete that have more genuine content and game variety than most of the games that come out nowadays. They had to make as much variety and unique gameplay as they could because they didn't have thousands of gigabytes of storage space to fill up with cosmetics and slight variants on the same damn weapons and equipment.
@randomguy8196 Not as much as you think. Super Mario 64 was $50. Sony dropped the price of PlayStation games from $50 to $40 around the time of the N64 launch, as I recall. Even when they started getting into the larger cartridge sizes, Nintendo first-party games were usually still around $50. The third-party developers' games would cost $60 to $70. This had less to do with the medium the games were stored on and more to do with Nintendo's extreme greed.

Here is the basic rundown of costs, according to a Reddit poster who claimed to have once had a contract for N64 development:
- Minimum production run: 15,000 units
- Production costs Nintendo profits from, because you have to go through them
- Nintendo takes an additional royalty fee of $7 per cart for logos, seal of approval, etc.
- Packaging runs about $150,000 for 15,000 carts - manuals, boxes, shrink wrapping. Delivery fees not included.
- On a $55 cart a profit of $6 to $7 was made. Nintendo got the rest.
- Profit margins for the developer on PlayStation could be as high as $27 per disc.
This is why 3rd party games cost so much more. Even the Blu-Rays of the Wii U are a special proprietary format that only Nintendo makes.

Crash Bandicoot has a very linear structure to its level design. The designer always knows what is coming next. That is a luxury that an open world game doesn't have. Ubisoft, EA, 2K, Activision, Rockstar, Capcom, etc. - they all do it. If Mega Man were a new franchise released today you would have about 5 levels that you would keep playing through over and over again. You wouldn't gain a new weapon after defeating the boss either, but instead a random part that is needed along with others to make the weapon. How is what I described not exactly like Monster Hunter or any number of endless grind-a-thons? For the record, I do enjoy Monster Hunter, but it is a short game as far as content goes, artificially lengthened to be an endless time sink.
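Running that poster's own numbers (just the arithmetic, assuming the figures quoted are right): $150,000 of packaging across 15,000 carts is $10 per cart, plus the $7 royalty is $17 per cart before the chips themselves are even paid for. So on a $55 cart the developer clears maybe $6-7, versus up to $27 per disc on PlayStation - roughly four times the margin per unit sold.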
Perfect Dark is really impressive, not only in how it looks but in how it plays too. You can target a hand or a leg and disarm them that way; yes, they all pretty much use the same animations, but considering that in other shooters hitting someone's arm or leg just hurts them less, Perfect Dark for sure tried to do something new.
@MultiTarded +1 Would be interested in a Wii U video as well. I remember hearing all this talk about eDRAM or something being the secret sauce of the Wii U. With the few games actually utilizing it to the max really showing it (Fast Racing NEO with the 4k textures and 8k shadows, Xenoblade X with the gigantic world + entire city without any loading times, Bayonetta 2, Smash Bros. with 8 players at once still at 1080p/60fps etc.). Claiming it's about as powerful as a Vita is wildly ignorant I think, but a video about the Wii U's strange architecture and what was or wasn't actually possible with it would be interesting for sure. Probably a bit niche though. Poorly optimized multiplat ports gave the masses the impression that it was barely more powerful than PS360, but the few games that really tried certainly proved the contrary.
I think that's because of the huge jump forward between going from 2d to 3d and then the graphical shift as well. I suppose it would be like if we went from the n64 straight to like ps3 with nothing in between. But either way I am with you on that. Going from original Nintendo and Super Nintendo to N64 growing up was mindblowing.
David Brunell I agree. Nintendo games in the late 90s and early 00s had really engaging stories, sound effects, music, and gameplay that had never been seen before. There are some others on different platforms that were great too like Spyro on PlayStation. All later platforms have been recycling the same types of games with marginal improvements or forced gimmicks (Wii/3DS). I don’t think we’ll witness a leap like N64 ever again.
@@haseebs.7286 There are still improvements to be had from 1.) cloud rendering/computing + 6G internet, 2.) 5D optical memory, 3.) ReRAM & MRAM, 4.) AI tensor chips + cloud AI models, 5.) VR + foveated rendering, 6.) haptic feedback controllers, 7.) super parallel architectures
I remember reading a quote by the CEO of Sony SCEA in the early 2000s where he said the PSX could push more polygons than the N64, in theory, but because of the lack of z-buffer and the various limitations of the RAM etc, it meant that once you factor in textures and AI, if you wanted your game to be playable or actually look good you needed to compromise on that.
I never knew the N64 packed so much compute power... Like, I knew it was powerful, but damn... Kinda curious of what a "modern" developer could do on it, if they had a good SDK...
The N64 was even way more powerful than the SGI Onyx computer that was used to make those games, which used to cost $80k in that era's money. If you look up what the SGI Onyx was, you'd find it interesting.
Very interesting. I do not remember having read or heard at the time this kind of criticism concerning the N64. About the Saturn, yes, it was even a constant complaint, but not the 64. Thank you for this information.
From what I saw in the N64 development manuals and such, it was easy... as long as you used the standard Nintendo microcodes. After the PS1, both Nintendo and Sega rushed to make an "OpenGL-like" API so developers didn't need to deal with the hell behind the curtains. You lost some performance and flexibility in the process, but not every game company has a genius able to deal with the RCP/SCU bullshit.
3D rendering is really hard, so it's the logical conclusion that we've gone from microcode hacking to universal APIs, right up to the current day where almost everyone except really big studios (and even some of those) uses one of the two big premade game engines (Epic or Unity). Less money spent on expensive programmers, more on creative staff.
@@theshinken Sony got it right out of the gate, by basically copying the best API of the time and designing the hardware itself to be friendly with it. But then Nintendo, and especially Sega, had to rush something to compete. The Sega Saturn tech docs are quite hilarious because they explain EVERYTHING about 3D graphics from the basics of basics, including "how to use LightWave to model a low-poly monitor/car". But this was needed, given most devs were coming from the SNES and Genesis.
@@dan_loup I felt like there were a lot of competing APIs at the time. Everything was so new. Given Nintendo partnered with SGI, I would think their solution was heavily biased toward what SGI was doing at the time. It's amusing to call the GameCube a more traditional 3D system when, at the time of the N64, I don't think anything was deeply ensconced as *the* way to build a 3D machine.
@@Jerhevon In the PC space there was quite a shootout brewing between DirectX, OpenGL, Intel 3DR, Apple QuickTime VR, and proprietary video card APIs like Glide, S3 MeTaL, etc. Consoles, on the other hand, had to deal with much slower CPUs, so actual graphics APIs didn't look like the best idea - until Sony basically implemented OpenGL in hardware on the PS1, and everyone had to run after them.
Great video. People like to say that the lack of a CD drive is what prevented the system from being a major success. But I have always said sticking with cartridge was a better move for Nintendo at the time, and the console's overall hardware was a bigger factor. Good to see my theory get some support.
This makes me incredibly curious how developing for the N64 would work today. I remember reading somewhere that Nintendo had newly hired programmers working with the N64 as late as the Wii era to test how capable they were. I always wondered what kind of neat tricks they'd be able to do on the old but familiar hardware now that we have a better understanding of player's needs and how 3D games can work, and also given how games nowadays are considerably better at optimizing computer power.
The RSP is comparable to an x86 SIMD unit. There is a dude (giovannibajo) with a project to port Dragon's Lair to the N64; he claims to have H.264 running on the RSP at 18 FPS.
I always appreciated seeing 30 vehicles on screen at once with no slow down at all on F Zero X...the audio was great but in mono only, and I wonder if that reduction in audio quality was due to prioritising graphics/frame rate.
@@FinancialHealth-ku1ry Then I must play it on my receiver; it always plays mono sound on the center speaker. Like when I play Command & Conquer: the sound FX are stereo but the music is mono, so I only hear the music coming from the center channel. Dolby Pro Logic II does the same thing, mono sounds always go to the center speaker.
I remember getting disappointed even back then considering how F-Zero X was so simple in terms of graphics. N64 never truly felt like a generational leap for me.
1:41 A 5 stage pipeline doesn't mean it can execute 5 commands at the same time. Based on the typical RISC pipeline, the stages are Instruction Fetch, Inst. Decode, Execute, Memory Access and Write Back. Only Execute really "does something". To run multiple commands at the same time you need multiple Int or Float Execution units within the CPU. The amount of stages in a pipeline has on its own nothing to do with how many commands can be executed simultaneously.
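A rough picture of that overlap on a classic 5-stage design (IF = fetch, ID = decode, EX = execute, MEM = memory access, WB = write back):
cycle:     1    2    3    4    5    6    7
instr A:   IF   ID   EX   MEM  WB
instr B:        IF   ID   EX   MEM  WB
instr C:             IF   ID   EX   MEM  WB
Up to five instructions are in flight at once, but only one enters and (at best) one completes per clock, so peak throughput is still one instruction per cycle.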
It's very rare that any gaming youtube channel makes me want to change the notification settings to "all", but you just hit it out of the park so consistently with your videos. Informative and entertaining, with good editing technique that keeps the viewer engaged. Thanks for what you do MVG.
In this matter, the Saturn and the Dreamcast are VERY different beasts. The Saturn was notoriously hard to develop for and Sega did everything in its power to avoid doing the same mistake for the Dreamcast, so that 3rd parties would be more keen on developing for their system. The Dreamcast is notoriously EASY to develop for, with a very straightforward, down-to-earth architecture, powerful and easy-to-use API and even DirectX compatibility.
@@Liam3072 I am SO upset the Dreamcast didn't last. Imagine the indie scene that would've blossomed... Also, the shoddy ports on GameCube piss me off. SADX is a MASSIVE downgrade, but now everyone thinks that's how it looked on the Dreamcast.
Regarding the painful 4K texture cache, non-3D games have a way around it, in theory: you can simply render on the CPU and blit pixels to the framebuffer. I think it's called framebuffer poking on other platforms. This is used in a limited fashion in N64 games to render certain elements - the red dot in Pokemon Snap is a fairly famous example. But the only commercial game rendered entirely on the CPU was Namco Museum. A number of homebrew projects attempt this technique, including a Flappy Bird clone and some other stuff. One of the more interesting projects is 64doom, a port of Doom to the N64 that renders on the CPU and blits directly to the framebuffer. github.com/jnmartin84/64doom

Overall, the N64 suffered from severe bottlenecks. There's no point having fast access to the cartridge or a lot of RAM if you're constantly stalling trying to get to it. This is the root cause of rampant framerate issues on the N64. I remember reading that Factor 5 had to get very creative to render Rogue Squadron without the framerate tanking. Devs had to find ways to render while minimizing fetches from RAM, because fetching anything from RAM destroyed performance.
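For anyone wondering what "blitting pixels to the framebuffer" boils down to, it's roughly this - just a sketch, not real SDK code; the framebuffer address, the 320-pixel width, and the 5/5/5/1 pixel packing are assumptions for illustration:

    #include <stdint.h>

    #define FB_WIDTH 320                                      /* assumed 320x240, 16-bit mode */
    static volatile uint16_t *fb = (uint16_t *)0xA0100000;    /* illustrative uncached RDRAM address */

    /* pack 5-bit R, G, B plus a 1-bit alpha into a 16-bit pixel */
    static uint16_t rgba16(unsigned r, unsigned g, unsigned b) {
        return (uint16_t)((r << 11) | (g << 6) | (b << 1) | 1);
    }

    /* the CPU writes straight into the framebuffer - the RDP never gets involved */
    static void put_pixel(int x, int y, uint16_t color) {
        fb[y * FB_WIDTH + x] = color;
    }

It's slow compared to letting the RDP fill polygons, but for 2D games (or a Doom-style software renderer) it sidesteps the 4K texture cache entirely.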
Doom64 makes a lot of sense being rendered on the CPU, since it's a port of the Doom engine from over in PC land, which predates hardware-accelerated 3D graphics (and indeed PCs of the era did not have the hardware-accelerated 2D graphics that consoles did). Without getting into too much technical detail, Doom did not use triangles for geometry, but was rather built around using a BSP tree to figure out which walls each column of pixels is projected to. This technique has severe limitations (you can only have vertical walls and horizontal ceilings and floors, and only one floor and one ceiling for any given sector - no two-storey buildings here), which is why it was replaced by triangles once computers were fast enough in the years to come. It would also be difficult to rewrite it to use the N64 GPU. For one thing, Doom textures are generally 64x128 or 128x128, which would not fit in the N64's texture cache, so you'd have to break everything down into smaller chunks or else greatly reduce texture resolution, in addition to turning everything into triangles. In the end, it's not even clear that after all this the hardware-accelerated path would be faster! Besides, the original Doom came out in 1993 and targeted the 486s of the time. The N64 had plenty of CPU power by those standards - 3 years later was a looong time for tech advancements in those days. Do a direct engine port, find out it's plenty fast, and call it a day. Heck, Doom actually had a port running on the SNES....
@@Keldor314 The actual Doom 64 game had nothing to do with the original PC Doom on the tech side; it's a completely different engine and a completely different game as well. It does not render on the CPU either. The one that runs on the CPU is a direct port of PC Doom made by enthusiasts a few years back, and it can't even run at full speed.
@@shiru8bit I was going to ask this exact question. Did it actually have enough power to do anything with the framebuffer that would be of significance? Thanks!
@@shiru8bit That's pretty hilarious that you believe such malarkey. So according to you, 5 MBs of VRAM, a 64-bit Address Register, and a Dual 32-bit Data Bus, with 60 MIPS & a 73 MHZ Clockspeed, is not enough CPU power to implement a Framebuffer.
Even if it's nice now that games are quite "uniform", so it matters little which platform you get them on... there was something special about the vastly different hardware, games and ports, how they could even be completely different games on the other systems. The graphics were also nothing alike between the systems, which made each console feel more unique.
This reminds me of the situation with the Atari Jaguar, and later the PS3: a superior console way ahead of its time in terms of the technology it was using. The architecture, however, was incredibly difficult to program for, as there wasn't a programming standard or a benchmark to base it against; 3D technology was still relatively new at the time the N64/PlayStation was released, but the PlayStation had Net Yaroze and a very developer-friendly approach to programming its games.
Same with the Amiga. Only once the ST started to die off did we get Amiga-native games. Look at how bad Xenon 2 looks on the ST vs the Amiga - that was an Amiga-native game ported to the ST.
@@kenrickkahn That's a different story; the main problem with the Saturn was its lack of native 3D rendering, so it was hard to code 3D games, but it was probably the best console for 2D.
I have mixed feelings about the PS3's hardware. The Cell CPU was good at doing GPU-esque parallel math tasks, but as a general-purpose CPU the 360's 3C/6T Xenon was arguably better. The initial design pitch for the PS3 actually had two Cell CPUs and no GPU. It would have been a return to the old paradigm of software rendering without dedicated 3D acceleration. We can't predict the future, but as it stands I'd consider the PS3 more of a failed divergence than a forward-thinking design.
"...ps3, a superior console which was way ahead of it's time, it terms of the technology it was using"? PS3 has almost the same architecture as the XBox360. XBox360 was released a whole year earlier. Both systems have real SDKs. I don't believe that ist was difficult to develop for them. The existence of indie games should suffice as evidence.
A very interesting and very well documented insight into the hardware. I would say that, regardless of latency, it was still a much more powerful system. The 4KB can be worked around easily, but it requires a lot of space on the storage medium, which the N64 cartridges didn't have - alongside unoptimised SGI microcode - and Nintendo's intolerable control over what developers were allowed to do really shot them in the foot.
On the downside, it would have meant that the overall hardware would've been more expensive than what Sega was offering for their Saturn at $399. Could you imagine how much it would cost to buy an N64 then with a built in disc drive?
It's fascinating looking back at that generation of consoles and how unique each system was, with both the Saturn and N64 only seemingly unlocking their potential right at the end; it's a bit of a shame we never quite got to see what they really could do. While on lockdown I'm replaying many games from that generation, and I have to say the Saturn has outstanding picture quality for the time with its RGB output, the few games which support its hi-res mode, and of course the 4MB RAM. The N64 is somewhat disappointing looking back, with its forced AA and complete lack of RGB (something luckily we can fix today to make a huge difference). Overall all 3 have some truly amazing and quite unique games, and a variety that we often lack in today's AAA space.
It's almost like the Saturn designers were afraid of the motion sickness of 3D games and stuck with the 3D (or sprite scaling) that was popular in the arcades at the time, which favoured racing, fighting and on-track flight games.
@@FoxUnitNell I think that was more the limitations of the hardware, mainly due to using quads. That being said, when the Saturn combined its 2D and 3D capabilities we got games like Panzer Dragoon Zwei, which would have been impossible on the PlayStation due to it having to render everything in polygons, whereas the Saturn used its infinite plane engine to draw huge floors and ceilings well beyond what the competition could do.
@@snake-v9t I wasn't implying there aren't exciting games, but I was saying the games always advertised by Sony Interactive Entertainment outside of Asia are just plain unattractive to me. For example, in Taiwan and South Korea, games such as Trails of Cold Steel IV are advertised very heavily, while in Europe and the USA, Western games get heavily promoted. And let's not forget that Sony is the master of forced censorship (cultural Marxism) in games (even M-rated ones!)
They stated many times that they made it intentionally hard so you could see graphical jumps during the generation. The PS3 was planned as a 10-year console from the beginning. On the 360? That thing basically maxed out at Gears of War in 2006; after that there weren't many jumps. On the PS3 though? God of War, Killzone, Beyond: Two Souls, Gran Turismo... it was leagues above anything I've seen on the 360.
I really enjoyed this. I'd love a follow-up deeper dive into some of the optimisations Rare/Factor5/etc used and perhaps how today's retro-coders are using modern techniques to make the most from the hardware.
@@christuckwell3185 yeah there's a good interview out there about Factor 5's cancelled Dragon flying game for the PS3. I enjoy reading interviews about failed projects and development hell. You learn more from them :)
Ah, nothing like getting up too early for work and seeing a new video uploaded 28 seconds ago! Cheers and thank you for continuing to put out content when we need it the most!
Devs: what teh do we do with this shit, HOW DO I FREAKING CODE! *7 years later* Devs: finally I get it now Sony: introducing PS4 with X86 system that would be good for the developers because they suck our ding dongs with the PS3 Devs: *cries in happiness* Also devs: *cries in sadness* all those wasted years were for NOTHING!
The Xbox 360 was originally planned with 256MB of RAM, but Epic Games convinced Microsoft to go with 512MB for Gears of War, as Gears looked incredible when it came out in 2006 and didn't need 1GB of RAM for that. Microsoft had to choose between 512MB of RAM or an HDD by default.
Funny, because the N64 was going to be a CD-based system and Nintendo had Sony help design it. But Nintendo decided to scrap that for cartridges, and Sony took the idea and what they had developed to make the PlayStation.
You mean one generation later. The PS2 had little VRAM, and clever developers had to rapidly swap in from main RAM. Between that and the VPUs, it was arguably even more challenging to wring the most out of than the N64... the only console harder to develop for (at least for 3D) was the Saturn. You needed someone at Jon Burton levels to get good 3D performance out of the Saturn.
@@MrInuhanyou123 That's the point, there were tradeoffs similar to the N64. EDRAM/ESRAM can be good if it's used in addition to a powerful conventional configuration (Xbox 360) but it can be a detriment if it is used to make up for other shortcomings (Xbox One / DDR3). If you were a very talented developer without major time or cash constraints you could make the PS2 sing pretty good but not as good as a more conventional Xbox (original). Even the older Dreamcast with its more conventional layout (and less total RAM) often outshone the more powerful (and more expensive) PS2 when it came to textures and AA. Even early titles like Soul Calibur looked quite good. The PS2's main advantage was that (again in the right hands) it could push a lot of polygons. Not as many as they tried to claim initially, of course... in an actual game with advanced special effects, decent textures, AA, etc the actual triangle count wasn't that epic. Meanwhile Sega actually under rated the DC. Developers were wringing out 5-6 million polys even with all the bells and whistles turned on. Don't get me wrong, the PS2 was still overall more powerful, and could do very well with the right developer (especially late in its lifespan)... but I think they would have had better results with a more conventional layout, especially for early games and smaller developers. Side note: My favorite example of "help I'm out of VRAM!" was the port of Grandia II to PS2... they had to turn stuff off when you used a video-stream-overlay spell.
@@AlexvrbX xbox one used esram which is not as fast as traditional edram for higher capacity. Not exactly same situation. I think ps3s unique traits is more applicable to the n64 comparison since it truly hobbled multiplatform development( of course not to as drastic as n64 which barred even making games of specific types)
@@MrInuhanyou123 It was still quite fast, and provided a large bandwidth boost. Had they had used it in conjunction with GDDR it would have been purely an asset. As it was, they were using it to help make up for the shortcomings of a DDR3 setup. It was barely good enough, and required a TON of effort on the part of devs to reach the kind of bandwidth you got out of the PS4's GDDR - and even then it still could fall short, depending on the game/engine in question. Of course it didn't have enough raw power either, since they went with the smaller GPU. They fixed their shortcomings with the One X. PS3 is actually a more conventional setup in terms of graphics and memory bandwidth. The Xbox 360 was actually more radical with the daughter die and EDRAM, plus unified main/vram. The CPU side of things is a different story, and Cell was definitely a handicap. Although the PPC chip in the Xbox 360 was pretty strange too compared to the original Xbox. It was a triple core hyperthreaded PPC with custom VMX vector processors, and it was an in-order CPU. Not as bad as Cell, but still harder to harness fully. Anyway, that's all getting off topic - the video and general conversation was mostly revolving around graphics and associated RAM / bandwidth. In which case, the 360 was the strange setup. Side note: I wonder if they aren't making a similar mistake with the next gen consoles, not packing in enough RAM and leaning on the SSD to stream in hardware-decompressed assets. I guess since both MS and Sony are taking a similar approach, it won't matter. Devs will just have to deal with it. Either way this might benefit PC games in the long term, finally force devs to require decently-fast SSDs and actually tax the drives a bit.
Good video! Tons of knowledge. Who knew the N64 was like that? I knew the Saturn was hard to program for, or get the most out of, but I didn't know the N64 had those issues.
I am not sure if it is possible, but I would love to hear you talk with Julian Eggebrecht, the former president of Factor 5, about Nintendo development on the N64 and GameCube.
Games from Rare and Factor 5 always pushed the N64 to its limits and sometimes beyond them, having said that they were the best games on the system IMO.
I own Diddy Kong Racing and can confirm. Rareware games look AMAZING on the N64. Even to this day it still holds up quite well. I need to find more Rare games
I remember noticing right away that DKR karts showed the wheels actually steering and the characters looked a lot more "real." Aged 10 I didn't know how any of this worked in the slightest, but we all _knew_ DKR looked way better than Mario Kart.
@randomguy8196 I would really like to see one on ps2 as well. The spectrum of graphical and performance quality was huge on that system too across games.
I think the game industry is going through the same cycle the movie industry is going through right now, the 90s, and the 70s. Games and dev teams have bloated to the point that it is unsustainable. It is the fault of publicly traded companies of course. The push to replace veteran devs with cheaper workers, the push to increase headcount instead of pushing out deadlines. I think that new dev tools will empower devs to make slightly smaller, more tailored experiences using a fraction of the people. Much like the movie industry goes in cycles from making big blockbusters, to creator driven smaller movies, and back to blockbusters, etc.
A 5-stage pipeline means it can break instruction processing into 5 steps, and get more throughput because of the pipelining, but it is still limited to 1 instruction per clock cycle max; a superscalar design would be two (or more) per clock, like many modern processors. What made the N64's processor so fast was its ability to also execute 32-bit code, which used less memory bandwidth and had faster-executing instructions. Also, a 5-stage pipeline is less pipelining than many processors of that period, which means that branch instructions could execute faster and an empty pipeline filled faster.
MIPS has no empty pipeline. Or does it? Load data needs stages 4 and 5 after the address calculation in stage 3. A branch calculates the address of the next instruction, and the fetch would happen in stage 4.
Explaining pipelining as running multiple instructions at once is quite an unfortunate choice in my opinion. While technically true, it suggests the false idea of executing multiple instructions per clock. That is not the case. Pipelining splits instructions that would take too much time to process at once (forcing you to use lower clock speed) into smaller chunks (5 in this case) that complete much faster. But all that means is that you don't need to wait for instruction A to finish before starting instruction B, you can start B as soon as block 2 of A is executing (and C as soon as blocks 3 of A and 2 of B start). You can't start A and B at the same time.
@@halofreak1990 Not even that on these old MIPS CPUs, no interlocks meant it was on the programmer to avoid dependencies in branch delay slots and load delay slots.
@@espfusion VLIW(Very Long Instruction Word) is Latency Language or Assembly Language writing. RISC(Reduced Instruction Set Computing) is an ISA(Instruction Set Architecture) . Saturn's ISA is RISC. N64's is MIPS(Million Instructions Per Second), but both use VLIW Latency Language for Programming. N64 is capable of up to 60 MIPS Computing Instructions. Saturn is maxed out at 50 MIPS Computing Instructions. All 5th Gen Consoles used C Language Assembly. It was just easier to do on PS1.
Third-party developers didn't want to create games for cartridges, because they were much more expensive and couldn't hold nearly as much data. The PS1's CD-based console was much more attractive to developers, because CDs were much better than cartridges. It also didn't help that the PS1 sold 100,000,000 units, while the N64 only sold around 30,000,000.
It wasn't as hard as you think. I worked at Iguana Austin/Acclaim on the N64, on a number of Acclaim titles. Yes, there were some issues programming the RDP & RSP, but Nintendo rarely released the microcode source code for it. We did get it, but made few modifications.

There were issues with RAMBUS: it was really terrible at random access. As long as you kept your data close by, it was OK. It was also bank based, so it was faster to make sure that code was in a separate bank from gfx, and gfx in a separate bank from the frame buffers. It wasn't easy to do, since you only had 4 banks to choose from; the extra 4MB made this easier (and so is also another reason games were faster).

I much preferred cartridge; it was faster and easier. It was also possible to DMA directly from the cartridge, which made it quick. I added a VM system for it, to allow direct mapping of cartridge memory to main memory (stored in a compressed format, decompressed on demand).

The 4K texture cache was not easy to handle. You seem to give the impression that there is direct access from cartridge to the RDP. This is not possible on the machine. So, as you mention, the maximum size was 4K. QBC98 used 2 textures for the player models. I do not recall Turok 2 DMAing directly from cartridge into RDP memory. I don't recall exactly, but I believe most of the data was compressed on ROM. It's more likely it DMA'd from ROM -> RAM and then DMA'd from RAM -> texture memory.

BTW, it's a 4MB RAM expansion, and it was used extensively by a number of Acclaim products after about 1997 (about 24 or so). We also used the expansion memory to prevent early copying of the game: the versions released to reviewers MUST have the extra 4MB, and indeed the code was loaded high up at the top of the 4MB expansion to delay hacking. We also used this 4MB RAM expansion to decompress a lot during game boot time (legal & logos screen). The extra 4MB made the games run faster, as we could cache more of the VM accesses. We also used the 4MB expansion RAM to increase frame buffer sizes on some games. I *believe* (but my memory is not clear) that Turok 2, ASB99, and QBC99 all use higher resolution buffers with the extra 4MB: 512x360 instead of 320x240.

The VDP hardware was awesome. The scanout to the display produced an amazing image. It would blend two scan-lines together in a funky way to make it flicker less. It was SUPER flexible, which is why 512x360 worked so well (hence Acclaim's, DUMB AS ALL HELL, HiRez(tm) marketing hype).

A unified memory model was very well received by us. We could choose ourselves how much memory was split between audio, gfx and code.

The build/debug environment from Nintendo was absolutely crap, I mean really crap. Basically GCC, makefiles and gdb. They were also restricted to SGI machines, but I believe they sorted this out later. We typically used the SN Systems/Cross Products development environment, which was actually built by real game developers.
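Very loosely, the VM scheme described above looks something like this (purely a sketch of the idea - dma_rom_to_ram() and decompress() are hypothetical stand-ins, not libultra calls, and the real system was surely more involved):

    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical helpers standing in for the engine's ROM DMA and decompressor */
    extern void   dma_rom_to_ram(void *dst, uint32_t rom_offset, size_t len);
    extern size_t decompress(void *dst, const void *src, size_t src_len);

    #define PAGE_SIZE 4096

    static uint8_t staging[PAGE_SIZE];     /* compressed page DMA'd in from the cartridge */
    static uint8_t page_cache[PAGE_SIZE];  /* decompressed page the game code actually reads */

    /* map one compressed "page" of cartridge data into RAM on demand */
    void *vm_fetch_page(uint32_t rom_offset, size_t compressed_len) {
        dma_rom_to_ram(staging, rom_offset, compressed_len);  /* cart -> RDRAM */
        decompress(page_cache, staging, compressed_len);      /* unpack into the cache */
        return page_cache;
    }

The extra 4MB mentioned above would simply mean room for more cached pages, hence fewer cartridge fetches and faster games.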
1:43 - A 5 step Instruction pipeline means EACH instruction takes 5 stops at clock speed in the CPU to complete the machine code instruction. This is NOT parallel processing that you'd get from multiple cores. You complete 1 instruction per clock cycle, with 4 instructions processing in line behind it. Pipelining is used to increase processor clock speed by splitting more complex instructions into multiple steps. For comparison, the Pentium 4, released in 2000, had a 20 instruction pipeline to increase nominal processor clock speed (GHz). Long pipelines can be a major weakness. If you choose to go left instead of right (branching), the Pentium 4 might need to throw away the last 19 instructions, and then you'd have to wait 20 clock cycles for the correct instruction to be processed through the whole pipeline. AMD focused on performance over marketing, and Intel gave up on the P4 giant 20 step pipeline in 2007 (high heat, mixed performance), dropping down to 14+ stages in current processors.
Imagine if somehow it had 64K of cache and 64MB of RAM. Like a N64+ that came out a few years after initial release. That would be mind blowing for the consumer with high potential textures, and larger levels. But a nightmare for developers to make a game that plays on both consoles with 2 texture packs.
It would still have the laughable 4MB cartridge size, and the compressed-to-hell sound that's also dependent on the CPU... unlike even the PS1, which had an impressive sound chip and didn't waste CPU cycles on sound.
@@aceofspades001 With a large enough RAM you can just stream the sound from it at decent quality, so it would be possible to pre-generate the music at, say, a level transition. This would waste barely any CPU cycles. (Yes, I am aware of how absolutely batshit insane this solution sounds.)
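Rough numbers for that (assuming plain 16-bit PCM): 22 kHz mono is 22,050 x 2 ≈ 43 KB/s, 22 kHz stereo ≈ 86 KB/s, and 32 kHz stereo ≈ 125 KB/s. Copying that out of RAM every frame is a trivial amount of CPU and bus time - the real catch is that one minute of 32 kHz stereo is ~7.5 MB, which is why pre-rendered music only becomes practical with a big RAM pool to generate it into, rather than trying to store it on the cart.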
9:32 I never had an N64, but that sure looks familiar... Midway Games: "Hey man can I copy your homework?" Polyphony Digital: "Fine, just change it up a little so it doesn't look like you copied."
Boss Game Studios made the game. Midway, that trash company, did not have the technical skill in their internal studios to ever pull off a game like that. A lot of Boss Game Studios devs went on to join Turn 10 and made the original Forza.
I really love your channel. I'm an electronic engineer but I'm not that knowledgeable of programming. I know the basics but not much more than that. It's fascinating to learn about how these games are developed.
The thing that baffles me the most is Nintendo not opening up the microcode features and documentation earlier to third-party developers. How on earth could they think gating off a core and novel feature from the developers making their console's image was ever a good idea?
MVG, you have made several mistakes since minute 6. It is mandatory for the RCP to use TMEM to paint textures and to apply the bilinear filter; the RCP cannot use RAM or cartridge memory for textures, and they are also too slow to apply effects to them. What they did in Turok 2 to use 64x64-pixel textures was to use 4-bit textures with a color palette, or black-and-white textures that were colored by the RCP based on the base color of each polygon. But this was not exclusive to Turok 2: all games at some point use colored black-and-white textures for specific polygons that only needed one color, for example some doors in Ocarina of Time. Even grass and dirt effects were achieved with the same texture painted green or brown. The RDRAM latency is not that high compared to other chips of the time; the EDO RAM of the 3dfx Voodoo has equal or greater latency. The problem was a very narrow 9-bit bus and an over-exploited memory that was used to store code, polygons, textures, the z-buffer, the framebuffer and much more, making the 500MB/s of memory bandwidth very limited. Edit: About textures, with 4KB of TMEM you can use textures up to 48x48 in 16-bit color, and 90x90 in 4-bit colorized grayscale.
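Some rough arithmetic on why that shared, narrow bus hurt (assuming a 320x240 16-bit framebuffer and the ~500 MB/s figure above): one framebuffer is 320 x 240 x 2 ≈ 150 KB, so just scanning it out at 60 Hz costs ~9 MB/s. Each pixel drawn can touch up to ~8 bytes (color read + write, z read + write), so filling the screen once per frame at 60 fps is already ~37 MB/s - before overdraw, texture loads into TMEM, display lists, audio and the CPU all fighting for the same bus, with RDRAM's latency shaving the real-world figure well below the theoretical peak.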
This is news to me. I heard the Saturn was difficult because it used quadrilaterals instead of triangles to create models, had two processors (the dual SH-2 CPUs), and the console could not render true transparencies.
Love of gaming, vast knowledge of console history, great editing, and, most impressive of all: understanding and sharing of underlying coding in such... very happy to subscribe 😃
Let's not forget that back in the day CRT TVs added their own blurriness to the image, and while this was beneficial for the PSX's and Saturn's unfiltered textures, on the N64 it was a matter of adding blur to an already blurred image... not a big deal
CRT televisions don't add blur, it was caused by composite and RF video signals muxing the chroma and luma signals. This caused signal degradation that resulted in blurry video and other artifacts depending on the cable quality and length of the cable. You can also bypass the tuner entirely on more advanced CRT TVs with "jungle" chips, where you can tap into the RGB signals directly to bypass any filtering done on the tuner (comb, pull down, etc.) S-Video, Component and RGB SCART gave far better video output. I've modded my Sega consoles to output S-Video to my 1992 Sony Trinitron CRT television, and I get a crisp picture with absolutely no blur whatsoever.
@@GGigabiteM Ehehe, don't get me wrong, I am still a big fan of CRT technology, it still has its upsides compared to LCD. Maybe it is more correct to say that they added their own degree of smoothness to the image (calling it blurriness is a bit too much), but yes, the smoothness is definitely there, even when you use a good output like SCART (I always played with SCART, I never used an RF output). This was due to the technology itself (a CRT does not really have pixels) and due to the quality of most CRT TVs, which was not even comparable to the quality of a decent CRT monitor...
@@MrSapps Nonetheless the smoothing effect of CRT televisions is notorious; as a matter of fact MAME and other emulation programs offer filters to make games look like they used to on an old CRT TV even when you play on an LCD. But yes, the N64's graphics were a blur-fest, I never liked it
One word: Cartridges. They were expensive and even the largest ones (64MB) had less than 1/10 the storage capacity of CD's (660MB), so no one wanted to deal with them.
True. Literally nobody is doing custom chips now. The closest you get is people using FPGAs for music purposes such as generating alias-free oscillators instead of using DSPs with static sample rates.
Most of what I learned about the basic components of PCs came from a book on building PCs that my aunt got me at Barnes & Noble when I was five (so 2005~2006). I recall there was a chapter on Rambus RAM, and that it was super expensive and probably on its way out. They recommended the then-new DDR2 instead. Haven't heard of Rambus since, until today.
World Driver Championship doesn't run at 60FPS. It manages a respectable 30FPS, but not 60. Also the 640x480 mode is a widescreen letterboxed mode, not fullscreen, so the effective resolution of the 3D scene rendered on screen is not actually 640x480 but less than that, probably more along the lines of 640x240. Nevertheless, this game deserves praise for pushing much higher numbers of polygons per second than any other game on the system, while still running at that respectable 30FPS.
That game sucked. There were far better playing racers on the system pushing far more ambitious tracks and number of vehicles. Gran Turismo had the same problem. It had heavily restrictive narrow tubes for tracks in order to put all of the processing power on making the cars look good. It is the equivalent of praising the level of detail on Tekken's characters compared to the models in Spyro the Dragon. As far as pushing the technology towards actually making the gameplay better F-Zero X, the 3 San Francisco Rush games, Diddy Kong Racing, Wave Race 64, and plenty of other racers are more impressive.
Looking at how they went with the N64 and the decision to use mini-DVDs on the GCN makes me wonder how a company that made so many great games could screw themselves over this much with their hardware. They never went for peak power again after the GameCube.
It's the same thing with the Wii to Wii U transition. It just made no sense. They already had a winning formula of motion controls; all they had to do with the successor was improve the graphics, improve the online and improve the motion controls. Instead they came out with a clumsy tablet controller and even worse online.
@@Dodolfo1 Well, Nintendo couldn't have cared less about specs after how successful the Wii was. The fact that the Wii U managed to have a worse CPU than a 2005 console (Xbox 360) tells you something. They went with PowerPC in order to have backward compatibility with the Wii, though going with IBM was a good choice
I liked your video :) Definitely a comprehensive overview of the basics for some of the challenges. Good to see people are still interested in this topic!
@@jiijijjijiijiij this is why decompilation projects for N64 games have such fascinating potential. Once decompiled source code is available for a game it can be modified to use additional memory or higher CPU speeds, be ported to other platforms, make fan-modding easier, etc :)
It's 4 KB. This is enough for a single 48x48 texture with 8-bit color + a palette and mipmaps, or else a single 64x64 texture with 4 bits of color (with palette and mipmaps). There were also higher bit depth textures, but you'd have to decrease resolution even more. Quite painful, really.
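Rough byte counts to sanity-check that (my arithmetic, assuming 16-bit palette entries and roughly a third extra for the mipmap chain):
- 48x48 at 8bpp: 48*48 = 2304 bytes, plus a 256-entry palette (512 bytes) and mipmaps, comes to roughly 3.5 KB
- 64x64 at 4bpp: 64*64/2 = 2048 bytes, plus a 16-entry palette (32 bytes) and mipmaps, comes to roughly 2.7 KB
- a single 64x64 at 16bpp would already be 64*64*2 = 8192 bytes, twice the entire 4 KB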
I love the "blurry" N64 textures (among many other things). That very particular aesthetic, achieved with big polygons, smoothed textures and edges, and in some cases even fog, is what gives the N64 such original and instantly recognizable graphics. I think there is a certain "magic" to it. What many criticize and see as a limitation is, for me, one of the things I love most about the N64 and what makes it unique and unmatched.
(From what I can tell by a cursory scan of the documentation) One big issue with the texture cache is that it wasn't actually a cache, but a dedicated memory space. This means that the program had to manage copying in textures as needed "by hand", and in general, you'd have to copy the entire texture even if only a couple of texels are actually used. Also, the texture unit could ONLY access that 4KB texture memory. If you wanted a texture, you had to copy it over there. What this ultimately meant is that you would have to sort all your triangles by texture when you rendered them, and of course, the more textures you used in a scene, the more time would be spent copying textures and not drawing pixels. Partial transparency/alpha was also made more difficult by this arrangement, since rendering it correctly requires that the (transparent) triangles are rendered starting with the ones furthest away, with the nearer transparent triangles blended on top of them. Remember how we also wanted to render the triangles sorted by texture to avoid thrashing texture memory? Uh oh! Thankfully textures with pixels that are either opaque or else completely transparent (such as trees) don't need to be rendered in back-to-front order - the z-buffer can take care of that case.
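A minimal sketch of the kind of sorting a renderer ends up doing (my own generic C, not N64 SDK code): opaque triangles grouped by texture to minimize texture-memory reloads, transparent ones ordered far to near.

#include <stdlib.h>

typedef struct {
    int   texture_id;   /* which texture must be resident in TMEM to draw this */
    float depth;        /* distance from the camera */
} Tri;

static int by_texture(const void *a, const void *b) {
    int ta = ((const Tri *)a)->texture_id, tb = ((const Tri *)b)->texture_id;
    return (ta > tb) - (ta < tb);
}

static int far_to_near(const void *a, const void *b) {
    float da = ((const Tri *)a)->depth, db = ((const Tri *)b)->depth;
    return (da < db) - (da > db);
}

/* opaque[]: one texture upload per run of equal texture_id.
   translucent[]: drawn afterwards, far to near, so blending composites correctly. */
void sort_for_drawing(Tri *opaque, int n_opaque, Tri *translucent, int n_translucent) {
    qsort(opaque, (size_t)n_opaque, sizeof(Tri), by_texture);
    qsort(translucent, (size_t)n_translucent, sizeof(Tri), far_to_near);
}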
The texture cache issue was debunked by homebrewers 2 years ago. The N64 used 4 dies and had 32KB on each. But it required dual-threading between the NEC VR4300 CPU and the SGI RCP graphics chip, and MANY developers weren't used to that method. So they just resorted to storage capacity on the cartridges and relied on the N64's EXTREMELY primitive software tools, including OpenGL and RunScope. The OpenGL tool for the N64 was limited to just 4KB of fill rate without use of the main assets of the SGI RCP.
There were some really impressive games on the N64 like Shadow Man, Forsaken and Star Wars: Episode 1 Racer. And the titles by Factor 5 were also really great from a technical standpoint.
A 5-stage pipeline does not imply it can do 5 instructions at once, but rather 5 instructions that are each 1/5 done. Maybe it got confused with superscalarity, which lets multiple pipelines carry more than one instruction through the same stage at a time.
Never liked the blurry N64 look. It was supposedly a step forward that looked like a step backwards to me. Large cubes everywhere with blurry textures seen through vaseline - good for Mario, i guess, but what else? Few fond memories compared to PS1, though i knew legions who swore by Goldeneye.
I prefer the solidity of N64 games to the wobbly polygons and warping textures of the PS1, which isn't to say I never enjoyed games like Spyro. I guess it's a matter of what you grew up with.
GoldenEye is an unplayable mess, I don't know what people see in it. People who love it maybe never played Doom, Duke 3D, Quake 2, or even Wolfenstein 3D??
@@pelgervampireduck I played all of those games and GoldenEye is superior in every way. Better level design, mission objectives, location-specific damage, larger enemy move sets, and an open-world feel that is unmatched by most of today's games. The game was and still is one of the most ambitious designs for an FPS. Enemies aren't stuck in their zones, they can literally go anywhere and they are all doing it at once. An empty room could be where a major shootout takes place the next time you play through the game. It makes the game feel very dynamic. The weapon variety and the persistent, lingering explosions allow you to do some really cool strategies as well. The combat has a near flawless rhythm to it, almost like a side-scrolling beat 'em up like Final Fight, with you juggling an overwhelming number of enemies at once. The other games you mentioned are just incredibly primitive compared to it, and even more so when compared to the sequel-in-every-way-but-name Perfect Dark. In what delusional fantasy world is it "an unplayable mess"?
David Aitken Well said, I've been replaying GoldenEye recently and it's still an amazing game. You just never know what the AI is going to do next; no two playthroughs of a level are the same because they're always doing something different, even in the confines of a train carriage! It's always sad to see the controls criticised too, the default control scheme is basically identical to Resident Evil 4, but with the added bonus of also being able to shoot while moving.
Hey MVG, did you ever create or help develop homebrews for the N64? I'd be curious to know or see you try to have one working from source code in one of your videos.
Just makes you appreciate how good PS1's design and architecture was built several years before N64. It was the right machine at the right time at the right price: for consumers, developers and publishers.
@@NB-1 I know, I know 😅 CDs were 700-850 MB, depending on what mode it was pressed in, while N64 cartridges were only 64 MB (pun intended, and it's megabytes, not gigabytes). Nowadays, the largest optical disc for the consumer market is the Blu-ray Disc Quad Layer, which is only sold in Japan and can hold 128 GB of data. Meanwhile, 1 TB SD cards are available for sale, and can be used on Smartphones, Cameras, and the Nintendo Switch. Also, the Switch's Game Cards are read-only, region-free, and can hold up to 64 GB of storage (as of the last time I checked).
@@lucasn0tch old school carts could be directly memory mapped by the system and had comparable access times to RAM, Switch carts and SD cards are much slower relative to the memory used in modern systems, so everything needs to be copied to RAM before it's accessible at a decent speed. That's one of the reasons the PS5's SSD is so exciting, it's the closest thing to accessing game data from a cartridge we've seen in decades.
I remember working on the N64 back in '99.. was a junior developer working on South Park Rally, well.. originally it was a port of Deth Karz (from Melbourne House) to the N64 which got canned, and then the game ended up quite differently. 4K texture cache was painful but just had to be smart about how you render. R4K assembly was pretty fun though.
What are your thoughts on the final game?
@@crashoveride879 Tech-wise I thought it was great, from a gameplay aspect, not as good as it could have been. I really wish we had gotten to finish Dethkarz. Would have loved to seen it on the N64.
@referral madness These days, compilers are pretty good at generating smart code, however from a learning perspective, I think it's still worth learning. If you plan on writing code for older machines then it is a great skill to have.
Neat
@@shkdzn Any chance the Dethkarz n64 source is floating around??
Nitpick: a five-stage pipeline doesn't mean that the MIPS chip can execute five instructions at the same time, exactly. It means that it can have up to five instructions in progress at any one time, with each at a different stage of the pipeline. Only one instruction starts and/or finishes per clock cycle. That's a pretty important distinction: a chip that can start, run, and complete five totally concurrent instructions at once would have been much more impressive and powerful (and expensive) than just a single-threaded chip with a five-stage pipeline.
That also created a really annoying pipeline problem where you couldn't use a register value until 1 clock cycle after a load instruction, if you tried, you'd get the old value for that one cycle. This was difficult to debug.
Winston/Brian: Happy you guys wrote that up. When I heard that.... I was like 5 at a time. Hell no. That would have been a lot of money in 1996. Thanks guys!
...man.... no wonder emulation for the N64 is STILL subpar compared to consoles like the game cube or wii
Great comment. For some perspective , an original Pentium had a 5 stage pipeline so this was not a particularly novel thing for the time. Later Intel CPUs in the 90s and early 2000s ranged from the 6 stage all the way up to I think 20 on the Pentium 4.
@@cdoublejj There's many reasons the emulations are difficult to get accurate. Timing is the main one. Some games expect timing to be pretty much exact. Some titles generate their display lists, as the GPU is executing them. They are doing this dynamically under the assumption that have X clock cycles in which to do their calculation.
Emulations do not, generally, take this in to account and these are VERY HARD to track down.
Another example is some oddities of the MIPS processor.
For example:
mov r0,0
str r0,[r1]
nop
mov r0,0x80
ldr r0,[r1]
add r2,r0,1
What value will r2 have?
You would expect to see 1. But, no, you'd get 0x81 (I haven't tried this - i'm trying to recall the actual 'bug'). If there was a nop after the ldr, everything would be OK. If you single stepped this in a debugger, you would get what you expect; but when it runs, you wouldn't (EXCEPT if an interrupt happened between the ldr and the add).
That generation of early 3D consoles was so interesting for the different approaches to the same problems. The N64 took the approach of cutting down professional grade hardware to consumer level prices, leading to some difficult limitations particularly in terms of memory. By comparison the Saturn built much more on the previous generation of bespoke sprite/tile warping technology while the PlayStation implemented a lot of basics of modern 3D hardware from the ground up, but left out all the more taxing features.
To me is not exactly the Hardware what was important of that generation. Out of all those game systems the N64 was the only one that had a clear vision of how interacting in a 3D space should be achieved. The N64 stablished the base of how all game consoles worked after that.
@@Refreshment01 It's a broad statement to say that.
Yeah, it is. I get what he's saying and he's right to a large degree. They got 3D games, environments and movement, even cameras right while most others were really struggling.
I mean compare Tomb Raider to Mario 64. It's a fair comparison yet, TR was so clunky in comparison.
That said, it's unfair to give them ALL of the credit for the way modern 3D games were made thereafter.
I think the Xbox 360, and particularly Gears of War, rewrote the rules for large third-person characters, which usually handled slowly and terribly but now moved like a cat, and built a solid base for gameplay with the first really good cover-based shooter system.
So everyone gets to move things on a bit at a time but I get your point
Mario 64 did establish how a good 3D camera should behave.
Spyro, which took that example very seriously, ended up really good and fun, while other PlayStation games (mentioned in one of the AVGN episodes) that didn't were criticized for making the player fight the game's camera.
@@funky7chunky everyone adds little bits. Max Payne gave us the slow-down-to-shoot mechanic that's now a crutch in console shooters, like Red Dead. But Nintendo basically gave the industry a place to start from, which is huge. It's like Apple: like it or not, they invented the modern smartphone. (I was a Nokia Symbian user too, and Windows Mobile, and BB, so put your god damn pitchforks down, fanboys.)
I would love a "what could N64 games look like if storage wasn't an issue" video 🙂
Yes. What could the RCP and 4300i do if really allowed to stretch their legs? I'd love to see what the real brains of the operation was capable of
There is a guy doing gods work on Portal64 and another lad making modern Mario maps, so you don't have to die wondering.
@@jasonbroadhurstyeah Portal64 is very impressive
N64 graphics with Playstation-like framerate?
@@SamyasaSwiR.I.P. Portal 64 :(
"Reading from cartridge was faster than reading from ram"
That sounds... interesting
cartridge OS
holy shit.... cartridge OSes
Oh my god if you build an n64 cartridge with Bluetooth and Wi-Fi receivers in them you could basically have a cartridge OS which is totally untraceable
Patent pending
How fast was the average RAM back then? Now it's almost the speed of light
"Alright team! Lets learn to develop for the N64! First off, lets read the manual on addressing the RAM..."
Manual: "Dont use the RAM, I beg of you."
To be fair, I think the title is a bit misleading. It's not that the N64 was hard to develop for PER SE. However, it was badly designed in terms of specs, and working around those bottlenecks is what made development hard. You might think that's ultimately the same thing, but it is not: remember that other console MVG reported on that was hard to develop for? It had the inverse problem: plenty of power and no bottlenecks, BUT the code to use that power was complex and ugly. So, it's not really the same thing - quite the opposite.
yes do i do have begging of the you
Naota Akatsuki The Sega Saturn and Atari Jaguar were arguably worse in that regard.
@@lyxar777 I wouldn't say "badly" designed, so much as quirkily designed. As illustrated here, the design allowed the system to do some things that other competing consoles couldn't; it just wasn't necessarily super intuitive to work with.
The N64's specs are not its issue per se; it is the most powerful console of its generation by a long shot. The primary issue is limited cart space; developers always had to wrestle with this and the constant need for compression and decompression. Such a waste. CD would have solved all those problems. Also, Factor 5 went into detail on how limiting Nintendo's generic SGI code was, and the only way to actually push the system was to write custom microcode, which, alas, Nintendo did not authorise except for a few games.
If you developed Nintendo 64 games back in the day, let me know about your experiences (feel free to email me)
A few points.
- I didn't talk about DMA in this video as the main focus was on the graphics side. However if you want to understand how DMA worked on the N64, check out Rodrigo Copetti's blog (link in description). tl;dr - the implementation is NOT good
- at 1:54 I meant to say 16 and 8 kilobytes instead of bits
- stay safe!
If only more developers had learned from developers like Capcom and Rare.. those two made the N64 shine..
Unrelated: I love how you have everything so neat in the background, it makes your videos stand out.. I can tell you have the same care when developing these videos..
@@kenrickkahn yo I agree. MVG is one of my favorite channels for many reasons, this being one of them.
I like to compare Nintendo's restricted SDK to the story of Crash Bandicoot, where the devs removed Sony's code to put faster in-house code in its place
N64 was the first console I developed for. I wrote a lot of the game engine (rendering and character control) for Earthworm Jim 3D (I know it was not a particularly good game - I'm so very, VERY sorry). I have fond memories of using the hardware - and don't remember it being particularly bothersome. It was a struggle early on with the documentation, but once I got a basic textured triangle on the screen, in my memory it went smoothly (from a technical point of view). But it was a long time ago, and I wouldn't say we really pushed it. I hadn't programmed any 16-bit consoles, so perhaps it helped not having that baggage. And the display lists felt familiar from having worked with DirectX (DX3? DX5??) albeit the N64 had a more OpenGL like syntax. I definitely remember fill rate and textures sizes being an issue. Pretty sure we kept everything 32x32 or lower.
We started the game quite early on in the lifecycle of the console, but the project lurched in a few different directions, so by the time it was released the engine was older than other titles that were hitting the market at the same time. The chap who worked on the camera and collision had more issues - I remember the collision, in particular, was difficult to optimise - and the camera was hamstrung by the fact development of the levels and the camera happened in parallel - the camera should have been nailed down first - but that was a project management thing, and nothing to do with the hardware. The big positive I remember was the speed of the CPU and the fact it had a built-in FPU. We did a lot of prototyping on PC and converting that code to the N64 was easy. I do remember the Partner64 debugging system being unreliable - both the hardware and software were temperamental. That slowed things down, especially during debugging phases.
All in all, I really enjoyed developing for the N64, and really like the fact it has a great catalogue of games, even if the one I worked on isn't :)
In their defence, nobody knew how to design a viable real-time polygonal/3D rendering system at the time, especially not in the consumer space. It was a completely new paradigm. Still, it's mind-boggling that Silicon Graphics could design a chip with such a laughably small texture cache, even for that time, when the system was so dependent on it. It was so far ahead in everything else.
It's not really laughable. 4KB per texture is fine for that era. PS1 had the same. The idea for N64 was tiling and clamping multiple textures across surfaces, making use of texture detailing etc. Unfortunately due to tiny carts and time, in most cases devs simply stretched a single small texture across the surface.
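To illustrate the idea in generic terms (plain C, not SDK calls): tiling means letting texture coordinates run past 1.0 with wrap/repeat addressing, so a tiny texture repeats across a big surface instead of being smeared over it.

/* one big floor quad mapped with the same 32x32 texture in two ways */
float stretched_uv[4][2] = { {0, 0}, {1, 0},  {1, 1},   {0, 1}  };  /* smeared across the whole floor */
float tiled_uv[4][2]     = { {0, 0}, {16, 0}, {16, 16}, {0, 16} };  /* repeats 16x16 times, same TMEM cost */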
They didn't have a CD-ROM for good-res textures anyway. Fewer textures, cheaper cartridge to produce. Saturn and Sega 32X were promoting games like Virtua Fighter at the time, which had basically no textures at all.
@@zerazara The N64 was designed with a superfloppy in mind. The 64DD was supposed to release much earlier and effectively be _the_ N64, and AFAIK wasn't even meant to be a peripheral, but it was postponed because of quality issues and eventually, effectively, cancelled. But you're right, textures weren't much of a consideration at the time; flat surfaces were believed to be what they had to settle for in most cases, that was the paradigm. I had forgotten that.
It’s because they basically just copy and pasted their workstation rendering architecture, designed for frame-by-frame rendering of high resolution movie scenes, and sold it to Nintendo as a game architecture.
Why Nintendo even bought it though…
That's not true. Nobody knew how to design a capable 3D graphics rendering system with a unified memory bus. Geometry transformation, texture mapping, and framebuffer reads/writes are all memory intensive - and they interfere with one another when they are all on the same bus. Nintendo could have added a small VRAM chip like the PlayStation did, but that would have driven up the cost. And their custom hardware was already expensive.
That puny texture cache really neutered the system. Even a secondary external cache probably would have been a huge help.
Yeah, 4Kb? Why the hell even bother? No wonder developers told them to shove it!
@@immortalsofar5314 Yep, one of the numerous reasons third parties mostly skipped Nintendo during that generation.
Tell me if I'm wrong here, but wouldn't an external cache have the same problems as the RAM? The cache was so fast because it was built into the chip that needed it. If you added an external cache, it would lack the needed performance, wouldn't it?
@@g.o.a.t.y.g.u.y Yes and no, I think. The bottleneck tends to be the bus (especially if it's shared), then the RAM access method, then the RAM speed. Wire length is a much smaller factor than those, although it does affect how quickly the signal becomes stable. I'm no expert but that's how I understand it.
Ummm, they did have an external cache: the game cartridge. That's one of the things talked about in the video. Also, a special separate port and bus for an external cache would just add to the system cost, so at that point you might as well just add more cache to the chip instead
I think it would've been awesome if Nintendo, instead of putting every single game on a cartridge, had put the games on CDs and put a cache the size of an N64 cartridge inside the console. This would've driven up the price of the console a bit, but it would've given developers more cache space to work with and also let them gain the benefits of CDs, with their cheaper cost and higher storage capacity.
The cartridge was mostly used for copy protection I think. They rationalized it afterwards I believe.
A CD would also have meant loading times and would be more vulnerable in kids' hands. They actually planned the 64DD as an extension, but it never was successful.
With every new console generation there are people complaining about limitations - despite having way more power at hand than with the previous system.
I liked looking at the official dev docs. They treated the developers as if this was the first time they had seen 3D. Which was true for some at the time. Nintendo explained all the super basic stuff, like what a polygon is. I find that funny because today how could you work for a AAA studio and not know what a polygon is? But this was the mid 90s and Nintendo's first fully 3D system, so it makes sense.
They teach what a polygon is in elementary school.
3D fully polygonal games were new for most at the time. Even for experienced PC developers. The 3DFX Voodoo came out in the same year as the N64.
Employing people straight out of school without the first clue about anything and training them from the bottom up is a standard Asian hiring practice.
Tetsuya Iida, Sony programmer responsible for PS2 backward compatibility layer:
"I had no real electrical engineering skills to speak of whatsoever and I didn’t even know how to boot up Windows 3.1, let alone how to write any programs. Even now, I can’t help but wonder what people at Sony saw in me when they decided to hire me. Luckily, I got the chance to learn how to program computers thanks to a training program that the company ran. "
@@rasz Yeah, also my guess is that the job pool of people who knew how to do these things was basically very small, so they had no choice but to hire anyone and train them. I have nothing to back that up on my part though
Nintendo doesn't make sense here, because back in the early 90s with the Namco System 21 and the Sega Model 1 you had 3D polygon games running at 60fps. Of course you had lots of DSPs and other hardware making that happen, but all developers knew what a polygon was by 1995-96!!
I remember the texture filtering in Ocarina of Time really blew my mind back in the day, even though it was low res the textures looked really smooth compared to the PlayStation where you would always see each jagged square pixel in games like FF8.
I actually prefer jagged textures. Maybe it's the Minecraft fan in me. The Project64 plugin for video I use allows me to use jagged edges.
Best game ever
@@linkthehero8431 Same, the blurry textures make my eyes hurt
@@ThreeDaysOfDan It's like enlarging a photo in Microsoft Paint. Sure, a 240p image can _technically_ be resized to 4K, but it ends up looking even worse because of all the guesswork required to do so.
Also it had perspective-correct texture mapping, whereas the PS1 kept thinking the world was 2D (affine mapping)
I would also like "Why X console was developer friendly?" series
This would be awesome, there gotta be some developers out there with good experiences.
Dreamcast and Xbox in the top tier alongside Sega Genesis.
i think gameboy color was.
@@Viktoria_Selene wasn't that coded on assembly??
Maybe a video on PS1, because it was so easy to develop for when compared to its immediate competition.
In case anyone wondered how much $199 in 1996 would equate to today it's around $384.77. So when I think back to my mum saying no it's too much money for one it makes sense to me now. Didn't show her i was gutted, I knew she struggled when we were younger.
I kinda feel bad for asking now.
Same, any console I don’t have that’s $80-100 on offerup I’ll snatch up. I’m 20 and I’ve been collecting consoles and games that I never got to play as a kid. I had a single mother I never had a game console as a kid.
Yeah as kids you just don't realize the costs of things, you just want stuff
I swear i saw an ad for the N64 for around $600 that Xmas and just knew i wouldn't be getting it anytime soon. But after that winter, the price went down to the real MSRP of $199. Demand legit drove the price of that thing up way more than what this video says the price was
@@billybones6463 im guessing he found a random retail price for an outlet at the time. Not sure if the US was big into bundles but most N64s here had a game joypad or rumble pack thrown in for a bit extra. I wonder if he quoted one of those prices.
Even at retail was a big ask for a kid in a family of 4 siblings. I never got my hopes up.
But dont get me wrong i didnt miss out and was grateful for what I did have mind you.
yeah and on steam sales they are extremelly cheap... problem is AAA games themselves are pretty crap nowadays.@@captainkirk4271
1:18 : I clean, restore, and RGB mod a lot of these systems, and every time I open one up, I look at, enjoy, and appreciate the layout and design of the motherboard. The few chips are packaged so nicely on the board, connected by unique curved traces. It's such a beautiful layout, direct to the point with almost nothing wasted, especially space! I don't favor the system or its games that much (I've played very few games and since I didn't have one growing up I don't have any nostalgia for it) but I absolutely respect the engineers who created the final board layouts.
It's a shame whoever picked out the plastics for the case chose a mix that got so brittle with age. Advice for anyone taking a console apart: when reassembling a screw that goes into plastic, rotate it counter-clockwise until you feel it "click" and sink down into the threads. Then tighten. This plastic is very brittle and it's super important to NOT cross-thread through it lest you break off posts. The plastic's only saving grace is it responds VERY well to standard cyanoacrylate super glue. As long as the plastic doesn't shatter into pieces but breaks off cleanly, chances are a superglued piece will bond stronger than the now 20+ years old plastic.
maanerud are you the guy i keep getting the glued together rgb mods from on ebay??
yep, it's indeed rare to see a PCB with that smooth design, electronic engineers wanted to make it like that and I approve. it changes from the ever-seen angular motherboard circuits.
It's a cool layout, but why they never supported RGB out of the box is beyond me. A real step back from the SNES. Thankfully they went back to it on the gamecube. It's especially weird for europe because here the scart connector was very widespread and a majority of TVs supported it. I remember playing the Playstation over RGB 20 years ago and I didn't even realise what that meant at the time.
@@thegearknob7161 Yeah, the SNES was fully SCART compliant. But you can mod the N64 to have RGB out, since the signal out of the ICs is RGB + sync to begin with. Wire it directly to an SVGA connector and add a jack or RCA connector for audio output.
@@Diamond_Tiara That only works on certain NTSC versions. On a PAL console at least it requires quite an expensive FPGA based board be soldered on to the DAC. There's no excuse for that. RGB Scart was as standard in PAL countries in the 1990s as HDMI is today. Nintendo knew this. They did a version for France with native RGB support but most of them are composite only, for reasons that make absolutely no sense to me. I can't see any cost reduction in not sending out a signal that's already been generated internally through a connector that already has the pins for it.
Officially the PAL N64 doesn't even support Svideo either, but there is a way to modify a NTSC svideo cable to get it to work. So at least there's that.
So, the Stop N' Swap feature from Banjo-Kazooie was actually the result of Rare trying to take advantage of the Ultra 64's high memory latency, since it allowed you to power off the console and swap game cartridges while still keeping data in RAM for a short time. Which means that Nintendo accidentally removed a feature from an N64 game by upgrading the N64!
EDIT: It has been pointed out to me that memory retention after power off and latency are not the same thing
Stop N' Swop was originally intended to be done with the console turned on, much like how multi disc Playstation games can change discs during gameplay. Rare was quickly dissuaded from implementing it, as Nintendo was not sure how safe it would be to hot swap memory that's being used.
@@BrunodeSouzaLino Rare should have just saved data to a memory card and read that data for S 'n' S instead.
@@somebonehead That may have worked on an empty memory card, but probably there was too much data to be passing along for even compression to work well in a memory card transfer. Mem card size is also something that neutered these consoles. It was only slightly better/more reliable than using the password mechanic.
Wouldn't ya know, they still do things like that today. Just look at the Switch Lite!
They shared exactly one byte of data, you can read that up in the recent rare gamer interview with Paul Machacek, size wasn't an issue. I wondered why they didn't use the memory card aswell. I assume they just really liked the idea of data retention in RAM and went with it...
Noobs = the cartridge format was the N64's weakness
MVG = 4kb texture cache, RDRAM, pixel fill rate were the N64's weaknesses
Hm, this really makes me wonder what N64 games would have been like if at least two of these things had been addressed, even if the processors were at lower speeds.
all of the above
@randomguy8196 I understand the risks for publishers, but as an end user cartridges were superior to CDs in every way. Not only could they load new information in quickly but, like you said, FMV was nigh impossible. In-game cutscenes in N64 games could flow seamlessly into and out of the action, and plenty of spoken dialogue and other sound effects could be quickly swapped in and out during gameplay with little delay. I have never once heard a PlayStation game pull off dynamic, in-game dialogue as effectively as Star Fox 64, Rogue Squadron, Battle For Naboo, Conker's Bad Fur Day, or Perfect Dark. CDs may have been great for publishers, but they always sucked for players. Level design always suffered heavily in CD games as well, in an attempt to cut down on frequent map switching. Banjo-Kazooie and Banjo-Tooie would have had severely simplified level design had they been designed for a CD system.
Cheap, massive amounts of storage space is one of the big reasons games have become so damn boring and repetitive. Why bother spending months and months making real, genuine, game content when you can simply add in copious amounts of stuff. That is part of the reason games revolve around a billion pieces of loot to collect, cosmetics to purchase, etc. It is cheap and easy and gives the illusion of content and countless hours of gameplay. There are NES games that take an hour or so to complete that have more genuine content and game variety than most of the games that come out nowadays. They had to make as much variety and unique gameplay as they could because they didn't have thousands of gigabytes of storage space to fill up with cosmetics and slight varients on the same damn weapons and equipment.
He misspoke. The Expansion Pak had 4MB, not 4 kilobytes
@randomguy8196 Not as much as you think. Super Mario 64 was $50. Sony dropped the price of Playstation games from $50 to $40 around the time of the N64 launch as I recall. Even when they started getting into the larger cartridge sizes, Nintendo 1st party games were usually still around $50. The 3rd party developers games would cost $60 to $70. This had less to do with the medium the games were stored on and more to do with Nintendo's extreme greed. Here is the basic rundown of costs according to a reddit poster that claimed to have once had a contract for N64 development.
Minimum production run: 15,000 units
Production costs Nintendo profits from because you have to go through them.
Nintendo takes an additional royalty fee of $7 each cart for logos, seal of approval, etc.
Packaging runs about $150,000 for 15,000 carts. Manuals, boxes, shrink wrapping. Delivery fees not included.
On a $55 cart a profit of $6 to $7 dollars was made. Nintendo got the rest.
Profit margins for developer on Playstation could be as high as $27 per disk.
_________________________________________
This is why 3rd party games cost so much more. Even the Blu-Rays of the WiiU are a special proprietary format that only Nintendo makes.
Crash Bandicoot has a very linear structure to its' level design. The designer always knows what is coming next. That is a luxury that an open world game doesn't have.
Ubisoft, EA, 2K, Activision, Rockstar, Capcom, etc. They all do it. If Mega Man was a new franchise released today you would have about 5 levels that you would keep playing through over and over again. You wouldn't gain a new weapon after defeating the boss either, but instead a random part that is needed along with others to make the weapon. How is what I described not exactly like Monster Hunter or any number of endless grind-a-thons? For the record, I do enjoy Monster Hunter, but it is a short game as far as content goes, that is artifically lengthened to be an endless time sink.
I never thought it was easy to code for just based on the struggles people still deal with emulating it.
The microcode updates Factor 5 did were very impressive; they got so much more out of it than the ones that came with the SDK
A lot of devs these days are embracing that, getting more from the hardware than Nintendo would let studios get in the 90s.
I played Perfect Dark recently on an emulator and was blown away from the opening video and the first level. Blade Runner vibes
Ahhh to play it again for the first time! I bought it at launch back in the day 😁
Love perfect dark, I have the ram cart with it, soo good. Way better than golden eye, when compared to multiplayer.
Still one of my favorite games of all time
Perfect Dark is really impressive, not only how it looks but how it plays too. You can target a hand or a leg and disarm enemies that way. Yes, they all pretty much use the same animations, but considering that in other shooters when you hit someone's arm or leg you just hurt them less, Perfect Dark for sure tried to do something new
Yeah, and it has bots, or simulants, which have excellent AI. I like the other Bond game, The World Is Not Enough, for multiplayer also
Sad feels when he says we're gonna leave it here for this video :(
its only another week till a new one!
I know, it's like 'Aww man, keep going!'
@MultiTarded +1
Would be interested in a Wii U video as well. I remember hearing all this talk about eDRAM or something being the secret sauce of the Wii U. With the few games actually utilizing it to the max really showing it (Fast Racing NEO with the 4k textures and 8k shadows, Xenoblade X with the gigantic world + entire city without any loading times, Bayonetta 2, Smash Bros. with 8 players at once still at 1080p/60fps etc.).
Claiming it's about as powerful as a Vita is wildly ignorant I think, but a video about the Wii U's strange architecture and what was or wasn't actually possible with it would be interesting for sure. Probably a bit niche though.
Poorly optimized multiplat ports gave the masses the impression that it was barely more powerful than PS360, but the few games that really tried certainly proved the contrary.
I’ve never been as amazed by games like I was on the n64. Mario, Zelda etc were total game changers.
I think that's because of the huge jump forward between going from 2d to 3d and then the graphical shift as well. I suppose it would be like if we went from the n64 straight to like ps3 with nothing in between. But either way I am with you on that. Going from original Nintendo and Super Nintendo to N64 growing up was mindblowing.
David Brunell I agree. Nintendo games in the late 90s and early 00s had really engaging stories, sound effects, music, and gameplay that had never been seen before. There are some others on different platforms that were great too like Spyro on PlayStation. All later platforms have been recycling the same types of games with marginal improvements or forced gimmicks (Wii/3DS). I don’t think we’ll witness a leap like N64 ever again.
@@haseebs.7286 There are still improvements to be had from 1.) cloud rendering/computing + 6G internet, 2.) 5D optical memory, 3.) ReRAM & MRAM, 4.) AI tensor chips + cloud AI models, 5.) VR + fovea rendering, 6.) haptic feedback controllers, 7.) super parallel arcitectures
@@aoeu256 yes but still not as impressive to the eye and ear as n64 and first 3d consoles
I remember reading a quote by the CEO of Sony SCEA in the early 2000s where he said the PSX could push more polygons than the N64, in theory, but because of the lack of z-buffer and the various limitations of the RAM etc, it meant that once you factor in textures and AI, if you wanted your game to be playable or actually look good you needed to compromise on that.
I never knew the N64 packed so much compute power... Like, I knew it was powerful, but damn... Kinda curious of what a "modern" developer could do on it, if they had a good SDK...
You should look up Kaze Emanuar. He makes videos about the various ways in which he's been able to get as much performance as possible from the N64
the N64 was even way more powerful than the SGI Onyx computer that was used to make those games, which used to cost $80k in the money of that era. If you look up what the SGI Onyx was you'd be interested
Look up Kaze's SM64 romhacks. Some of them almost look like gamecube or even some early wii games. they are legit impressive.
Still amazes me that we had an experience like Zelda OOT on this hardware. So many good memories with the N64. Thanks for this I enjoyed it!
I'm a hardware guy but development on older consoles has always caught my interest. Great work!
Very interesting. I do not remember having read or heard at the time this kind of criticism concerning the N64. About the Saturn, yes, it was even a constant complaint, but not the 64. Thank you for this information.
From what I saw in the N64 development manuals and such, it was easy... as long as you used the standard Nintendo microcodes.
After the PS1, both Nintendo and Sega rushed to make an "OpenGL-like" API so developers didn't need to deal with the hell behind the curtains.
You lost some performance and flexibility in the process, but not every game company has a genius able to deal with the RCP/SCU bullshit.
3D rendering is really hard, so it's a logical progression that we've gone from microcode hacking to universal APIs right up to the current day, where almost everyone except really big studios (and even some of those) uses one of the two big premade game engines (Epic or Unity). Less money spent on expensive programmers, more on creative staff.
@@theshinken Sony got it right out of the gate, by basically copying the best API of the time and designing the hardware itself to be friendly with it.
But then Nintendo and especially Sega had to rush something to compete.
The Sega Saturn tech docs are quite hilarious because they explain EVERYTHING about 3D graphics from the very basics, including "how to use Lightwave to model a low-poly monitor/car".
But this was needed given most devs were coming from the SNES and Genesis.
@@dan_loup I felt like there were a lot of competing APIs at the time. Everything was so new. Given Nintendo partnered with SGI, I would think their solution was heavily biased towards what SGI was doing at the time. It's amusing to call the GameCube a more traditional 3D system when, at the time of the N64, I don't think anything was deeply ensconced as the 3D machine.
@@Jerhevon In the PC space there was quite a shootout brewing between DirectX, OpenGL, Intel 3DR, Apple QuickTime VR, and proprietary video card APIs like Glide, S3 Metal, etc.
Consoles, on the other hand, had to deal with much slower CPUs, so actual graphics APIs didn't look like the best of ideas, until Sony basically implemented OpenGL in hardware on the PS1 and everyone had to run after them.
So, for the N64 there was no API whatsoever? You had to deal with the hardware directly to create an engine from scratch?
Great video. People like to say that the lack of a CD drive is what prevented the system from being a major success. But I have always said sticking with cartridge was a better move for Nintendo at the time, and the console's overall hardware was a bigger factor. Good to see my theory get some support.
This makes me incredibly curious how developing for the N64 would work today. I remember reading somewhere that Nintendo had newly hired programmers working with the N64 as late as the Wii era to test how capable they were. I always wondered what kind of neat tricks they'd be able to do on the old but familiar hardware now that we have a better understanding of player's needs and how 3D games can work, and also given how games nowadays are considerably better at optimizing computer power.
The RSP is comparable to an x86 SIMD unit. There is a dude (giovannibajo) with a project to port Dragon's Lair to the N64; he claims to have H.264 running on the RSP at 18 FPS.
Lots of hacks and even music ROMs are available, plus some decent homebrew; it can't be that hard
MVG Monday's getting me through lockdown
I always appreciated seeing 30 vehicles on screen at once with no slow down at all on F Zero X...the audio was great but in mono only, and I wonder if that reduction in audio quality was due to prioritising graphics/frame rate.
It was in mono to make it fit into the cartridge. The EXpansion Kit (on the 64DD) has all the music in stereo.
The music is mono in F-Zero X? I never noticed it when playing on a Hi-Fi system...
@@ZinhoMegaman Some Hi-Fi systems have features that can force 'simulated' stereo or surround sound from a console that produces mono.
@@FinancialHealth-ku1ry Then I must play it on my receiver; it always plays mono sound on the center speaker, like when I play Command & Conquer: the sound FX are stereo but the music is mono, so I only hear the music coming from the center channel. Dolby Pro Logic II does the same thing, mono sounds always go to the center speaker.
I remember getting disappointed even back then considering how F-Zero X was so simple in terms of graphics. N64 never truly felt like a generational leap for me.
Also it's the reason why it took so long before we saw accurate low level emulation just a few years ago with AngryLion
And probably why AngryLion still eats your CPU for breakfast.
@@steel5897 They got it a lot faster in a RetroArch core, I think paraLLEl.
On a good CPU you can play most games in realtime with it.
What? I tried CEN64 recently, it works but it's extremely slow, and I have a beefy machine.
@@Sauraen That's another attempt at low-level emulation. Get the Angrylion plugin; there are versions for Project64 and Mupen64Plus.
@@Sauraen The fastest implementation of the Angrylion stuff is in the RetroArch paraLLEl core.
It can run several games in realtime.
1:41 A 5-stage pipeline doesn't mean it can execute 5 instructions at the same time. Based on the typical RISC pipeline, the stages are Instruction Fetch, Instruction Decode, Execute, Memory Access and Write Back. Only Execute really "does something". To run multiple instructions at the same time you need multiple integer or float execution units within the CPU. The number of stages in a pipeline has, on its own, nothing to do with how many instructions can be executed simultaneously.
It's very rare that any gaming youtube channel makes me want to change the notification settings to "all", but you just hit it out of the park so consistently with your videos. Informative and entertaining, with good editing technique that keeps the viewer engaged. Thanks for what you do MVG.
another high quality video, thank you for making all of this great content
When he doesn't make them!? From this point that's all I expect from him.. Mr. Quality Man..
Would love to see your thoughts on programming for things like Saturn and Dreamcast!
In this matter, the Saturn and the Dreamcast are VERY different beasts. The Saturn was notoriously hard to develop for and Sega did everything in its power to avoid doing the same mistake for the Dreamcast, so that 3rd parties would be more keen on developing for their system. The Dreamcast is notoriously EASY to develop for, with a very straightforward, down-to-earth architecture, powerful and easy-to-use API and even DirectX compatibility.
@@Liam3072 I am SO upset the Dreamcast didn't last. Imagine the indie scene that would've blossomed...
Also the shoddy ports on GameCube piss me off. SADX is a MASSIVE downgrade, but now everyone thinks that's how it looked on the Dreamcast.
@@Liam3072 Saturn wasn't hard to develop for. Lazy Western Programmers didn't want to learn it.
poo poo pee pee
@@KOTEBANAROT There's still people creating games for Dreamcast so you don't really have to imagine too hard lol
Regarding the painful 4K texture cache, non-3D games have a way around it, in theory. You can simply render on the CPU and blit pixels to the framebuffer. I think it's called framebuffer poking on other platforms. This is used in a limited fashion in N64 games to render certain elements; the red dot in Pokemon Snap is a fairly famous example. But the only commercial game rendered entirely on the CPU was Namco Museum. A number of homebrew projects attempt this technique, including a Flappy Bird clone and some other stuff. But one of the more interesting projects is 64doom, a port of Doom to the N64 that renders on the CPU and blits directly to the framebuffer. github.com/jnmartin84/64doom
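The core of the trick is nothing more than CPU writes into the buffer the video interface is scanning out (a rough sketch in generic C, assuming a 320x240 16-bit 5/5/5/1 framebuffer; fb_base is a placeholder set elsewhere, not a real fixed address):

#include <stdint.h>

#define SCREEN_W 320
#define SCREEN_H 240

static volatile uint16_t *fb_base;   /* points at the framebuffer in RDRAM that the VI displays */

/* pack 8-bit channels into the 5/5/5/1 layout the display expects */
static uint16_t rgba16(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 3) << 6) | ((b >> 3) << 1) | 1);
}

static void put_pixel(int x, int y, uint16_t color)
{
    if (x >= 0 && x < SCREEN_W && y >= 0 && y < SCREEN_H)
        fb_base[y * SCREEN_W + x] = color;   /* plain CPU write, no RDP involved */
}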
Overall, the N64 suffered from severe bottlenecks. There's no point having fast access to the cartridge or a lot of RAM if you're constantly stalling trying to get to it. This is the root cause of rampant framerate issues on the N64. I remember reading that Factor 5 had to get very creative to render Rogue Squadron without the framerate tanking. Devs had to find ways to render while minimizing fetches from RAM, because fetching anything from RAM destroyed performance.
Doom64 makes a lot of sense being rendered on the CPU since it's a port of the Doom engine from over in PC land, which predates hardware-accelerated 3D graphics (and indeed PCs of the era did not have the hardware-accelerated 2D graphics that consoles did).
Without getting into too much technical detail, Doom did not use triangles for geometry, but was rather built around using a BSP tree to figure out which walls each column of pixels is projected to. This technique has severe limitations (you can only have vertical walls and horizontal ceilings and floors, and only one floor and one ceiling for any given sector - no two floor buildings here.), which is why it was replaced by using triangles once computers were fast enough in the years to come.
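(Very roughly, the per-column math is just a perspective divide: on-screen wall height = world wall height x focal constant / distance. With made-up numbers, a 128-unit-tall wall at distance 256, with a focal constant of 160, projects to 128 x 160 / 256 = 80 pixels tall, and the engine repeats that for every screen column the wall crosses.)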
It would also be difficult to rewrite it to use the N64 GPU. For one thing, Doom textures are generally 64x128 or 128x128, which would not fit in the N64's texture cache, so you'd have to go and break everything down into smaller chunks or else greatly reduce texture resolution, in addition to turning everything into triangles. In the end, it's not even clear that after all this the hardware-accelerated path would be faster!
Besides, the original Doom came out in 1993, and targeted the 486s of the time. N64 had plenty of CPU power by those standards - 3 years later was a looong time for tech advancements in those days. Do a direct engine port, find out it's plenty fast, and call it a day. Heck, Doom actually had a port running on the SNES....
N64 does not have enough CPU power to render anything impressive to the framebuffer. Maybe you can get a couple of 2D layers like Genesis games had.
@@Keldor314 The actual Doom 64 game had nothing to do with the original PC Doom on the tech side; it's a completely different engine and a completely different game as well. It does not render on the CPU either. The one that runs on the CPU is a direct port of PC Doom made by enthusiasts a few years back, and it can't even run at full speed.
@@shiru8bit I was going to ask this exact question. Did it actually have enough power to do anything significant with the framebuffer? Thanks!
@@shiru8bit That's pretty hilarious that you believe such malarkey. So according to you, 5 MBs of VRAM, a 64-bit Address Register, and a Dual 32-bit Data Bus, with 60 MIPS & a 73 MHZ Clockspeed, is not enough CPU power to implement a Framebuffer.
Even if it's nice now that games are quite "uniform", so it matters little which platform you get one on.. there was something special about the vastly different hardware, games and ports - how they could even be completely different games on the other systems. The graphics were also nothing alike between the systems, which made each console feel more unique.
I was 1 of the 2 engineers on Cruis’n USA for the N64. We had no debugger and so we had to print text and play sounds to help us debug code.
This reminds me of the situation with the Atari Jaguar, and subsequently the PS3: a superior console that was way ahead of its time in terms of the technology it was using. The architecture, however, was incredibly difficult to program for, as there wasn't a programming standard or a benchmark to measure it against; 3D technology was still relatively new when the N64/PlayStation were released. The PlayStation, though, had Net Yaroze and a very developer-friendly approach to programming its games.
Don't forget about the Sega Saturn too! It was also hard to develop for..
Same with the Amiga. Only once the ST started to die off did we get Amiga-native games. Look at how bad Xenon 2 looks on the ST vs the Amiga. That was an Amiga-native game ported to the ST.
@@kenrickkahn that's a different story, the main problem with the Saturn was a lack of native 3D rendering, so it was hard to code 3D games but probably the best console for 2D
I have mixed feelings about the PS3's hardware. The Cell CPU was good as doing GPU-esque parallel math tasks, but as a general purpose CPU the 360s 3C/6T Xenon was arguably better. The initial design pitch for the PS3 actually had two Cell CPUs and no GPU. It would have been a return to the old paradigm of software rendering without dedicated 3D acceleration. We can't predict the future, but as it stands I'd consider the PS3 more of a failed divergence than a forward-thinking design.
"...ps3, a superior console which was way ahead of it's time, it terms of the technology it was using"? PS3 has almost the same architecture as the XBox360. XBox360 was released a whole year earlier. Both systems have real SDKs. I don't believe that ist was difficult to develop for them. The existence of indie games should suffice as evidence.
A very interesting and very well documented insight into the hardware. I would say regardless of latency it was still a much more powerful system. The 4KB can be worked around easily, but doing so requires a lot of space on the media format, which N64 cartridges didn't have - and that, alongside the unoptimised SGI microcode and Nintendo's intolerable control over what developers were allowed to do, really shot them in the foot.
On the downside, it would have meant that the overall hardware would've been more expensive than what Sega was offering for their Saturn at $399. Could you imagine how much it would cost to buy an N64 then with a built in disc drive?
It's fascinating looking back at that generation of consoles and how unique each system was, with both the Saturn and N64 only seemingly unlocking their potential right at the end, and it's a bit of a shame we never quite got to see what they really could do.
While on lockdown I'm replaying many from that generation and I have to say the Saturn has outstanding picture quality for the time with its rgb setup and the few games which support its hi-res mode and of course the 4mb ram. The N64 is somewhat disappointing looking back with its forced AA and complete lack of RGB (something luckily we can fix today to make a huge difference). Overall all 3 have some truly amazing and quite unique games and a variety that we often lack in today's AAA space
It's almost like the Saturn designers were afraid of the motion sickness of 3D games and stuck with the 3D (or sprite scaling) that was popular in the arcades at the time, which favoured racing, fighting and on-rails flight.
The gaming industry today is nothing but delays, DLC, politics (I'm not joking), censorship, and long download times.
@@FoxUnitNell I think that was more the limitations of the hardware, mainly due to using quads. That being said, when the Saturn combined its 2D and 3D capabilities we got games like Panzer Dragoon Zwei, which would have been impossible on the PlayStation due to it having to render everything in polygons, whereas the Saturn used its infinite plane engine to draw huge floors and ceilings well beyond what the competition could do.
@@lucasn0tch *yawn* I love retro gaming too but acting like there isn't lots of new amazing games released all the time is such a boring, jaded take.
@@snake-v9t I wasn't implying there aren't exciting games, but I was saying the games always advertised by Sony Interactive Entertainment outside of Asia are just plain unattractive to me. For example, in Taiwan and South Korea, games such as Trails of Cold Steel IV are advertised very heavily, while in Europe and the USA, Western games get heavily promoted. And let's not forget that Sony is the master of forced censorship (cultural Marxism) in games (even M-rated ones!)
7:01 4 kb RAM expansion. Don't you mean 4 Mb RAM expansion ?
Kinda funny that Sony would end up making the same mistake with the PS3, 10 years later.
They stated many times they made it intentionally hard so you could see graphical jumps during the generation. The PS3 was planned as a 10-year console from the beginning. The 360? That thing basically maxed out at Gears of War in 2006; after that there weren't many jumps. On PS3 though? God of War, Killzone, Beyond: Two Souls, Gran Turismo... it was leagues above anything I've seen on 360.
Odd, it sold the most games...?
@@Axtmoerder The 360 had plenty of great looking games.
Halo 3, Halo Reach, Fable II, Forza motorsport 3, Gears of war 1,2,3.
To name a few.
@@Axtmoerder at the cost of gameplay
@@Axtmoerder did you play Halo 4? Halo 4 blew my mind on the 360. It looks insanely good
I really enjoyed this. I'd love a follow-up deeper dive into some of the optimisations Rare/Factor5/etc used and perhaps how today's retro-coders are using modern techniques to make the most from the hardware.
>perhaps how today's retro-coders are using modern techniques to make the most from the hardware.
There isn't much N64 homebrew stuff these days
I'd love to see an interview of Factor5 covering all their work. The N64 would be just one chapter of their history.
@@christuckwell3185 they did a few really interesting interviews for ign
@@christuckwell3185 yeah there's a good interview out there about Factor 5's cancelled Dragon flying game for the PS3. I enjoy reading interviews about failed projects and development hell. You learn more from them :)
Ah, nothing like getting up too early for work and seeing a new video uploaded 28 seconds ago! Cheers and thank you for continuing to put out content when we need it the most!
Sony be like "So we have this PS3 architecture, 8 cores and a whopping 512MB/256 of shared RAM, and here is the manual *hands bible* good luck"
Devs: What the hell do we do with this shit, HOW DO I FREAKING CODE!
*7 years later*
Devs: finally I get it now
Sony: introducing the PS4, an x86 system that will be good for developers, because we put them through hell with the PS3
Devs: *cries in happiness*
Also devs: *cries in sadness* all those wasted years were for NOTHING!
The Xbox 360 was originally planned with 256MB of RAM, but Epic Games convinced Microsoft to go with 512MB for Gears of War.
Gears looked incredible when it came out in 2006, and it didn't need 1GB of RAM for that.
Microsoft had to choose between 512MB of RAM or an HDD by default.
Funny, because the N64 was going to be a CD-based system and Nintendo had Sony help design it. But Nintendo decided to scrap that for cartridges, and Sony took the idea and what they had developed to make the PlayStation.
Interesting how two generations later Sony made a very similar mistake
You mean one generation later. The PS2 had little VRAM, and clever developers had to rapidly swap in from main RAM. Between that and the VPUs, it was arguably even more challenging to wring the most out of than the N64... the only console harder to develop for (at least for 3D) was the Saturn. You needed someone at Jon Burton levels to get good 3D performance out of the Saturn.
The PS2 sacrificed capacity for high-bandwidth eDRAM... it was a gamble that did pay off, even with the sacrifices.
@@MrInuhanyou123 That's the point, there were tradeoffs similar to the N64. EDRAM/ESRAM can be good if it's used in addition to a powerful conventional configuration (Xbox 360) but it can be a detriment if it is used to make up for other shortcomings (Xbox One / DDR3). If you were a very talented developer without major time or cash constraints you could make the PS2 sing pretty good but not as good as a more conventional Xbox (original). Even the older Dreamcast with its more conventional layout (and less total RAM) often outshone the more powerful (and more expensive) PS2 when it came to textures and AA. Even early titles like Soul Calibur looked quite good. The PS2's main advantage was that (again in the right hands) it could push a lot of polygons. Not as many as they tried to claim initially, of course... in an actual game with advanced special effects, decent textures, AA, etc the actual triangle count wasn't that epic. Meanwhile Sega actually underrated the DC. Developers were wringing out 5-6 million polys even with all the bells and whistles turned on.
Don't get me wrong, the PS2 was still overall more powerful, and could do very well with the right developer (especially late in its lifespan)... but I think they would have had better results with a more conventional layout, especially for early games and smaller developers. Side note: My favorite example of "help I'm out of VRAM!" was the port of Grandia II to PS2... they had to turn stuff off when you used a video-stream-overlay spell.
@@AlexvrbX xbox one used esram which is not as fast as traditional edram for higher capacity. Not exactly same situation. I think ps3s unique traits is more applicable to the n64 comparison since it truly hobbled multiplatform development( of course not to as drastic as n64 which barred even making games of specific types)
@@MrInuhanyou123 It was still quite fast, and provided a large bandwidth boost. Had they used it in conjunction with GDDR it would have been purely an asset. As it was, they were using it to help make up for the shortcomings of a DDR3 setup. It was barely good enough, and required a TON of effort on the part of devs to reach the kind of bandwidth you got out of the PS4's GDDR - and even then it still could fall short, depending on the game/engine in question. Of course it didn't have enough raw power either, since they went with the smaller GPU. They fixed their shortcomings with the One X.
PS3 is actually a more conventional setup in terms of graphics and memory bandwidth. The Xbox 360 was actually more radical with the daughter die and EDRAM, plus unified main/vram. The CPU side of things is a different story, and Cell was definitely a handicap. Although the PPC chip in the Xbox 360 was pretty strange too compared to the original Xbox. It was a triple core hyperthreaded PPC with custom VMX vector processors, and it was an in-order CPU. Not as bad as Cell, but still harder to harness fully. Anyway, that's all getting off topic - the video and general conversation was mostly revolving around graphics and associated RAM / bandwidth. In which case, the 360 was the strange setup.
Side note: I wonder if they aren't making a similar mistake with the next gen consoles, not packing in enough RAM and leaning on the SSD to stream in hardware-decompressed assets. I guess since both MS and Sony are taking a similar approach, it won't matter. Devs will just have to deal with it. Either way this might benefit PC games in the long term, finally force devs to require decently-fast SSDs and actually tax the drives a bit.
Good video! Tons of knowledge. Who knew the N64 was like that. I knew the Saturn was hard to program for, or get the most out of, but I didn't know the N64 had those issues.
I absolutely love these videos. No fluff, and all the information delivered in an approachable manner yet never dumbed down.
no fluff is seriously understated, more content needs to be straightforward like this
I am not sure if it is possible, but I would love to hear you talk with Julian Eggebrecht, the former president of Factor 5, about Nintendo development on the N64 and GameCube.
yes that is my dream!
That would be amazing!
Games from Rare and Factor 5 always pushed the N64 to its limits and sometimes beyond them; having said that, they were the best games on the system IMO.
I own Diddy Kong Racing and can confirm. Rareware games look AMAZING on the N64. Even to this day it still holds up quite well. I need to find more Rare games
I remember noticing right away that DKR karts showed the wheels actually steering and the characters looked a lot more "real." Aged 10 I didn't know how any of this worked in the slightest, but we all _knew_ DKR looked way better than Mario Kart.
It has a more independent feel to it; it's more advanced than SMK for sure, and just more fun.
Pretty sure the karts in DKR were pre-rendered 2D sprites for each angle, that would be why they looked so good
God I love your channel, so many good memories with N64 in 1998 when i was 16. Banjo-Kazooie was as a true masterpiece
I don't know anything about game development or programming, still I love these videos. Very entertaining.
MVG: N64 was hard to code on
Sony: *Laughs in playstation 3*
Sega: Laughs in Sega Saturn
Atari: Laughs in Atari Jaguar
@randomguy8196 I would really like to see one on ps2 as well. The spectrum of graphical and performance quality was huge on that system too across games.
@randomguy8196G Damn it, I hate vector processors!
I think the game industry is going through the same cycle the movie industry is going through right now, the 90s, and the 70s. Games and dev teams have bloated to the point that it is unsustainable. It is the fault of publicly traded companies of course. The push to replace veteran devs with cheaper workers, the push to increase headcount instead of pushing out deadlines. I think that new dev tools will empower devs to make slightly smaller, more tailored experiences using a fraction of the people. Much like the movie industry goes in cycles from making big blockbusters, to creator driven smaller movies, and back to blockbusters, etc.
the technical understanding is way over my head, but i no longer feel crazy for thinking n64 games were just blurry and slow as a kid.
A 5-step pipeline means it can break processing into 5 steps and possibly overlap more work because of the pipeline, but it is still limited to 1 instruction per clock cycle max; superscalar would be two per clock like many modern processors. What made the N64 processor so fast was its ability to also execute 32-bit code, which used less memory bandwidth and had faster-executing instructions. Also, a 5-step pipeline is less pipelining than many processors of that period, which means that branch instructions could execute faster and an empty pipeline filled faster.
MIPS has no empty pipeline. Or does it? Load data needs stage 4 and 5 after address calculation in 3. Branch calculates the address of the next instruction and fetch would happen in 4.
Explaining pipelining as running multiple instructions at once is quite an unfortunate choice in my opinion.
While technically true, it suggests the false idea of executing multiple instructions per clock. That is not the case. Pipelining splits instructions that would take too much time to process at once (forcing you to use lower clock speed) into smaller chunks (5 in this case) that complete much faster. But all that means is that you don't need to wait for instruction A to finish before starting instruction B, you can start B as soon as block 2 of A is executing (and C as soon as blocks 3 of A and 2 of B start). You can't start A and B at the same time.
And you must also make sure that A and B do not depend on each other, or you'll stall the pipeline
@@halofreak1990 Not even that on these old MIPS CPUs, no interlocks meant it was on the programmer to avoid dependencies in branch delay slots and load delay slots.
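To put rough numbers on what pipelining buys (and doesn't buy) you, here's a tiny back-of-the-envelope sketch in plain C - nothing N64-specific, just the standard k-stage model with no stalls:

```c
#include <stdio.h>

/* Rough model: a k-stage pipeline with no stalls finishes n instructions
 * in (k - 1) + n cycles -- the first instruction takes k cycles to travel
 * through, then one instruction completes per cycle after that.
 * An unpipelined machine at the same clock would need k * n cycles. */
static unsigned long pipelined_cycles(unsigned k, unsigned long n)
{
    return (k - 1) + n;
}

int main(void)
{
    const unsigned k = 5;              /* 5-stage pipeline, as on the VR4300 */
    const unsigned long n = 1000000;   /* one million instructions */

    printf("pipelined:   %lu cycles\n", pipelined_cycles(k, n)); /* 1000004 */
    printf("unpipelined: %lu cycles\n", (unsigned long)k * n);   /* 5000000 */
    return 0;
}
```

So throughput approaches one instruction per cycle, but never exceeds it - the speedup comes from overlap, not from starting five instructions at once.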
N64 is VLIW Latency Language exactly like Saturn. But the difference being, NECVR4300 is a single Wafer CPU Chip, SH-2 is a Double Wafer Chip Pair.
@@segaunited3855 RISC, not VLIW.
@@espfusion VLIW(Very Long Instruction Word) is Latency Language or Assembly Language writing. RISC(Reduced Instruction Set Computing) is an ISA(Instruction Set Architecture) . Saturn's ISA is RISC. N64's is MIPS(Million Instructions Per Second), but both use VLIW Latency Language for Programming.
N64 is capable of up to 60 MIPS Computing Instructions. Saturn is maxed out at 50 MIPS Computing Instructions.
All 5th gen consoles used C and assembly language. It was just easier to do on the PS1.
Third party developers didn't want to create games for cartridges, because they were much more expensive and couldn't hold nearly as much data. The CD-based PS1 was much more attractive to developers, because CDs were much better than cartridges. It also didn't help that the PS1 sold 100,000,000 units, while the N64 only sold around 30,000,000.
It wasn't as hard as you think. I worked at Iguana Austin/Acclaim on the N64, on a number of Acclaim titles.
Yes, there were some issues programming the RDP & RSP, but Nintendo rarely released the microcode source code for it. We did get it, but made few modifications.
There were issues with RAMBUS; it was really terrible at random access. As long as you kept your data close by, it was OK. It was also bank-based, so it was faster to make sure that code was in a separate bank from gfx, and gfx in a separate bank from the frame buffers. It wasn't easy to do, since you only had 4 banks to choose from; the extra 4M made this easier (and so is another reason games were faster).
I much preferred cartridge, it was faster and easier. It was also possible to DMA directly from the cartridge, this made it quick. I added a VM system for it, to allow direct mapping of cartridge memory to main memory (stored in a compressed format, decompressed on need).
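Purely as an illustration of the kind of VM scheme described above, a demand-paged cartridge cache might look roughly like this - all names, the page size, and the cache size are invented for the sketch, this is not the actual code:

```c
/* Hedged sketch of a demand-paged "cartridge VM": the game asks for a
 * virtual cartridge offset, and the pager makes sure the (compressed)
 * page it lives in has been fetched and decompressed into a small RAM
 * cache. Everything here is made up for illustration. */
#include <stdint.h>

#define PAGE_SIZE 4096
#define NUM_SLOTS 32                    /* 128 KB of RAM spent on the page cache */

typedef struct {
    int32_t page;                       /* which cart page lives here, -1 = empty */
    uint8_t data[PAGE_SIZE];
} Slot;

static Slot     slots[NUM_SLOTS];
static unsigned next_victim;            /* trivial round-robin eviction */

/* Provided elsewhere: DMA one compressed page from ROM and decompress it
 * into dst. On real hardware this would be a PI DMA plus the decompressor. */
extern void fetch_and_decompress_page(int32_t page, uint8_t *dst);

void vm_init(void)
{
    for (unsigned i = 0; i < NUM_SLOTS; i++)
        slots[i].page = -1;
}

/* Return a pointer to the byte at a given "virtual" cartridge offset,
 * pulling its page into RAM first if it isn't already cached. */
const uint8_t *vm_read(uint32_t cart_offset)
{
    int32_t  page   = (int32_t)(cart_offset / PAGE_SIZE);
    uint32_t offset = cart_offset % PAGE_SIZE;

    for (unsigned i = 0; i < NUM_SLOTS; i++)            /* already resident? */
        if (slots[i].page == page)
            return slots[i].data + offset;

    Slot *s = &slots[next_victim];                      /* miss: reuse a slot */
    next_victim = (next_victim + 1) % NUM_SLOTS;
    fetch_and_decompress_page(page, s->data);
    s->page = page;
    return s->data + offset;
}
```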
The 4K texture cache was not easy to handle. You seem to give the impression that there is direct access from cartridge to the RDP. This is not possible on the machine. So, as you mention, the maximum size was 4K. QBC98 used 2 textures for the player models.
I do not recall Turok 2 DMAing directly from cartridge into RDP memory. I believe most of the data was compressed on ROM. It's more likely it DMA'd from ROM -> RAM, then DMA'd from RAM -> texture memory.
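That two-hop path would look roughly like this; the helper functions below are stand-ins, not the real libultra calls:

```c
/* Hedged sketch of the usual N64 texture path: cartridge ROM is not
 * directly visible to the RDP's 4 KB TMEM, so textures go ROM -> RDRAM
 * (decompressing on the way if needed) and only then RDRAM -> TMEM.
 * The two extern helpers are hypothetical stand-ins. */
extern void pi_dma_rom_to_ram(unsigned long rom_addr, void *ram_dst, unsigned len);
extern void rdp_load_texture_to_tmem(const void *ram_src, unsigned len);

#define TMEM_SIZE 4096

void use_texture(unsigned long rom_addr, void *staging_ram, unsigned len)
{
    /* Hop 1: pull the texture (or a compressed block containing it)
     * out of the cartridge into main RDRAM. */
    pi_dma_rom_to_ram(rom_addr, staging_ram, len);

    /* ...decompress in place here if the ROM copy was compressed... */

    /* Hop 2: the RDP can only sample from TMEM, so the tile about to be
     * drawn must fit in (and be loaded into) 4 KB. */
    if (len <= TMEM_SIZE)
        rdp_load_texture_to_tmem(staging_ram, len);
}
```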
BTW, it's a 4MB RAM expansion, and it was used extensively by a number of Acclaim products after about 1997 (about 24 or so). We also used the expansion memory to prevent early copying of the game. The versions released to reviewers MUST have the extra 4M, indeed, the code was loaded high up at the top of the 4M expansion to delay hacking. We also used this 4M RAM expansion to decompress a lot during game boot time (legal & logos screen). The extra 4M made the games run faster, as we could cache more of the VM accesses.
We also used the 4M expansion RAM to increase frame buffer sizes on some games. I *believe* (but memory is not clear) that Turok 2, ASB99, and QBC99 all use higher resolution buffers with the extra 4M: 512x360 instead of 320x240. The VDP hardware was awesome. The scanout to display produced an amazing image. It would blend two scan-lines together in a funky way to make it flicker less. It was SUPER flexible, which is why 512x360 worked so well (hence Acclaim's DUMB AS ALL HELL "HiRez"(tm) marketing hype).
A unified memory model was very well received by us. We could choose ourselves how much memory was split between audio, gfx and code.
The build/debug environment from Nintendo was absolutely crap, I mean really crap. Basically, GCC, makefiles and gdb. They were also restricted to SGI machines, but I believe they sorted this out later. We typically used the SN Systems/Cross Products development environment which was actually built by real game developers.
Thanks for this comment, it gives great insight.
Wonderful comment, thank you
That's all interesting stuff. Thanks, Brian.
@One Billion Caring Mums what
Were the games being on cartridge a problem for the overall development of the game? Was the limited cartridge space an actual issue?
1:43 - A 5-step instruction pipeline means EACH instruction takes 5 steps through the CPU, one per clock cycle, to complete the machine code instruction. This is NOT the parallel processing you'd get from multiple cores. You complete 1 instruction per clock cycle, with 4 more instructions in flight behind it.
Pipelining is used to increase processor clock speed by splitting more complex instructions into multiple steps.
For comparison, the Pentium 4, released in 2000, had a 20-stage pipeline to increase nominal processor clock speed (GHz). Long pipelines can be a major weakness: if you choose to go left instead of right (branching), the Pentium 4 might need to throw away the instructions already in flight, and then you'd have to wait 20 clock cycles for the correct instruction to be processed through the whole pipeline.
AMD focused on performance over marketing, and Intel gave up on the P4's giant 20-stage pipeline in 2007 (high heat, mixed performance), dropping down to 14+ stages in current processors.
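As a rough feel for why long pipelines hurt on branches, the usual back-of-the-envelope estimate is effective CPI = base CPI + (branch fraction x mispredict rate x penalty). The numbers below are illustrative guesses, not measured Pentium 4 figures:

```c
#include <stdio.h>

/* effective CPI ~= base CPI + branch_fraction * mispredict_rate * penalty.
 * All numbers are illustrative, not measured from real hardware. */
int main(void)
{
    double base_cpi   = 1.0;   /* ideal: one instruction per cycle       */
    double branches   = 0.20;  /* ~1 in 5 instructions is a branch       */
    double mispredict = 0.10;  /* 90% prediction accuracy                */
    double penalty    = 19.0;  /* ~pipeline depth minus one, in cycles   */

    double cpi = base_cpi + branches * mispredict * penalty;
    printf("effective CPI ~ %.2f\n", cpi);   /* ~1.38, i.e. ~38% slower than ideal */
    return 0;
}
```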
Imagine if somehow it had 64K of cache and 64MB of RAM. Like a N64+ that came out a few years after initial release. That would be mind blowing for the consumer with high potential textures, and larger levels. But a nightmare for developers to make a game that plays on both consoles with 2 texture packs.
It would still have the laughable 4MB cartridge size, and the compressed-to-hell sound that's also dependent on the CPU... unlike even the PS1, which had an impressive sound chip and didn't waste CPU cycles on sound.
@@aceofspades001 With large enough RAM you can just stream the sound from it at decent quality, so it would be possible to pre-generate the music at, say, a level transition. This would waste barely any CPU cycles.
(Yes, I am aware of how absolutely batshit insane this solution sounds.)
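For what it's worth, the bare-bones version of that idea is just feeding the audio hardware small chunks out of one big pre-rendered PCM buffer; the callback API below is a stand-in, not the actual N64 audio library:

```c
/* Hedged sketch: pre-rendered PCM sits in a large RAM buffer and a small
 * chunk is handed to the audio hardware each time it asks for more.
 * audio_submit() is a hypothetical stand-in for the real audio API. */
#include <stddef.h>
#include <stdint.h>

extern void audio_submit(const int16_t *samples, size_t count);

#define CHUNK_SAMPLES 1024

static const int16_t *music;       /* pre-generated at level load   */
static size_t         music_len;   /* total samples                 */
static size_t         play_pos;    /* current read position         */

/* Called from the audio interrupt / "buffer empty" callback. */
void on_audio_buffer_needed(void)
{
    size_t remaining = music_len - play_pos;
    size_t count = remaining < CHUNK_SAMPLES ? remaining : CHUNK_SAMPLES;

    audio_submit(music + play_pos, count);
    play_pos += count;
    if (play_pos >= music_len)
        play_pos = 0;               /* loop the track */
}
```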
9:32 I never had an N64, but that sure looks familiar...
Midway Games: "Hey man can I copy your homework?"
Polyphony Digital: "Fine, just change it up a little so it doesn't look like you copied."
Boss Game Studios made the game; Midway, that trash company, did not have the technical skill in their internal studios to ever pull off a game like that. A lot of Boss Game Studios devs went and joined Turn 10 and made the original Forza.
I was going to eat right now and had no idea what video to watch during lunchtime, and all of a sudden, the notification popped up! Thanks!
nintendo 64= wow, i am so complex
sega saturn = hold my beer
From Wikipedia: "The Saturn has a dual-CPU architecture and eight processors." 😲
I really love your channel. I'm an electronic engineer but I'm not that knowledgeable of programming. I know the basics but not much more than that. It's fascinating to learn about how these games are developed.
The thing that baffles me the most is Nintendo not opening up the microcode features and documentation earlier to third party developers. How on earth could they think gating off a core and novel feature to the developers making their console's image was ever a good idea?
*WHEN IS NINTENDO GOING TO RELEASE THE N64 CLASSIC?*
I'd like to know lol
never
@@r-ea why not? Does Nintendo hate money?
@Michael W. I hope so. I grew up with N64 and it's still my favorite console of all time!
I remember playing Mario 64 on Christmas Day 1996, what an impression that left on me! Your vids are so insightful, brilliant man!
MVG, you have made several mistakes since minute 6. It is mandatory for the RCP to use the TMEM to paint the textures and to use the bilinear filter. The RCP cannot use RAM or cartridge memory for textures, and they are also too slow to apply effects to them. What they did in Turok 2 to use 64x64 pixel textures is to use 4-bit textures with a color palette, or grayscale textures that were tinted by the RCP based on the base color of each polygon. But this was not exclusive to Turok 2; all games at some point use tinted grayscale textures for specific polygons that only needed one color, for example some doors in Ocarina of Time. Even grass and dirt effects were achieved with the same texture painted in green or brown.
The RDRAM latency is not that high compared to other chips of the time; the EDO RAM of the 3dfx Voodoo has equal or greater latency. The problem was a very narrow 9-bit bus and an over-exploited memory that was used to store code, polygons, textures, z-buffer, framebuffer and much more, making the 500 MB/s of bandwidth very limiting.
Edit: About textures, with 4KB of TMEM you can use textures up to 48x48 in 16-bit color, and 90x90 in 4-bit grayscale (tinted).
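If anyone wants to sanity-check those texel budgets, it's roughly 4096 bytes divided by bytes per texel (real TMEM also has banking rules, and CI formats reserve the upper half for the palette - the sketch below just does the arithmetic):

```c
#include <math.h>
#include <stdio.h>

/* Rough texel budget for a 4096-byte TMEM, ignoring banking/palette details. */
static void budget(const char *fmt, double bytes_per_texel, unsigned tmem_bytes)
{
    double texels = tmem_bytes / bytes_per_texel;
    printf("%-18s %5.0f texels -> ~%2.0f x %2.0f square\n",
           fmt, texels, floor(sqrt(texels)), floor(sqrt(texels)));
}

int main(void)
{
    budget("16-bit RGBA",     2.0, 4096);   /* 2048 texels, ~45x45 (or 64x32)  */
    budget("8-bit CI",        1.0, 2048);   /* palette takes the upper 2 KB    */
    budget("4-bit CI",        0.5, 2048);   /* 4096 texels, 64x64              */
    budget("4-bit intensity", 0.5, 4096);   /* 8192 texels, ~90x90             */
    return 0;
}
```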
I don't think everyone knows everything about turok 2
This is news to me. I heard the Saturn was difficult because it used quadrilaterals instead of triangles to create models, had two (i wanna say memory buses but maybe they were processors) and the console could not render true transparencies.
I don't fully understand all the technical jibba-jabba in some of these videos, but I love em more than others.
This is massively interesting. Thanks for sharing all this N64 info with us.
I was blown away the first time I saw Mario 64.
Hey mate, loving your content - just seems to get better and better.
Love of gaming, vast knowledge of console history, great editing, and, most impressive of all: understanding and sharing of underlying coding in such... very happy to subscribe 😃
Its main handicap was the business decision to stick with the cartridge format. No getting away from that.
Let's not forget that back in the day CRT TVs added their own blurriness to the image, and while this was beneficial for the PSX's and Saturn's unfiltered textures, on the N64 it meant adding blur to an already blurred image... not a big deal.
CRT televisions don't add blur, it was caused by composite and RF video signals muxing the chroma and luma signals. This caused signal degradation that resulted in blurry video and other artifacts depending on the cable quality and length of the cable. You can also bypass the tuner entirely on more advanced CRT TVs with "jungle" chips, where you can tap into the RGB signals directly to bypass any filtering done on the tuner (comb, pull down, etc.)
S-Video, Component and RGB SCART gave far better video output. I've modded my Sega consoles to output S-Video to my 1992 Sony Trinitron CRT television, and I get a crisp picture with absolutely no blur whatsoever.
@@GGigabiteM Hehe, don't get me wrong, I am still a big fan of CRT technology, it still has its upsides compared to LCD. Maybe it is more correct to say that they added their own degree of smoothness to the image (calling it blurriness is a bit too much), but yes, the smoothness is definitely there, even when you use a good output like SCART (I always played with SCART, I never used an RF output). This was due to the technology itself (a CRT doesn't really have pixels) and due to the quality of most CRT TVs, which wasn't even comparable to the quality of a decent CRT monitor...
nah, psx looked jagged and sharp on CRT and N64 looked like drunk vision, I really hate this blurry AA/texturing method
@@MrSapps Nonetheless the smoothing effect of CRT televisions is well known; as a matter of fact MAME and other emulation programs offer filters to make games look like they used to on old CRT TVs even when you play on an LCD. But yes, N64 graphics were a blur-fest; I never liked it.
That only affected the N64, which had no RGB out... usual Nintendo crap, just like today :(
PS1 and Saturn had RGB, of course...
One word: Cartridges. They were expensive and even the largest ones (64MB) had less than 1/10 the storage capacity of CDs (660MB), so no one wanted to deal with them.
ikr? I mean what the hell was N thinking
@@Killaswarm N64 cartridge capacity was often listed in bits, not bytes. There's 8 bits to a byte, so 256 Megabits was only 32 Megabytes.
Someone mentioned Crash, this vid was very interesting: th-cam.com/video/izxXGuVL21o/w-d-xo.html
but the non-existent loading times were AMAZING
I love learning about older consoles, most modern consoles are just cut down PC's.
Only ps4 and xbone are cut down pcs
True. Literally nobody is doing custom chips now. The closest you get is people using FPGAs for music purposes such as generating alias-free oscillators instead of using DSPs with static sample rates.
@@caliptus85
Right, Switch is a cut down tablet instead.
@@kekeke8988 it is and I freaking love it
As boring as that sounds, I'm sure most developers are thankful for that
Most of what I learned about the basic components of PCs came from a book on building PCs that my aunt got me at Barnes & Noble when I was five (so 2005~2006). I recall there was a chapter on Rambus RAM, and that it was super expensive and probably on its way out. They recommended the then-new DDR2 instead.
Haven't heard of Rambus since, until today.
I fondly remember N64. We had one on display in the computer shop I was working back then. Ocarina of time music still plays in my head!
They really learned their lesson with gamecube. That thing was a great console.
World Driver Championship doesn't run at 60FPS. It manages a respectable 30FPS, but not 60. Also, the 640x480 mode is a widescreen letterboxed mode, not fullscreen, so the effective resolution of the 3D scene rendered on screen is not actually 640x480 but less than that, probably more along the lines of 640x240. Nevertheless, this game deserves praise for pushing much higher numbers of polygons per second than any other game on the system, while still running at that respectable 30FPS.
That game sucked. There were far better playing racers on the system pushing far more ambitious tracks and number of vehicles. Gran Turismo had the same problem. It had heavily restrictive narrow tubes for tracks in order to put all of the processing power on making the cars look good. It is the equivalent of praising the level of detail on Tekken's characters compared to the models in Spyro the Dragon. As far as pushing the technology towards actually making the gameplay better F-Zero X, the 3 San Francisco Rush games, Diddy Kong Racing, Wave Race 64, and plenty of other racers are more impressive.
Looking at how they went with the N64 and the decision to use mini-DVDs on the GCN makes me wonder how a company that made so many great games could screw themselves this much with their hardware. They never went for peak power again after the GameCube.
It's the same thing with Wii to Wii U transition. It just made no sense. They already had a winning formula of motion controls, all they had to do with the successor was to improve graphics, improve online and improve motion controls. Instead they came with a clumsy tablet controller and even worse online.
@@Dodolfo1 Well, Nintendo couldn't have cared less about specs after how successful the Wii was.
The fact that the Wii U managed to have a worse CPU than a 2005 console (Xbox 360) tells you something.
They went with PowerPC in order to have backward compatibility with the Wii, but, well, going with IBM was a good choice.
Love this stuff. Helps make sense of things I went through without knowing why when i was growing up with these systems.
I liked your video :)
Definitely a comprehensive overview of the basics for some of the challenges.
Good to see people are still interested in this topic!
6:58 i guess you mean 4mb, not 4 kb.
Indeed 4kb wouldn't be too much 😄
@John N64 can only address up to like 16mb of ram iirc, even then it wouldn't be very effective
@@jiijijjijiijiij this is why decompilation projects for N64 games have such a fascinating potential. Once decompiled source code is available for a game it can modified to use additional memory, higher CPU speeds, ported to other platforms, make fan-modding easier, etc :)
It's 4 KB. This is enough for a single 48x48 texture with 8-bit color + a palette and mipmaps, or else a single 64x64 texture with 4 bits of color (with palette and mipmaps).
There were also higher bit depth textures, but you'd have to decrease resolution even more.
Quite painful, really.
@@Keldor314 the memory expansion pack is 4mb mister wiseass
I love the "blurry" textures of the N64 (among many other things). That very particular aesthetic, achieved with large polygons, smoothed textures and edges, and in some cases even fog, is what makes the N64 a console with original and totally distinguishable graphics. I think there's a certain "magic" in it. What many criticize and see as a limitation is, to me, one of the things I love most about the N64 and what makes it unique and unmatched.
(From what I can tell from a cursory scan of the documentation) One big issue with the texture cache is that it wasn't actually a cache, but a dedicated memory space. This means that the program had to manage copying in textures as needed "by hand", and in general, you'd have to copy the entire texture even if only a couple of texels are actually used. Also, the texture unit could ONLY access that 4KB texture memory. If you wanted a texture, you had to copy it over there.
What this ultimately meant is that you would have to sort all your triangles by texture when you rendered them, and of course, the more textures you used in a scene, the more time would be spent copying textures and not drawing pixels.
Partial transparency/alpha was also made more difficult by this arrangement, since rendering it correctly requires that the (transparent) triangles are rendered starting with the ones furthest away and rendering the nearer transparent triangles blended on top of them. Remember how we also wanted to render the triangles sorted by texture to avoid thrashing texture memory? Uh oh! Thankfully textures with pixels that are either opaque or else completely transparent (such as trees) don't need to be rendered in back to front order - the z-buffer can take care of this case.
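In practice that means two sorts per frame, something like this - a generic C sketch rather than actual RSP/RDP command building:

```c
#include <stdlib.h>

/* Hedged sketch: opaque triangles get bucketed by texture so each texture
 * is loaded into TMEM once; transparent triangles are instead sorted
 * back-to-front so blending composites correctly. */
typedef struct {
    int    texture_id;
    float  depth;        /* distance from camera */
    int    transparent;  /* 0 = opaque, 1 = needs blending */
} Tri;

static int by_texture(const void *a, const void *b)
{
    return ((const Tri *)a)->texture_id - ((const Tri *)b)->texture_id;
}

static int back_to_front(const void *a, const void *b)
{
    float da = ((const Tri *)a)->depth, db = ((const Tri *)b)->depth;
    return (da < db) - (da > db);      /* farther triangles first */
}

void sort_for_render(Tri *opaque, size_t n_opaque, Tri *transp, size_t n_transp)
{
    qsort(opaque, n_opaque, sizeof(Tri), by_texture);     /* minimise TMEM reloads */
    qsort(transp, n_transp, sizeof(Tri), back_to_front);  /* correct blending order */
}
```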
Aaaauuhhrrgggg mah gawd, the stupidity! I would try to change jobs if faced with something like that.
Imagine if it was a proper cache with weighted eviction and everything... managed by the hardware itself.
The texture cache issue was debunked by homebrewers 2 years ago. The N64 used 4 dies and had 32KBs on each. But it required a dual thread with the NEC VR4300 CPU and SGI RCP graphics chip, and MANY developers weren't used to that method. So they just resorted to storage capacity on the cartridges and relied on the N64's EXTREMELY primitive software tools, including Open GL and Run Scope. The Open GL tool for N64 was limited to just 4KBs of fill rate without use of the main assets of the SGI RCP.
There were some really impressive games on the N64 like Shadow Man, Forsaken and Star Wars: Episode 1 Racer. And the titles by Factor 5 were also really great from a technical standpoint.
A 5-stage pipeline does not imply it can do 5 instructions at once, but rather 5 instructions each at a different fifth of completion. Maybe it's being confused with superscalarity, which allows the pipeline to handle two instructions per stage at a time.
Never liked the blurry N64 look. It was supposedly a step forward that looked like a step backwards to me. Large cubes everywhere with blurry textures seen through vaseline - good for Mario, I guess, but what else? Few fond memories compared to PS1, though I knew legions who swore by Goldeneye.
I prefer the solidity of N64 games to the wobbly polygons and warping textures of the PS1, which isn't to say I never enjoyed games like Spyro. I guess it's a matter of what you grew up with.
Saturn for life
GoldenEye is an unplayable mess, I don't know what people see in it. People that love it maybe never played Doom, Duke 3D, Quake 2, or even Wolfenstein 3D?
@@pelgervampireduck I played all of those games and GoldenEye is superior in every way. Better level design, mission objectives, location-specific damage, larger enemy move sets, and an open world feel that is unmatched by most of today's games. The game was and still is one of the most ambitious designs for an FPS. Enemies aren't stuck in their zones, they can literally go anywhere and they are all doing it at once. An empty room could be where a major shootout takes place the next time you play through the game. It makes the game feel very dynamic. The weapon variety and the persistent, lingering explosions allow you to do some really cool strategies as well. The combat has a near flawless rhythm to it, almost like a side-scrolling beat 'em up like Final Fight, with you juggling an overwhelming number of enemies at once. The other games you mentioned are just incredibly primitive compared to it, and even more so when compared to the sequel-in-every-way-but-name Perfect Dark. In what delusional fantasy world is it "an unplayable mess"?
David Aitken Well said, I've been replaying GoldenEye recently and it's still an amazing game. You just never know what the AI is going to do next; no two playthroughs of a level are the same because they're always doing something different, even in the confines of a train carriage!
It’s always sad to see the controls criticised too, the default control scheme is basically identical to Resident Evil 4, but with the added bonus of also being able to shoot while moving.
The Body Harvest team said it was an easy process.
Hey MVG, did you ever create or help develop homebrews for the N64? I'd be curious to know or see you try to have one working from source code in one of your videos.
Please create a playlist of videos like this. 10/10 work man.
Just makes you appreciate how good PS1's design and architecture was built several years before N64. It was the right machine at the right time at the right price: for consumers, developers and publishers.
Sony in 1997: Let's tell those N64 players that cartridges can go bye bye.
Square Enix/PlayAsia in 2019: *releases FFVII/FFVIII on cartridge*
@@NB-1 I know, I know 😅
CDs were 700-850 MB, depending on what mode it was pressed in, while N64 cartridges were only 64 MB (pun intended, and it's megabytes, not gigabytes). Nowadays, the largest optical disc for the consumer market is the Blu-ray Disc Quad Layer, which is only sold in Japan and can hold 128 GB of data. Meanwhile, 1 TB SD cards are available for sale, and can be used on Smartphones, Cameras, and the Nintendo Switch. Also, the Switch's Game Cards are read-only, region-free, and can hold up to 64 GB of storage (as of the last time I checked).
@@lucasn0tch old school carts could be directly memory mapped by the system and had comparable access times to RAM, Switch carts and SD cards are much slower relative to the memory used in modern systems, so everything needs to be copied to RAM before it's accessible at a decent speed. That's one of the reasons the PS5's SSD is so exciting, it's the closest thing to accessing game data from a cartridge we've seen in decades.
The Switch uses game cards, not carts.