Hope you all enjoyed this video. I'll be throwing all this hardware into a Socket 423-specific PC next week, where we'll investigate: modern-day usage, more operating systems, general compatibility, etc. I felt this would be better suited to another video, as this one was already about as packed full of data as it can be (yes, it took a long time to benchmark these systems, even the original Xbox), so if you want more Socket 423 action please let me know down in the comments below, as I can't wait to explore this forgotten platform more.
Is it just me, or is it a Pentium 4 strapped to a silicon wafer to fit the old pinouts? It looks practically identical to the later versions if you exclude that adapter-like bit.
I have never seen one, only read a little about it. Here is something to look out for: the "Socket 423 to 478 Upgrade Adapter" www.amazon.com/Upgradeware-Upgrade-Adapter-Converter-Powerleap/dp/B004HAXLUU
I had so many issues using an SSD with that adapter in systems older than this one, did that happen to you as well? I was using a P2B-D, I made a video about it, ended up giving up on the SSD and went with an IDE to SD adapter.
I remember having seen such a 1.3 GHz Pentium 4 on Socket 423 back in the day in a customer PC I was cleaning. When I saw the socket with this 423/478 adapter, I thought it was a prototype or some kind of engineering sample.
@@healspringy6300 RDRAM was hideously expensive compared to DDR and even the older SDR memory because of licensing fees. Since Intel was in bed with Rambus and had bet the farm on the technology, you were required to use it on early Pentium 4 boards since that's all Intel's chipsets supported. I remember RDRAM being 3-4x the price of DDR sticks of the same capacity. But you can't really accurately gauge the real cost, because something else going on at the same time was the wider memory industry engaging in price fixing in an attempt to kill off Rambus, in retaliation for Rambus joining the memory standards group and patenting ideas out from under them in bad faith to try and extort royalties out of everyone. In essence, it was a dumpster fire. Buying memory in the early 2000s sucked.

Intel quickly backpedaled on the Rambus decision and released this horrible thing called the "Memory Translator Hub", or MTH. This was a special-purpose ASIC sold to motherboard vendors that would allow use of cheaper SDR memory. This solution turned out to be buggy and prone to system instability, and I believe there was a recall on it. Intel dropped Rambus after that whole fiasco and moved to DDR in their later Pentium 4 chipsets.
@@healspringy6300 It was more than just memory cost, Intel's Pentium 4 was far more expensive for less performance than even their previous-gen Pentium 3, let alone the Athlon. The classic Athlon on Slot A and later Socket A (462) was outperforming both of Intel's parts for less money. I skipped over pretty much the entire Pentium 4 generation myself. After my Pentium MMX 200, I went to an AMD K6-2 400, to a Duron 800, to several succeeding Athlons (1333, 1700+, 2400+, 3000+) and finally to the Athlon 64 3700+ before I went back to Intel in the Core 2 era with the E6420.
The RAM was so expensive that Maximum PC magazine did an article comparing it by weight to cocaine, finding the RAM was more expensive than the drug. I have a 2-3 ft stack of magazines from when I was a kid still in my mom's attic. I gotta find that article and post it.
I was working at a tech company at the time these launched, and I distinctly remember we only ever sold one, just one, at full price. By the time the Socket 478 Northwoods came out, we (the techs) were offered the remaining inventory for 80% off, which would have been a great deal if it wasn't for the fact we were offered no such discount on the expensive RAM, and we had no motherboards in stock. Our boss tragically fell for the hype and had ordered 50 of the damn things (1.3 and 1.4 GHz models); only 9 sold, and of those 9, most sold for 50% off. The 1.4 GHz Pentium 3 Tualatin was so much better in just about every way; worse still, they could be had for less than half the price by the time Willamette matured as a platform, and didn't need RDRAM. An interesting bit of history nonetheless. Great video.
My dead XPS B733 came with only 2 slots; I paid around $60 for two 128MB modules to upgrade from the original PC600 to PC800. Rambus had more placebo than benefit, but I got good results with games (Halo, GTA VC) and compression. It still beat the Atoms and Celerons from later years.
I had the pleasure of working with a Dell Dimension desktop that had one of these chips + Rambus RAM wayyy back at my first internship. At the time (2005) Rambus was pretty much dead and DDR was common...
Interestingly, a 20-stage pipeline is relatively normal now. Both AMD and Intel chips are around that number, but with the bonus of higher clocks, multiple cores, much more cache, and 20 years of branch prediction advancement.
@@dan_loup The Willamette and Northwood architectures used a 20-stage pipeline. Later on, Prescott and Cedar Mill used a 31-stage pipeline. Interestingly, while the Tualatin Pentium III (which used a 10-stage pipeline) topped out at 1.4 GHz and the Northwood Pentium 4 (20 stages) topped out at 3.4 GHz, the Prescott Pentium 4 (31 stages) topped out at only 3.8 GHz. A minimal improvement for an even less efficient and more complex architecture. But yeah, today most CPUs have a pipeline ranging from 15 to 20 stages, but improvements in production, smaller nodes, less power-hungry architectures and better branch prediction circuits greatly reduced the misprediction penalties and other problems the Pentium 4 had to deal with.
@@NathanPlays395 True. They were talking about around 40 and 50 stages, which is really dumb. They say a 2.80 GHz prototype "Pentium 5" had a 150W TDP, compared to the 115W of a 3.80 GHz Pentium 4 Prescott. All with an even more inefficient pipeline. I mean, Prescott was already an aggressive redesign, and, like I said, offered even less performance per clock (losing against a same-clocked Northwood in some instances) and reached only 3.80 GHz. Intel designed Prescott to be highly scalable, reaching speeds of 5 GHz and beyond, but it never could due to increasingly worrisome power and thermal problems. If you look at Prescott's die (which was made with 90nm tech), you'll see it has almost the same area as Willamette (180nm), already a gigantic chip for its time, and terribly inefficient. No wonder. Considering Prescott failed so spectacularly to scale, I was actually surprised to learn Intel planned on following it with another pipeline redesign so soon. Maybe they didn't anticipate the heating issues, as I'm pretty sure Prescott was already in development when they were preparing Willamette for launch. I think they should've kept improving Northwood's architecture instead.
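The misprediction cost discussed in these replies can be put into a rough back-of-envelope model. This is a toy sketch of my own, not anything from the video: the branch frequency and miss rate below are illustrative assumptions, but they show why effective cycles-per-instruction grows linearly with pipeline depth.

```python
# Toy model (illustrative numbers, not measurements): each mispredicted
# branch flushes the pipeline, costing roughly one refill of its depth.
# effective CPI = base CPI + branch fraction * miss rate * pipeline depth

def effective_cpi(base_cpi, branch_frac, miss_rate, pipeline_depth):
    return base_cpi + branch_frac * miss_rate * pipeline_depth

# Hypothetical workload: 20% of instructions are branches, 5% mispredicted.
for name, depth in [("Tualatin", 10), ("Northwood", 20), ("Prescott", 31)]:
    print(f"{name:>9} ({depth} stages): effective CPI ~ "
          f"{effective_cpi(1.0, 0.20, 0.05, depth):.2f}")
```

Under these assumed rates, going from 10 to 31 stages inflates the per-instruction cost by roughly 20%, which higher clock speeds then have to claw back just to break even.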
It was always weird to me as a kid that I had a newer 3.06 GHz Pentium 4 and a 700 MHz Pentium 3-M, and they were both way faster than the first-generation Pentium 4 clocked at 1.5 GHz.
I remember buying the 3.2 GHz P4; it was the best of the best, but it was super pricey compared to its AMD Athlon competitor. But I wasn't paying so I didn't care, my father paid for it. If I were to do it all over again, I would just buy an Athlon and get my father to get a better GPU. I wanted a Radeon 9800 Pro or XT, but got a 5900 XT from NVIDIA, which was markedly worse than the 9800, but still super good for its time.
I almost bought one of the 1st gen P4s back in 2001, but ended up getting a cheaper Duron rig. I don't regret that decision, especially once I upgraded it to an Athlon XP. Intel basically saved their bacon by evolving the Pentium III into the Pentium M, and later the Core Solo/Duo. All current Intel x86 processors descend from those.
My first 1 GHz proc was a Duron, those Socket A/462 boards were futureproof for many years, all kinds of options and overclocking. Anyone remember the pencil trick with some of the newer Athlons?
There's only one place where very long, high-clocked pipelines work very well, and that's the situation where you have a lot of repetitive tasks with similar data types which require high throughput, but not a lot of operation switching. Not so coincidentally, that's *exactly* the workload you have in a GPU.
I had the 1.4 GHz P4, and no, it wasn't really much of an improvement over the P3, but I was coming from a Cyrix 200 MHz Socket 7 CPU system (another rare one), so it was extremely fast for me. I cruised on that P4 for most of the XP era and it did its job well enough! I remember RDRAM as well; it died out quickly and was almost impossible to find for upgrades, and when you did, it was at astronomical prices for memory... it also ran incredibly hot. Great video, love the nostalgia.
Actually back in those days, the 0.13 um 1.4 Ghz Tualatin Pentium 3 was pretty beast and stomped on those earlier Pentium 4 systems, especially if you were fortunate enough to get some overclocking out of it. It would be cool to see a shootout between a Tualatin P3 and some of these benchmark results. The Tualatin core was what the Pentium M was developed from, which was what the Core architecture was derived from, which ironically was what finally relieved the P4 architecture from duty. The P3 was just a really good CPU.
The P6 architecture dates all the way back to the Pentium Pro in the mid '90s... It's crazy to think how long that underlying architecture lasted during a period of very rapid PC development. Today's performance CPUs are actually more similar architecturally to the P4 (long pipeline and multithreading) than the P3. The difference is that now we can fit enough transistors on a die to make an adequate branch predictor that virtually eliminates pipeline stalling (which is what really killed the P4).
You couldn't buy a 1.4Ghz Tualatin back then, nor the 1.3Ghz version, they all went into the server market. Oh, and no, it didn't stomp on the early P4s, it was competitive in speed while generating less heat. Either your memory is faulty or you're talking out of your arse.
@@iangreenhalgh9280 Having used the socket 423 platform with multiple cpus ranging from the slowest 1.3ghz Willamette to the fastest 3.0ghz Northwood (SL6YH); the Tualatins do indeed stomp them in IPC because the lower clocked 400mhz front side bus in addition to the aforementioned branch prediction misses choke the early Pentium 4s potential. If one wanted to play games of the era; (other than Quake 3) then it took a 2.8ghz Northwood (SL7EY) running through a 423-478 socket conversion kit with 40ns pc800 rdram to be competitive against a 1.4ghz Tualatin as frequency only goes so far before other bottlenecks appear. With socket 478 arriving so soon afterwards increasing the front side bus to 533 and 800mhz; the early adopters were shafted big time. Pentium 4s are a dumpster fire with a special place in my heart considering I grew up around them and had to make one run through the beginning of 2010. "Thank you for your service, may we never need you again." By the way; you absolutely could get Tualatins from a consumer facing supplier that dealt in higher end product; Intel gladly took your money all the same.
@@Nachokinz I wasn't playing games on them I was building servers and workstations using them and the Tualatins were not as you describe in that application, they weren't competitive in speed, but they ran a lot cooler, so found a niche in rack servers where heat was an issue.
@@iangreenhalgh9280 So early Pentium 4s were not preferred in the server market due to heat, an area with much greater potential for workloads they could excel in. A reminder of an architecture that found difficulty in satisfying consumer demands.
I just have to point out the massive improvement in the overall quality of your videos. Last year, they were great, but the last few just skyrocketed in production quality. The pipeline explanation is phenomenal. I love the time labels. Keep up the good work!
I used to have a Pentium 4 1.5GHz. It was a bit slow, but the system lasted for quite a while before being replaced by some of the first dual cores from Intel.
@@harryshuman9637 they aren't any rarer than the P4 he tested in this video, and they are only expensive now because of how good they proved themselves to be.
14:50 Pentium 4 1.6 GHz... "what is physics!!!!" For anyone that wants to know why RAM was so expensive in the early 2000s, you have to look at the companies making it. There was a huge RAM "cartel" headed by Samsung. It was some serious supervillain move in which Samsung, SK Hynix (Hyundai Electronics back then), Infineon and some other companies got together in secret meetings and decided to all jack up the price of RAM to make a ton of money. It wasn't until a few years later that one of the companies spilled the beans to the US government and they all got taken to court. Samsung ended up paying millions of dollars in fines.
We used Kingston as it's made right here! It was still expensive so the company used Celeron 2.4ghz to save money. Nobody bought a single one. To my knowledge, they are still in a warehouse somewhere. 10,000+ units. I personally built 2,000 of them. What a few months that was. Great training though.
@@fungo6631 no, this was in the late '90s to early 2000s. Rambus only had control of their own memory, RDRAM, which only Intel and some game consoles used. SDR and DDR were not owned by them; they were free and open standards.
The first PC I ever used had that processor. I have some bad memories of how slow it would be sometimes. But overall, I had fun with it. Thanks for reminding me about my childhood with this video!
And this is the reason why Intel and AMD made the transition from high-clocked CPUs to multi-core, multi-threaded ones. Both realized that you can do more with your processors when you split the tasks among different cores and threads rather than just dumping the bulk of the job onto a single, fast processor, no matter how well optimized the software was. Then again, we consumers got stuck with 4-core CPUs for an eternity because Intel couldn't, or didn't want to, push the boundaries of tech beyond what they could do at the time.
That wasn't the reason at all. The reason why we got multi core CPUs is because AMD and later a tiny part of Intel realized that pushing CPU clocks beyond a certain point consumes excessive amounts of power and puts out tons of heat. Even at the end of Netburst's life, Intel was still trying to push clocks higher than their final 3.8 GHz part and had a 4.0 GHz part on the horizon, with a new 'Tejas' core revision being planned that was supposed to reach 7 GHz. By that point, AMD was decimating Intel with the Athlon 64 and the only thing that saved their bacon was a small team in Israel that had been quietly working on continuing development of the Pentium 3 core into the Pentium M and subsequent Core / Core 2 series. Had the Israel team of Intel engineers not been working on what they did, or even existed, Intel would have been in big trouble. I'd say they would have likely ended up in the same position AMD was in during their own failed Bulldozer architecture, where they were a knife edge from bankruptcy. You have to remember they also had the Itanium dumpster fire going on at the same time, so they couldn't lean on that to buy time because the Opteron was also decimating them in the server market.
Ars Technica BITD often couldn't tell their bum from a hole in the ground and frequently had their head in both. Objective reporting was not something they were known for, at least back then.
When the new arch was announced, you should have seen the Intel fanboys drooling, whilst others wondered about the smaller caches and very long pipelines. The double-speed logic part sounded dubious too, for thermal reasons.
Wow, I think I finally found out why SW KotOR played weirdly on my first PC. It had a P4 1.4 GHz with 256MB SDRAM and an MX200. The tutorial level was fine, and the beginning of the first planet too, until you got outside. Then the characters started to teleport around, while the game ran with decent enough FPS. Seems the pipeline and branch prediction didn't play nice with the game. I like your videos, keep going, and a happy new year.
I remember these 423 processors. Most PC publications in those days said it was good but AMD was just as good and far cheaper. The Pentium III Tualatin also performed very well and there were plenty of boutique system builders that offered such systems, some with DDR RAM. The Pentium 4 was a good competitor once it moved past Willamette and I wanted to have a Northwood system with DDR RAM as a second system to my AMD Athlon XP systems at the time.
Not that I am a fanboy but I have built more AMD systems, just because the price to performance point was always good. I always wanted to build a super powerful Intel system but the prices just weren't really worth it.
I'd be interested in seeing other CPUs paired with the RAMBUS or these with SDRAM. I remember they made 423 to 478 adapters (or maybe it was the other way around)
There were also 478 boards with RDRAM slots. I have one with 2GB of 800 MHz ECC RDRAM and a 2.8 GHz Northwood in it. Runs WinXP SP3 like a potato despite the SSD and GeForce 7600 GS. There are some 1066 MHz kits and boards out there, but they seem even rarer given they were only supported by Intel's i850E chipset, which lived for only a few months.
There were 478 to 423 socket interposers, but like with all janky CPU upgrade adapters, they limited the performance of the newer CPU. You could only use 400 MHz FSB parts, not the later 533/800 MHz parts, which really cripples performance.
Great video man, and also a nice job on the architecture explanation! I've always had a soft spot for the NetBurst era and it's nice to have a flashback to the OG P4 with its huuge PCB. Looking forward to seeing it in action with a newer OS.
I wrote a comment about this literally a few seconds before I read this one. I like how Intel released Tualatin after those first gen P4s, and it trashed them anyway. Good job Intel. 10 bibbahertz all the way.
It did scale to 10 GHz! Right around the launch of these, I spent a day out of school shadowing an engineer at Intel. One of the things he showed me was a Pentium 4 ALU running at 10 GHz. Granted, that's just the integer math portion, but they definitely at least got that running at the intended speed!
Intel: "Runs great on Intel Pentium 4!" Intel Pentium 4: **runs awfully slow and hot in basically everything, so badly that Intel based their next CPUs on the Pentium 3 which is based on the Pentium Pro**
The P4 was designed with 5 GHz+ in mind for its operational sweet spot, and in theory it would have worked just fine, but silicon hit its limits, so the 5-10 GHz clock speeds needed for it to shine were impossible.
In 2004(ish) I built my dad an audio recording computer with a Socket 478 Pentium 4, 1.5GB 1066MHz RDRAM, and dual IDE hard drives (each one on its own separate cable for maximum throughput). That thing was a beast. Never had any complaints about its performance. He used it for almost 10 years.
The P4 was very sensitive to pipeline flushes. As long as the branch predictor did its job, it was a smooth ride; otherwise, resetting that long pipeline on each branch misprediction caused too much delay. Also, the lack of a conventional L1 instruction cache (a trace cache took its place) made the architecture overly dependent on the speed and size of the L2 cache. In the case of this early model, 256KB of L2 fell a bit short of the sweet spot. The later Northwood core, with 512KB of L2, would perform better.
It also didn't help that, when they released the Prescott core, they made an even longer pipeline (31 stages against 20). Yikes. Imagine waiting 31 CPU cycles to fill that pipeline again every single time the prediction fails. Prescott also had a 1 MB L2 cache, which could have helped it, but as far as I read, it had a bigger latency than Northwood's cache, which could slow it down in applications that didn't take advantage of it... No wonder Prescott didn't impress anybody at its release, with the fastest Northwood (running at 3.40 GHz) being a little bit faster, if not a lot faster, than the fastest PGA 478 Prescott (which also runs at 3.40 GHz). I think Northwood is kinda underrated for being a P4. It's a solid chip that just works. It's not amazing or impressive by any means, but it's still a good CPU. Now, Prescott completely ruined the P4's reputation. But honestly, that isn't a hard thing to do lol.
My high school ran a bunch of these Socket 423 P4 systems. They retired them during my Grade 10 - 11 year. My friends and I had a computer club, so the school gave them to us to experiment on. They all had the nasty Rambus RDRAM. What a blast from the past.
Excellent, enjoyable video! Thank you for the history lesson about the Pentium 4. I look forward to the core 2 duo and other topics you mentioned for future vids. 😀
What your description of pipelining missed is that in *ideal circumstances* each pipeline stage would be executing a part of an instruction separate from (and independent of) all other instructions in the pipeline. This way, even if it takes 30+ clock cycles to process a single instruction, you can still theoretically process one instruction every cycle. Pipelining is a very powerful CPU design concept -- but it only works well when the software is designed to work with it. Unfortunately, the x86 ISA is fundamentally incompatible with it...
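The ideal case described here can be sketched numerically. A minimal toy model of my own (not from the comment): with a D-stage pipeline and no stalls, N instructions take D + N - 1 cycles, so throughput approaches one instruction per cycle as N grows, even though each individual instruction spends D cycles in flight.

```python
# Ideal pipelining (no stalls, no flushes): the first instruction needs
# 'depth' cycles to pass through every stage, then one retires per cycle.

def pipelined_cycles(n_instructions, depth):
    return depth + n_instructions - 1

depth = 20  # a Willamette/Northwood-style pipeline
for n in (1, 100, 10_000):
    c = pipelined_cycles(n, depth)
    print(f"{n:>6} instructions: {c:>6} cycles -> {n / c:.3f} instr/cycle")
```

The fill cost only matters for short runs; a flush, however, pays that fill cost all over again, which is the crux of the deep-pipeline debate in this thread.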
Okay, you had me until you said x86 is fundamentally incompatible with pipelining. All modern x86 CPUs are pipelined. Hell, all modern CPUs used in consumer PCs of any kind are pipelined. The P6-based processors like the Pentium 2 and 3 that came before the Pentium 4 already had a fairly deep 14-stage pipeline. This is comparable to a modern ARM processor like the Cortex-A76, which has 13 stages. ARM CPUs and other RISC designs seem to have fewer pipeline stages than x86; I would actually say that x86 and other CISCs lend themselves to deep pipelines for this reason. I am guessing it's easier to execute the more complex instructions over many steps rather than trying to do it in fewer with simpler instructions like some RISC designs do.

The Pentium M had a few fewer stages than this though; it only had 10. This highlights a key problem with deeply pipelined CPUs: they tend to use more power and can be less efficient. The deeper the pipeline, the higher the cost of a misprediction. That's the problem that undid the Pentium 4 with its 20 to 31 pipeline stages. This is also why modern CPUs have extremely good branch prediction - AMD actually uses a primitive form of machine learning for their newer Zen processors and their successors like Zen 2 and Zen 3. Zen 2 only has 19 stages in its pipeline compared to the 31 Prescott had; I doubt the branch prediction of the time was good enough to deal with that.

Another problem you get is dependencies: a later instruction can depend on the results of instructions that come before it, which means they can't effectively be pipelined. You can also get the entire processor stalled by an instruction that requires fetching data from memory. Both of these problems are partially ameliorated by the use of hyperthreading: when one instruction stream stalls for whatever reason - or just can't make use of all the execution units available in modern superscalar CPU designs - you just start executing the other instruction stream.
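To make the branch prediction point concrete, here is a minimal sketch of a classic 2-bit saturating-counter predictor (an assumed textbook scheme used purely for illustration; real predictors in Zen or modern Intel cores are vastly more sophisticated):

```python
# 2-bit saturating counter: states 0-1 predict "not taken", 2-3 "taken".
# One wrong outcome doesn't immediately flip a strongly-held prediction.

def predict_accuracy(outcomes):
    state, correct = 2, 0  # start at "weakly taken"
    for taken in outcomes:
        correct += ((state >= 2) == taken)
        state = min(3, state + 1) if taken else max(0, state - 1)
    return correct / len(outcomes)

# A loop branch that is taken 9 times and then falls through once.
pattern = [True] * 9 + [False]
print(f"loop-exit pattern: {predict_accuracy(pattern * 100):.1%} correct")
```

On this pattern the predictor only misses the loop exit, so accuracy sits at 90%; on a deep pipeline like the P4's, every one of those misses would cost a full pipeline refill.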
GPUs are not as deeply pipelined - nor are more efficient but less powerful CPU designs. This shows how having too many stages causes problems. They also do without certain other performance features like being superscalar or in some cases even having out of order processing in some GPU designs. This reduces single thread performance by a lot but makes the cores simpler and more efficient allowing you to fit more of them within the same area of silicon and power envelope making total throughput go up. This is ideal for something like a GPU whose workload is massively parallelised.
I disagree with all these definitions of pipeline. It's not such that it's related to a specific instruction going through a set-length pipeline; the pipeline is simply speculative branch prediction where a missed prediction causes a pipeline flush (bad). The specific length of said pipeline is not really the issue, nor is it in any way, shape or form related to what gets done in a clock cycle (i.e. instructions still take x cycles per instruction regardless, and there will always be delayed execution where programmers have threaded code that depends on shit to do other shit). The description in the video is a better description of CISC vs RISC, which, although it does affect the pipeline, is just a consequence... Long story short, the pipeline doesn't cause instructions to take more/less cycles to execute. Additionally, in regards to @harryHall there, he's 100 percent correct. In college, I made a MIPS-esque processor on an FPGA. It was a dope class, and everyone made their own special flavor and implementation of it too, kind of a contest, and it was fully pipelined (and developed all the way from building up adders from gates and shit), and I'm pretty sure MIPS was pretty fuckin, like, old. But again, when something takes 30 cycles to execute, that's a complex instruction set architecture. A reduced instruction set architecture is where things take fewer cycles, but you have to do more operations.
@@amanda_bynes226 hate to say this but that's not completely correct. Pipelining can be done without branch prediction or speculative execution. Pipelined CPUs also don't have to be out of order either. Simpler pipelined CPUs and some GPUs don't have these as they aren't required and take up silicon area and power. They do improve the performance of a pipelined CPU though. Pretty sure they don't have to be superscalar either.
@@harryhall4001 totally agree with you. My point is that cycles per instruction is not a pipeline issue, it's an instruction set (Cisc vs risc in this context). I never said pipelines required branch prediction (my mips implementation didn't!!!)
Honestly, watching older hardware is more interesting than newer stuff, because old hardware has a story attached to it rather than the stat padding of "no, this has better numbers on paper so that's why it's better even though it's more expensive" kind of ordeal nowadays... Another great vid, had fun learning the Socket 775 CPUs I have lying around aren't that bad at least. Keep up the good work mate!
Having used the Socket 423 platform with multiple CPUs ranging from the slowest 1.3 GHz Willamette to the fastest 3.0 GHz Northwood (SL6YH) that could be dropped in through a Socket 423-478 conversion kit, the Pentium 3 Tualatins were really a sign of things to come in the Core 2 series. The lower-clocked 400 MHz front-side bus, in addition to the aforementioned branch prediction misses, chokes the early Pentium 4s' potential. With Socket 478 arriving so soon afterwards, increasing the front-side bus to 533 and 800 MHz, the early adopters were shafted big time. Pentium 4s are a dumpster fire with a special place in my heart considering I grew up around them and had to make one run through the beginning of 2010. "Thank you for your service, may we never need you again."
You should find an old Celeron 300A and overclock it using modern methods, even delidding, just to see how much more room was left on the table, held back by the cooling of its time.
Hitting 450 MHz was easy, just bump the FSB to 100 MHz and you were off to the races. Going faster, well, therein lay a problem. The 440BX was the last chipset that was compatible with the 300A. The 440BX could hit 133 MHz FSB, no problem... but then the AGP slot was waaaay overclocked, because the 440BX didn't support the right frequency dividers to put AGP back into spec, and most graphics cards would lose their shit. And there wasn't a version of the 300A released in Socket 370 that you could install in an i815 chipset board supporting 133 MHz FSB natively.
I had a Compaq small-form-factor Socket 423 desktop at our rural cabin on dial-up internet for years. I think it was originally a 1.5 GHz CPU; I then upgraded it to a 1.7 GHz CPU and finally to the fastest Socket 423 CPU, 2.0 GHz. I remember I had an ATI Radeon 9550 video card in it. The last game I installed on it was COD 4; it played at 800 x 600 at 30fps-ish on low.
Socket 423 is a great option for running Windows 2000; it's basically as compatible as it gets and gives you a chance to use some "oddware", since most P4s were mPGA478B.
It would be nice to see these CPUs taking on the Pentium III Tualatin or the AMD Athlon and Athlon XP processors. I guess all the above-named processors will be faster (with exceptions in some benchmarks).
I have a Q9400. Nice for a retro system; as for the GPU, anything stronger than a GTX 750 Ti is wasted. In Windows 10 it is usable, but forget any modern heavy game.
The history part of this video kinda shows what is happening on the current market as well. The only difference is of course, Intel still has a chance (or it goes the same way as the history.) Great video as always!
This is nothing! You should see a S939 Athlon 64/X2, they're bricks. Also the old slot based P3's, which were actually bought as a daughter board-like thing that plugged into a slot on the mobo instead of a socket. They came with a preinstalled cooler like a videocard HSF.
@@gravitone Yes, it's an extremely common practice on electronic waste. Salvaging precious metals is common especially on older components, as the gold plating has a tendency to be more abundant.
That's a real nice setup you have there. I tried (for years) to get a Socket 423 setup, as I wanted to try and test the Tom's Hardware thermal throttling thing (I kinda have my doubts about its legitimacy... or however you spell it). However, finding a 423 isn't easy. You either find a CPU but no motherboard, a motherboard but no CPU (and no RAM), or if you had a motherboard with a CPU but no RAM, it was too expensive. After a few years I got a working setup with a Socket 423 entry-level chipset that uses SDRAM instead of RDRAM, so getting RAM for it was a heck of a lot easier. The CPU I have is a 1.7 GHz model, and I paid 25 euros for the lot (motherboard + CPU). The motherboard needed its caps replaced as they were all bulged. The difference in RAM speed (100 MHz or 133 MHz SDRAM) could make up 10 to 15 FPS in Quake 3, for example. It might have been the worst of the Pentium 4s, but I still love it. I didn't get to test the thermal throttle thingy like in the Tom's video, as I have an OEM MSI board instead of the Asus P4T they used. The thermal sensors are... a bit too optimistic (reading lower than ambient...). Thermal throttling works when I disable the fan, but... even I didn't dare to remove the heatsink to put the Tom's video to the test, as it took me so long to actually find a working S423! (and then it's "what if it doesn't work and it burns up?"). I did try a small overclock on it, going to 1.8 GHz, and every extra MHz helps the P4 a lot, but the motherboard was lacking dividers and voltage control, so I couldn't get any higher. But you saw great gains in games like Quake 3 with 'just' 100 MHz more. Anyway, great video (and find, for that matter! A working 423 setup is hard to find... for a decent price at least).
I STILL REMEMBER (years ago) an AMD rollout in Detroit. NONE of the game consoles worked / hardly any literature provided / no food, and the mints in the little green boxes were HORRIBLE... anyone?
My first new computer I got when I was around 11 was just about this setup! It was a Compaq 5000T with the 1.4 GHz chip, 128MB RDRAM, 40GB HDD running Windows ME. I remember my childhood best friend's family had bought the same machine & at one point threw it out (mainly because it was full of viruses) & I took the RAM out of it & added it to my own. I later upgraded it to Win XP with a bootleg disk my friend gave me. My onboard graphics had gone bad at some point, which gave me an excuse for Dad to buy me a video card. I put a capture card in it later on so that I could play PS2 through it because I didn't have my own TV. This machine lasted me through the end of my school years until I built my first desktop when I was 18. Keeping this thing running all those years is what sparked my interest in I.T. & I'm still doing it today!
I had a 3200+. They were kind of a ripoff, because they had no OC headroom. You could buy one of the cheaper XPs and OC it past 3200 speeds easily. Still it was nice having one that was clocked that high out of the box, it just had no room to grow. It also ran on the hot side for an XP.
My first computer had a Duron in it, I think a 700 MHz one; this Pentium 4 would have been a dream at the time. Wish I'd kept that old processor, but a lightning storm took my computer out.
@@daemonspudguy Aside from the newer instruction sets, a Llano APU would have been better in 99% of cases. You are not compressing stuff or rendering video most of the time. Not to mention the power consumption was ridiculous for the small benefit in very specific applications compared to Phenom II / Llano APUs. I also liked my old Prescott Pentium 4; that doesn't mean it was any good at all.
03:30 Every time it ends up messing up (e.g. a cache miss), the P4 sends uops to a replay loop. US patent US7200737B1. Willamette and Northwood have two loops (RL-7 and RL-12); Prescott and Cedar Mill have one (RL-18).
I remember that our neighbour had one of these Socket 423 Willamette P4 based systems. It was very unstable, I recall being called over because the system kept on bluescreening. We then eventually tracked the problem down to a lousy PSU. At the time I had a 1 GHz Pentium III (Coppermine) with 64MB GeForce FX5200 and 256MB of 133 MHz SDRAM. They had the same graphics card and 256MB of RDRAM, but my system ran most of the games better. Then I switched to a P4 630, and I sold it and bought a Mac because Vista :D
Before the Pentium 4 was released, I was forming a business collaboration between Be, Inc. and Gateway, Inc. to develop a universal platform to replace the then-defunct hardware side of Be, Inc. (aka the BeBox) in 1998. We had a lot of input on their NLX platform, the 3200. It was originally designed to take anything from a PII 233 to a PIII 733 via a slick BIOS update. We had upgraded our offerings to Socket 370 in late 1999, and by early 2000 we were beta testing BeOS 5.0.1 on an NLX Socket 423 board. While the board itself was a nice upgrade over the Socket 370 (PIII) one, it ran very hot and slow; I wasn't impressed. Even with firmware updates throttling the power, it was clear to me and many at Gateway that the NLX platform wasn't going to work with this CPU/memory architecture. Worse, it was far more expensive for the RAM and CPU without much gain at all, as BeOS is a highly threaded OS, not ideal for long pipelines. It wasn't until the Northwood on Socket 478 that things improved as far as a faster-feeling system under BeOS compared to the PIII. By then Be, Inc. was out of business and the venture didn't make financial sense. After my non-disclosure agreement with Intel and Gateway ended, I would tell customers looking for an upgrade path to look at AMD.
Hope you all enjoyed this video. I'll be throwing all this hardware into a Socket 423-specific PC next week, where we'll investigate:
- Modern Day Usage
- More Operating Systems
- General Compatibility
etc...
I felt this would be better suited to another video, as this one was already about as packed full of data as it can be (yes, it took a long time to benchmark these systems, even the original Xbox). So if you want more Socket 423 action, please let me know down in the comments below, as I can't wait to explore this forgotten platform more.
Is it just me, or is it a Pentium 4 strapped to a silicon wafer to fit the old pinouts? It looks practically identical to the later versions if you exclude that adapter-like bit.
Epic
I want to see a comparison between this and early Athlon XP chips.
I have never seen one, only read a little about it. Here is something to look out for:
"Socket 423 to 478 Upgrade Adapter"
www.amazon.com/Upgradeware-Upgrade-Adapter-Converter-Powerleap/dp/B004HAXLUU
I had so many issues using an SSD with that adapter in older systems than this one; did it happen for you as well? I was using a P2B-D. I made a video about it, and ended up giving up on the SSD and going with an IDE-to-SD adapter.
I had a 1.3 GHz P4 with 1 GB of RAM until like 2006. I was SOOOOOO happy when I found an Athlon XP 2800+ in the trash. MASSIVE upgrade.
I remember seeing such a 1.3 GHz Pentium 4 on Socket 423 back in the day in a customer PC I was cleaning.
When I saw the socket with this 423/478 adapter, I thought it was a prototype or some kind of engineering sample.
I had this CPU too. I upgraded to an E6600 in 2006 as well; it was like moving from the 19th century straight into the 21st, that's how it felt.
I too moved from a P4 to an Athlon XP in 2006.
BARTON
BARTON
@@rembramlastname3631 I hit the silicon lottery with my Barton-core 3200+. It was an absolute beast paired with an NF7-S. Used it all the way to 2013.
I haven't heard the word "snappy" used this often since Apple announcements of the 1990s to 2010s :-)
So just apple announcements then? 😂
They probably stopped using the term because of the snappiness of their iPhones, like they literally snap in half.
@@MisterRorschach90 fax
Thats such a broad window though lol
"Rapid Execution Engine"
REE
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEeeeeeeeeeeeeeeeeeee
Reeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
Réééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééé
@@Jepoy_Aquino That's the cooling fan having a bearing failure.
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
I remember going from a Pentium 3 to an AMD CPU. I didn't remember why I went with that. Then you mentioned how much the ram cost. Oh now I remember!
How much did the RAM cost back then?
@@healspringy6300 RDRAM was hideously expensive compared to DDR and even the older SDR memory because of licensing fees. Since Intel was in bed with Rambus and had bet the farm on the technology, you were required to use it on early Pentium 4 boards since that's all Intel's chipsets supported.
I remember RDRAM being 3-4x the price of DDR sticks of the same capacity. But you can't really accurately gauge the real cost because something else going on at the same time was the wider memory industry engaging in price fixing in an attempt to kill off Rambus, in retaliation for Rambus joining the memory standards group and patenting ideas out from under them in bad faith to try and extort royalties out of everyone. In essence, it was a dumpster fire. Buying memory in the early 2000s sucked.
Intel quickly backpedaled on the Rambus decision and released this horrible thing called the "Memory Translator Hub", or MTH. This was a special-purpose ASIC sold to motherboard vendors that would allow the use of cheaper SDR memory. This solution turned out to be buggy and prone to system instability, and I believe there was a recall on it. Intel dropped Rambus after that whole fiasco and went back to DDR in their later Pentium 4 chipsets.
@@GGigabiteM Ahh, I see. Now I know why so many people back in the 2000s were using AMD.
@@healspringy6300 It was more than just memory cost, Intel's Pentium 4 was far more expensive for less performance than even their previous gen Pentium 3, let alone the Athlon. The classic Athlon on Socket A and later 462 was outperforming both of Intel's parts for less money.
I skipped over pretty much the entire Pentium 4 generation myself. After my Pentium MMX 200, I went to an AMD K6/2 400 to a Duron 800 to several succeeding Athlons (1333, 1700+, 2400+, 3000+) and finally to the Athlon 64 3700+ before I went back to Intel in the Core 2 era with the E6420.
The RAM was so expensive that Maximum PC magazine did an article showing that, by weight, it was more expensive than cocaine. I have about 2-3 ft of magazines from when I was a kid still in my mom's attic. I gotta find that article and post it.
19:55 There's something magical about an SSD dangling from an IDE cable.
There are actually SSDs with IDE connectors in place of SATA that you can still buy. Allegedly they would run perfectly fine on a system like that.
@@builder396 They're overpriced, better find a good adapter and roll with that
I already ran a SSD on a dual pentium II (slot-1) server motherboard 😂
But I used a SATA PCI card instead.
I used a PATA to SATA adapter to put a 2TB SSD in my original Xbox.
I was working at a tech company when these launched; I distinctly remember we only ever sold one at full price. By the time the Socket 478 Northwoods came out, we (the techs) were offered the remaining inventory at 80% off. It would have been a great deal if it weren't for the fact that we were offered no such discount on the expensive RAM, and we had no motherboards in stock. Our boss tragically fell for the hype and had ordered 50 of the damn things (1.3 and 1.4 GHz models); only 9 sold, and of those 9 most went for 50% off. The 1.4 GHz Pentium 3 Tualatin was so much better in just about every way. Worse still, they could be had for less than half the price by the time the Willamette matured as a platform, and they didn't need RDRAM.
An interesting bit of history nonetheless. Great video.
wheres the boss now? lol
The 1.4 GHz Tualatin WAS better in every way.
Literally.
Tualatin could run RDRAM as well.
@@nexxusty what processor is that?
@@Maximus20778 Intel Pentium 3 Tualatin 1.4ghz.
I owned a Dell Server with one. It had an RDRAM motherboard.
I shouldn't have ever thrown it away.
@@nexxusty That's the first time I've ever heard the name Tualatin.
"yes but also no" that's my answer when people ask if i got my life together
I tell people that I hope to get one, one day.
Rambus memory was a pain in the ass. You needed a matched pair, it was expensive, it was picky about motherboards, and you needed blanks (continuity modules) to fill the unused sockets.
Then you never used it properly.
@@christophero1969 My old Dell PC must have been a total oddball then.
My dead XPS B733 came with only 2 slots; I paid around $60 for two 128 MB modules to upgrade from the original PC600 to PC800.
Rambus had more placebo than benefit, but it got good results in games (Halo, GTA VC) and compression. It still beat Atoms and Celerons from later years.
@@christophero1969 risitas.wav
I had the pleasure of working with a Dell Dimension desktop that had one of these chips + Rambus RAM wayyy back at my first internship. At the time (2005) Rambus was pretty much dead and DDR was common...
So Crysis was optimised for those 10 GHz processors...
Actually yes! During development they banked on higher clock speeds, but shortly after, things shifted towards higher thread counts.
@@jokerzwild00 Wasn't the game made with dual cores in mind?
@@cyphaborg6598 Yeah, optimized for high clock speed dual cores
@@yancgc5098 Do you think we will ever see a 10 GHz CPU from a manufacturer in our lifetime?
@@raven4k998 Not without extreme overclocking.
Nice vid, hope that the channel keeps growing.
True
Interestingly, a 20 stage pipeline is relatively normal now. Both AMD and Intel chips are around that number, but have the bonus of higher clocks, multiple cores, much more cache, and 20 years in branch prediction advancement.
oh. oh shit. last time i cared to check we were at like 7 stages, but really we're back to 19+
so intel was right after all. only 20 years too early.
@@GraveUypo Kinda. I think the Pentium 4 by the end of its life was hitting 37-40 stages.
@@dan_loup The Willamette and Northwood architectures used a 20 stage pipeline. Later on, Prescott and Cedar Mill used a 31 stage pipeline.
Interestingly, while the Tualatin Pentium III (which used a 10-stage pipeline) topped out at 1.4 GHz and the Northwood Pentium 4 (20 stages) topped out at 3.4 GHz, the Prescott Pentium 4 (31 stages) topped out at 3.8 GHz. A minimal improvement for an even less efficient and more complex architecture.
But yeah, today most CPUs have a pipeline ranging from 15 to 20 stages, but improvements in production, smaller nodes, less power consuming architectures and better branch prediction circuits greatly reduced any misprediction penalties and other problems the Pentium 4 had to deal with.
@@yukinagato1573 The Pentium 5 prototypes had an even longer pipeline, IIRC.
@@NathanPlays395 True. They were talking about around 40 to 50 stages, which is really dumb. They say a 2.80 GHz prototype Pentium 5 was running at 150 W TDP, compared to a 3.80 GHz Pentium 4 Prescott running at 115 W. All with an even more inefficient pipeline.
I mean, Prescott was already an aggressive redesign and, like I said, offered even less performance per clock (losing against a same-clocked Northwood in some instances) and reached only 3.80 GHz. Intel designed Prescott to be highly scalable, reaching speeds of 5 GHz and beyond, but it never could due to increasingly worrisome power and thermal problems. If you look at Prescott's die (made on 90 nm tech), you'll see it has almost the same area as Willamette (180 nm), already a gigantic chip for its time, and terribly inefficient. No wonder.
Considering Prescott failed so spectacularly to scale, I was actually surprised to learn Intel planned on following it with another pipeline redesign so soon. Maybe they didn't anticipate the heating issues, as I'm pretty sure Prescott was already in development when they were preparing Willamette for launch. I think they should've kept improving Northwood's architecture instead.
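The penalty this thread keeps coming back to can be sketched with a toy cycles-per-instruction model, where each mispredicted branch is charged roughly one full pipeline refill. This is just a back-of-the-envelope illustration; the branch frequency and misprediction rate below are made-up example numbers, not measured P4 data:

```python
def effective_cpi(base_cpi, branch_freq, mispredict_rate, depth):
    """Toy model: every mispredicted branch wastes roughly one
    pipeline refill, i.e. about `depth` cycles."""
    return base_cpi + branch_freq * mispredict_rate * depth

# Same hypothetical workload (20% branches, 5% of them mispredicted)
# on a 20-stage (Willamette/Northwood) vs 31-stage (Prescott) pipeline:
northwood = effective_cpi(1.0, 0.20, 0.05, 20)  # 1.20 cycles/instruction
prescott = effective_cpi(1.0, 0.20, 0.05, 31)   # 1.31 cycles/instruction
print(northwood, prescott)
```

Even in this crude model, the deeper pipeline pays a visibly larger penalty at the same prediction accuracy, which is why better branch predictors matter so much more on deep designs.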
Budget: I fear nothing.... but..
Budget: *stares at Valezen's 1.3Ghz 423 Willamette Pentium 4 .
Budget: That thing scares me.....
It was always weird to me as a kid that I had a newer 3.06 GHz Pentium 4 and a 700 MHz Pentium 3-M, and they were both way faster than the first-generation Pentium 4 clocked at 1.5 GHz.
4 is better than 3
Intel's Marketing Logic
The Pentium III-S Tualatin at 1400 MHz with 512 KB cache kills Pentium 4s up to 2 GHz.
@@microphonology2830 Not only that; in 2000 the P3s were actually selling better than the P4s, before the launch of Socket 478.
4 is not better than 3
Hyper 4 is better than 3
-Intel logic
I remember buying the 3.2 GHz P4. It was the best of the best, but it was super pricey compared to its AMD Athlon competitor. But I wasn't paying, so I didn't care; my father paid for it. If I were to do it all over again, I would just buy an Athlon and get my father to buy a better GPU. I wanted a Radeon 9800 Pro or XT, but got a 5900 XT from NVIDIA, which was markedly worse than the 9800, but still super good for its time.
@@RandoBurner 5900 xt from NVIDIA.......how do you live with yourself?
I almost bought one of the 1st gen P4s back in 2001, but ended up getting a cheaper Duron rig. I don't regret that decision. Especially once I upgraded it to an Athlon XP. Intel basically saved their bacon by rebranding the Pentium III as the Pentium M, and later Core Solo/Duo. All current Intel x86 processors descend from those.
their bacon lmao
there is zero information on those 32 bit core chips, vista days, before core 2...
My first 1 GHz proc was a Duron; those Socket A/462 boards were futureproof for many years, with all kinds of options and overclocking. Anyone remember the pencil trick with some of the newer Athlons?
This channel taught me and helped me a lot, especially about GPUs ❤️❤️
There's only one place where very long, high-clocked pipelines work very well, and that's the situation where you have a lot of repetitive tasks with similar data types, which require high throughput but not a lot of operation switching.
Not so coincidentally, that's *exactly* the workload you have in a GPU.
I had the 1.4 GHz P4, and no, it wasn't really much of an improvement over the P3, but I was coming from a Cyrix 200 MHz Socket 7 system (another rare one), so it was extremely fast for me. I cruised on that P4 for most of the XP era and it did its job well enough! I remember RDRAM as well; it died out quickly and was almost impossible to find for upgrades, and when you did find it, it was at astronomical prices... it also ran incredibly hot.
Great video, love the nostalgia.
Actually back in those days, the 0.13 um 1.4 Ghz Tualatin Pentium 3 was pretty beast and stomped on those earlier Pentium 4 systems, especially if you were fortunate enough to get some overclocking out of it. It would be cool to see a shootout between a Tualatin P3 and some of these benchmark results. The Tualatin core was what the Pentium M was developed from, which was what the Core architecture was derived from, which ironically was what finally relieved the P4 architecture from duty. The P3 was just a really good CPU.
The P6 architecture dates all the way back to the Pentium Pro in the mid '90s... It's crazy to think how long that underlying architecture lasted during a period of very rapid PC development. Today's performance CPUs are actually more similar architecturally to the P4 (long pipeline and multithreading) than the P3. The difference is that now we can fit enough transistors on a die to make a branch predictor good enough to virtually eliminate pipeline stalling (which is what really killed the P4).
You couldn't buy a 1.4 GHz Tualatin back then, nor the 1.3 GHz version; they all went into the server market. Oh, and no, it didn't stomp on the early P4s; it was competitive in speed while generating less heat. Either your memory is faulty or you're talking out of your arse.
@@iangreenhalgh9280 Having used the Socket 423 platform with multiple CPUs ranging from the slowest 1.3 GHz Willamette to the fastest 3.0 GHz Northwood (SL6YH): the Tualatins do indeed stomp them in IPC, because the lower-clocked 400 MHz front-side bus, in addition to the aforementioned branch prediction misses, chokes the early Pentium 4's potential.
If one wanted to play games of the era (other than Quake 3), it took a 2.8 GHz Northwood (SL7EY) running through a 423-to-478 socket conversion kit with 40 ns PC800 RDRAM to be competitive against a 1.4 GHz Tualatin, as frequency only goes so far before other bottlenecks appear.
With Socket 478 arriving so soon afterwards, increasing the front-side bus to 533 and 800 MHz, the early adopters were shafted big time.
Pentium 4s are a dumpster fire with a special place in my heart considering I grew up around them and had to make one run through the beginning of 2010.
"Thank you for your service, may we never need you again."
By the way; you absolutely could get Tualatins from a consumer facing supplier that dealt in higher end product; Intel gladly took your money all the same.
@@Nachokinz I wasn't playing games on them I was building servers and workstations using them and the Tualatins were not as you describe in that application, they weren't competitive in speed, but they ran a lot cooler, so found a niche in rack servers where heat was an issue.
@@iangreenhalgh9280 So early Pentium 4s were not preferred in the server market due to heat, an area with much greater potential for the workloads they could excel at. A reminder of an architecture that had difficulty satisfying consumer demands.
10 GHz... and many years later we haven't gone past 8.7 GHz even on LN2.
A 423 P4, neat. Haven't even thought about these in years. Great vid; liked the benchmark comparisons.
Bro! Empire Earth! What a classic. You earned a sub just for that!
I just have to point out the massive improvement in the overall quality of your videos. Last year they were great, but the last few have skyrocketed in production quality. The pipeline explanation is phenomenal. I love the time labels. Keep up the good work!
Inhell Pentium 4
Sintel Repentium IV
in the heat of the moment
Incel Pentium 4
L🤫L Dell Dimension 💨🍃Leaf Blower Limited Edition! 🥵
Exhale Pentium 4
Inhell pentagram 4
I used to have a Pentium 4 1.5GHz.
It was a bit slow, but the system lasted for quite a while before being replaced by some of the first dual cores from Intel.
I would love it if you could add the P3-S 1.4 GHz to the comparison.
just like 11th gen intel i7 cpus XD
Honestly, the 1.4 Tualatin was a fantastic chip for its time. I had one, I think a mate of mine still has the old girl somewhere
You realize how rare and expensive that processor is, right?
@@harryshuman9637 the 1.4?
Looks like I'm seeing if my mate still has it then as I know it's not being used
@@harryshuman9637 They aren't any rarer than the P4 he tested in this video, and they're only expensive now because of how good they proved themselves to be.
14:50
Pentium 4 1.6Ghz......"what is physics!!!!"
For anyone who wants to know why RAM was so expensive in the early 2000s, you have to look at the companies making it. There was a huge RAM "cartel" headed by Samsung. It was a serious supervillain move in which Samsung, SK Hynix (Hyundai Electronics back then), Infineon and some other companies got together in secret meetings and decided to all jack up the price of RAM to make a ton of money. It wasn't until a few years later that one of the companies spilled the beans to the US government and they all got taken to court. Samsung ended up paying millions of dollars in fines.
We used Kingston as it's made right here! It was still expensive so the company used Celeron 2.4ghz to save money. Nobody bought a single one. To my knowledge, they are still in a warehouse somewhere. 10,000+ units. I personally built 2,000 of them. What a few months that was. Great training though.
Wasn't it because of RAMBUS high licensing fees? SDRAM was much cheaper and standardized.
@@fungo6631 no this was in the late to early 2000s. Rambus only had control of their own memory, RDRAM which only Intel used and some game consoles. SDR and DDR was not owned by them but free and open
@@tHeWasTeDYouTh That's what I said. RDRAM was controlled by RAMBUS and not an open standard like SDRAM and DDR.
The first PC I ever used had that processor. I have some bad memories of how slow it would be sometimes. But overall, I had fun with it. Thanks for reminding me about my childhood with this video!
Perfect choice for background music, perfect music volume, perfect explanation. This was a great video indeed.
And this is the reason why Intel and AMD made the transition from highly clocked CPUs to multi-core, multi-threaded ones. Both realized that you can do more with your processors when you split the tasks among different cores and threads rather than just dumping the bulk of the job onto a single fast processor, no matter how well optimized the software was. Then again, we consumers got stuck with 4-core CPUs for an eternity because Intel couldn't, or didn't want to, push the boundaries of tech beyond what they could do at the time.
That wasn't the reason at all. The reason why we got multi core CPUs is because AMD and later a tiny part of Intel realized that pushing CPU clocks beyond a certain point consumes excessive amounts of power and puts out tons of heat. Even at the end of Netburst's life, Intel was still trying to push clocks higher than their final 3.8 GHz part and had a 4.0 GHz part on the horizon, with a new 'Tejas' core revision being planned that was supposed to reach 7 GHz. By that point, AMD was decimating Intel with the Athlon 64 and the only thing that saved their bacon was a small team in Israel that had been quietly working on continuing development of the Pentium 3 core into the Pentium M and subsequent Core / Core 2 series.
Had the Israel team of Intel engineers not been working on what they did, or even existed, Intel would have been in big trouble. I'd say they would have likely ended up in the same position AMD was in during their own failed Bulldozer architecture, where they were a knife edge from bankruptcy. You have to remember they also had the Itanium dumpster fire going on at the same time, so they couldn't lean on that to buy time because the Opteron was also decimating them in the server market.
Loving the SimCity 3000 music in the background
Why does the Pentium 4 kinda suck?
Ars Technica: The Pentium 4 *COMPLETELY* sucks!!!!!!!
Ars Technica BITD often couldn't tell their bum from a hole in the ground, and frequently had their head in both. Objective reporting was not something they were known for, at least back then.
I played PS1 games on a 333 MHz Pentium lol
When the new arch was announced, you should have seen the Intel fanboys drooling, whilst others wondered about the smaller caches and very long pipelines.
The double speed logic part sounded dubious too for thermal reasons
@@RobBCactive That's not what happened at all.
Wow, I think I finally found out why SW KotOR played weirdly on my first PC. I had a P4 1.4 GHz with 256 MB SDRAM and an MX200. The tutorial level was fine, and the beginning of the first planet too, until you got outside. Then the characters started to teleport around, while the game ran with decent enough FPS. Seems the pipeline and branch prediction didn't play nice with the game.
I like your videos, keep going and a happy new year
I remember these 423 processors. Most PC publications in those days said it was good but AMD was just as good and far cheaper. The Pentium III Tualatin also performed very well and there were plenty of boutique system builders that offered such systems, some with DDR RAM. The Pentium 4 was a good competitor once it moved past Willamette and I wanted to have a Northwood system with DDR RAM as a second system to my AMD Athlon XP systems at the time.
Not that I am a fanboy but I have built more AMD systems, just because the price to performance point was always good. I always wanted to build a super powerful Intel system but the prices just weren't really worth it.
I would like to thank you for your videos. Your videos are what got me building my first gaming pc.
I'd be interested in seeing other CPUs paired with the RAMBUS or these with SDRAM. I remember they made 423 to 478 adapters (or maybe it was the other way around)
There were RDRAM boards for P3. I have one with a pair of P3 933's on it.
There were also 478 boards with RDRAM slots. I have one with 2 GB of 800 MHz ECC RDRAM and a 2.8 GHz Northwood in it. It runs WinXP SP3 like a potato despite the SSD and GeForce 7600 GS.
There are some 1066 MHz kits and boards out there, but they seem even rarer given they were only supported by Intel's i850E chipset, which lived for only a few months.
There were 478 to 423 socket interposers, but like with all janky CPU upgrade adapters, they limited the performance of the newer CPU. You could only use 400 MHz FSB parts, not the later 533/800 MHz parts, which really cripples performance.
Great video, man, and also a nice job on the architecture explanation! I've always had a soft spot for the Netburst era and it's nice to have a flashback to the OG P4 with its huuge PCB. Looking forward to seeing it in action with a newer OS.
You should be comparing it with a Tualatin P3, but I guess that would be overkill.
I wrote a comment about this literally a few seconds before I read this one.
I like how Intel released Tualatin after those first gen P4s, and it trashed them anyway. Good job Intel. 10 bibbahertz all the way.
@@voltare2amstereo Intel eventually realized Pentium 4s were crap so they based the Core 2 architecture on the P3 architecture
It did scale to 10 GHz! Right around the launch of these, I spent a day out of school shadowing an engineer at Intel. One of the things he showed me was a Pentium 4 ALU running at 10 GHz. Granted, that's just the integer math portion, but they definitely got at least that part running at the intended speed!
Intel: "Runs great on Intel Pentium 4!"
Intel Pentium 4: **runs awfully slow and hot in basically everything, so badly that Intel based their next CPUs on the Pentium 3 which is based on the Pentium Pro**
The P4 was designed with 5 GHz+ in mind as its operational sweet spot, and in theory it would have worked just fine, but silicon hit its limits, so the 5-10 GHz clock speeds needed for it to shine were impossible.
In 2004(ish) I built my dad an audio recording computer with a Socket 478 Pentium 4, 1.5 GB of 1066 MHz RDRAM, and dual IDE hard drives (each one on its own separate cable for maximum throughput). That thing was a beast. Never had any complaints about its performance. He used it for almost 10 years.
The P4 was very sensitive to pipeline flushes. As long as the branch predictor did its job, it was a smooth ride; otherwise, resetting that long pipeline on each branch misprediction caused too many delays.
Also, the lack of L1 instruction cache made that architecture overly dependent on the speed and size of the L2 cache. In the case of this early model, 256KB of L2 was coming a bit short of the sweet spot. The later Northwood core, with 512KB L2, would perform better.
It also didn't help that, when they released the Prescott core, they made the pipeline even longer (31 stages against 20). Yikes. Imagine waiting 31 CPU cycles to refill that pipeline every single time the prediction fails.
Prescott also had a 1 MB L2 cache, which could have helped it, but as far as I read, it had a bigger latency than the Northwood's cache, which could slow it down in applications that didn't take advantage of it...
No wonder Prescott didn't impress anybody at its release, with the fastest Northwood (running at 3.40 GHz) being a little bit faster, if not a lot faster, than the fastest PGA 478 Prescott (which also runs at 3.40 GHz).
I think Northwood is kinda underrated for being a P4. It's actually a solid chip, that actually works. It's also not amazing or impressive by any means, but it's still a good CPU.
Now, Prescott completely ruined the P4's reputation. But honestly, it isn't a hard thing to do lol.
Awesome video; a showcase of the late-2000s AMD Phenoms against their Intel counterparts would be an interesting contest.
1 GB of RAM would have been crazy at that point in time. 256 MB was plenty for the early XP era.
I had XP even on 128MB ram 😀
My high school ran a bunch of these Socket 423 P4 systems. They retired them during my Grade 10-11 year. My friends and I had a computer club, so the school gave them to us to experiment on. They all had the nasty Rambus DRAM. What a blast from the past.
Master of animations, I knew it!
Excellent, enjoyable video! Thank you for the history lesson about the Pentium 4. I look forward to the core 2 duo and other topics you mentioned for future vids. 😀
The paintium 4
-powered by ded inside
👍 for the Sim City 3000 music 😂
What your description of pipelining missed is that in *ideal circumstances* each pipeline stage is executing a part of an instruction separate from (and independent of) all the other instructions in the pipeline. This way, even if it takes 30+ clock cycles to process a single instruction, you can still theoretically retire one instruction every cycle.
Pipelining is a very powerful CPU design concept, but it only works well when the software is designed to work with it. Unfortunately, the x86 ISA is fundamentally incompatible with it...
Okay you had me until you said x86 is fundamentally incompatible with pipelining.
All modern x86 CPUs are pipelined. Hell all modern CPUs used in consumer PCs of any kind are pipelined.
The P6 based processors like the Pentium 2 and 3 that came before the Pentium 4 already had a fairly deep 14 stage pipeline.
This is comparable to a modern ARM processor like the Cortex-A76 which has 13 stages. ARM CPUs and other RISC designs seem to have less pipeline stages than x86, I would actually say that x86 and other CISCs lend themselves to deep pipelines for this reason. I am guessing it's easier to execute the more complex instructions over many steps rather than trying to do it in fewer with simpler instructions like some RISC designs do.
The Pentium M had a few fewer stages than this, though; it only had 10. This highlights a key problem with deeply pipelined CPUs: they tend to use more power and can be less efficient. The deeper the pipeline, the higher the cost of a misprediction. That's the problem that undid the Pentium 4 with its 20 to 31 pipeline stages. This is also why modern CPUs have extremely good branch prediction; AMD actually uses a primitive form of machine learning in their newer Zen processors and their successors like Zen 2 and Zen 3. Zen 2 only has 19 stages in its pipeline compared to the 31 Prescott had, and I doubt the branch prediction of the time was good enough to deal with that.
Another problem you get is dependencies: a later instruction can depend on the results of instructions that come before it. This means they can't effectively be pipelined. You can also get the entire processor stalled by an instruction that requires fetching data from memory. Both of these problems are partially ameliorated by the use of hyperthreading. When one instruction stream stalls for whatever reason - or just can't make use of all the execution units available in modern superscalar CPU designs - you just start executing the other instruction stream.
GPUs are not as deeply pipelined, and neither are more efficient but less powerful CPU designs. This shows how having too many stages causes problems. They also do without certain other performance features, like being superscalar or, in some GPU designs, even out-of-order execution. This reduces single-thread performance by a lot but makes the cores simpler and more efficient, letting you fit more of them in the same silicon area and power envelope so total throughput goes up. That's ideal for something like a GPU, whose workload is massively parallelised.
I disagree with all these definitions of pipeline. It's not that a specific instruction goes through a set-length pipeline; the pipeline is simply speculative branch prediction, where a missed prediction causes a pipeline flush (bad). The specific length of said pipeline is not really the issue, nor is it in any way, shape or form related to what gets done in a clock cycle (i.e. instructions still take x cycles per instruction regardless, and there will always be delayed execution where programmers have threaded code that depends on stuff to do other stuff). The description in the video is a better description of CISC vs RISC, which, although it does affect the pipeline, is just a consequence. Long story short, the pipeline doesn't cause instructions to take more/fewer cycles to execute. Additionally, in regards to @harryhall there, he's 100 percent correct. In college I made a MIPS-esque processor on an FPGA - it was a dope class, and everyone made their own special flavor and implementation of it too, kind of a contest. It was fully pipelined (and developed all the way up from building adders out of gates), and I'm pretty sure MIPS is pretty old. But again, when something takes 30 cycles to execute, that's a complex instruction set architecture. A reduced instruction set architecture is where things take fewer cycles, but you have to do more operations.
@@amanda_bynes226 hate to say this but that's not completely correct. Pipelining can be done without branch prediction or speculative execution. Pipelined CPUs also don't have to be out of order either. Simpler pipelined CPUs and some GPUs don't have these as they aren't required and take up silicon area and power. They do improve the performance of a pipelined CPU though. Pretty sure they don't have to be superscalar either.
@@harryhall4001 Totally agree with you. My point is that cycles per instruction is not a pipeline issue, it's an instruction set issue (CISC vs RISC in this context). I never said pipelines required branch prediction (my MIPS implementation didn't!!!)
I look forward to your coverage of the core 2 duo
Honestly, watching older hardware is more interesting than newer stuff, because old hardware has a story attached to it, rather than today's stat-padding ordeal of "this has better numbers on paper, so that's why it's better, even though it's more expensive"...
Another great vid, had fun learning the Socket 775 CPUs I have lying around aren't that bad, at least. Keep up the good work mate!
You know it's amazing when it was even slower than an XBOX.
Love that Sim city 3000 soundtrack, keep up the good work.
Having used the Socket 423 platform with multiple CPUs, ranging from the slowest 1.3 GHz Willamette to the fastest 3.0 GHz Northwood (SL6YH) that could be dropped in through a Socket 423-to-478 conversion kit: the Pentium 3 Tualatins were really a sign of things to come in the Core 2 series. The lower-clocked 400 MHz front-side bus, in addition to the aforementioned branch prediction misses, choked the early Pentium 4's potential.
With Socket 478 arriving so soon afterwards, increasing the front-side bus to 533 and 800 MHz, the early adopters were shafted big time.
Pentium 4s are a dumpster fire with a special place in my heart considering I grew up around them and had to make one run through the beginning of 2010.
"Thank you for your service, may we never need you again."
You should find an old Celeron 300A and overclock it using modern methods, even de-lidding. Just to see how much more room was left on the table, held back by the cooling of its time.
66 MHz FSB to 100 MHz FSB, woohoo, a cheap 450 MHz that smashed everything else!
Hitting 450 MHz was easy, just bump the FSB to 100 MHz and you were off to the races.
Going faster, well, therein lay a problem. The 440BX was the last chipset compatible with the 300A. The 440BX could hit a 133 MHz FSB, no problem... but then the AGP slot was waaaaay overclocked, because the 440BX didn't support the right frequency dividers to put AGP back into spec, and most graphics cards would lose their shit. And there wasn't a version of the 300A released in Socket 370, so you couldn't install it in an i815 chipset board that supported a 133 MHz FSB natively.
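The multiplier math behind this is simple (the 300A's multiplier was locked at 4.5x; the 440BX divider detail is my recollection, so worth double-checking):

```python
MULTIPLIER = 4.5  # the Celeron 300A's multiplier was locked

# Core clock = FSB * multiplier:
for fsb in (66, 100, 133):
    print(f"{fsb} MHz FSB -> {fsb * MULTIPLIER:.0f} MHz core")
# 66 MHz is the stock ~300 MHz; 100 MHz gives the famous 450 MHz;
# 133 MHz would give ~600 MHz, if the rest of the board could keep up.

# AGP is specced for 66 MHz, and the 440BX only offers 1/1 and 2/3 dividers:
for fsb in (100, 133):
    print(f"{fsb} MHz FSB -> AGP at {fsb * 2 / 3:.0f} MHz")
# 100 MHz FSB keeps AGP in spec (~67 MHz); 133 MHz pushes it to ~89 MHz,
# which is why graphics cards misbehaved at that FSB.
```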
I fricking love this channel !
Amazing video , as always !
I remember playing Halo for the first time on a Pentium 4 computer.
And boy, it's mesmerizing.
I had a Compaq small form factor Socket 423 desktop at our rural cabin on dial-up internet for years. I think it was originally a 1.5 GHz CPU; I then upgraded it to a 1.7 GHz CPU and finally to the fastest Socket 423 CPU, 2.0 GHz. I remember I had an ATI 9550 video card in it. The last game I installed on it was COD 4; it played at 800x600 at 30fps-ish on low.
Would love to see a Pentium M review, because I have an ancient laptop with it :D
Socket 423 is a great option for running Windows 2000, it's basically as compatible as it gets and gives you a chance to use some "oddware" since most P4's were mPGA478B
It would be nice to see these CPUs taking on the Pentium III Tualatin or the AMD Athlon and Athlon XP processors. I guess all the above-named processors would be faster (with exceptions in some benchmarks).
All tested games (from highest to lowest 1% lows):
Empire Earth - 480p ultra - 58/50fps
Halo: Combat Evolved - 480p high - 45/25fps
The Sims 2: Ultimate Collection - 600p normal - 41/25fps
Far Cry - 600p low/medium - 34/21fps
The Elder Scrolls III: Morrowind - 480p normal - 36/18fps
Fable: The Lost Chapters - 480p low/medium - 22/11fps
Half-Life 2 - 600p low/medium - 24/9fps
I have one of those beasts; it's clocked at 1.4 GHz and it's an amazing CPU for collectors.
Give me pfp sauce
Epic Video, thank you for making this!! I still have one of these systems... Socket 423, RDRAM, Pentium 4 Willamette... although mine's a 1.7 GHz.
Paintium 4
Yes
Holy shit, Budget-Builds speedrunning? That sounds awesome, if you're going to stream it/record it, I can't wait to watch it
Can you test a Core 2 Quad? I want to build a Core 2 Quad system and I want your take on this CPU in 2020.
I had a Core 2 Quad Q8300, you can pick them up dirt cheap. But you might be better off getting an AMD Phenom X4.
I have a Q9400. Nice for a retro system; as for the GPU, anything stronger than a GTX 750 Ti is wasted. In Windows 10 it is usable, but forget any modern heavy game.
@@hornetIIkite3 Sadly, all Phenoms lack the SSE4.1 needed for most of today's applications and games.
17:45 LOL, I was trying to close the "windows XP tour" bubble...
"Intel's Painful Pentium 4"...
So.. Paintium 4?
The history part of this video kind of shows what is happening in the current market as well. The only difference is, of course, that Intel still has a chance (or it goes the same way as the history). Great video as always!
Damn boi that cpu *THICC*
This is nothing! You should see an S939 Athlon 64/X2, they're bricks. Also the old slot-based P3s, which were actually bought as a daughterboard-like thing that plugged into a slot on the mobo instead of a socket. They came with a preinstalled cooler, like a video card HSF.
I really love your voice and videos. good job and thanks much...:)
Pentium 4 (aka) when Intel tried to prioritize marketing
Very great video! :) Thanks for that! Keep up that great work!
Compare this to a Pentium III Tualatin using DDR RAM. My Asus TUA266 motherboard with even a 1.0 GHz Celeron Tualatin OC'd to 1333 MHz seemed better.
That pipeline explanation was really good
Oh wow, I thought most of these were salvaged for precious metals
What precious metals? the 0.001 grams of gold they used to plate the CPU pins?
@@gravitone Yes, it's an extremely common practice on electronic waste.
Salvaging precious metals is common, especially on older components, as the gold plating tends to be more abundant.
I had a Pentium 4 PC as a kid. Where i learned most of my early knowledge of computers. It ran remarkably well and served me for years.
For AMD's version: See the FX series
Man those were weird chips. I remember being a kid building my first pc and seeing those things and those specs blew my mind. EIGHT CORES???
@n0obhAcker00 And the OS sees it as a 4-core/8-thread CPU.
@n0obhAcker00 exactly
Very nice video, i really enjoyed it, but i would have liked to see the Pentium III 1.4 Tualatin in those benchmarks crushing the Willamette X'D
I forget Intel used to name all their CPUs after Oregon towns. Kind of cool
Oh no. Not the Pentium 4
That's a real nice setup you have there. I tried (for years) to get a Socket 423 setup, as I wanted to test the Tom's Hardware thermal throttling thing (I kinda have my doubts about its legitimacy... or however you spell it). However, finding a 423 isn't easy. You either find a CPU but no motherboard, a motherboard but no CPU (and no RAM), or if you did find a motherboard with a CPU but no RAM, it was too expensive. After a few years I got a working setup with an entry-level Socket 423 chipset that uses SDRAM instead of RDRAM, so getting RAM for it was a heck of a lot easier. The CPU I have is a 1.7 GHz model, and I paid 25 euros for the lot (motherboard + CPU). The motherboard needed its caps replaced as they were all bulged. The difference in RAM speed (100 MHz or 133 MHz SDRAM) could make up 10 to 15 FPS in Quake 3, for example. It might have been the worst of the Pentium 4s, but I still love it. I didn't get to test the thermal throttle thingy like in the Tom's video, as I have an OEM MSI board instead of the Asus P4T they used. The thermal sensors are... a bit too optimistic (reading lower than ambient...). Thermal throttling works when I disable the fan, but even I didn't dare remove the heatsink to put the Tom's video to the test, as it took me so long to actually find a working S423! (And then it's "what if it doesn't work and it burns up?") I did try a small overclock, going to 1.8 GHz, and every extra MHz helps the P4 a lot, but the motherboard lacked dividers and voltage control, so I couldn't get any higher. But you saw great gains in games like Quake 3 with 'just' 100 MHz more. Anyway, great video (and find, for that matter! A working 423 setup is hard to find... for a decent price, at least).
I corrected my comment on the community post. But still, for some 30 seconds that post was 6 hours old and the video wasn't uploaded.
@Hatidža Serdarević Yeah, some fan really wound the guy up.
I STILL REMEMBER (years ago) an AMD rollout in Detroit. NONE of the game consoles worked, hardly any literature was provided, no food, and the mints in the little green boxes were HORRIBLE... anyone?
Let’s go I got my rtx 3090
How about an SLI 3090?
My first new computer, which I got when I was around 11, was just about this setup! It was a Compaq 5000T with the 1.4 GHz CPU, 128 MB RDRAM and a 40 GB HDD, running Windows ME. I remember my childhood best friend's family had bought the same machine and at some point threw it out (mainly because it was full of viruses), and I took the RAM out of it and added it to my own. I later upgraded it to Win XP with a bootleg disk my friend gave me. My onboard graphics had gone bad at some point, which gave me an excuse for Dad to buy me a video card. I put a capture card in it later on so that I could play PS2 through it, because I didn't have my own TV. This machine lasted me through the end of my school years, until I built my first desktop when I was 18. Keeping this thing running all those years is what sparked my interest in I.T., and I'm still doing it today!
I've got the best Socket A board and an Athlon XP3200+ I could loan you...
I had a 3200+. They were kind of a ripoff, because they had no OC headroom. You could buy one of the cheaper XPs and OC it past 3200 speeds easily. Still it was nice having one that was clocked that high out of the box, it just had no room to grow. It also ran on the hot side for an XP.
My first computer had a Duron in it, I think it was a 700 MHz one; this Pentium 4 would have been a dream at that time. Wish I'd kept that old processor, but a lightning storm took my computer out.
Longer pipeline, higher clock speed... kind of sounds like a scam xD
AMD Bulldozer FX CPU left the chat.
@@laharl2k LOL
I boasted about my pipeline when dating my wife.
@@laharl2k I like my Steamroller-based APU.
@@daemonspudguy
Aside from the newer instruction sets, a Llano APU would have been better in 99% of the cases. You are not compressing stuff or rendering video most of the time. Not to mention the power consumption being ridiculous for the little benefit in very specific applications compared to Phenom II / Llano APUs.
I also liked my old Prescott Pentium 4, that doesnt mean it was any good at all.
Yes Hamish thanks for posting 🙏
Roses are red,
Violets are blue,
idiots write "first",
So you shouldn't be too
First
03:30 Every time it ends up messing up (e.g. a cache miss), the P4 sends uops to a replay loop. US patent US7200737B1. Willamette and Northwood have two loops (RL-7 and RL-12); Prescott and Cedar Mill have one (RL-18).
heart this comment man c'mon
I remember that our neighbour had one of these Socket 423 Willamette P4 based systems. It was very unstable, I recall being called over because the system kept on bluescreening. We then eventually tracked the problem down to a lousy PSU.
At the time I had a 1 GHz Pentium III (Coppermine) with a 64 MB GeForce FX 5200 and 256 MB of 133 MHz SDRAM. They had the same graphics card and 256 MB of RDRAM, but my system ran most games better. Then I switched to a P4 630, and later I sold it and bought a Mac because of Vista :D
Empire Earth 1 has the best campaigns, nice to include it.
My god I had completely forgotten about RDRAM **~SHUDDERS~**
Before the Pentium 4 was released, I was forming a business collaboration between Be, Inc. and Gateway, Inc. to develop a universal platform to replace the then-defunct hardware side of Be, Inc. (aka the BeBox) in 1998. We had a lot of input into their NLX platform, the 3200. It was originally designed to take anything from a PII 233 to a PIII 733 via a slick BIOS update. We upgraded our offerings to Socket 370 in late 1999, and by early 2000 we had been beta testing BeOS 5.0.1 on an NLX Socket 423 board. While the board itself was a nice upgrade over the Socket 370 (PIII) one, it ran very hot and slow; I wasn't impressed. Even with firmware updates throttling the power, it was clear to me and many at Gateway that the NLX platform wasn't going to work with this CPU/memory architecture. Worse, it was far more expensive for the RAM and CPU without much gain at all, as BeOS is a highly threaded OS, not ideal for long pipelines. It wasn't until the Northwood on Socket 478 that things improved as far as a faster-feeling system under BeOS over the PIII. By then Be, Inc. was out of business and the venture didn't make financial sense. After my non-disclosure agreement with Intel and Gateway ended, I would tell customers looking for an upgrade path to look at AMD.
Great video man absolutely love your content as always