Intel Tried To Kill x86! - Itanium Explained

  • Published May 28, 2024
  • Get iFixit's Pro Tech Toolkit at ifixit.com/techquickie
    Learn about Itanium, the Intel architecture that was once meant to replace x86.
    Leave a reply with your requests for future episodes, or tweet them here: / jmart604
    ►GET MERCH: www.LTTStore.com/
    ►SUPPORT US ON FLOATPLANE: www.floatplane.com/
    ►LTX EXPO: www.ltxexpo.com/
    AFFILIATES & REFERRALS
    ---------------------------------------------------
    ►Affiliates, Sponsors & Referrals: lmg.gg/sponsors
    ►Private Internet Access VPN: lmg.gg/pialinus2
    ►MK Keyboards: lmg.gg/LyLtl
    ►Secretlabs Gaming Chairs: lmg.gg/SecretlabLTT
    ►Nerd or Die Stream Overlays: lmg.gg/avLlO
    ►Green Man Gaming: lmg.gg/GMGLTT
    ►Amazon Prime: lmg.gg/8KV1v
    ►Audible Free Trial: lmg.gg/8242J
    ►Our Gear on Amazon: geni.us/OhmF
    FOLLOW US ELSEWHERE
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    Twitch: / linustech
    FOLLOW OUR OTHER CHANNELS
    ---------------------------------------------------
    Linus Tech Tips: lmg.gg/linustechtipsyt
    Mac Address: lmg.gg/macaddress
    TechLinked: lmg.gg/techlinkedyt
    ShortCircuit: lmg.gg/shortcircuityt
    LMG Clips: lmg.gg/lmgclipsyt
    Channel Super Fun: lmg.gg/channelsuperfunyt
    Carpool Critics: lmg.gg/carpoolcriticsyt
  • Science & Technology

Comments • 782

  • @kwerboom
    @kwerboom 2 years ago +487

    I prefer, "If you're going to reinvent the wheel at least make sure it works on the roads that already exist."

    • @Fenrisboulder
      @Fenrisboulder 2 years ago +10

      like how they had such poor insight, shallow connections maybe w/ a big client

    • @ch4.hayabusa
      @ch4.hayabusa 2 years ago +20

      Or rather, if you're going to invent a wheel that requires special roads, make sure someone wants to build those roads. If not, make sure your wheel is so good that you can afford to build them yourself.

    • @dabbasw31
      @dabbasw31 6 months ago

      This is the main reason why maglev trains did not replace traditional wheel-rail trains. (:

  • @I0NE007
    @I0NE007 2 years ago +447

    I just heard about Itanium two days ago when learning about "why Space Pinball didn't make it to Vista."

    • @peteasmr2952
      @peteasmr2952 2 years ago +9

      Same

    • @rishi-m
      @rishi-m 2 years ago +24

      th-cam.com/video/3EPTfOTC4Jw/w-d-xo.html
      You both made me Google this, this video? Interesting..
      Edit: It is a link to NCommander's video

    • @I0NE007
      @I0NE007 2 years ago +3

      @@rishi-m Yeah, that's the one.

    • @ikannunaplays
      @ikannunaplays 2 years ago +14

      It's almost as if TechQuickie did also and decided to expand on it

    • @angeldendariarena2287
      @angeldendariarena2287 2 years ago +3

      The point of view of a retired Microsoft software engineer who created many of Windows' tools. th-cam.com/video/ThxdvEajK8g/w-d-xo.html&ab_channel=Dave%27sGarage

  • @pyronical
    @pyronical 2 years ago +515

    You should start covering more obscure/failed hardware devices that people probably never heard of.

    • @JosifovGjorgi
      @JosifovGjorgi 2 years ago +7

      and dressed as a hipster :)

    • @StarkRG
      @StarkRG 2 years ago +10

      It'd be interesting to see LTT's take on the Transmeta Crusoe, a CPU that was very much NOT x86 but managed to run a custom x86 emulator that was, in almost all cases, at least as fast as an Intel or AMD CPU of the same cost, and using much less power in the process. (The "almost all" caveat is probably ultimately why they failed) It's actually the same architecture family as Itanium, VLIW (Very Long Instruction Word, with 32-bit instructions some of which could be combined to make 64- and 128-bit instruction words).

    • @SonicBoone56
      @SonicBoone56 2 years ago +1

      Canadian LGR

    • @fanseychaeng
      @fanseychaeng 2 years ago

      That's a good one tho.🤣🤣🤯

    • @WarriorsPhoto
      @WarriorsPhoto 2 years ago +1

      Yes agreed. Any ideas come to mind?

  • @xplodingmojo2087
    @xplodingmojo2087 2 years ago +735

    “Tried to take out the bricks from my house yesterday”
    - Intel

    • @shyferret9924
      @shyferret9924 2 years ago +5

      Speaks in 10nm

    • @notaname1750
      @notaname1750 2 years ago +6

      @@shyferret9924
      Finally! Intel using 10nm

    • @shyferret9924
      @shyferret9924 2 years ago +9

      @@notaname1750 when amd already on 7

    • @Halz0holic
      @Halz0holic 2 years ago +2

      More like foundation

    • @hasupe6520
      @hasupe6520 2 years ago +1

      x86 is already dead. The writing is already on the wall. ARM will be the future. Intel/AMD will have to adapt or die. Anyone who doesn't understand that is kidding themselves.

  • @basix250
    @basix250 2 years ago +959

    Considering the new tech coming out every day, the age of x86 and its survival is really an outlier.

    • @kusayfarhan9943
      @kusayfarhan9943 2 years ago +226

      It's a low level foundation. Makes perfect sense that it survived this long. It's like the foundation/basement to a building. You can't yank it out and expect the building to not topple. There is software decades old running on x86.
      At a university lab I once saw an old piece of machinery that is controlled by a PC running Win 95 on x86. This machine is still used today to prototype CPU designs.

    • @dycedargselderbrother5353
      @dycedargselderbrother5353 2 years ago +34

      Adaptation is part of it. Under the hood modern parts operate nothing like the original chips.

    • @Myvoetisseer
      @Myvoetisseer 2 years ago +106

      It shouldn't have survived, but we're kinda trapped by it. Apple proved that x86 SHOULD die. RISC is far more efficient. But we all use x86 because all the software is written for it. And all the software is written for it because we all use it. It's very hard to get out of this loop.

    • @Paerigos
      @Paerigos 2 years ago +53

      @@Myvoetisseer Well it can't reach the computing power x86 (and mainly AMD64) can. Quite frankly, you can just dump more power into AMD64 and it will deliver. You can't do that with RISC, and you won't be able to match AMD64 for about a decade to come.

    • @manasmitjena5593
      @manasmitjena5593 2 years ago +43

      @@Myvoetisseer x86 uses CISC, and using RISC, which stands for Reduced Instruction Set, would mean lower performance overall. Apple's M1 ARM processor is great in CPU performance, but its GPU is laughable at best compared to Nvidia or AMD and is comparable to integrated graphics like Iris Xe.

  • @DantalionNl
    @DantalionNl 2 years ago +187

    Actually, architectures exist where the scheduling is done by the software ahead of time. This is the Very Long Instruction Word architecture, still found in Digital Signal Processors today!

    • @tazogochitashvili6514
      @tazogochitashvili6514 2 years ago +11

      VLIW isn't even dead technically. Russian Elbrus is VLIW

    • @VivekYadav-ds8oz
      @VivekYadav-ds8oz 2 years ago +7

      Don't compilers already do this to some extent? I once read LLVM IR of my program and it had rearranged quite a lot of instructions for some low-level optimisation (for packing reasons? idk)

    • @DantalionNl
      @DantalionNl 2 years ago +5

      @@VivekYadav-ds8oz Yes, VLIW compilers do this of course, but a compiler is still software. So it happens at compile time, as opposed to at runtime as would be the case with a hardware scheduler

    • @sudhanshugupta100
      @sudhanshugupta100 2 years ago +7

      EPIC, the design idea Itanium is based on, builds upon VLIW :)

    • @afelias
      @afelias 2 years ago +11

      It works for DSP because memory access and processing all run regularly with little interruption. It's the same reason those applications can have a million pipeline stages. Random memory accesses and dealing with other interrupts from I/O makes VLIW+software scheduling really annoying for a central processor/controller.
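The compile-time scheduling this thread describes can be sketched in C. This is a hypothetical toy model, not real Itanium or DSP code: the three-slot bundle, the invented opcodes, and the register count are all illustrative assumptions. The key property it demonstrates is that every slot in a bundle reads register state from before the bundle, so the compiler, not a hardware scheduler, must guarantee the slots are independent.

```c
#include <assert.h>
#include <string.h>

/* Toy VLIW model (hypothetical, for illustration only): the compiler packs
   up to three independent operations into one "very long instruction", and
   the machine runs all slots in the same cycle. */

typedef struct { char op; int dst, a, b; } uop;  /* op: '+', '*', or 0 for an empty slot */
typedef struct { uop slot[3]; } vliw_bundle;     /* one bundle = one issue cycle */

void exec_bundle(vliw_bundle bun, int reg[16]) {
    int before[16];
    memcpy(before, reg, sizeof before);          /* snapshot: slots see pre-bundle state */
    for (int i = 0; i < 3; i++) {
        uop u = bun.slot[i];
        if (u.op == '+') reg[u.dst] = before[u.a] + before[u.b];
        if (u.op == '*') reg[u.dst] = before[u.a] * before[u.b];
        /* op == 0: an empty slot, i.e. a no-op the compiler emitted explicitly */
    }
}
```

Because slots read the snapshot, a bundle that writes r1 in one slot and reads r1 in another still sees the old r1; a hardware-scheduled CPU would have to detect that dependency itself at runtime, which is exactly the circuitry VLIW tries to avoid.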

  • @JuanPablo-ho7fg
    @JuanPablo-ho7fg 2 years ago +374

    As an Itanium native speaker, the CPU would probably say "Disculpe señor, ¿podría indicarme dónde está el baño?" ("Excuse me sir, could you tell me where the bathroom is?"). The same error was in Itanium emulators a while ago.

    • @Manganization
      @Manganization 2 years ago +46

      Thank you Itanian for gracing us with the proper language execution.

    • @Quique-sz4uj
      @Quique-sz4uj 2 years ago +3

      @@Manganization is this a woosh or not

    • @mineland8220
      @mineland8220 2 years ago +15

      The CPU wanted to offload the extra processes

    • @CaptainSunFlare
      @CaptainSunFlare 2 years ago +9

      @@Quique-sz4uj no, friend, it's not a whoosh.
      Though sadly, my computer uses the Itanium processor, so I don't know if you'll understand this one...

    • @soyiago
      @soyiago 2 years ago +5

      Based 😎👌

  • @JemaKnight
    @JemaKnight 2 years ago +76

    Missed at least one important point I think was worth bringing up:
    Intel had no choice but to license the AMD64 instruction set extension from AMD, who had licensed the x86 instruction set from them initially - creating a mutual dependence that remains to this day, and extremely important leverage for AMD in cross-licensing agreements, lawsuits and industry deals that have taken place since.
    This significantly leveled the balance of power between the two companies, and it's extremely likely that AMD's fate may have been very, very different if not for this series of events.

    • @bartkoens5246
      @bartkoens5246 2 years ago +3

      Slightly off-topic: I still deplore the demise of Zilog's Z80 successors like the Z8000, with its superior register layout.

    • @Barkebain
      @Barkebain 2 months ago

      We've all benefited from the competition between these two companies for decades as a result of these cross licensing agreements. I'm curious why Intel has so far stated that they are not going to jump into the upcoming ARM(s) race that's going to attempt to displace x86 in both the PC and server realms. We already have AMD, Nvidia, MediaTek, Qualcomm, Apple, and Samsung in the mix, and so many proven performers from cell phones, and Apple's new laptops/desktops that Intel might not be able to ignore this round of ARM on the desktop.

  • @skeletonbow
    @skeletonbow 2 years ago +88

    I've got an ancient HP zx2000 Itanium workstation in my basement that's been sitting there for a decade and a half. They were horrible to develop on, and insanely slow. I named the computer "insanium". I didn't realize that they existed this long, and thought the architecture died like a decade ago. We're all so lucky that AMD created AMD64 and brought it to market when they did, or we could be plagued by the Itanium. :)

    • @maybeanonymous6846
      @maybeanonymous6846 2 years ago +3

      dID yOU tRy LinUx ?

    • @mytech6779
      @mytech6779 2 years ago +4

      Mostly replacement parts rather than new installs. The 80386 was in production until about 2007, partly for the same reason: support for existing industrial equipment. (And the 80386 had been fully tested for bugs and corner cases in critical systems, so it remained popular in new designs for quite a while too, having more than enough power for many embedded-style tasks like systems monitoring, navigation calculation, and so forth.)

    • @Thorovain
      @Thorovain 2 years ago

      You should send it to LTT. I bet Anthony could make an interesting video with it.

    • @ailivac
      @ailivac 1 year ago

      Nah, there were plenty of other options at the time. Sparc, ARM, MIPS, etc, which eventually all added 64-bit support and were based on clean-sheet designs not hampered by vestiges from the 1970s. But none of those would have let us keep relying on millions of lines of Windows-based code that no one would ever bother to recompile.

  • @jacko314
    @jacko314 2 years ago +145

    i had to debug ia64 assembly crash dumps. nightmare. debugging rootkits would have been impossible.

    • @dycedargselderbrother5353
      @dycedargselderbrother5353 2 years ago +7

      NCommander debugged ia64 recently in his video about what happened with Space Cadet Pinball. He had trouble with it and it looked alien to me.

    • @jacko314
      @jacko314 2 years ago +4

      ​@@dycedargselderbrother5353 that is because it is alien. i've rummaged through RISC and tonnes of x86/64 assembly. the only thing ia64 had going for it is that if you had the right symbol files you could figure out quite a bit, but the actual execution logic was like swimming in spaghetti. i think assembly should be somewhat readable, as many errors are only detectable via assembly inspection. (kd rules)

    • @UltimatePerfection
      @UltimatePerfection 2 years ago +1

      Feature, not a bug.

  • @niduroki
    @niduroki 2 years ago +50

    Didn't Intel also try to create ARM processors back in the '80s, but IBM was like: uuuh, naah, you better not?

    • @chuuni6924
      @chuuni6924 2 years ago +15

      No, they did buy DEC's StrongARM in the 90s and tried to use it for a series of low-power chips, but IBM had nothing to do with its demise.

    • @Rainmotorsports
      @Rainmotorsports 2 years ago +9

      Intel did make ARM processors, but that was way later. They sold their license to Marvell, who still uses it.

  • @raulsaavedra709
    @raulsaavedra709 2 years ago +66

    A possible topic of interest: game engines, how they work at a very high, easy to grasp level

    • @etaashmathamsetty7399
      @etaashmathamsetty7399 2 years ago +4

      A game engine is just a development environment, like an IDE

    • @etaashmathamsetty7399
      @etaashmathamsetty7399 2 years ago +2

      @thatonespathi what I meant by development environment was a bunch of libraries that you use in YOUR OWN development environment. (I have used both Unity and Unreal Engine + making my own)
      game engine != car engine
      a game engine is more of a library: it wraps low-level graphics API calls, physics calls, etc. into simpler-to-use functions, along with giving you an environment to code in. (environment = MonoBehaviour in Unity, or the C++ environment UE gives you)

    • @Roxor128
      @Roxor128 2 years ago

      High-level might be easy to grasp, but low-level is much more interesting.

    • @reillywalker195
      @reillywalker195 2 years ago

      Game Makers' Toolkit did basically that.

    • @raulsaavedra709
      @raulsaavedra709 2 years ago

      @@reillywalker195 What title did that video have?

  • @ElliotGindiVO
    @ElliotGindiVO 2 years ago +65

    The backwards compatibility of x86 is awesome. I hope it always remains supported.

    • @powerfulaura5166
      @powerfulaura5166 2 years ago +29

      It will.
      ..via emulation on ARM lol

    • @danzjz3923
      @danzjz3923 2 years ago +23

      @@powerfulaura5166 not ARM, RISC-V

    • @brandonn.1275
      @brandonn.1275 2 years ago +3

      @@danzjz3923 ¿por qué no los dos? (why not both?)

    • @A.Martin
      @A.Martin 2 years ago +9

      x86 is also a liability because of its age; it needs to retire, and maybe now it can be phased out. When they first tried, it was just too early, I think.

    • @afelias
      @afelias 2 years ago +8

      At some point RISC-V will have an extension that helps emulate x86 with good hardware support and that'll be the end of that.
      Though, in the present, we're way more likely to emulate RISC-V on x86 native hardware...

  • @XantheFIN
    @XantheFIN 2 years ago +41

    AMD was actually team green back then.. right?

    • @wince333
      @wince333 2 years ago +10

      yes, before buying ATI

    • @Wahinies
      @Wahinies 2 years ago +2

      @@wince333 I still remember the green moulded biodegradable stuffing for my Opteron 165.. Made me go and buy a pack of spearmint gum

    • @TheExileFox
      @TheExileFox 2 years ago +1

      AMD used to have a green logo

    • @RandomnessCreates
      @RandomnessCreates 2 years ago

      Yep, wanted to get Nvidia too but Jensen said nope.

    • @Hadar1991
      @Hadar1991 2 years ago

      Actually, black was and still is the official AMD brand colour, and they use black to brand their CPUs. Only the Radeon brand is officially red.

  • @aah_einstein
    @aah_einstein 2 years ago +6

    I had never heard of Itanium, but I came across IA64 in college, particularly because we were using Intel's manual when we were studying x86 assembly language.

  • @hardrivethrutown
    @hardrivethrutown 2 years ago +45

    didn't they also try during the 486 era with the i860 and even earlier with iAPX 432?

    • @bionicgeekgrrl
      @bionicgeekgrrl 2 years ago +9

      More a case of multiple options going on than of trying to kill one off with a new one. The 8086 chips were initially never seen as the future, just a chip to pump out while working on the 432 series, but that never took off. The i860 ended up mostly in printers.
      It was a similar thing with Motorola: they wanted to move to a new chip design, but the 68k was so popular they could not get the follow-on design going, and then they hit the limit of the 68k design. Eventually they'd get to the PowerPC design, though not alone.

    • @kensmith5694
      @kensmith5694 2 years ago +4

      The 432 was a daft design anyway. Instructions didn't contain addresses. Instead they contained a reference number that had to be looked up in a table to get the type and address of the variable. The type was used to check that the operation was permitted on that type of variable, and the address from the table was used to get the value from physical memory. This made everything take an extra memory access. The 286 outperformed the 432.

  • @Friedbrain11
    @Friedbrain11 2 years ago +58

    I remember those things. I was never interested in one. Apparently neither was anyone else LOL

    • @chunye215
      @chunye215 2 years ago +4

      I'd want one just for fun, but the later models that run halfway fast are expensive even on eBay. It's old junk someone is throwing out, but still, there must be demand from some exotic shops depending on them; I have no other theory.

  • @FarrellMcGovern
    @FarrellMcGovern 2 years ago +7

    I ran an HP SuperDome, what they called at the time a "Supercomputer". It could have multiple CPUs, with both PA-RISC and Itanium CPUs at the same time. I had a lot of fun bringing up Linux and HP-UX on the machine that I maintained as part of a testing lab for an HP software cataloging product. Fond memories...

  • @MatrixRoland
    @MatrixRoland 2 years ago +7

    I had an instructor about five years ago who was part of the design/implementation of those chips. He brought in some samples to show us. They were really big chips compared to x86, on daughter cards that plugged into the main PCB.

    • @mytech6779
      @mytech6779 2 years ago +2

      Slotted CPU mounting was also popular with x86 at the time, purely a matter of manufacturing. It was a way of adding cache without harming production yield. Basically, they couldn't test the silicon until it was all mounted together, so a bad bit of cache (it was separate silicon from the core) would trash a whole traditional pinned package, but by mounting on a mini-PCB they had the option to replace the bad piece of silicon.

  • @canoozie
    @canoozie 2 years ago +14

    VLIW architectures like EPIC are great (if your compilers are great), until you have to deal with register file pressure. TTAs (Transport Triggered Architectures) are better but bus-heavy, so they occupy more space; however, they eliminate the register file pressure problem. Hobby CPU designer here, and owner of more than a dozen FPGAs. I've built CPU architectures based on both of these designs. AMA

    • @capability-snob
      @capability-snob 2 years ago

      Do you have to deal with register file pressure? One of the nicer innovations on the itanium over other windowed designs was that you could allocate register frames to your heart's content and the Register Stack Engine would lazily write them back for you.

    • @jmickeyd53
      @jmickeyd53 2 years ago +1

      "if your compilers are great" - I think this was the real downfall of Itanium. It was just a bit too early. It's easy to forget just how bad compilers were then; SSA was still a largely ignored research paper from IBM at the time.

    • @chainingsolid
      @chainingsolid 2 years ago

      This sounds really cool. How'd you get into this? Is it primarily buying FPGAs and investing time?

    • @canoozie
      @canoozie 2 years ago +1

      @@chainingsolid I'm a software engineer, have been writing code since I was a smallish child. I had a fascination with understanding how computers executed code, and when I heard of FPGAs I found an opportunity to explore some of that. It all started with a simple clock divider, an LED, and some Verilog code to implement said clock divider and send voltage to said LED, making it blink. From that point forward, I learned how to build a debugging interface and consumed a ton of material on old CPU designs, since they're comparatively simple (modern CPUs are really complex). A decade later, I had written my first CPU implementation and a compiler toolchain capable of running code I wrote for it.

    • @canoozie
      @canoozie 2 years ago +1

      @@capability-snob in TTAs there is no inherent register file pressure. Intel's solution to the problem was ingenious but not a panacea. Most VLIW designs have hundreds to thousands of registers to "solve" this issue, but Intel came up with the window: they have more than 64 registers, but only some of them are available for writing by any given instruction

  • @jmtrad1906
    @jmtrad1906 2 years ago +13

    The reason we use AMD64 today.

  • @ChristianStout
    @ChristianStout 2 years ago +6

    Another fun fact: Oracle still makes SPARC CPUs for their HPC customers.

  • @magnemoe1
    @magnemoe1 2 years ago +16

    As I understand it, Itanium wanted to execute groups of instructions at once by grouping them into a 256-bit call and doing them all at once.
    So it would require a lot from the compilers. Feels a bit PS3 to me.
    Yes, x86 has plenty of flaws, but around 90% of your CPU is cache and branch prediction, and maybe 5% is float and integer math units.
    30 years ago we had 200K transistors on a chip and saving 50K was a huge deal; now we have 10 billion, so saving 50K transistors... :)

    • @danielbishop1863
      @danielbishop1863 2 years ago +1

      Itanium instruction encoding is based on 128-bit "bundles" each consisting of three 41-bit instructions, with the other 5 bits used to encode what types of instructions (integer, floating point, memory access, or branch) these are.
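The bundle layout described above can be made concrete with a little bit arithmetic. A sketch in C, assuming the field order from the reply (a 5-bit template in bits 0-4, then three 41-bit slots in bits 5-45, 46-86, and 87-127); the `pack`/`slot` helper names are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* A 128-bit IA-64 bundle modelled as two 64-bit halves.
   Layout assumed here: bits 0-4 template, bits 5-45 slot 0,
   bits 46-86 slot 1 (straddling the halves), bits 87-127 slot 2. */

#define SLOT_MASK ((1ULL << 41) - 1)   /* a slot is 41 bits wide */

typedef struct { uint64_t lo, hi; } bundle_t;  /* bits 0-63, bits 64-127 */

bundle_t pack(uint64_t tmpl, uint64_t s0, uint64_t s1, uint64_t s2) {
    bundle_t b;
    b.lo = (tmpl & 0x1F) | (s0 << 5) | (s1 << 46); /* low 18 bits of s1 fit in lo */
    b.hi = (s1 >> 18) | (s2 << 23);                /* remaining 23 bits of s1, then s2 */
    return b;
}

uint64_t template_of(bundle_t b) { return b.lo & 0x1F; }
uint64_t slot0(bundle_t b) { return (b.lo >> 5) & SLOT_MASK; }
uint64_t slot1(bundle_t b) {                       /* straddles the two halves */
    return ((b.hi & ((1ULL << 23) - 1)) << 18) | (b.lo >> 46);
}
uint64_t slot2(bundle_t b) { return (b.hi >> 23) & SLOT_MASK; }
```

The arithmetic checks out: 5 + 3 × 41 = 128, so the template and three slots exactly fill the bundle, and the template tells the CPU which functional units (integer, FP, memory, branch) the three slots need.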

  • @billazen4865
    @billazen4865 2 years ago +9

    Itanium, more like ***INSERT MATCHING WORD***

  • @MegaManNeo
    @MegaManNeo 2 years ago +9

    der8auer did some really nice die shots of Itanium CPUs a few months ago.
    Definitely worth checking out, maybe even worth using as desktop wallpapers.

    • @DigitalJedi
      @DigitalJedi 2 years ago +2

      I have a die shot of a 1660ti as a wallpaper. I like to think of it as my gpu drawing a self portrait every time I turn on the computer.

  • @TheLaser450
    @TheLaser450 2 years ago +6

    If I recall, it was also very expensive

  • @stephenkennedy266
    @stephenkennedy266 2 years ago +2

    It’s about time you guys did a video on the Itanium!

  • @pronounjow
    @pronounjow 2 years ago +7

    I just watched a Pulseway ad featuring no-beard Linus.

  • @zenbum2654
    @zenbum2654 2 years ago +4

    By the time Itanium was unveiled, Intel had made at least 3 previous attempts to replace the x86: iAPX 432, i860, and i960. Now it looks like they're jumping on the RISC-V bandwagon.

  • @MasticinaAkicta
    @MasticinaAkicta 2 years ago +3

    I heard about the Itanic; it went just about as smoothly as the boat.
    Without x86 support, and with hardly any software for it, it sank really quickly.

  • @marcello4258
    @marcello4258 2 years ago +1

    3:09 sí, claro.. es a la izquierda ("yes, of course.. it's on the left")

  • @FR4M3Sharma
    @FR4M3Sharma 2 years ago +2

    I remember downloading some software to install, but Windows just couldn't run it. I tried again, thinking it had probably been corrupted while downloading, only to realize I had downloaded the IA-64 architecture setup package, and that was the day I learned there are more than two instruction sets.

  • @r.j.bedore9884
    @r.j.bedore9884 2 years ago

    Oh! Oh! I suggested this topic on a previous video. When Itanium came out it was on the cover of every tech magazine, and I was fully expecting to be building a gaming PC with a 64-bit Itanium processor and RD-RAM within a year or so, but then both of those technologies just sort of disappeared. I had wondered why I stopped seeing advertising for Itanium CPUs since they were supposed to be so much more efficient, and now I know they were simply too good to be true. Thanks James and the rest of the LMG crew for satisfying my curiosity on this!
    Maybe next you can do a video about those physics processing cards that were supposed to enable fully destructible environments in games and were eventually going to allow for realistic real-time, procedurally generated particle based physics on every object in the game so that every grain of sand and blade of grass would behave just like it would in the real world (or so advertisements and news articles of the time would have you believe). I almost bought one of those, but didn't quite have the money at the time as I was still in school. I think it was called a PhysX card.

  • @krystina662
    @krystina662 2 years ago +14

    0:55 knowing how much software is already out there, that alone is enough to understand why it would never take off haha

  • @alexander1989x
    @alexander1989x 7 months ago +1

    Itanium goes on the pile of "we tried to be disruptive but were too ambitious", together with PowerPC.

  • @laurendoe168
    @laurendoe168 2 years ago +3

    Anyone familiar with the difference between software and hardware knows: If you want something cheaper, do it in software; if you want something faster, do it in hardware.

  • @erroltheterrible
    @erroltheterrible 2 years ago +1

    Anyone remember the Transmeta CPUs? They were non-x86, but emulated x86 by running a cross-compiler in real time to convert x86 instructions to their native instruction set. Still have my Compaq TC1000 on the bookshelf...

  • @Lionel212001
    @Lionel212001 2 years ago +13

    It would be interesting to see how Intel leverages RISC-V.

  • @mycosys
    @mycosys 2 years ago +11

    That time AMD saved x86

  • @kodykj2112
    @kodykj2112 2 years ago +3

    Video idea: a full explanation of why it is possible to have a virtual/software CPU, or to control CPU instructions through software rather than hardware, given that for software to do anything it needs the appropriate hardware and ultimately depends on it.

    • @MrFram
      @MrFram 2 years ago +1

      You're misunderstanding what controlling instruction processing through software means. The instructions are bigger and carry extra data telling the CPU what to do in more detail. Since this data is part of the instructions, it is software, and it controls the CPU.
      They are just instructions that control the CPU more precisely than usual.
      Now, since the instructions can tell the CPU how to do some of this stuff, it does not need to work it out itself; hence no need for scheduler circuits, etc. It can also leave room for extra optimizations through clever manipulation of instructions
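The idea that software can "be" a CPU is easiest to see in an interpreter: a plain loop that fetches, decodes, and executes instructions, the same cycle hardware implements in silicon. A minimal sketch in C with an invented four-byte instruction format; none of the opcodes correspond to a real ISA:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A CPU implemented entirely in software: the register file is an array,
   the program counter is a loop variable, and decode is a switch. */

enum { OP_LOADI, OP_ADD, OP_HALT };                 /* invented opcodes */

typedef struct { uint8_t op, dst, a, b; } insn_t;   /* fixed 4-byte encoding */

int64_t run(const insn_t *prog) {
    int64_t reg[8] = {0};                           /* the software "register file" */
    for (size_t pc = 0; ; pc++) {                   /* fetch */
        insn_t in = prog[pc];
        switch (in.op) {                            /* decode + execute */
        case OP_LOADI: reg[in.dst] = in.a;                  break;
        case OP_ADD:   reg[in.dst] = reg[in.a] + reg[in.b]; break;
        case OP_HALT:  return reg[in.dst];
        }
    }
}
```

A dynamic binary translator like Transmeta's (mentioned elsewhere in these comments) takes the same idea a step further: instead of interpreting each guest instruction, it compiles blocks of them into native code on the fly.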

  • @MrMediator24
    @MrMediator24 2 years ago +8

    Elbrus (a Russian ISA) is VLIW, and they thought it was a good idea to use it despite the failure of Itanium

    • @IvanSoregashi
      @IvanSoregashi 2 years ago +6

      well our guys didn't have a market share to lose :/

    • @naamadossantossilva4736
      @naamadossantossilva4736 2 years ago

      Russia is trying to close off its internet, incompatibility for them is not a bug, it's a feature.

    • @MrMediator24
      @MrMediator24 2 years ago +2

      @@naamadossantossilva4736 these processors are meant for corporate use (government and companies working on state contracts), yet now there's money being stol... allocated for new designs based on RISC-V

    • @DimkaTsv
      @DimkaTsv 2 years ago

      @@MrMediator24 tbf, if it is like that, would executing anything potentially dangerous from the internet even be possible...
      Given it's for government use ofc

  • @wskinnyodden
    @wskinnyodden 2 years ago +16

    Ok, just so that you know, even before Itanium there were attempts at killing the x86 family. Actually, not long after its creation, Intel tried to drive the market away from the x86 architecture, because that architecture was pretty much a "hack" of a design, implemented in a rush to ensure they'd win the IBM contract (which they did, thereby creating the legacy we have today). Those CPUs were the iAPX 432, the i960, and the i860 (yes, this is the correct order). Although not all of these were specifically designed with that goal, you can find references to that goal for the likes of the iAPX 432 and i960. Truth be said, they never expected backwards compatibility to be the market driver it proved to be. In those days this incorrect perspective was understandable if one didn't think things through properly; by now, we have more than enough evidence that Intel, at semi-random times, does this kind of short-sighted thing.
    That said, back then backwards compatibility IN HARDWARE was a must to assist with software development. The development environment for any architecture was pretty much in its infancy (not to say still in the womb), so any system that could use prior art to speed up its adoption in the real world had a major advantage. Nowadays we even have enough horsepower to emulate other instruction sets in software, and if someone were smart enough to put some FPGA logic directly next to the cores of our consumer CPUs, we could even add those instructions and run other architectures' code natively (some Xeons do have FPGAs on them, just not as deeply integrated as this; still faster than software emulation though). As such, backwards compatibility is not that much of an issue anymore. A good example is Apple and how they first migrated from the Motorola 68x00 family to PowerPC (a somewhat simpler change than the next one), then to x86-64, and nowadays to ARM (which would likely have been easier coming from PowerPC, as those are two RISC architectures, unlike x86-64; even though nowadays x86-64 could be called a mixed CISC-RISC design, just a mess if you ask me, even though I love it)

    • @epobirs
      @epobirs 2 years ago +3

      IBM was nowhere near deciding to test the waters in microcomputers when work started on the 8086 in 1976 to launch in 1978. At the time, Intel was lagging badly on 16 and 32-bit products, at least in demo form, and knew its 'clean' designs would not arrive in time to prevent competitors from staking out the market. Thus the 8086, building atop the existing Intel products, which saved a great deal of time in the design stages.
      When IBM decided to create the Entry Systems Division, there were already plenty of 8086 boards and systems being sold. CP/M-86 was already in existence, and IBM first sought to license that in their rushed effort. Dorothy Kildall, Digital Research's lead attorney and wife of the founder, Gary Kildall, did not like the terms IBM proposed and told them so. An IBM exec worked on a United Way board with Mary Gates, wife of a prominent WA lawyer. She mentioned that her son had a software company and perhaps they could fill IBM's need. Microsoft didn't have anything to offer except the understanding that this was an immeasurably huge opportunity, and went looking for a company that did have something that would serve. One of those companies was Seattle Computer Products, which produced 8086 S-100 bus products and had its own in-house CP/M clone, 86-DOS. Microsoft bought the rights to that, and some months later it shipped with the 5150 as PC-DOS.

    • @danielandrejczyk9265
      @danielandrejczyk9265 2 ปีที่แล้ว

      Do you think ARM and RISC are the future for next generation PC chips?

    • @wskinnyodden
      @wskinnyodden 2 ปีที่แล้ว

      @@epobirs 8088 specifically was the core I had in mind truth be said

    • @epobirs
      @epobirs 2 ปีที่แล้ว +1

      @@danielandrejczyk9265 That depends: when is the future? Within the next five years, x86-64 will continue to dominate. In ten years, it gets a lot foggier as we reach the limits of how small we can go for a silicon process node, both in terms of physics and the immense cost of creating a mass production facility. Any major shift away from silicon opens up the opportunity for new architectures to emerge, either from new players or from existing players looking to take advantage of the right moment to make something entirely new. (Or as close as one could come without ignoring the principles that will still apply.)
      One failing has been the downfall of many companies: dragging their heels or outright trying to kill a new technology that competes with their existing product. A competitor with a superior product is inevitable, so better to have that competitor be part of your own company rather than some outsider who'd be perfectly happy to see you die off rather than transition.

    • @epobirs
      @epobirs 2 ปีที่แล้ว

      @@wskinnyodden The 8088 was not created at IBM's behest either. It was an obvious follow-on to the 8085 thanks to its compatibility with a lot of existing chips needed to make a functioning system back then. IBM was not very serious about making their initial product the best it could be. They wanted to keep it cheap and would then produce something better if the market was proven viable. There was much dissent within IBM that wanted to strangle the project in its crib, so there was no lack of pressure to keep expenditures down if this thing struggled to find customers. This didn't get better when the PC was a huge success. It instead got worse until culminating in the undermining of OS/2 and IBM dropping out of the PC business entirely.
      IBM didn't really get around to making their 'real' PC until the 80286, largely because Intel didn't give them the option. There were variants for making cost reduced systems but they came along well after the standard for 286 PCs was well established, even if that didn't live up to its original promise. (Insisting on running on 80286 was one of the big downfalls of OS/2.)

  • @abdulazizalserhani7625
    @abdulazizalserhani7625 2 วันที่ผ่านมา

    In fact, NetBurst was originally meant as a kind of stopgap between P6 and Itanium: something to keep Intel competitive with AMD's Athlon, which was reaching higher clock speeds than the Pentium III at the time, and to be quickly phased out once Itanium matured and got a sizeable library of software. But of course, reality turned out to be very different, with Itanium failing.

  • @MasterCommandCEO
    @MasterCommandCEO 2 ปีที่แล้ว

    Completely random, but I love this lil info vid tbh. Also looking good my guy! Whatever you're doing is working!

  • @slashtiger1
    @slashtiger1 2 ปีที่แล้ว

    02:42 LOL @ Professor Trelawney reference...!

  • @kensmith5694
    @kensmith5694 2 ปีที่แล้ว +2

    Itanium wasn't the first attempt from Intel. There were also the iAPX 432 and the i860. Each of these was worse than the other.
    Working out which instruction goes first is fairly easy on a RISC machine and would be within the grasp of a compiler to do. It would require a custom compiler for each version of a chip, but that could be done if speed were your biggest goal.
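
    The compile-time scheduling described here can be sketched as a simple list scheduler: the compiler walks a dependency graph and groups together instructions whose operands are already available. A toy Python illustration (the instruction strings are made up for the example):

```python
# A minimal sketch of compiler-side instruction scheduling: instead of the
# CPU deciding issue order at runtime, we topologically order a dependency
# graph ahead of time and group instructions that could issue together.
from graphlib import TopologicalSorter

# Each instruction maps to the set of instructions whose results it consumes.
deps = {
    "load r1, [a]": set(),
    "load r2, [b]": set(),
    "add  r3, r1, r2": {"load r1, [a]", "load r2, [b]"},
    "mul  r4, r3, r3": {"add  r3, r1, r2"},
    "store [c], r4": {"mul  r4, r3, r3"},
}

ts = TopologicalSorter(deps)
ts.prepare()
schedule = []
while ts.is_active():
    ready = list(ts.get_ready())  # instructions whose operands are all ready
    schedule.append(ready)        # these could issue in the same cycle/bundle
    ts.done(*ready)

for cycle, group in enumerate(schedule):
    print(cycle, group)
```

    The two independent loads come out grouped together, which is exactly the kind of parallelism a VLIW/EPIC compiler is expected to discover statically.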

  • @Kattakam
    @Kattakam 2 ปีที่แล้ว

    Wonderful work as always.

  • @JeremieBPCreation
    @JeremieBPCreation 2 ปีที่แล้ว +1

    An sfx in that video's music sounds like an HDD or graphics card rattling.

  • @WizardNumberNext
    @WizardNumberNext 2 ปีที่แล้ว +2

    Intel never intended to replace x86 with Itanium (unless you ran a server on every computer)
    Itanium was strictly targeted at servers, not even workstations
    x86 at that time wasn't targeted at servers
    Xeon was pretty much the same thing as AMD's Athlon MP
    Itanium was designed as a server architecture

    • @Hanneth
      @Hanneth 2 ปีที่แล้ว +2

      No, Intel was very clear when Itanium was announced that it was meant to replace x86 completely in servers and desktops. The plan they laid out was to start Itanium in the server market, where there were bigger margins, to allow them to better develop the technology. When the performance matched or exceeded x86, they would bring it into the consumer space. Part of making that performance match was working with developers to design their software to work better with Itanium. It was a long-term strategy.
      I also remember when they announced that they were scrapping their plans to replace x86.

  • @Zer0Blizzard
    @Zer0Blizzard 2 ปีที่แล้ว +1

    You guys should cover the inherent delay in cycles while the processor waits for various levels of cache (and ask some people exactly WHY you have to wait 3-4 cycles for L1 cache); an extremely in-depth video (but in plain English) of how you get from bit-level circuitry to machine code, then assembly, then, say, C when running a hello world; an in-depth video on GPU pipelines (and why old GPUs had like 800 MHz chips and 3GB of GDDR5, and why the new GPUs run so much faster and have more RAM/etc.); a video on distributed computing (BOINC/Curecoin/the Ethereum network, theoretically); another video that explains why AWS is so damn expensive in comparison to, say, bare hardware + free Linux software/Storj/other distributed computing/storage systems; a video on how cheap bandwidth actually is (putting down wires is a one-time cost; network servers basically only cost electricity, have no moving parts, and basically never die) for landline and cell (a 5G antenna basically costs like $5k, you can service 500+ people, and you don't need to replace it for 20 years).

  • @snope1779
    @snope1779 2 ปีที่แล้ว

    literally just went over all this in my computer engineering class... WILD

  • @Roxor128
    @Roxor128 2 ปีที่แล้ว +2

    Ah, yes, the Itanic. I wonder if it might have done better if Intel had gotten it into a games console like IBM did with the Cell processor in Sony's Playstation 3. The explicitly-parallel architecture of the Itanium does sound like the kind of quirky thing you'd find in a console.

  • @fluffycritter
    @fluffycritter 2 ปีที่แล้ว +4

    I always appreciated how The Register referred to this whole initiative as “Itanic.”

  • @PLZFrosty
    @PLZFrosty 2 ปีที่แล้ว

    That metaphor, or whatever you call it, outtro EPIC! X"D

  • @bradleymott7389
    @bradleymott7389 2 ปีที่แล้ว +8

    I think saying that software scheduling is a "bad idea" essentially ignores the idea of virtualization... there are amazing software schedulers out there that can outperform bare metal these days (granted, with instruction-set integrations like VT-d). The evolution in that space has been fascinating to watch

    • @vyor8837
      @vyor8837 2 ปีที่แล้ว +2

      No, no, god no.

  • @randallcuevas5562
    @randallcuevas5562 2 ปีที่แล้ว +1

    "if you are trying to reinvent the wheel, make sure it is compatible with your car" 4:25

  • @capability-snob
    @capability-snob 2 ปีที่แล้ว +8

    Ooh, mostly accurate, well done! So: the software compatibility thing wasn't as big a deal for Itanium and amd64 as people imagine; indeed the Itanium had hardware x86 support for user applications, although that hardware support lacked the OoOE of the competing Pentium Pro and MMX. But the target market was migrating from SPARC, MIPS, and HP PA-RISC, so x86 support was not so important for them; nobody with money ran x86 servers in the 90s. The problem actually came down to price. Intel wanted to charge like they were SGI or DEC and make great margins, but you could buy Pentiums and later Opterons that would give you more bang for the buck thanks to competition and scale. It was the bean counters that killed it, which was the theme of the 90s (c.f. Apple under Gassée, Symbolics, all the RISC manufacturers).

    • @Luredreier
      @Luredreier 2 ปีที่แล้ว

      I'm not surprised.
      Thank you for sharing.

  • @motoryzen
    @motoryzen 2 ปีที่แล้ว

    1:49. "we'll tell you..right after a massage from our sponsor iFixit"
    Jayztwocents: " Hold my keg...Let me show you how it's done"

  • @chuuni6924
    @chuuni6924 2 ปีที่แล้ว +2

    Itanium was more like Intel's fourth attempt at killing x86, after the iAPX 432, the i860 and the i960. They must really hate their own child.

    • @Berobad
      @Berobad 2 ปีที่แล้ว +3

      Because other manufacturers could produce x86 chips; if their new architecture had worked, Intel wouldn't have had to bother with AMD, Cyrix/VIA and co anymore.

    • @davidbonner4556
      @davidbonner4556 2 ปีที่แล้ว

      You have to remember the pre-historic Intel chips... The 4004, designed as a calculator chip for Busicom, and the 8008, which was commissioned by Computer Terminal Corporation for its Datapoint 2200 terminal.
      The 8080 was intended as an improvement but also became popular when MITS built a system around it in 1975 called the "Altair 8800", which it sold to hobbyists, who had a blast with it.
      The 8086 was the 16-bit version, with the 8088 being a version in a smaller package that multiplexed data and address lines. The 8088 was in the first PC/PC XTs, while a newer chip, the 80286, was used in the AT models (IBM skipped the 8086 and 80186).

  • @GoochiFPV
    @GoochiFPV 2 ปีที่แล้ว +1

    How weird, just two days ago YT suggested a video on itanium and running pinball, never heard of it before, and now you guys

  • @StigDesign
    @StigDesign 2 ปีที่แล้ว +7

    More in-depth on how x86 works, and maybe the kernel too? :D

  • @samvega827
    @samvega827 2 ปีที่แล้ว

    Just got that tool kit today! its sooo badass

  • @bootmii98
    @bootmii98 ปีที่แล้ว

    This wasn't the first time Intel tried to kill the 8086, or even the second. The first was the iAPX 432. A kinda-sorta was the 376 which was a legacy-free 386 for embedded systems. The second was the 860/960, which saw more commercial success than Itanium, the iAPX 432, or the 80376.

  • @Aranimda
    @Aranimda ปีที่แล้ว

    The lesson of this is backwards compatibility.
    New development is great, but people like to use their existing software as they transition to make use of the new technology. Therefore backwards compatibility is very important, even if the final result is a bit less efficient than a brand new but incompatible design.

  • @gobbel2000
    @gobbel2000 2 ปีที่แล้ว

    That's so interesting, I've never heard of this.

  • @ASOTFAN16
    @ASOTFAN16 2 ปีที่แล้ว +1

    I wish we could live in a world where everything is 64-bit. Imagine how glorious that would be.

  • @whosonedphone
    @whosonedphone 2 ปีที่แล้ว +2

    That was a dangerously good textbook segue to a sponsor. Don't ever do that again.

  • @TehJumpingJawa
    @TehJumpingJawa 2 ปีที่แล้ว

    I have an idea for a topic; the origin of "Alt+F4".
    It's quite an interesting rabbit hole that goes a long *long* way back; the stimulus for IBM's CUA, its adoption in Windows, why it was less influential in Unix, and its eventual obsolescence.

  • @vivago727
    @vivago727 2 ปีที่แล้ว

    3:30 ltt doing their own stock footage

  • @md.abdullaalwailykhanchowd3974
    @md.abdullaalwailykhanchowd3974 2 ปีที่แล้ว +3

    Intel : IA64 is the future.
    Also Intel : IA64 never existed 😶‍🌫️

  • @ReallyPhamous
    @ReallyPhamous 2 ปีที่แล้ว

    Ngl that was a good segue. I didn't see that one coming 😂💯

  • @Psychx_
    @Psychx_ 2 ปีที่แล้ว

    Fun fact: Nvidia has in-order VLIW (like Itanium) CPU designs with very high IPC (Denver, Carmel). The cores use an internal design and perform dynamic instruction translation to run 64-bit ARMv8.2 code. Theoretically they could also run x86 code by using a different firmware, but that isn't available atm due to Nvidia not having access to x86 patents. The technology was invented in the 90s by Transmeta and even made it to market back then, in the form of consumer devices (laptops), which even received support for new CPU instructions via software updates.
    Radeon graphics cards up to and including the HD 6000 series (TeraScale architecture) were VLIW designs as well, and the architecture was very efficient. Unfortunately, the driver's shader compiler had to be tweaked for individual games to optimize things like instruction order, scheduling and utilization in order to achieve good performance. GCN, the successor, which doesn't use a VLIW design, was supposed to reach full utilization much more easily, as well as having a simpler shader compiler and being easier to program for. To an extent this worked very well (i.e. HPC compute), but for games not so much. One of the problems was that you need at least 256 threads to fully utilize one CU; on top of that, the execution of an instruction takes 4 clock cycles.
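
    The "256 threads per CU" figure follows directly from GCN's published layout (4 SIMD units per Compute Unit, 64-wide wavefronts executed on 16-lane ALUs over 4 cycles); a quick back-of-the-envelope check:

```python
# Arithmetic behind the "at least 256 threads to fill one GCN CU" claim:
# each CU has 4 SIMD units, each SIMD runs one 64-wide wavefront, and a
# 16-lane vector ALU needs 4 clock cycles to step through all 64 lanes.
simds_per_cu = 4
wavefront_size = 64          # threads ("work-items") per wavefront
alu_lanes = 16
cycles_per_instruction = wavefront_size // alu_lanes  # 64 / 16 = 4 cycles

min_threads_per_cu = simds_per_cu * wavefront_size
print(min_threads_per_cu)         # 256
print(cycles_per_instruction)     # 4
```
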

  • @spitfire7772
    @spitfire7772 2 ปีที่แล้ว

    3:06 That spanish moment made me laugh 😂

  • @georgecy5937
    @georgecy5937 2 ปีที่แล้ว +21

    We shall make ARM the standard kek.

    • @ImtheIC
      @ImtheIC 2 ปีที่แล้ว +4

      No way..

    • @piekay7285
      @piekay7285 2 ปีที่แล้ว +8

      RISC-V

    • @johnsontzu41
      @johnsontzu41 2 ปีที่แล้ว +6

      That's a RISC-y statement

    • @Kubush1
      @Kubush1 2 ปีที่แล้ว +2

      @@piekay7285 Arm is far superior to Risc-V

    • @DacLMK
      @DacLMK 2 ปีที่แล้ว +3

      @@Kubush1 It's the same pleb

  • @cbeemaac20
    @cbeemaac20 2 ปีที่แล้ว +5

    Itanium lives on in many college computer architecture courses! It's a really well thought out architecture that is more modern than MIPS but less complicated than x86. A great architecture to study.

    • @drooled2284
      @drooled2284 2 ปีที่แล้ว +3

      Wasn't Risc-V invented for that exact purpose though?

    • @kensmith5694
      @kensmith5694 2 ปีที่แล้ว +4

      @@drooled2284 Yes, but sadly RISC-V is not as good a CPU as an ARM. The lack of a status register makes simple checks on operations a lot harder; it takes several instructions to detect an overflow, for example.
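
      For illustration, here is a rough Python sketch of the flag-free overflow check being described; the `wrap32` helper and `add_overflows` name are made up for this example, but the comparison trick (signed-add overflow iff "b < 0" disagrees with "sum < a") is the multi-instruction sequence suggested in the RISC-V spec's commentary (add / slti / slt / bne):

```python
# With no status register, RISC-V code detects signed-add overflow by
# comparing operands to the wrapped result instead of reading a flag.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrap32(x):
    """Wrap a Python int to two's-complement 32-bit, like the hardware add."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x > INT32_MAX else x

def add_overflows(a, b):
    s = wrap32(a + b)          # add  t0, a, b
    return (b < 0) != (s < a)  # slti t1, b, 0 ; slt t2, t0, a ; bne t1, t2

print(add_overflows(INT32_MAX, 1))   # True: wraps around to INT32_MIN
print(add_overflows(-5, 3))          # False
```

      On an architecture with a flags register, the same check is one conditional branch on the overflow flag right after the add.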

  • @PurpleKnightmare
    @PurpleKnightmare 2 ปีที่แล้ว +1

    Yeah, I was a tech working for them at Microsoft... OMG Those things were horrid.

  • @NealMiskinMusic
    @NealMiskinMusic 2 ปีที่แล้ว

    It's the same problem everybody trying to build a competing architecture faces: The vast majority of software is for x86 and making software compatible with a new instruction set is time consuming. So unless you're a company like Apple that has enough industry clout to essentially force the industry to port everything to their new chip as quickly as they can, it's going to be really difficult to sell many chips if most software won't run on them.

  • @georgegonzalez2476
    @georgegonzalez2476 2 หลายเดือนก่อน

    Actually, they tried twice. In 1981 they marketed the iAPX 432, a really weird and complex 32-bit CPU. It had a lot of advanced features. Far too many, and far too restrictive a set of features. It was also very slow. They didn't learn from that total flop.

  • @kiwimonster3647
    @kiwimonster3647 2 ปีที่แล้ว +4

    Only the future can tell what we'll get next

  • @inverse_of_zero
    @inverse_of_zero 2 ปีที่แล้ว +1

    2:00 - i feel it's intentional by iFixit that their kit has as many bits as the processor!

    • @sundhaug92
      @sundhaug92 2 ปีที่แล้ว

      Also fitting for them to sponsor the screw-up that was the Itanic

  • @The_Murdoch
    @The_Murdoch 2 ปีที่แล้ว

    Oooooo I got a Linus Pulseway ad before the video started. He was in ski boots too!

  • @Rainmotorsports
    @Rainmotorsports 2 ปีที่แล้ว +4

    The fact that they went into Itanium at all seems a little misleading. Itanium was never on Intel's plate as competition to x86. That was part of HP's long game, the entire reason for buying Compaq: all those DEC customers. Entirely different market. x86 was, from the beginning, a stopgap on the way to a product intended for the mainframe market. All of this coming from a company that never intended to make CPUs. A lot of happy accidents.

    • @jeremyroberts2782
      @jeremyroberts2782 2 ปีที่แล้ว

      Whilst x64 may well have saved AMD due to royalties, it probably has not done us any favours as far as hardware is concerned. x64 is not a true 64-bit architecture, and we are probably a lot worse off for it than if we had all moved to Itanium. It may be that the door has been left open for 64-bit ARM, as is being tested/proven by Apple.

  • @interlace84
    @interlace84 2 ปีที่แล้ว

    Have access to an Itanium-based HP cluster.. wish they compiled some binaries for some games/apps to IA64 for performance comparisons :( it's a database *BEAST*

  • @BinkyBorky
    @BinkyBorky 2 ปีที่แล้ว

    very cool video. I overclocked an i7 980xee to 5.3 ghz on linux. base clock was 3.33. windows wouldnt load anything above 4.9 ghz back then.

  • @AithaChannel
    @AithaChannel 2 ปีที่แล้ว +1

    2:42 Potterhead spotted.

  • @supervisedchaos
    @supervisedchaos 2 ปีที่แล้ว +1

    Back to basics... describe everything that happens chronologically when you double click an app or game up to when it displays on screen

    • @chainingsolid
      @chainingsolid 2 ปีที่แล้ว

      That video could be a year long series, or a 15 minute vid depending entirely on how deep/ to the basics they would go with it. (instruction by instruction or the general steps taken ex: load exe from disc -> load libraries -> get opengl working/make a window....)

  • @fvckyoutubescensorshipandt2718
    @fvckyoutubescensorshipandt2718 2 ปีที่แล้ว +1

    They should have killed it off. After 50 years, x86 just isn't the best way to crunch numbers anymore. It was great when all it did was basically run a souped-up calculator and the CPUs didn't even need heatsinks and came in DIP packages. Today, not so much. Maybe when a CME from the sun takes out every electronic device on the planet, what rises from the ashes will be better, because it seems it will take no less than that to make the switch.

  • @nathanaelink
    @nathanaelink 2 ปีที่แล้ว

    topic idea: videos that review old game console hardware and relates it to other hardware at the time, noting interesting choices along the way

  • @Personnenenparle
    @Personnenenparle 2 ปีที่แล้ว

    That metaphor, or whatever you call it, was actually pretty good xD

  • @mrknighttheitguy8434
    @mrknighttheitguy8434 2 ปีที่แล้ว +8

    Have you got hives, James? You seemed to be scratching a lot!

  •  2 ปีที่แล้ว

    That HP idea was not HP's idea, but based on the much older idea of RISC, the Reduced Instruction Set Computer. Simplify, use that to speed things up (by being able to parallelize better), and let the software do the heavy lifting. See Sun's SPARC, DEC's Alpha, MIPS' MIPS chips (yes, indeed :D). Oh, the idea is *old*, much older than Sun's and DEC's (and certainly HP's) developments. Stanford and Berkeley were the origins of that idea (of course it's older than that even, but at those universities it was kind of "put in writing", fully developed). And one of the core parts was the compiler that knew how to create machine code optimized for the strengths of the design.
    Another bit: there was no x86-64 when AMD developed the Opteron. It was (and is still) known as AMD64. Only when Intel *copied* it (legally, since cross-licensing shenanigans were in effect) did it morph into x86-64. Credit where credit is due, you know.

    • @davidbonner4556
      @davidbonner4556 2 ปีที่แล้ว

      At the time HP 'had' this idea they owned DEC, through their ownership of Compaq, hence they also had expertise with DEC Alpha available.

    •  2 ปีที่แล้ว

      @@davidbonner4556 I looked that up because I suspected that… but: they bought Compaq (thus Digital) in 2002, and Itanium roll-out was in 2001. So, while it was an enticing idea, the timing doesn't fit.

    •  2 ปีที่แล้ว

      Though on the matter of Compaq/DEC: I was pretty shocked back then in the late 90s that this young whippersnapper of a company would *dare* buy old, legendary old company (Compaq is younger than I am, while DEC is almost 10 years older :D)

  • @serifini2469
    @serifini2469 2 ปีที่แล้ว +1

    I'll see your Itanium and raise you an Intel iAPX 432. This from someone who was subjected to having to write multiprocessor real time code in assembler on the Intel i860, a RISC chip that came out at about the same time as the i486 and which had user visible pipelines and delayed execution slots. The nightmares have mostly stopped...

    • @kensmith5694
      @kensmith5694 2 ปีที่แล้ว +1

      Some early DSP and bit-slice machines were even more "interesting" to code on. AT&T made one where a store into a memory location followed by a load from that location would get you the value from before the store, because the store would not complete until after the load instruction. There was always the branch latency to consider too: a branch would execute the next instruction, then the one on the new path.

  • @Gunstick
    @Gunstick 2 ปีที่แล้ว

    Remember Transmeta, which wanted to do faster x86 code via emulation. And nowadays the M1 actually does that, using ARM.

    • @catchnkill
      @catchnkill 2 ปีที่แล้ว

      They do the same thing with very different implementations. Transmeta was runtime translation. Rosetta 2 is install-time translation.

  • @LenstersH
    @LenstersH 2 ปีที่แล้ว

    The IA-64 architecture was conceptually based on HP-PA for which HP had written very successful branch prediction compilers for HP-UX. The big problems came when Intel wanted to add support for out of order execution that they had in their x86 compilers. The concepts of out of order execution and software branch prediction conflict with each other in ways that had never been studied. With the secretive natures of Intel and HP, they didn't let the details of the problem out to the open source community which might have had a chance at solving it. That along with supporting the x86 hardware emulation shell killed the project. If some logicians and mathematicians ever come up with the rules for compiling with both out of order execution and software branch prediction enabled the IA-64/HP-PA architectures will far surpass x86 and probably compete with Apple's new RISC architecture.

  • @amigang
    @amigang 2 ปีที่แล้ว +1

    Should do one about PPC history

  • @Knoah321
    @Knoah321 2 ปีที่แล้ว +2

    RISC-V is the way to go! 👐🏻

  • @esra_erimez
    @esra_erimez 2 ปีที่แล้ว +1

    Itanium was VLIW (Very Long Instruction Word), though the concept was implemented poorly there. However, aspects of it live on today in x86, just like RISC aspects have made their way into x86.
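
    For concreteness, the "very long instruction word" in IA-64 is a 128-bit bundle: three 41-bit instruction slots plus a 5-bit template field that tells the hardware which slots may execute in parallel. A trivial sanity check of the arithmetic:

```python
# IA-64 bundle format: 3 instruction slots of 41 bits each, plus a
# 5-bit template field, packed into one 128-bit "very long" word.
slots, slot_bits, template_bits = 3, 41, 5
bundle_bits = slots * slot_bits + template_bits
print(bundle_bits)  # 128
```
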

    • @TatsuZZmage
      @TatsuZZmage 2 ปีที่แล้ว +1

      Oh God x86 is the English of instruction sets hehe

    • @9SMTM6
      @9SMTM6 2 ปีที่แล้ว

      I was always wondering about SIMD extensions and stuff like Itanium, which seem to have a similar design.
      That is what you're speaking about, yes?
      I do wonder, though, if you could give a few examples of how Itanium was badly implemented and how to do it better.
      Also, what do you mean that RISC aspects have made it into x86? I know that modern x86 CPUs seem to be mostly RISC with a translation unit or similar, but it seems to me you were talking about something else.

  • @registeredblindgamer4350
    @registeredblindgamer4350 2 ปีที่แล้ว

    I learned about Itanium from Der8auer who has one. Super cool.

  • @nonetrix3066
    @nonetrix3066 2 ปีที่แล้ว +1

    Funny story: I was configuring the Linux kernel and I thought IA32 referred to the 32-bit version of Intel Itanium, and couldn't install Nvidia drivers... turns out that doesn't exist and it's just 32-bit x86 :/

  • @ZekePolarisBSH
    @ZekePolarisBSH 2 ปีที่แล้ว

    In the video list this shows up as 4:44, but on the video playback it says 4:43. I wonder what the actual video length was on the computer.