Your NEW PC will be Irrelevant…

  • Published May 30, 2024
  • Current x86 processors have been ruling the computing landscape, but their progress looks like it's slowing down. It's been a LONNNNG time coming, but ARM architecture for our PCs looks like it's finally coming around.
    ==JOIN THE DISCORD!==
    / discord
    We saw recently with Apple's M-series silicon that ARM offers many improvements over the legacy x86 processors that we have been using for so long. However, Apple has had an easier time making this transition than Windows, because macOS can be vertically integrated, whereas Windows has had a lot more struggles. But Windows on ARM has been improving significantly, and NOW we are seeing new ARM chips like the Snapdragon X Elite SoC that will be able to take advantage of it this year. Is ARM the future?
    GN: • Intel's 300W Core i9-1...
    HUB: • Intel CPUs Are Crashin...
    Optimum: • Apple's Fastest Mac vs...
    Dave2D: • The Biggest Moment For...
    MaxTech: • Snapdragon X Elite vs ...
    www.apple.com/newsroom/2023/1...
    nvidianews.nvidia.com/news/nv...
    nvidianews.nvidia.com/news/nv...
    resources.nvidia.com/en-us-gr...
    en.wikipedia.org/wiki/X86
    / how_does_x86_licensing...
    www.quora.com/Why-is-it-that-...
    en.wikipedia.org/wiki/RISC-V
    semiconductor.samsung.com/new...
    www.arm.com/partners/catalog/...
    en.wikipedia.org/wiki/Google_...
    www.theverge.com/2024/4/9/241...
    www.theverge.com/2023/10/23/2...
    www.intel.com/content/www/us/...
    blogs.nvidia.com/blog/mediate...
    www.qualcomm.com/products/mob... / how_is_the_app_support...
    learn.microsoft.com/en-us/win...
    / arm_hardware_ram_upgrade
    forums.macrumors.com/threads/...
    0:00- Are things actually getting faster?
    2:53- Progress slowing down
    3:40- Most exciting silicon of recent years
    6:30- ARM the future?
    7:34- x86 and why it may become irrelevant
    12:03- ARM is OPEN
    13:42- Efficiency
    16:37- Competition with ARM
    17:42- The SnapDragon X Elite chip is looking awesome for Windows
    18:23- ARM's negatives
    21:40- ARM has crazy potential
  • Science & Technology

Comments • 1.2K

  • @vextakes
    @vextakes  หลายเดือนก่อน +287

    My bad, I misspoke: I meant to say RISC-V is open-source, not ARM, and it seems to be easier to acquire a license compared to x86 because ARM doesn't make their own chips (yet).

    • @ninjabastard
      @ninjabastard หลายเดือนก่อน +2

      ARM makes their own chips. It's that they also license, at low fees (.30 cents per chip for Apple), their CPU cores for others to integrate into their own chips or SoCs (Apple, Qualcomm, Nvidia, etc.). You can find ARM Cortex-A series chips in many embedded devices and phones. ARM has traditionally not been big in the desktop or server market, where x86 legacy support has been the dominant factor for business. There are a few companies moving to RISC for desktop or server, such as AWS and their Graviton servers using RISC processors. But it's only really Apple who has a decent desktop/laptop consumer offering at the moment.
      RISC-V is an interesting project. But it's not really that mature, and whatever processors do exist are like 10-15 years behind current processors.

    • @sleepingvalley8340
      @sleepingvalley8340 หลายเดือนก่อน +6

      As someone that does buy SoC systems for gaming, the closest thing I can compare it to is the BD790i: you will still have the x16 PCIe slot, RAM slots, M.2 slots, SATA ports and other misc connectors, but it will all be in one neat little package. Most people will just buy it as a Mini PC prebuilt because it is easier.

    • @helpmedaddyjesus7099
      @helpmedaddyjesus7099 หลายเดือนก่อน

      I was about to comment on that lol, thanks for the correction.

    • @NoX-512
      @NoX-512 หลายเดือนก่อน +21

      RISC-V is not open source, but an open spec. It means you are free to make RISC-V cores without asking for permission or paying license fees. The cores you design can be either open or closed source. That's up to you.
      RISC-V will dominate in the future because it's an open specification, not because the design is brilliant (which it is).
      Currently, most RISC-V cores are embedded. You probably already own several products with RISC-V cores. For example, Western Digital uses RISC-V for their storage controllers.

    • @aleksazunjic9672
      @aleksazunjic9672 หลายเดือนก่อน +11

      @@NoX-512 RISC has been "dominating the future" for the past 25 years 😁 On a serious note, I remember bombastic headlines about RISC taking over already in the late 90s. It did not happen, mostly because RISC CPUs were never fast enough to justify abandoning the enormous x86 software library.

  • @Pouria_1664
    @Pouria_1664 หลายเดือนก่อน +1042

    what do you mean, my pc is already irrelevant

    • @desiotaku3435
      @desiotaku3435 หลายเดือนก่อน +6

      can you tell which game is this 14:37

    • @Pouria_1664
      @Pouria_1664 หลายเดือนก่อน +3

      @@desiotaku3435 i have no idea what the game is, looks like an indie game though

    • @ricky4673
      @ricky4673 หลายเดือนก่อน +9

      What do you mean, I am irrelevant 😅

    • @kira991
      @kira991 หลายเดือนก่อน +5

      @@desiotaku3435 Jusant

    • @haniawaja9311
      @haniawaja9311 หลายเดือนก่อน +2

      @@desiotaku3435 I think its jusant

  • @temperedglass1130
    @temperedglass1130 หลายเดือนก่อน +650

    Are you threatening me about my imaginary PC.

    • @user-zw1oy5pm3s
      @user-zw1oy5pm3s หลายเดือนก่อน +11

      haha

    • @Redditard
      @Redditard 28 วันที่ผ่านมา +5

      sucks to suck
      edit: it's me

    • @lamquythestupid
      @lamquythestupid 25 วันที่ผ่านมา

      Let just build a gas stove pc

    • @Choom2077
      @Choom2077 6 วันที่ผ่านมา

      I seriously pity all of you broke boys
      ...for being in the same exact situation as me.😐

  • @internetbestfriend
    @internetbestfriend หลายเดือนก่อน +533

    RISC-V is royalty free... But ARM, unfortunately, is proprietary. Both are RISC.

    • @dex4sure361
      @dex4sure361 หลายเดือนก่อน +74

      Yup he didn’t realize ARM and RISC-V are 2 different things, even if both are RISC.

    • @ricky4673
      @ricky4673 หลายเดือนก่อน +1

      He knows, it is irrelevant to his points.

    • @ThePgR777
      @ThePgR777 หลายเดือนก่อน +39

      @@ricky4673 lol? It isnt royalty free and it isnt open

    • @dex4sure361
      @dex4sure361 หลายเดือนก่อน +24

      @@ricky4673 he clearly didnt know based on what he said on the video

    • @RavenZahadoom
      @RavenZahadoom หลายเดือนก่อน +35

      Vex has made these types of mistakes a lot recently... a bit more time researching instead of trying to get a video out every X days would be better for him for sure.

  • @larrythehedgehog
    @larrythehedgehog หลายเดือนก่อน +111

    Please be aware. Apple is already hitting the same wall that all the CISC chip makers are at. The wall's name is: the laws of physics. The M series has also jumped in wattage gen over gen for improvements.

    • @totallyrealcat4800
      @totallyrealcat4800 26 วันที่ผ่านมา +11

      Yeah, while using a different chip design improves performance, we're approaching the limits of silicon and have to start using new materials to get even more performance

    • @andyH_England
      @andyH_England 25 วันที่ผ่านมา +13

      That is true, but they are starting from a much lower wattage-per-performance point, which means they have an inherent advantage. The M4 chip should add some headroom as it is considerably more efficient than the M3, which used N3B, a node that favored performance over efficiency. The M4 on N3E will be the opposite.

    • @wadimek116
      @wadimek116 14 วันที่ผ่านมา +2

      ​@@andyH_EnglandThey have advantage until they start adding instructions to their cpus. Even now x86 is just easier to work with and its much more compatible with everything. I doubt most programs are even ported to arm.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st 13 วันที่ผ่านมา

      The Apple M series has only used the same design since the M1, so the M3 was just a boosted M1 with better manufacturing, and yes, that basically increased the power consumption. But check the M4, which is made on N3E instead of N3B (N3B is only an enhanced version of the enhanced 5nm; all the 4nm and even the first 3nm nodes were just yearly 5nm enhancements, while the current N3E is the new technology). It already shows a huge performance jump with less heat generation, so with less power consumption.

    • @crisalcantara7671
      @crisalcantara7671 4 วันที่ผ่านมา +1

      so the new iphone will summon lightning to play games lol.

  • @karathkasun
    @karathkasun หลายเดือนก่อน +191

    Guess what? ARM is nearly as old as X86 and carries the same baggage.
    All of these "hot takes" on RISC VS CISC are ignorant of the fact that everything is RISC under the hood with CISC front ends now, including both ARM and X86. ARM is just designed to run in a tighter power envelope and has been for decades. AMD chose the middle ground with Zen, and Intel bet the farm on huge FAST cores that eat power. It has nothing to do with the underlying architecture at this point.

    • @HappyBeezerStudios
      @HappyBeezerStudios 25 วันที่ผ่านมา +25

      And the whole CISC vs RISC argument is truly something of the 90s. Nowadays both share so much design.
      The Intel P6 design translates x86 CISC instructions into RISC-like micro-ops. The AMD K5 is based on a highly parallel RISC design with an x86 decoding frontend.
      So every CPU of the two big x86 manufacturers since the mid 90s is internally much closer to a RISC design than a CISC design, but offers a legacy x86 CISC interface on the outside.
      And modern RISC designs are as complex as their CISC contemporaries.
      And interestingly Intel's most efficient era was when AMD wasn't breathing down their neck.
      I'm talking P6/Pentium M-based Core 2 (which easily outpaced K8) to about Kaby Lake (which is just Skylake with higher clock scaling and was quickly succeeded by another Skylake design with less efficiency to combat Zen)
      Before that was NetBurst, the hyper inefficient, ultra long pipeline design that came to be because IA-64 turned out to be bad.
      And afterwards when AMD offered real competition again, Intel started forcing their CPUs to run way past their optimal efficiency, basically factory overclocking it.
      And even IBM had POWER6, which was a RISC design scaling up to 4.7-5.0 GHz, with the same low efficiency seen from NetBurst, Itanium, Bulldozer/Steamroller and Alder Lake.

    • @karathkasun
      @karathkasun 23 วันที่ผ่านมา +4

      @@HappyBeezerStudios Absolute facts.

    • @mhavock
      @mhavock 18 วันที่ผ่านมา +2

      If the hardware is ready, then it's a perfect time for an ARM/Linux OS with translation code for x86 etc. to take the stage.
      Let's build PCs like that!

    • @mintymus
      @mintymus 11 วันที่ผ่านมา

      It's cool to hate Intel and shill for AMD/ARM. In reality no companies care about any of us, they just care about our $$ and how to separate us from it. Another thing Vex missed is that AMD is more efficient, and they're x86 based. He totally missed on the node size.

    • @jclosed2516
      @jclosed2516 8 วันที่ผ่านมา +4

      Yep - The first (relatively affordable) home computer with a RISC processor was the Acorn A305 and A310. Those were sold around 1987. I owned one, and learned machine code programming on those chips for the first time. We have come a long way since then.

  • @mleise8292
    @mleise8292 หลายเดือนก่อน +183

    Bruh, ARM itself is 41 years old. 😂

    • @Riyozsu
      @Riyozsu หลายเดือนก่อน +7

      Yet took decades to make itself known.

    • @nadtz
      @nadtz หลายเดือนก่อน +17

      @@Riyozsu ARM has been used in all kinds of devices for years, I think you mean make itself known in consumer desktop hardware. MS tried (very badly) with Windows/Surface RT a while back so I grudgingly have to give Apple credit for making it very clear that ARM is viable for desktop systems.

    • @Skylancer727
      @Skylancer727 หลายเดือนก่อน +3

      ​@@nadtz Even if it is viable it has always been over hyped. The value of ARM over x86 has always been a talking point based on theoretical over proof. The same extent that ARM can run better by chewing the fat can also just translate to rebuilding windows from scratch rather than just building on existing versions. There are still remnants of DOS in windows today. That is far more significant than the hardware.

    • @nadtz
      @nadtz หลายเดือนก่อน +2

      @@Skylancer727 There are OS's aside of Windows and there is a reason hyperscalers and the like are looking to move to ARM or are moving to ARM where CPU compute isn't the priority. Whether QUALCOMM's first offerings are going to be as good as they claim is still up in the air but either way it's just a matter of time before decent/good ARM offerings come to consumer PC's/laptops.

    • @Gen0cidePTB
      @Gen0cidePTB หลายเดือนก่อน

      ​@@nadtzHyperscalers were on ARM when Opteron was teaching Intel what multicore was all about.😂
      The big selling point of this transition to ARM for Windows is that it's the first clean break they will have had since Windows 95. They will be able to get rid of lots of legacy code and streamline the OS, and because that code will still have to be there on x86, people will think it's the new ARM CPUs. This clean break will also get them away from Intel, which has been stagnant on 14nm and then 10nm for a decade.

  • @Carnage_Lot
    @Carnage_Lot หลายเดือนก่อน +336

    Your cat is drinking your water at 7:40 lol

    • @ricky4673
      @ricky4673 หลายเดือนก่อน +41

      Never saw a cat, you have catovision. You could be a superhero that finds missing cats. 😮

    • @garystinten9339
      @garystinten9339 หลายเดือนก่อน +28

      Brought to you by NVIDIAs RTX series graphics cards.

    • @asdfjklo124
      @asdfjklo124 หลายเดือนก่อน

      @@ricky4673 It's all about focus, check out the gorilla study (edit: that's also what pickpockets take advantage of)

    • @danielhayes3607
      @danielhayes3607 หลายเดือนก่อน +4

      That's not water 🥺

    • @bodasactra
      @bodasactra หลายเดือนก่อน +11

      No, the cat found out Vex was drinking his water.

  • @vstxp
    @vstxp หลายเดือนก่อน +135

    Dude, the reason why we are still on x86 is backward compatibility. Apple's M-series has a ton of issues that their x86 compatibility layer barely fixes. Microsoft is also trying to make Windows on ARM work with programs made/compiled for x86, but it barely works. We have a LONG way to go before we can shed x86 completely, don't let the headlines fool you.

    • @henson2k
      @henson2k หลายเดือนก่อน +3

      Who is holding developers from recompiling for ARM?

    • @Dragon_Slayer_Ornstein
      @Dragon_Slayer_Ornstein หลายเดือนก่อน +16

      @@henson2k Legacy stuff won't be re-compiled, so all games won't be, you will have to rely on emulation.

    • @aleksazunjic9672
      @aleksazunjic9672 หลายเดือนก่อน +15

      @@henson2k No one in their right mind would rewrite the whole PC software library to run on RISC processors. I said rewrite, not recompile, because lots of stuff was tailored specifically to run on x86 or x64 CPUs. Furthermore, more software that is x86 compatible is written every day. The only way RISC could win is if someone makes a RISC CPU that runs x86 through an emulator at a decent speed and a decent price, which is unlikely.

    • @vstxp
      @vstxp หลายเดือนก่อน +7

      @@henson2k Sadly, it is never that easy. Because the instructions are entirely different, there is an entirely new toolkit required for that. There are bound to be bugs, to put it mildly. There is also the issue that a lot of software relies on older pieces of software, in general.
      Furthermore, I think Vex should have mentioned that most 3D libraries and games, both on the PC as well as the main consoles, use certain complex instructions in the CPUs for 3D rendering and for supporting the GPU operations. Most of the big games we play use drivers and libraries that simply cannot work, either at all or with the same performance, on anything other than x86-64. Sadly, advanced 3D rendering in general is actually better with an x86 CPU than any other, no matter what we might say now. And do not let the synthetic benchmarks fool you: the applications run on the different platforms are not even the same, so most of the numbers are meaningless. Several simple day-to-day apps can run well on RISC due to the compilation on the new environment being relatively easy, but other advanced apps REQUIRE specific complex instructions. CISC computers aren't going away anytime soon, and there are several scenarios they are better for, and yes that sadly includes 3D and gaming, and it will for some more years for sure. Do not think your PC is getting obsolete anytime soon.

    • @ivok9846
      @ivok9846 หลายเดือนก่อน +6

      can i build risc pc for 300$?
      that draws less than 100w from wall at 100% cpu?
      right now?
      i just did that with x86-64 few months ago...

  • @PlayingItWrong
    @PlayingItWrong หลายเดือนก่อน +131

    The AMD X3D processors are genuinely more efficient, just by virtue of their shortcoming in removing heat; even outside of gaming they definitely have use cases.

    • @lycanthoss
      @lycanthoss หลายเดือนก่อน +4

      Intel chips are more efficient at low power usage simply because of the chiplet nature of Ryzen chips. High power usage at max load does not indicate how much power a CPU uses at small workloads.

    • @PlayingItWrong
      @PlayingItWrong หลายเดือนก่อน +1

      ​@lycanthoss absolutely, I only meant in comparison with other amd cpu.

    • @YountFilm
      @YountFilm หลายเดือนก่อน +9

      ​@@lycanthossWho's getting these chips for small workloads...?

    • @lycanthoss
      @lycanthoss หลายเดือนก่อน +7

      @YountFilm older games or things like browsing the web, using excel/word and etc. "Small workload" might be the wrong words here. "Lightly threaded" is probably better for what I mean.

    • @HappyBeezerStudios
      @HappyBeezerStudios 25 วันที่ผ่านมา

      @@lycanthoss For that kind of workload a well optimized Intel design might actually compete well with Ryzen.
      For a very narrow threaded fixed task (like mp3 encoding, which has a set amount of work and is single threaded), the Intel P-core might simply be done faster and go back to a low power state earlier.
      (That is why I overclocked my Core 2 Duo back in the day. It was about 20% faster, but drew only about 5% more power, meaning overclocking made it more efficient)
      And for lightly multithreaded workloads with low per-thread requirements, a block of Intel E-cores might offer sufficient performance at lower power draw. If four "slow" cores are still enough to render a website or run old games at the target framerate, they might do better.
      (On my slightly older i5 I've set up different power plans with lower maximum clocks, because they are sufficient for older games. Same with my GPU, I have a profile set to 50% power limit with reduced clocks, that is still enough to run old games)
      Sadly nobody tends to test that kind of use.
      Just like nobody tests APUs using their strong iGPU against "pure" CPUs with dedicated cards and APUs with added dedicated cards against a CPU/dGPU setup. They tend to be only run against CPUs with their weak iGPUs, in which case the APU always wins.
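
    A quick worked version of the Core 2 Duo overclocking claim in the reply above, taking its rough figures (about 20% faster for about 5% more power) at face value; the numbers are illustrative, not measured:

        \text{work per watt: } \frac{1.20}{1.05} \approx 1.14 \quad \text{(about 14\% more performance per watt)}
        \text{energy per fixed task: } \frac{1.05}{1.20} \approx 0.875 \quad \text{(about 12.5\% less energy, since the task finishes sooner)}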

  • @ilovelimpfries
    @ilovelimpfries หลายเดือนก่อน +185

    When this kid said 1978 is sooo long ago and eligible for midlife crisis, I feel attacked.

    • @TropicChristmas
      @TropicChristmas หลายเดือนก่อน +15

      Bold of him to assume I'm mid-life. I was born in 84 and I still figure I'm 2/3 life

    • @Thomas_Angelo
      @Thomas_Angelo หลายเดือนก่อน +7

      You are all ancient. People call me an old timer even tho I'm born in 2005. Just because PS2 and DVDs existed in my time doesn't mean I am old lol.

    • @JohnnyEMatos
      @JohnnyEMatos หลายเดือนก่อน +20

      ​@@Thomas_Angelo you need to be like 50 before I consider you old

    • @Thomas_Angelo
      @Thomas_Angelo หลายเดือนก่อน +3

      @@JohnnyEMatos That's not what the new kids are saying

    • @TropicChristmas
      @TropicChristmas หลายเดือนก่อน +4

      @@Thomas_Angelo
      settle down, greybush

  • @UltraVegito-1995
    @UltraVegito-1995 หลายเดือนก่อน +257

    ah yes Qualcomm the NVIDIA of Android..
    Hope Mediatek breaks the ARM PC monopoly

    • @Bolt451
      @Bolt451 หลายเดือนก่อน +57

      to be fair Qualcomm has done pretty well and hasn't massively screwed over consumers at least as far as I know

    • @azravalencia4577
      @azravalencia4577 หลายเดือนก่อน +35

      Qualcomm are more like Intel instead of Nvidia tho.

    • @navilzawadkhan1213
      @navilzawadkhan1213 หลายเดือนก่อน +31

      Qualcomm is pretty fair imo, it's like Nvidia in its early stages

    • @viktorbaresic4180
      @viktorbaresic4180 หลายเดือนก่อน +2

      The exclusivity deal for WoA (Windows on ARM) is expiring this year; MediaTek and Samsung will enter the laptop SoC market next year

    • @aviatedviewssound4798
      @aviatedviewssound4798 หลายเดือนก่อน +3

      Qualcomm is more like AMD since they're older partners.

  • @jredubr
    @jredubr หลายเดือนก่อน +136

    Dude, nvidia uses ARM, because it can’t use x86.

    • @OneAngrehCat
      @OneAngrehCat หลายเดือนก่อน +65

      x86-64 is also known as AMD64 because AMD designed the spec.
      Intel pays a license to AMD every year.
      Nvidia would rather launch the nukes all over the earth than pay AMD a penny.
      So they've been trying with ARM instead, lol

    • @terliccc
      @terliccc หลายเดือนก่อน

      what you mean cant?

    • @frankseyen9156
      @frankseyen9156 หลายเดือนก่อน +22

      @@terliccc because they don't have an x86 license. If they had bought VIA, they would have one.

    • @Darth-TBAG
      @Darth-TBAG หลายเดือนก่อน +10

      @@OneAngrehCat the irony is that the CEOs of both companies are relatives. lol 😂

    • @torque4394
      @torque4394 หลายเดือนก่อน +14

      @@OneAngrehCat it's not that they don't want to pay, it's because amd would not license it out to them, and why would they? only reason why intel licenses amd64 is because they both have ip that they cross-license and are critical to making a modern cpu

  • @ThisGuyDrives
    @ThisGuyDrives 26 วันที่ผ่านมา +54

    A PC is never irrelevant. They can become outdated, but never irrelevant. All depends on what games you want to play.

    • @salvadordollyparton666
      @salvadordollyparton666 23 วันที่ผ่านมา +3

      i hear they can even do other things, besides gaming... i know, crazy... and even then, a LOT of those other things, are even LESS demanding. absolutely insane. like, i could be using a 3rd gen i5 now, and not even pushing it above 10%... hypothetically. cause in this entirely hypothetical situation my 4 gen board didn't somehow all go on strike at once, while not even being installed but when i do, nothing. and because of all these idiots paying ridiculous money for 2% gains, prices stay high and i refuse to pay retail for a 12th gen cpu to finally use my 690. yeah, stupid rant on a kinda dumb comment.

    • @saricubra2867
      @saricubra2867 22 วันที่ผ่านมา +2

      I played Counter Strike 2 at 80fps (GPU bottleneck) with a CRT monitor on an i7-4700MQ laptop that is 11 years old, gaming is not CPU intensive.

    • @vyruss9348
      @vyruss9348 22 วันที่ผ่านมา +3

      ​@@saricubra2867It depends on what games.
      Cities: Skylines 2, the Total War games, Star Citizen, ARMA 3 are all very heavily CPU-bound

    • @saricubra2867
      @saricubra2867 22 วันที่ผ่านมา

      @@vyruss9348 Laughs in Dragon's Dogma 2.

    • @talison461
      @talison461 21 วันที่ผ่านมา +1

      Sure, have fun playing your ps1 graphics games lol 😂😂

  • @user-ot3zm2rz2x
    @user-ot3zm2rz2x หลายเดือนก่อน +20

    I remember hearing about how RISC was the future and would make x86 obsolete in the very near future back when I was 12.
    I'm 40 now.

    • @anonymousx6651
      @anonymousx6651 9 วันที่ผ่านมา +2

      To be fair, ARM has completely taken the smartphone market and may be more common than x86

  • @reD_Bo0n
    @reD_Bo0n หลายเดือนก่อน +78

    CISC vs RISC doesn't matter that much.
    The name CISC was only coined after the introduction of the RISC concept.
    The summary would be: a CISC instruction can do several things at once, while a RISC instruction does only one thing (see the sketch after this thread).
    Also, RISC-V is not the umbrella of all RISC processors; it's just one instruction set architecture that follows RISC principles, like ARM does. And RISC-V is royalty free, unlike ARM.
    If you, as a CPU producer, want to use ARM technology you have to either buy a base ARM design and then modify it to your liking, or go the Apple way and buy a license for the ARM instruction set.

    • @rusty1253
      @rusty1253 หลายเดือนก่อน +1

      so... x86 will still be relevant or nah ?

    • @YannBOYERDev
      @YannBOYERDev หลายเดือนก่อน +7

      ​@@rusty1253 x86 won't disappear for at least 15 more years.

    • @AAjax
      @AAjax หลายเดือนก่อน +10

      Agreed. Muddying the water even more is the fact that modern x86 CPUs first decode complex instructions into RISC-like micro-ops.
      Legacy instruction support on x86 is indeed a source of die bloat. But that doesn't have much to do with RISC vs CISC.

    • @johndavis29209
      @johndavis29209 หลายเดือนก่อน

      It does matter though? X86 is CISC, ARM/RISC-V is RISC based. There's differences in their fundamental designs in both.

    • @xianlee4628
      @xianlee4628 หลายเดือนก่อน

      @@YannBOYERDev Probably much more
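
    A rough sketch of the CISC vs RISC distinction @reD_Bo0n describes above, using a trivial C function. The assembly in the comments is a hand-written approximation of the two styles (x86-64 vs AArch64), not actual compiler output:

        /* Illustrative only: how the two ISA styles express the same
         * read-modify-write operation. */
        void add_to(int *p, int x) {
            /* x86-64 (CISC style): one instruction can read memory, add,
             * and write the result back:
             *     add dword ptr [rdi], esi
             *
             * AArch64 (RISC, load/store style): the same work is split
             * into simple single-purpose instructions:
             *     ldr w8, [x0]
             *     add w8, w8, w1
             *     str w8, [x0]
             */
            *p += x;
        }

    Either way, as the replies note, a modern x86 core breaks the complex instruction into simple micro-ops internally, so the practical difference is smaller than the ISA-level one.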

  • @bogganalseryd2324
    @bogganalseryd2324 หลายเดือนก่อน +101

    clearly you haven't heard that they faked benchmarks. they aren't even half as fast as claimed

    • @Riyozsu
      @Riyozsu หลายเดือนก่อน +29

      That's the best thing about ARM: only show benchmarks and not real-world performance. If the chip cannot beat the 7800X3D in game performance or the 14900K in productivity, or have less power consumption than a 7600 yet deliver i7-level performance, it's a waste of sand. Or if it wants to create headlines it should be cheaper than an i3, so 60 bucks or something.

    • @bogganalseryd2324
      @bogganalseryd2324 หลายเดือนก่อน +7

      @@Riyozsu 100%

    • @Gen0cidePTB
      @Gen0cidePTB หลายเดือนก่อน +21

      ​@@Riyozsu Remember when the M1 came out and people were saying it beat all i7s of the era. Yeah they switched their tune to "it beats some i7 mobile CPUs on a fraction of the power" soon afterwards. The Snapdragon X1 won't even be able to be fairly benchmarked in Windows due to the codebase changes for Windows on ARM, you'll have to do it in Linux when it comes out.
      P.S Geekbench was developed by an ex-Apple reviewer. The type that published for Apple-only sites. Can you smell the bias?

    • @hjf3022
      @hjf3022 16 วันที่ผ่านมา +1

      It's a claim, and an unsubstantiated one. I'll wait for tests once it's available to reviewers.

    • @bogganalseryd2324
      @bogganalseryd2324 16 วันที่ผ่านมา

      @@hjf3022 Yeah that is true, there are rumors going both ways so the benchmarks will settle it.

  • @guydude4124
    @guydude4124 29 วันที่ผ่านมา +15

    ARM fans have been saying this for years and nothing has come of it. ARM has more issues than this.

  • @mythicalnewt7242
    @mythicalnewt7242 หลายเดือนก่อน +75

    You got one thing wrong, while RISC is royalty free ARM is proprietary. Please correct it.

    • @karehaqt
      @karehaqt หลายเดือนก่อน +4

      RISC is an ISA which ARM uses and charges licence fees for, whereas RISC-V is the ISA that is royalty free.

    • @wuza8405
      @wuza8405 หลายเดือนก่อน +17

      @@karehaqt RISC is a philosophy for how to create an ISA, RISC-V and ARM are ISA that implement this philosophy.

    • @chefren77
      @chefren77 หลายเดือนก่อน +3

      @@wuza8405 Yes! The whole video is full of this basic misunderstanding. RISC and CISC are like philosophies about how to design CPUs, they are not specific instruction sets.
      RISC-V is not the same as ARM, it's an architecture designed at Berkeley University and released in 2010 as open source/royalty free. Berkeley's RISC designs originate in the early 1980s (RISC-I is from 1981), and strongly influenced ARM at the time.
      ARM was designed by the UK company Acorn Computers during 1983-1985, and designed to be low-power in order to make it cheaper to manufacture (by being able to use a plastic rather than ceramic housing). Acorn's computers didn't manage to stay relevant, but they spun off their architecture team into a separate company Advanced RISC Machines (=ARM) in 1990 which still today designs and licenses the ARM architecture.

    • @1DwtEaUn
      @1DwtEaUn หลายเดือนก่อน +2

      @@chefren77 I still like saying Acorn Risc Machine

  • @bengolko2270
    @bengolko2270 หลายเดือนก่อน +18

    Minor distinction to be made, ARM is not open-source. Risc-V is the open source architecture but ARM is a licensed architecture from the ARM company. They are both RISC architectures but ARM is a discrete architecture licensed by Apple for their M-series of chips.

  • @kevinwood3325
    @kevinwood3325 หลายเดือนก่อน +9

    So happy with AM4. In 2017 I spent $1,000ish on an R5 1600/B350/16GB/256GB/RX 580/700W rig. In 2022, I spent another $800 and now I'm running R7 5700X/B350/32GB/1TB/6800XT/700W. 7 years from the original build, and it's still considered a fairly good gaming PC.

  • @Vialli100.
    @Vialli100. หลายเดือนก่อน +34

    I have a Ryzen 9 5900x, absolutely great CPU..
    Got a negative 30 curve all core, it hardly ever gets over 60 - 62c under full load, it's going to last a few years..
    I wouldn't buy anything made by Apple!

    • @ervinase9661
      @ervinase9661 28 วันที่ผ่านมา +1

      I have it too. It's so calming to have a powerful cpu. You don't need to check any game requirements, you just know it will run anything. Including creator programs.

    • @onatics
      @onatics 8 วันที่ผ่านมา

      @@ervinase9661 9 5900x wont run everything lol

  • @GimeilVR
    @GimeilVR หลายเดือนก่อน +23

    these cpus/gpus are gonna be in VR headsets... just think about that.

    • @micmanegames695
      @micmanegames695 หลายเดือนก่อน

      Standalone PCVR.. that will be the day.

    • @anonymousx6651
      @anonymousx6651 9 วันที่ผ่านมา +1

      They were from the beginning. Look at the chip of the Quest 1. GPUs already use different architecture from CPUs and the advantages of using an ARM system for VR mostly don't translate in terms of graphics.

  • @XeqtrM1
    @XeqtrM1 หลายเดือนก่อน +72

    To be fair doesn't matter how good the M2 is when almost no games work

    • @caydilemma3309
      @caydilemma3309 หลายเดือนก่อน +12

      I mean it matters for every other use besides playing games lmao but I know what you mean

    • @gbitencourt
      @gbitencourt หลายเดือนก่อน +3

      ​@@caydilemma3309what other uses? We are fighting trash ports from console, we don't need to port from PC to arm again. It will be trash

    • @nopenope1
      @nopenope1 หลายเดือนก่อน +8

      don't forget the (shared) RAM. Tim Apple (Accountant) does run the company ;) How much electronic waste because of the 8GB models is out there... at least lost potential/reduced usefulness in the long run.

    • @NatrajChaturvedi
      @NatrajChaturvedi หลายเดือนก่อน +1

      Always has been the case with Mac. Its not an arm problem, its a Mac problem. Lower adoption and Apple's b.s has driven away devs from wanting to develop and port to Mac.
      However both those things are changing and there could be an explosion of ports if Apple just plays its cards right.
      (Just like x3d, Apple could have a more gaming focused sku or machine which gives more value to buyers but Apple is Apple so we dont know)

    • @bradleylauterbach7920
      @bradleylauterbach7920 26 วันที่ผ่านมา +2

      @@gbitencourt professional apps. Physics simulations, video editing, CAD, etc. It’s not all about gaming and it never has been. Gaming follows professional use.

  • @Bennet2391
    @Bennet2391 29 วันที่ผ่านมา +3

    This has all been said since the 386 and nothing has happened, because the anti-x86 crowd doesn't understand that content is king, not performance or power saving. Porting everything to ARM is such an astronomical undertaking that you can't even estimate how long it would take. Also, software with lost source code would need a complete rewrite.
    Emulation only takes you so far, and if you need high performance, you are out of luck - unless you have a RISC processor with HW-level x86 compatibility, which is exactly what AMD and Intel are doing today. No modern PC CPU is truly CISC. The CPU translates one CISC instruction into several RISC instructions before it executes them.

  • @chrisbird4913
    @chrisbird4913 หลายเดือนก่อน +22

    We are approaching the theoretical limit to computing power, things will slow to a crawl, that's why the focus on efficiency

    • @VertexPlaysMC
      @VertexPlaysMC 28 วันที่ผ่านมา +2

      there isn't really a limit to computing power, just to how large you can make your supercomputer array. Efficiency is the thing that has a limit.

    • @Nabee_H
      @Nabee_H 25 วันที่ผ่านมา

      I think its also just rising costs of electricity, materials, finite resources and the push for renewable energy that has put efficient energy use into the spotlight even more than it already would have been.

    • @HappyBeezerStudios
      @HappyBeezerStudios 25 วันที่ผ่านมา

      There was a big focus on efficiency from about 2007-2017, when peak consumption stagnated and idle consumption decreased. Ironically that was a time when Intel had no competition trying to beat them at the upper end and they could focus on other things.
      The issue since then is that competition again put the focus on being "the fastest" at any given task, and if you want to get that 5% lead, you'll sacrifice efficiency and throw 30% more power consumption at it.
      And that efficient era also started with moving from low thread, high clock designs to wider, multicore designs with improved efficiency.

    • @mintymus
      @mintymus 11 วันที่ผ่านมา

      100% wrong. Have you ever heard of node sizes?

    • @kennyoffhenny
      @kennyoffhenny 5 วันที่ผ่านมา

      @@mintymushave you heard of quantum tunneling?

  • @beasttitan8747
    @beasttitan8747 หลายเดือนก่อน +27

    Spending millions for shadows and volumetric lighting is wild.
    If my 5700xt can run red dead redemption at 60 fps 1080p ultra am good, lmao I don't even own a 2k monitor.

    • @Grandmaster-Kush
      @Grandmaster-Kush หลายเดือนก่อน +2

      People on the internet are pushing HARD for 1440p / 4K and above-60 fps refresh rates, meanwhile my 6700 XT with a cheap FreeSync monitor WILL last me another 5 years at 1080p 60 fps, if only because smaller transistors can't carry a current and silicon is hitting the physical limit of the material.
      And 12GB VRAM is the same as the PS5, so that's future PC ports taken care of, all in a 300 euro GPU bought second hand. I don't care about tech in its infancy (framegen, RTX); I already bought into that once before with Nvidia PhysX, which is now either incorporated in several engines or considered deprecated technology in others.

    • @CeeDeeLmao
      @CeeDeeLmao หลายเดือนก่อน +3

      Ray tracing is good now it isn’t like it was on 20 series gpus

    • @christophermullins7163
      @christophermullins7163 หลายเดือนก่อน +1

      @@CeeDeeLmao I disagree. It has benefits and some games are starting to be worth running RT in, but overall it is still Nvidia's manipulation of devs to implement it to sell Nvidia GPUs. Raster can look soooo good. Making RT look good is easier, so it is a crutch, just like upscaling and frame gen.
      Also, before your head explodes.. RT looks better in some games. Sure. It also brings 4090 to a fraction of the fps so it's not viable unless you're literally bored of throwing money away and want to buy some new fancy tech.

    • @vitalsignscritical
      @vitalsignscritical หลายเดือนก่อน

      red dead redemption isn't on PC.

    • @stangamer1151
      @stangamer1151 หลายเดือนก่อน +1

      Even if you do not own a high-res screen, it does not mean you cannot take advantage of higher resolution rendering.
      Just use VSR (or DSR/DLDSR in the case of Nvidia). VSR makes your GPU render 4x more pixels and then downscales the image to your monitor's res (for example, 4K->1080p). If your GPU is not powerful enough to run games at native 4K, just use upscaling.
      I always use DSR or DLDSR, since it greatly improves image quality. 4K + DLSS Performance looks much better than native 1080p, even with DLAA applied. And even 1440p + DLSS Quality still looks significantly better than 1080p + DLAA. So just render at the resolution your GPU can handle at the targeted framerate. Higher-res rendering provides better AA, better texture filtering and more micro details. The resulting image looks so much better. Even a native 4K render still looks a bit better than any lower res on a 1080p screen.
      RDR2 looks very soft and blurry at 1080p. Even at 1440p it is much sharper and cleaner, let alone 4K.
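
    For reference, the "4x more pixels" figure in the reply above is just the pixel-count arithmetic:

        3840 \times 2160 = 8{,}294{,}400 \text{ pixels (4K)}
        1920 \times 1080 = 2{,}073{,}600 \text{ pixels (1080p)}
        8{,}294{,}400 / 2{,}073{,}600 = 4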

  • @tmsphere
    @tmsphere หลายเดือนก่อน +10

    My old PC with 1070 and 6700k was so good and lasted 7 years I decided to put it aside and build a brand new case instead of upgrading.

    • @user-uj4gr9ql4m
      @user-uj4gr9ql4m หลายเดือนก่อน

      >1070, 6700k
      >intel 6xxx
      >upgrading
      how do you think you can upgrade it without basically replacing the entire pc?

    • @uncommonsense8693
      @uncommonsense8693 หลายเดือนก่อน

      @@user-uj4gr9ql4m He said he isn't upgrading.
      He just built a new case for it.
      Reading comprehension and humor are not your strong points my autistic friend.

    • @user-uj4gr9ql4m
      @user-uj4gr9ql4m หลายเดือนก่อน

      @@uncommonsense8693
      >humor
      i was trying to be funny there?

    • @uncommonsense8693
      @uncommonsense8693 หลายเดือนก่อน

      @@user-uj4gr9ql4m ... reading comprehension DEFINATELY not your strong point friend.

    • @user-uj4gr9ql4m
      @user-uj4gr9ql4m หลายเดือนก่อน

      @@uncommonsense8693
      what's wrong?

  • @user-gb2ly2kv2x
    @user-gb2ly2kv2x หลายเดือนก่อน +20

    So you're saying to fix x86, we need to get rid of old instructions? You know, those are the instructions we use the most because they have more support and can do most of the things we need (example: mov, add, and, xor, etc.).

    • @killkiss_son
      @killkiss_son หลายเดือนก่อน +1

      Also x86 is so shit that if you remove one of the instruction sets, even if it's not used anymore, it's going to break everything. x86 was originally made to do word processing, now we game on it.

    • @ABaumstumpf
      @ABaumstumpf หลายเดือนก่อน +9

      @@killkiss_son "Also x86 is so shit that if you remove one of the instructions set, even if it's not used anymore, it's going to break everything."
      Bruh - tell us you have absolutely no clue about computers without telling us you have no clue.
      That is the same for EVERY architecture. If you take something away that was specified to exist and programs are using it then it breaks. That's it.

    • @AwankO
      @AwankO 29 วันที่ผ่านมา +2

      got to move on from that at some point, its holding efficiency back

    • @nolan412
      @nolan412 24 วันที่ผ่านมา +1

      Loads and stores are what stalls all the cores.

    • @svechardannex4200
      @svechardannex4200 11 วันที่ผ่านมา +1

      @@AwankO CPUs are already RISC architecturally, there's translation layers. Nothing is holding efficiency back but physics and people's demand for performance. Modern AMD and Intel CPUs are already running up against the limits of how much performance you can get without just driving more power through the system, and Apple has already run into these issues too, because it's fundamentally a physics problem now, not an architecture problem.
      At this point all we can do is try to minimize logic gate size and minimize space between them, re-designing them any other way doesn't really do anything for us anymore.

  • @braindead2813
    @braindead2813 หลายเดือนก่อน +7

    I was reading some numbers from a national article about the quality of PHD candidates and other professionals graduating in their fields and they found that the overall capabilities and knowledge of the newer graduates are substantially worse than the same counterparts even 10 years ago. I think the new generation of professionals going to work at these companies are just lazy and stupid 😂 at least that is what the data shows.

  • @xxyourhunterxx4044
    @xxyourhunterxx4044 27 วันที่ผ่านมา +5

    "My house was built in the 1940s, there's all this legacy stuff, there's all these BONES."

  • @uwu_peter
    @uwu_peter หลายเดือนก่อน +11

    20:30 there is the ARM Ampere Altra Server CPU, that has a socketable ARM CPU, socketable RAM and PCIe, so upgradability is possible

    • @thedesk954
      @thedesk954 17 วันที่ผ่านมา

      It has been caught running with an A770 GPU

  • @fellipecanal
    @fellipecanal หลายเดือนก่อน +6

    Probably won't happen. The core of how the CPU works is completely different.
    All software written for x86 needs to be remade for ARM.
    There is a reason all of our CPUs still support 32-bit instructions. A 64-bit-only CPU today would probably break more than half of all software running in the world.

    • @Gramini
      @Gramini 24 วันที่ผ่านมา +1

      For the vast majority of applications it'd be enough to simply recompile them for an ARM target.
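
    A minimal sketch of that recompile-only case, assuming a Linux host with the standard gcc and aarch64-linux-gnu-gcc toolchains installed (the file name and build commands are just examples):

        /* recompile_demo.c - the same source can target x86-64 or ARM64:
         *     gcc recompile_demo.c -o demo_x86_64
         *     aarch64-linux-gnu-gcc recompile_demo.c -o demo_arm64
         */
        #include <stdio.h>

        int main(void) {
        #if defined(__x86_64__)
            puts("compiled for x86-64");
        #elif defined(__aarch64__)
            puts("compiled for ARM64 (AArch64)");
        #else
            puts("compiled for some other architecture");
        #endif
            return 0;
        }

    The caveat raised elsewhere in the thread still applies: anything that leans on architecture-specific code (inline assembly, SIMD intrinsics, or closed-source x86-only dependencies) needs real porting work, not just a recompile.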

  • @mrtuk4282
    @mrtuk4282 หลายเดือนก่อน +4

    I disagree with your assessment that NVidia know what they are doing by using ARM. ARM is basically a standard chip design which you pay for and then tweak/modify in any way you want. NVidia chose ARM rather than Intel or AMD because it's cheaper, not because it's better, and probably because they would have total control rather than being beholden to a competitor like AMD/Intel.

  • @bumbeelegends7018
    @bumbeelegends7018 หลายเดือนก่อน +10

    The way PCs consume power right now is ridiculously high

    • @davidszep3488
      @davidszep3488 29 วันที่ผ่านมา +2

      Undervolt it. Facepalm. I have a 7950X and it only consumes 120W. CB score is over 38000... You don't have to use the default bad-efficiency preset.

    • @rasky2684
      @rasky2684 25 วันที่ผ่านมา +2

      Nah man, mine probably uses less power now than it did 10 years ago 😂 depends what you put in, then that's on you.

    • @mintymus
      @mintymus 11 วันที่ผ่านมา

      @@rasky2684 True. In his defense, he's just trying to be trendy.

    • @HifeMan
      @HifeMan 3 วันที่ผ่านมา

      ⁠​⁠​⁠​⁠@@rasky2684what’s not power efficient about 14900KF OC to 7Ghz and an 4090FE on water OC to 3.5Gghz?!?! lol
      But jokes aside, you aren’t lying, the amount of raw CPU and GPU power you can get per watt now days is insane, especially if you pick the parts with efficient in mind and do some tweaking you can get crazy efficiency.

    • @rasky2684
      @rasky2684 3 วันที่ผ่านมา +1

      @@HifeMan I know! To be honest I'm still on Am4 and my whole system runs pretty much most things maxed out at 1440 and probably doesn't even touch 400w.
      5700x3d with 7800xt cheap and cheerful 😆

  • @bh0llar702
    @bh0llar702 หลายเดือนก่อน +3

    Nah. It may seem like things are slowing down, but they're not. Every other generation uses insane power. Then, refined and efficient. Rinse and repeat

  • @KushiKush
    @KushiKush หลายเดือนก่อน +3

    The power draw from the CPU is at stock, with motherboard defaults. It has been known for a while that motherboards overclock the CPUs, which are already near their overclocking limits by default. I have a 14700K and it draws 125-160W in games at max settings, and only gets to its TDP of 253 watts when it needs to boost, and that's only for 53 seconds.

  • @holyknighthodrick5223
    @holyknighthodrick5223 หลายเดือนก่อน +2

    Power efficiency is so bad because of competition in the CPU market. The voltage is being ramped up for very small gains in performance, at the cost of massive increases in power draw. Unlike the 11900K, modern CPUs essentially come overclocked out of the box, and the same applies to graphics cards. There is also the fact that, why leave extra performance on the table when the product could come with it out of the box and you get to charge more for it? The mobile market on the other hand takes power efficiency much more seriously because it's a huge concern for all consumers; being in a country with cheap power such as the US doesn't get around the fact that your battery can only store so much charge.

  • @the_oc_brewpub_sound_guy3071
    @the_oc_brewpub_sound_guy3071 4 วันที่ผ่านมา +1

    We've been saying "your PC will be irrelevant" since the 90s, when I helped run a computer repair shop, working on like 20 desktops sometimes.
    Once MMX processing came out I felt this way, same with dual channel RAM.

  • @vstxp
    @vstxp หลายเดือนก่อน +4

    There are server motherboards that take ARM processors and take common RAM kits and PCI-E devices. RISC processors in general can be replaceable in the same way as our x86 are now. It's just that there is no market for it YET.

    • @HappyBeezerStudios
      @HappyBeezerStudios 25 วันที่ผ่านมา

      Yup, it's all about the market.
      And on the other side x86 CISC chips are soldered on in modern laptops.
      So it's all about supply and demand.

    • @headlibrarian1996
      @headlibrarian1996 25 วันที่ผ่านมา

      There never will be a market on the desktop. No native apps and I’m not sure where you’d get Windows for ARM. Apple made the transition from Intel because they could force it. The x86 monopoly reinforces the Windows monopoly, so MS has no incentive to force adoption of ARM.

  • @thenewimahich
    @thenewimahich หลายเดือนก่อน +3

    Without going into detail, I think when they made the 4090, I'm pretty sure they already had the 5090 or even 6090 planned and made, but won't release it yet

  • @eye776
    @eye776 หลายเดือนก่อน +2

    94w in IDLE is a bit sus. Even a Dual CPU workstation from 2018 didn't draw that much power at idle.

  • @KillerPSS
    @KillerPSS หลายเดือนก่อน +2

    If we must replace the whole motherboard just to upgrade a single component (like RAM, CPU or even GPU), it will create so much waste that we will sink in it.

  • @BruceRichwineJr
    @BruceRichwineJr หลายเดือนก่อน +5

    Problem is Apple is already running into the same problems you speak of in this video. Definitely more power efficient, but they’re pasting processors together for more performance. The M4 won’t be a major upgrade over the M3. But the PC space definitely needs to do the same thing.

    • @BlogingLP
      @BlogingLP หลายเดือนก่อน +1

      Hopefully not because I hate when CPUs aren't exchangeable.

    • @1DwtEaUn
      @1DwtEaUn 8 วันที่ผ่านมา

      @@BlogingLP There are socketed ARM options out there like the Ampere ARM chips, and some use a COM-HPC daughter board for CPU / RAM slots that in theory allow further upgrade than most PC MB designs, in that changing out the daughter board can change out RAM spec and CPU socket.

    • @BlogingLP
      @BlogingLP 8 วันที่ผ่านมา

      @@1DwtEaUn
      I am not in the ARM game, but if I understood correctly it would still be possible to exchange the CPU and RAM if I want to upgrade, for example?

    • @1DwtEaUn
      @1DwtEaUn 8 วันที่ผ่านมา

      @@BlogingLP not universally, but with the Ampere correct, you could upgrade from a 32-core to a 128-core CPU for example and it uses DIMMs for RAM. For a socketed CPU in a non-COM-HPC you could use a AsRock Rack ALTRAD8UD-1L2T or ALTRAD8UD2-1L2Q with an Ampere CPU.

    • @BlogingLP
      @BlogingLP 8 วันที่ผ่านมา

      @@1DwtEaUn
      Why not universally like we can do now with X86? because I think it's kinda whack if I could not do that?

  • @ninjabastard
    @ninjabastard หลายเดือนก่อน +62

    Maybe it's not clear to me, but it seems you're mixing up ARM and RISC-V. ARM is a company that licenses its proprietary RISC-based processor designs using its own instruction set. RISC-V is an open-source RISC-based instruction set that can be used in processors, of which there are a few. Since they're both RISC instruction sets for RISC processors there will be some overlap, but the instruction sets, which tell the processor what the transistors mean and how to do calculations, are not the exact same. RISC-V is far behind ARM in support and capability at the moment.

    • @quantumdot7393
      @quantumdot7393 หลายเดือนก่อน +22

      Dude, the whole video seems like someone researching something for the first time and just repeating what others said with no understanding. That is why I didn't bother trying to correct him on everything. This is a video made to get views and nothing else

    • @karehaqt
      @karehaqt หลายเดือนก่อน +12

      @@quantumdot7393 There's no researching at all, he's reading the RISC-V wiki page when talking about the RISC ISA. I get that he's trying to branch out from just doing GPUs as they're pretty boring now, but at least do the research so you actually understand what you're talking about.

    • @uncommonsense8693
      @uncommonsense8693 หลายเดือนก่อน +2

      @@quantumdot7393 Yeah... literally everything in the video was factually incorrect.

    • @imkvn4681
      @imkvn4681 หลายเดือนก่อน

      His main point was that the optimization and chip fabrication is subpar. I don't see it getting any better because of the increased complexity for a company. Currently, the new thing is chiplets and added cache on the CPU. He should have avoided the RISC-V and ARM topic.

    • @quantumdot7393
      @quantumdot7393 หลายเดือนก่อน +1

      @@imkvn4681 I know a ton of people keep mentioning how x86 is bloated, but Jim Keller said multiple times that there are no fundamental advantages to ARM or RISC-V and it is all about design, unless you are talking about things like boot times. I trust what he says over internet armchair experts.

  • @tonep3168
    @tonep3168 หลายเดือนก่อน +8

    You got this very wrong. The benchmarks have been proved to be fake.

  • @Skylancer727
    @Skylancer727 หลายเดือนก่อน +1

    Let's be serious, other companies don't use ARM because it's better, they use it because it's open source. The bloat benefits have mostly just been a theoretical claim with very little proof. It's hard to prove it works better when there are no apples-to-apples comparisons.
    People have made the same claims of ARM being superior in efficiency, but there isn't any proof other than a couple of people working on them having good designs. It could just be that the competition in x86 has limited it.

  • @headlibrarian1996
    @headlibrarian1996 25 วันที่ผ่านมา +1

    Itanium was intended to replace x86, but it got no traction in the marketplace because vendors didn’t bother to recompile their Windows applications for native Itanium and Microsoft shipped an incomplete edition of Windows for it. Basically it was DOA. For the same reasons ARM will never get traction on the desktop, as Windows basically reinforces the x86 monopoly.

  • @justindressler5992
    @justindressler5992 หลายเดือนก่อน +3

    MIPS and PowerPC were RISC architectures as well. It's not really the instruction set; it's almost always the node size. For each node-size decrease you can literally build the same CPU and improve performance and power efficiency. The only reason to re-design at each node size is to copy-paste more cores and add more cache with the extra space. ARM has been focused on power efficiency from the beginning; they change the instruction set every 5 or so years to take advantage of better process nodes. ARM has a lot of headroom to grow considering the performance they get from top-end phone chips at 5W. RISC has been around for a very long time.

    • @justindressler5992
      @justindressler5992 หลายเดือนก่อน

      Also I think Intel is already talking about dumping x86 in future designs. They actually might have to at this point.

  • @xan1242
    @xan1242 หลายเดือนก่อน +11

    About the x86 bloat, there's a great video on Primagen's channel which explains why that's not the correct way to approach the issue of x86.
    x86 CPUs haven't been true on-die x86 CPUs for years now. They're all executing their own microcode within which the architecture specifications are defined.

  • @bummers
    @bummers หลายเดือนก่อน +1

    Just to correct you: CISC being called Complex ISC does not mean that there is something complex in there that can be simplified. It is one of the two design philosophies, of which the other, RISC, is what ARM is based on. CISC simply means that the opcodes are optimised a different way from RISC.
    CISC has more instruction codes that can do more things in one op cycle, while RISC is optimised to have instructions that are much simpler, requiring functions to be composed of a series of instructions instead of, say, 1 or 2 for CISC.
    So if CISC instructions are simplified, they get broken down into small opcodes, which fundamentally changes it into a RISC design.
    There was a time and place for CISC where commonly used instructions could be optimised as a single call, simplifying assembly code and in turn higher-level language compilers like C and the like. At that level, reducing the need for a few fetches and executes can mean quite a bit of savings for a CISC arch.
    And for the most part, Intel and AMD were able to get away with simply running their chips at a higher clock speed to get more performance. Now we are hitting the physical limit and the ARM arch is showing its advantage.
    Also, while ARM uses a RISC architecture, RISC CPU != ARM, 'cos besides ARM, there are quite a few other CPUs that use a RISC arch, like Alpha, MIPS, Power Architecture, and SPARC etc.
    I prob got some (or a lot) of the details wrong 'cos I'm recalling mostly from memory stuff I learned 30+ years back. Just google CISC vs RISC to get your facts straight.

  • @H786...
    @H786... หลายเดือนก่อน +1

    why is it "gpus arent getting better" and not " cpus arent even close to bottlenecking gpus" or more importantly, optimisation.

  • @dontowelie1302
    @dontowelie1302 หลายเดือนก่อน +3

    That power consumption comparison you did is at the wall.
    So you basically included the power draw of the case fans etc.

  • @GonthorianDX
    @GonthorianDX หลายเดือนก่อน +3

    ARM isn't new lol, the GBA uses ARM. It is probably way older than the GBA too.
    ARM also isn't royalty free either

  • @saricubra2867
    @saricubra2867 22 วันที่ผ่านมา +2

    It's a tradeoff. On x86, memory is very cheap; on ARM, it's very expensive because you need significantly more bandwidth, since there are a lot more instructions per second.
    Also, it can happen that x86 actually is significantly faster for some tasks vs Apple. I remember that Hardware Unboxed tested some video-related stuff that gets a huge performance increase on x86 (i7-12700H) thanks to AVX2 vs the M1 Pro.

  • @baronvonslambert
    @baronvonslambert หลายเดือนก่อน +1

    My thoughts on the subject are that RISC is almost as old as x86 CISC; if it was truly the magic pill people seem to think it is these days, it would have replaced x86 as the main consumer-level instruction set decades ago. It makes great sense to use ARM chips in the mobile market due to the much lower power draw, but power draw isn't typically a concern for desktops since they're plugged into a wall outlet capable of delivering well over 1000 watts, unless your electricity is expensive. I live next to a nuke plant, there's also a coal plant nearby, and there are numerous wind farms in my area, so electricity is literally pennies per kilowatt hour. Plus I don't run my PC hard enough for long enough to make much of a difference in my total bill; my AC and heat have exponentially larger impacts.

  • @hunn20004
    @hunn20004 หลายเดือนก่อน +5

    With CPUs, I've yet to witness a reason why I'd have to sell my 5800X3D....
    It's at a point that I pointlessly upped my RAM from 16 to 64GB to avoid any more bottlenecks in that department. Modded games are going to run great at least.
    My only weak point would still be my RX 5700 XT, but the moment I get past 4K 60fps in my favourite game, I'd probably stick with the GPU for as long as the silicon lasts.

    • @HappyBeezerStudios
      @HappyBeezerStudios 25 วันที่ผ่านมา

      Remember, unused RAM is wasted RAM.
      But neither modern Linux nor modern Windows lets it sit idle. Stuff gets precached (which is why Photoshop or Chrome start so much quicker the second time).
      I'm still on 16 GB and I have games that sometimes crash because I run out. Even with an additional 64 GB swapfile it can't handle it.

    • @Dolvey
      @Dolvey 19 วันที่ผ่านมา

      If you end up upgrading, the 7900xt has been fantastic so far. A bit pricey as of now at $700 but the prices are dropping. Even a 7800xt would nearly double performance

  • @lysergicaciddiethylamide6127
    @lysergicaciddiethylamide6127 หลายเดือนก่อน +5

    I just built my $2,400 pc and it’s already irrelevant 😐

    • @YannBOYERDev
      @YannBOYERDev หลายเดือนก่อน +7

      No it's not, this guy made a click bait video, he doesn't even know what he's saying...

    • @lysergicaciddiethylamide6127
      @lysergicaciddiethylamide6127 หลายเดือนก่อน

      @@YannBOYERDev I was being facetious lol

    • @user-rt9qd8pe7f
      @user-rt9qd8pe7f 26 วันที่ผ่านมา

      @@YannBOYERDev yeah this guy is clueless, hailing M1 chips what a clown

    • @definingslawek4731
      @definingslawek4731 24 วันที่ผ่านมา

      @@user-rt9qd8pe7f I just got an M3 Max laptop and it benchmarks significantly higher in CPU single- and multi-core than the Asus Zenbook Pro I was testing before (i9-13900H, the top or close to the top chip in Windows laptops).
      So I don't see how it's clownish to point out the incredible performance of Apple silicon.

  • @purerandomness4208
    @purerandomness4208 หลายเดือนก่อน +1

    I'm not scared my CPU will be irrelevant in a few years. I want my CPU to be irrelevant as soon as possible as it shows a great degree of progress that is worth investing in.

  • @kramnull8962
    @kramnull8962 หลายเดือนก่อน +1

    "7800x3d does about 1/2 the work of a 13700K no matter how you try to slice it...... 18K R23 worth of rendering for a 13700K's 31K.... Or 1636 for the 13700k in R24 vs ~900 for a 7800x3d.
    "Those E cores don't do shit......" -Lisa Su

  • @rabahaicha7724
    @rabahaicha7724 หลายเดือนก่อน +3

    I searched for it and found that AMD owns the x86-64 architecture.

  • @christophtoifl6848
    @christophtoifl6848 หลายเดือนก่อน +4

    My PSU is semi-passive, so it makes zero noise while surfing, reading, etc.
    My 6800 XT has a 0 rpm mode and is also completely quiet for surfing, office work etc.
    And my 3900X really, really isn't quiet at all...
    I would love a fully semi-passive system, completely quiet when watching TH-cam videos, but with a lot of headroom for gaming, Python programs and other stuff...

    • @revi5343
      @revi5343 16 วันที่ผ่านมา

      watercooling maybe? or beefier air cooler and undervolting? the 3900x is a really warm cpu.

    • @Spock369
      @Spock369 8 วันที่ผ่านมา

      Noctua has a fanless cooler...

    • @christophtoifl6848
      @christophtoifl6848 7 วันที่ผ่านมา

      @@Spock369 Yeah, but it is too weak for the 3900X. I could use it, but my CPU would throttle a lot. And it is not optimized for semi-passive mode; even with an optional fan at full power it couldn't handle a 3900X at max power draw...
      Right now you can either have a system that operates completely noiselessly under light load but is severely limited in max computational power (even with fans), or a system that is really powerful but will never be silent.
      Well, there is a case that can passively cool a powerful CPU and GPU at full power, but it is waaay too expensive for me...

  • @perochialjoe
    @perochialjoe 27 วันที่ผ่านมา +1

    I don't care how good they are, I don't want Apple products anywhere near me. They'll have us paying subscriptions on our CPUs to "unlock performance" within five years of them entering the market.

  • @Ernismeister
    @Ernismeister หลายเดือนก่อน +2

    Clickbait title. The benchmarks aren't fair because of different lithography nodes (Intel 10nm+++ vs TSMC 3nm), not to mention the X Elite benchmarks are proven to be fake.

  • @Misfit138x
    @Misfit138x หลายเดือนก่อน +3

    Dude, I love your channel, but ARM is not open source! You're confusing it with RISC-V.

  • @alexandrustefanmiron7723
    @alexandrustefanmiron7723 หลายเดือนก่อน +4

    M3 has hit the thermal wall.

    • @RickBeacham
      @RickBeacham หลายเดือนก่อน +1

      Exactly.

  • @darnice1125
    @darnice1125 หลายเดือนก่อน +2

    If nothing runs on it, and no games run on it, there's no future for Snapdragon.

  • @pinatasrule
    @pinatasrule 21 วันที่ผ่านมา +1

    The day ARM replaces x86 is the day I go back to console. ARM would completely fuck compatibility with everything like using a mac.

  • @hoffyc.h393
    @hoffyc.h393 หลายเดือนก่อน +3

    I use a Ryzen 7 5700G at 4.5 GHz / 1.32 V, drawing around 30-50 W during gaming :D

  • @Gin2341
    @Gin2341 หลายเดือนก่อน +29

    Heard that before with Apple's M1 chips, and they're still a laughing stock at gaming, and there's no way an ARM CPU is running x86/x64 games and applications.

    • @arewealone9969
      @arewealone9969 หลายเดือนก่อน +11

      Apple's SoCs are quite impressive, actually.

    • @nuddin99
      @nuddin99 หลายเดือนก่อน +7

      That's only because they run macOS. Their performance per watt is very good, and they're usually much faster with Mac-specific applications.

    • @uwu_peter
      @uwu_peter หลายเดือนก่อน +14

      ARM CPUs are able to run x86_64 applications. There are translation layers on macOS, Windows and Linux for that.

    • @Gin2341
      @Gin2341 หลายเดือนก่อน +11

      @@uwu_peter A translation layer, which costs performance, introduces higher latency and stuttering, and it doesn't even do AVX, which most modern games already use.

    • @terliccc
      @terliccc หลายเดือนก่อน

      @@nuddin99 Exactly, the M1 on Windows would be at most as good.

  • @philipreininger2549
    @philipreininger2549 หลายเดือนก่อน +1

    The only problem with the X Elite ARM chip is that it seems great now, but by the time it launches, next-gen x86 will have advanced, so while it'll be insanely efficient, it won't be as problem-free or as fast.

  • @nilsolsen8727
    @nilsolsen8727 21 วันที่ผ่านมา

    OK, since we are plateauing on graphics, can we work on higher and more stable framerates now?

  • @dy7296
    @dy7296 หลายเดือนก่อน +8

    20:12
    You're kinda wrong on this part. Socketed ARM CPUs with separate RAM slots and separate GPUs, like usual, already exist, just on servers.
    th-cam.com/video/ydGdHjIncbk/w-d-xo.htmlsi=jZHm0qkoCd7SS_ON
    You don't have to watch this entire thing. Just the first 10 seconds and you'll notice that it has normal DIMMs and a graphics card.

    • @TamasKiss-yk4st
      @TamasKiss-yk4st หลายเดือนก่อน

      But if you compare the 40-50 GB/s DDR5 transfer speed (even if that's per channel, you'd still need at least 8 RAM channels to match the 400 GB/s of the M3 Max, which is just a laptop chip, not a server), the ~32 GB/s of a PCIe 4.0 x16 slot is also a huge limit (you'd need a PCIe 8.0 x16 slot for your GPU to reach 400 GB/s, and again that's only Apple's laptop speed).
      So in other words you need to decide which one you prefer, the HDD or the SSD: even if you can add more and more RAM/GPU, past a comfortable point the extra capacity means nothing without the transfer speed, just like the 10+ TB HDD; not a lot of us have one, but guess how many of us have a 1+ TB SSD? So when you have to pay with your transfer speed, you should think twice before assuming the slower but replaceable parts are the future. The future is what not only crunches 20 GB of data in a blink, but also sends the result without a 1-second delay.
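      A rough sketch of the arithmetic behind that comparison (illustrative figures, assuming DDR5-6400 and the commonly quoted doubling of PCIe x16 bandwidth per generation; the PCIe 7.0/8.0 numbers are extrapolations, not shipping specs):

          #include <stdio.h>

          int main(void) {
              double ddr5_per_channel = 6400e6 * 8 / 1e9;   /* DDR5-6400: ~51.2 GB/s per channel */
              double target = 400.0;                        /* M3 Max bandwidth quoted above, GB/s */
              printf("DDR5 channels needed: %.1f\n", target / ddr5_per_channel);   /* ~7.8 */
              for (int gen = 4; gen <= 8; gen++)            /* 32, 64, 128, 256, 512 GB/s */
                  printf("PCIe %d.0 x16: ~%d GB/s\n", gen, 32 * (1 << (gen - 4)));
              return 0;
          }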

  • @user78405
    @user78405 หลายเดือนก่อน +4

    Pretty soon Intel will move from a 64-bit memory address space to a 128-bit one, since the data side is already at 256-bit AVX up to 1024-bit AMX for AI processing.

    • @anttikangasvieri1361
      @anttikangasvieri1361 หลายเดือนก่อน +10

      Why? No computer is anywhere near exhausting 64-bit addresses. There is nothing to gain from making addresses bigger.
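      A quick back-of-the-envelope check on that (a minimal sketch, nothing here is from the video):

          #include <stdio.h>
          #include <math.h>

          int main(void) {
              double addr_space = ldexp(1.0, 64);   /* 2^64 addressable bytes */
              double eib = ldexp(1.0, 60);          /* one exbibyte */
              printf("64-bit address space: %.0f EiB\n", addr_space / eib);   /* 16 EiB */
              /* Current x86-64 parts only implement 48- or 57-bit virtual addressing
               * (256 TiB / 128 PiB), and even that dwarfs any installed RAM today. */
              return 0;
          }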

    • @YannBOYERDev
      @YannBOYERDev หลายเดือนก่อน +6

      ​@@anttikangasvieri1361True.. but you know people talk shit even when they don't understand what they are talking about lmao, 128bit CPUs for the average consumer is useless, and increasing bits isn't this easy.

  • @inwedavid6919
    @inwedavid6919 3 วันที่ผ่านมา +1

    ARM is not that new; it's super efficient, but it is still 1980s technology.

  • @Ph42oN
    @Ph42oN หลายเดือนก่อน +2

    It's not so simple to change; all the software is made to run on x86. Unless you want to abandon all your old games, we need emulation to run x86 software. Even if the CPU were 2x faster, it may end up being slower at running x86 software.
    Windows on ARM is irrelevant to me; Microsoft just keeps making many things worse and worse, so when Valve helped make most games run on Linux, I left Windows behind. So most of the software I run can just be recompiled for whatever CPU I get, but games will require translation that works well.

  • @Yasir_emran
    @Yasir_emran หลายเดือนก่อน +3

    What do you mean? My old pc is already irrelevant 😅

  • @KaoruSugimura
    @KaoruSugimura หลายเดือนก่อน +4

    The future isn't ARM. It's quantum computing. ARM is just an alternative to x86 with a different use case. In terms of processing, ARM is like a specialized tool while x86 is like a full set of tools: nothing in the full set is as efficient at a particular task as the specialized tool, but x86 allows you to do much more.

    • @4ytherium
      @4ytherium 26 วันที่ผ่านมา

      yeah but will we ever have consumer quantum computing

    • @andyH_England
      @andyH_England 25 วันที่ผ่านมา

      Yes, but most people do not need "to do much more". That is why WOA will take over the ultrabook market for starters. Apple has proven that ARM can do 95% of what x86 does better, and people buying Macs see that. This will be the same now that Windows ARM chips are finally catching up.

    • @Gramini
      @Gramini 24 วันที่ผ่านมา +1

      Isn't a quantum computer super useless for common tasks and *highly* specialized?

  • @cal1953
    @cal1953 หลายเดือนก่อน +2

    Yeah, but compare the 7800X3D to the 14700KF: the 7800X3D consumes under half the power the 14700KF does in gaming and heavy workloads. It's more of an Intel problem than a desktop-CPU-in-general problem.

  • @patrickweaver1105
    @patrickweaver1105 หลายเดือนก่อน +1

    So how many decades have they been claiming that? If it happens fine, but until it does no one will believe it.

  • @darkhorse29-yx8qh
    @darkhorse29-yx8qh หลายเดือนก่อน +5

    nope x86 instruction sets are better

    • @anttikangasvieri1361
      @anttikangasvieri1361 หลายเดือนก่อน +1

      Any reason why?

    • @Riyozsu
      @Riyozsu หลายเดือนก่อน

      ​@@anttikangasvieri1361they have been a standard for all developers for atleast half a century. For arm to make revolution, it would take decades. So no one is jumping to arm for their main pcs just yet.

    • @anttikangasvieri1361
      @anttikangasvieri1361 หลายเดือนก่อน +1

      @@Riyozsu Few developers deal with CPU instructions directly. Apple made the switch over to ARM with few problems. But yes, history has huge inertia and x86 will be available for the foreseeable future.

    • @Riyozsu
      @Riyozsu หลายเดือนก่อน +2

      @@anttikangasvieri1361 We will probably reach the limits of silicon as the semiconductor for further performance gains, and Moore's law will be officially dead by then. Most probably they will have to swap silicon for a better semiconductor, and switching to another element or compound will be more trouble than switching to another architecture.

    • @mintymus
      @mintymus 11 วันที่ผ่านมา

      @@Riyozsu No...they're just reducing the node size.

  • @Riyozsu
    @Riyozsu หลายเดือนก่อน +1

    I remember people hyping the M1 just the same way they are hyping the X Elite. Yet everyone is excited for next-gen Intel and AMD x86-64 chips.

  • @1967KID
    @1967KID หลายเดือนก่อน +2

    When I was in California a friend told me ARM was going to take over; this was in 2012.

  • @alexanders.4591
    @alexanders.4591 หลายเดือนก่อน +2

    I will accept ARM once software compatibility MASSIVELY improves. They need to transfer everything you can currently do on Windows x86 to Windows on ARM, etc.

    • @Gramini
      @Gramini 24 วันที่ผ่านมา +1

      There are FEX-Emu and Box86/Box64, which seem promising, but I'm not sure if they work on Windows as well.

    • @alexanders.4591
      @alexanders.4591 23 วันที่ผ่านมา +1

      @@Gramini That would be awesome if it does.

  • @DavidAlsh
    @DavidAlsh 25 วันที่ผ่านมา

    Engineer here. Apple Silicon is amazing; their vertical integration helps, but their power efficiency is largely derived from their experience making mobile phones for decades. The hardware is amazing, but Apple's anti-competitive practices nerf it (e.g. refusing to support Vulkan & Proton for gaming was a choice they made - Valve ditched macOS after that). Basically, Apple sells Lambos with square wheels. If Apple decided to support Linux on AS, the MBP would be the best laptop ever developed, period.

  • @uncrunch398
    @uncrunch398 หลายเดือนก่อน

    Each gen it'd be nice to see benchmarks compared with a power cap set to what the lowest-powered chip typically draws under load.

  • @fanshaw
    @fanshaw หลายเดือนก่อน +1

    Should we mention that ARM goes back to 1985? Or that the functions that get executed are pretty much the same across all chips, because they all need to do the same things; the high-complexity operations are just broken down into lots of lower-complexity ones. It's an extremely efficient process. Today we have more cores and we do more speculative execution, which means we are trading... yes, power for speed: we run lots of operations in case we manage to hit the jackpot on one of them. Hence ARM performance (with fewer transistors) tops out much lower than x86 but gets better performance-per-watt figures. Apple throws in more hardware accelerators because they are very efficient. The downside is that you can't change the algorithm, because it's baked into the hardware; you can't change the RAM; you can't plug in a 40, 50, or 100G NIC; you can't even plug in a graphics card. You're stuck with Thunderbolt 4, which seems like a nice 40G link you could use for storage or networking, but then you find a big chunk is dedicated to graphics and you can't change it.

  • @jthedood1605
    @jthedood1605 หลายเดือนก่อน +2

    Thing is, if they drop x86, we might leave custom PC building behind, cuz they might make it proprietary? Or am I wrong here?

    • @wuza8405
      @wuza8405 หลายเดือนก่อน +2

      That's true, the "PC" and ATX could die. But at the same time, we only have two options right now - AMD and Intel - and the difference is that most of the "connectors" are the same - SATA, M.2, PCI-E etc. - and of course we can choose the processor, and that's what could change: unswappable processors. But we don't know the future; maybe even processor connectors would get standardized, giving many more manufacturers the opportunity to get in, but probably not with ARM.

    • @notaras1985
      @notaras1985 หลายเดือนก่อน

      ​@@wuza8405yeah it's already a monopoly

    • @Gramini
      @Gramini 24 วันที่ผ่านมา

      x86 is already proprietary.

    • @jthedood1605
      @jthedood1605 24 วันที่ผ่านมา

      @@Gramini not for most mainstream products

    • @Gramini
      @Gramini 24 วันที่ผ่านมา

      @@jthedood1605 What do you mean by that? It always was and still is a proprietary architecture. What do "mainstream products" have to do with this?

  • @victorradu9645
    @victorradu9645 25 วันที่ผ่านมา +1

    I understand that those changes are revolutionary, but do we still notice the difference? I don't see a difference between my powerful PC at work and doing regular office stuff on my 4th-gen i5 at home. For most of us who don't play games, current PCs are fast enough. Maybe the only thing to notice would be the much longer battery life for laptops, if you use laptops.

  • @Tyrian3k
    @Tyrian3k หลายเดือนก่อน

    I'm with you mostly on the CPUs, but what I find very curious is why the 7950X3D has basically the same performance as the 7950X while drawing 100 watts less power.
    Does someone have an explanation for that?

  • @imkvn4681
    @imkvn4681 หลายเดือนก่อน +2

    ARM is just an instruction set. It depends on TSMC, on capital to shift the market, and on how well the product sells. The future is Intel, Apple and Google, which just won contracts and government subsidies to build fabrication facilities for chips. TSMC, Samsung, AMD, Huawei and Baidu will check the US companies.

  • @jeffreydurham2566
    @jeffreydurham2566 หลายเดือนก่อน

    On the subject of what may happen if everything is integrated on the motherboard, it might be a good idea to be able to order the board with the specs that you want. Maybe not totally customized like people can do on a PC now, but definitely better than "here you go, take it or leave it".

  • @LordKosmux
    @LordKosmux 4 วันที่ผ่านมา +1

    Whatever, no one needs the bleeding edge tech trust me.

  • @kennyjohnson8479
    @kennyjohnson8479 24 วันที่ผ่านมา +1

    I read on X that Qualcomm's test was done in very favorable conditions. Also, the chips were good, but not as large a gap ahead as the first benchmark numbers they had up. So we will see. Even if it works great, to get ARM's true benefits they have to solder everything to the motherboard, making memory, CPU and SSD upgrades harder and more expensive. And maybe I'm ignorant in thinking all that might be needed. I almost bought a 17.3-inch 4K laptop because mine has some problems. But should I go ahead or wait and see?

  • @dmaxcustom
    @dmaxcustom หลายเดือนก่อน +1

    The success or failure of something on a mass scale is determined by the price.
    If it's cheaper than current tech? Sure. I don't think people would be too angry about it.

  • @jeremytine
    @jeremytine 27 วันที่ผ่านมา +1

    Yes, things are getting faster... the 14900K is almost 50% more power efficient than a 10900K, and the 7950X3D is 253% more efficient.
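    For what it's worth, "X% more efficient" figures like these usually come from a performance-per-watt ratio; a minimal sketch with made-up numbers (not from any real benchmark run):

        #include <stdio.h>

        int main(void) {
            double old_score = 10000, old_watts = 200;   /* hypothetical older chip */
            double new_score = 15000, new_watts = 120;   /* hypothetical newer chip */
            double old_eff = old_score / old_watts;      /* points per watt */
            double new_eff = new_score / new_watts;
            printf("%.0f%% more work per watt\n", (new_eff / old_eff - 1.0) * 100.0);   /* 150%% */
            return 0;
        }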

  • @maxloverU
    @maxloverU หลายเดือนก่อน

    Vex, have you ever tried Android on PC?
    If you did, you would know that PC hardware is currently handicapped by Windows.

  • @AshtonCoolman
    @AshtonCoolman หลายเดือนก่อน +1

    Nvidia makes ARM CPUs. They should already have an ARM CPU that works with their RTX cards on Linux.

  • @krazyolie
    @krazyolie หลายเดือนก่อน +1

    ARM/RISC isn't inherently more efficient, or at least hasn't been proven to be.
    The main difference is that mobile CPUs just target a much lower level of performance and power draw.
    Intel/AMD just haven't tried to target that space... save for Intel Atom CPUs, which were probably cancelled too early.
    Current x86 laptop chips are just downclocked desktop parts and not true mobile chips, but that will change.

  • @TerenceKearns
    @TerenceKearns 27 วันที่ผ่านมา

    Regarding your questions at the end, I think the answer for ARM PCs is compute modules. So basically cluster computing on a single motherboard.