Apple M Chips - The End. Was it even worth it?

  • Published Dec 28, 2024

Comments • 1K

  • @jameshewitt3489
    @jameshewitt3489 5 months ago +730

    "Without getting too technical" - proceeds to demonstrate that the reason you aren't getting too technical is because you literally don't understand it on a technical level.

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 5 months ago

      So?

    • @geostel
      @geostel 5 months ago +61

      @@KicksonAcapulco13-no5rd So the author of the video should stop telling BS, since he does not have a clue what he is talking about

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 5 months ago +1

      @@geostel True, but we're not CPU engineers either. It's mostly science, mathematics, physics, microprogramming and so on. Most viewers would probably not understand this and close the video. Sad but true.

    • @YNfinityX
      @YNfinityX 5 months ago +8

      @@KicksonAcapulco13-no5rd 🤦🏽‍♂️

    • @KicksonAcapulco13-no5rd
      @KicksonAcapulco13-no5rd 5 months ago

      @@YNfinityX feel free, cheers👍

  • @TheRockingest
    @TheRockingest 5 months ago +103

    I was all fired up to bash this video, but after reading the comments, it looks like everything has been addressed! I have faith in humanity!

    • @deanx0r
      @deanx0r 3 months ago +5

      Same thing here. I just wish there was a button on this page to no longer recommend this creator in the future.

    • @SquillagusNiggle
      @SquillagusNiggle 3 months ago +3

      @@deanx0r "Don't recommend channel" exists

    • @simply_ohizu
      @simply_ohizu 3 months ago +1

      I am right here with you. I legit thought I was getting good info.
      Shame on you Arthur

  • @Holden_McHock
    @Holden_McHock 5 months ago +160

    Bro's getting destroyed in the comments 💀

    • @CaioFreitas1987
      @CaioFreitas1987 5 months ago +7

      this video is ridiculous

    • @jimmymac2292
      @jimmymac2292 5 months ago +5

      Bruh literally laid out how Apple rinsed and repeated making their chips bigger, with higher clocks and more transistors. Then said the M3 should have existed because it... fell in line with what he laid out. Sounds like weird cope

  • @QuantumCanvas07
    @QuantumCanvas07 5 months ago +858

    When you ask GPT to write scripts for your video

    • @solidgalaxy3339
      @solidgalaxy3339 5 months ago +7

      😂

    • @sirtra
      @sirtra 5 months ago +44

      This has to be some sort of social experiment or joke.
      How can one person get so much fundamentally wrong and make this video with so much confidence?
      It's like taking a bunch of true statements and putting them into a blender, creating something that isn't quite right but not entirely incorrect either... a weird hybrid unique to AI-generated content.
      "Increasing clock speed AKA current" 😂
      Increasing clock speed generally does require more power, but current is the enemy in silicon chips and the cause of heat - energy that is wasted, i.e. the biggest inefficiency! You want to engineer less of this, not more!
      I refuse to believe a human interested in technology wrote this script.

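A quick aside on the physics the reply above is pointing at: CMOS dynamic power scales roughly as P ≈ α·C·V²·f, so pushing clocks (and the voltage needed to sustain them) is much more than linearly expensive. A minimal sketch with illustrative, made-up values - the capacitance, voltages, and clocks below are assumptions, not measurements of any real chip:

```python
def dynamic_power(cap_farads, volts, freq_hz, activity=0.2):
    """Switching power (watts) of a CMOS circuit: alpha * C * V^2 * f."""
    return activity * cap_farads * volts**2 * freq_hz

# Hypothetical chip: 1 nF of switched capacitance, 20% activity factor.
base = dynamic_power(1e-9, 0.9, 3.0e9)   # 3.0 GHz at 0.90 V
oc   = dynamic_power(1e-9, 1.0, 3.6e9)   # 3.6 GHz at 1.00 V

# A 20% clock bump plus the voltage to sustain it costs ~48% more power.
print(f"{base:.3f} W -> {oc:.3f} W ({oc / base:.2f}x)")
```

The V² term is why the wasted heat the commenter describes grows much faster than clock speed does.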
    • @desembrey
      @desembrey 5 months ago +7

      @@sirtra Dunning Kruger

    • @rursus8354
      @rursus8354 5 months ago +9

      Thank you for warning me, so that I didn't waste time watching it!

    • @jorgvespermann5364
      @jorgvespermann5364 5 months ago +2

      I don't think it could be this dumb.

  • @MarbsMusic
    @MarbsMusic 5 months ago +150

    Tell us you don't understand processor design without telling us you don't understand processor design...

  • @iokwong1871
    @iokwong1871 5 months ago +1332

    Yet another YouTuber who has no idea what they are talking about when it comes to CPU instruction sets......

    • @yedaoctopus114
      @yedaoctopus114 5 months ago +94

      Chatgpt make me a script for new video

    • @cyrusshepherd4902
      @cyrusshepherd4902 5 months ago +2

      its u

    • @Pistol4
      @Pistol4 5 months ago +40

      Arthur is the master of bullshit

    • @im4ch3t3dimachete5
      @im4ch3t3dimachete5 5 months ago +46

      “With a r m chips” says a lot already

    • @micp5740
      @micp5740 5 months ago +24

      How about you add some credibility to your statement, by being specific?
      Otherwise you just come across as a troll.

  • @michaelashby9654
    @michaelashby9654 5 months ago +159

    30% gain isn't impressive?! Ok, let's see you improve the performance of anything in computer hardware or software by just 1%.

    • @TheCiiyaah
      @TheCiiyaah 5 months ago

      Greedy!

    • @echelonrank3927
      @echelonrank3927 5 months ago

      ha ha what u mean lets see? relax, u will not notice such a small improvement 😞

    • @ShopperPlug
      @ShopperPlug 3 months ago +1

      I swear I heard him also say that 10-15% "isn't much improvement" 🤣 This is how regular MacBook users think: they only buy a Mac because they saw and heard a pro bought one... if only he understood computer engineering and what such percentages mean as accomplishments.

  • @BeaglefreilaufKalkar
    @BeaglefreilaufKalkar 3 months ago +20

    The famous German philosopher Nuhr once stated: "If you don't have a clue, just shut the f*ck up."

  • @swdev245
    @swdev245 5 months ago +44

    Is this the spiritual successor to The Verge PC build video?

    • @alessandroblue7
      @alessandroblue7 3 months ago +1

      sick burn!

    • @nate6908
      @nate6908 2 months ago

      insane comment haha

  • @amritrosell8561
    @amritrosell8561 5 months ago +202

    If, instead of making assumptions about the architecture, you learn a bit more about the differences between the various 3nm nodes the chips are made on, you would perhaps realize the M3 was more of a marketing strategy from Apple, to be the first CPU on 3nm - but they did it on the same "dead" branch of the 3nm node, because the branch the M4 uses is very different from the one M1-M3 use. The reason there was so little improvement is mainly that they didn't redesign the chip much, apart from removing some parts that the M2 Ultra uses so that area could be used for other things. But now with the M4, the architecture is on a very different 3nm node AND they have done an overhaul of the chip design, as we can see with the M4 in the iPad.
    So yes, the M3 was a bit of a shrewd move, a whole lot of shenanigans, and mostly marketing just to be the first CPU on 3nm.
    But extrapolating from the M2-to-M3 numbers to predict the M4 is probably not going to be correct, as the new 3nm node is much more efficient and the chip design is overhauled to accommodate that efficiency... So, no, Apple's chip design isn't dying, it's evolving. But sometimes they jump onto things just to be first, and that might look peculiar...

    • @brunonascimentofavero6097
      @brunonascimentofavero6097 5 months ago +9

      I think your assumption is a bit wrong; the main factors in a new process node coming down in price are iteration/yield and scale. The reason Apple probably launched the M3 so soon after the M2 was so TSMC could pick up more scale and iterate faster on 3nm chips, bringing down costs for both iPhone and Mac chips as well as advancing their 3nm node. One other thing is that the base iPhones still use last year's chips, so only having the Pros on the new node would mean less scale.

    • @TheWallReports
      @TheWallReports 5 months ago +7

      @@brunonascimentofavero6097 Also the channel host was incorrect in describing a 3nm process as meaning the transistors are 3nm in size. 3nm just means that's the smallest feature size that can be fabricated with that technology, NOT the size of the actual transistor.

    • @logtothebase2
      @logtothebase2 5 months ago +4

      M4 will be an improvement, but it's not going to be huge. Die shrinks are facing other challenges: not all features (for example, cache RAM) shrink proportionally. Forget Tesla and Starship - the engineering of ASML, Zeiss and TSMC is the most impressive on earth, by far, and improving it is incredibly, incredibly hard

    • @ashishpatel350
      @ashishpatel350 5 months ago +4

      Apple's entire company is a marketing gimmick

    • @sergioyichiong7269
      @sergioyichiong7269 5 months ago +1

      M chips are on their 3rd gen and have 100 billion transistors. Intel chips are on their 14th gen and have fewer transistors. Do the math, or at least try.

  • @beragis3
    @beragis3 5 months ago +197

    Arthur did not do research: at the 10:36 mark he says that TSMC cannot go lower than 3nm. They announced in April a move to 1.6nm, which is smaller than Intel's 1.8nm. Samsung, Intel and TSMC are all racing to the 1nm barrier with a goal of 2030.

    • @TheJmac82
      @TheJmac82 5 months ago

      nm doesn't mean anything... they are all made-up numbers. Go by transistor density.

    • @ConernicusRex
      @ConernicusRex 5 months ago +7

      Intel's 1.8 is just 5 nm for everyone else renamed.

    • @TheJmac82
      @TheJmac82 5 months ago +3

      @@ConernicusRex I question even that. I would suspect its actually much larger than that. Intel 7 had a transistor gate pitch of 54nm and a fin height of 53. The main number that matters is Transistor density (M Tr/mm2). Edit: Comparing to everyone else. I suspect 1.8 will be ~3nm TSMC give or take a little.

    • @echelonrank3927
      @echelonrank3927 5 months ago +3

      towards net zero nm by 2040

    • @muhammedowais
      @muhammedowais 5 months ago

      it's what happens when you write the script using ChatGPT, which only has info up to 2023 🤣

  • @vernearase3044
    @vernearase3044 5 months ago +128

    If you don't understand computer architecture and processor design, just make shit up based on what you _think_ is going on.
    Just adding transistors doesn't make the processor faster - adding decoders and making the pipeline deeper, with a lot of instruction prefetch, execution reordering and branch prediction, makes things faster; the transistor count increases to support these things.
    Nuvia was formed from ex-Apple silicon engineers, and the Snapdragon X was a joint project of ARM and Nuvia engineers. They collaborated to build a server chip, then Nuvia was acquired by Qualcomm - so the Snapdragon X is pretty much a bastard child of Apple.
    The M4 is built on TSMC's N3E node whereas the M3 was built on TSMC's N3B node - a custom node built for Apple when N3E wasn't going to be ready in time (and Apple _really_ wanted a 3nm processor). N3B is more complicated to manufacture and has lower yields, whereas N3E is on TSMC's official 3nm roadmap and is compatible with future nodes like N3P which will result in lower cost. M4 has higher memory bandwidth and is (I believe) Apple's first ARMv9 chip.

    • @andyH_England
      @andyH_England 5 months ago +2

      Well explained. I wonder what the repercussions would be if ARM versus Qualcomm were a win for the plaintiff?

    • @vernearase3044
      @vernearase3044 5 months ago +20

      @@andyH_England Microsoft ached for a processor which would compete with Apple Silicon - they released Windows for ARM and to their horror, the machines which ran it best were _not_ their own Surface laptops but Apple's 'M' family computers.
      So … Microsoft tasked their silicon proxy - Qualcomm - to come up with something that would remove this humiliation since they didn't have the silicon chops to accomplish their objective.
      Qualcomm has the ethical standards of an alley cat - they've been extorting the handset market for over a decade by insinuating their IP into cellular standards and promising to deliver it in a FRAND (fair, reasonable, and non-discriminatory) manner, but later turning around and selling their modems while charging first for the modem chip, second for a license to the IP in their modems, and thirdly a percentage of the enclosing device's _entire retail price._
      So when Microsoft called on Qualcomm to deliver _at any price,_ Qualcomm stood ready to answer the call.
      Now Qualcomm had the silicon expertise to create the SoC (System on a Chip) - they've been making 'em forever - what they _didn't_ have was the processor design chops to engineer a faster processor. Qualcomm had been building cell phone SoCs but their designs pretty much all used standard ARM reference cores. Qualcomm would take some ARM cores, add memory, a GPU, some cache and abracadabra: a new Snapdragon SoC was born. But what Microsoft wanted was beyond their expertise, so they scoured the market looking for a faster processor.
      Nuvia was formed by a bunch of ex-Apple silicon engineers, and they collaborated with ARM to design a new server processor. ARM had been wanting a new, fast server processor to compete with Intel Xeon processors in the data center, so they provided Nuvia with a cheap architectural license and collaborative services to design a new server processor core.
      When Qualcomm saw what Nuvia had, they acquired Nuvia to put their new processor into a laptop SoC. That's why Snapdragon X has no e-cores - you don't need e-cores in a server chip (though they _are_ handy to have in a laptop SoC). Engineering a corresponding performant e-core would take almost as much additional work as designing the new, powerful p-core.
      When ARM found out they were _pissed._ They'd handed Nuvia a cheap architectural license and collaborative services because they thought they were getting a server core out of the deal - but instead their baby was going into a laptop. If Nuvia or Qualcomm had come to them talking about building a new laptop processor they would've charged 'em _much more,_ but ARM discounted everything since they thought they were designing a processor to penetrate the server market.

    • @alexhajnal107
      @alexhajnal107 3 months ago +2

      _"making the pipeline deeper … make things faster"_ Until you hit a hazard and have to stall the pipeline or worse, mispredict a branch and have to flush the pipeline. There's a reason the Pentium 4 failed.

    • @vernearase3044
      @vernearase3044 3 months ago +2

      @@alexhajnal107 CPU design has come a lot further than it was back then.
      The M1 looks forward 690 instructions, and has a massive reorder buffer and eight decoders. It can schedule up to eight simultaneous instructions. And Apple has moved past the M1.
      M4 improves branch prediction and improves on the microarchitectural features of the M1.
      Also remember that ARM is a fixed length instruction set, whereas x86 is variable length. You have to do a full decode in order to determine where the next instruction in the pipe is with x86, and if you guess wrong you have to empty the pipeline.
      There's a reason why M1 moved so far ahead of the x86 pack in terms of both performance and efficiency, and M4 builds on that further.

    • @alexhajnal107
      @alexhajnal107 3 months ago +1

      @@vernearase3044 No matter how you look at it, beyond a certain point a deep pipeline is more a hindrance than a help. Sure, it can help improve clock speed (dictated by the slowest potential operation in a single stage), but the downside is more instructions in flight at a given time. Hit a hazard and, worst case, all in-flight instructions have to be abandoned; that's brutal for IPC.

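The depth-vs-flush-cost trade-off debated in the exchange above fits in a toy model: effective CPI is the ideal cycles-per-instruction plus the expected flush cost of mispredicted branches, which scales with pipeline depth. The issue width, branch fraction, and mispredict rate below are illustrative assumptions, not figures for the M1, the Pentium 4, or any real core:

```python
def effective_ipc(width, depth, branch_frac=0.2, mispredict=0.02):
    """Instructions per cycle under a simple misprediction-penalty model."""
    cpi = 1.0 / width                        # ideal: `width` instructions/cycle
    cpi += branch_frac * mispredict * depth  # expected flush cycles per instr.
    return 1.0 / cpi

shallow = effective_ipc(width=8, depth=14)  # wide core, modest pipeline
deep    = effective_ipc(width=8, depth=30)  # same width, much deeper pipeline

# Same predictor, deeper pipe: IPC drops even though the clock could rise.
print(f"depth 14: {shallow:.2f} IPC, depth 30: {deep:.2f} IPC")
```

This is the arithmetic behind both comments: deeper pipelines only pay off while the predictor keeps the flush term small.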
  • @ceateio
    @ceateio 5 months ago +206

    At 2:14 you're saying that a 15% speed increase per generation isn't "that much"? THAT'S MASSIVE for a single-generation jump.

    • @MoireFly
      @MoireFly 5 months ago +10

      15% is pretty normal; that's slightly worse than the average for successive AMD zen generations from zen 1 all the way up to zen 5 (on that last one believing AMD's claims, but they've been truthful on this front the past 4 gens, so it's pretty plausible). Qualcomm's gen-on-gen increases have also been in this ballpark - and yeah, we all know that intel has been "struggling" - but it's less so that apple is increasing their lead and more so that intel is falling ever further behind. If anything, the rest of the market appears to be catching up to apple.

    • @andyH_England
      @andyH_England 5 months ago +16

      @@MoireFly Intel 13th to 14th gen was a zero-gain CPU upgrade, and if you go back through the history of Intel's monopoly, 15% was rarely seen.

    • @MoireFly
      @MoireFly 5 months ago +3

      @@andyH_England Yes, but as explained in the comment you're replying to, that's not been the norm; it's just Intel, and even for them only for a limited period.

    • @rokor01
      @rokor01 5 months ago +4

      True, and kind of sad at the same time. Ten years ago this would have been kind of weak, especially in the mobile space; twenty years ago 15% would have been considered a generational refresh; and thirty years ago chip makers would not have released a generation with such low performance gains.

    • @sergioyichiong7269
      @sergioyichiong7269 5 months ago

      Apple can only increase transistors; soon the amount of transistors will make a lot of heat and the pros of using ARM will be nonsense.

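On the "is 15% much?" question in the thread above: generational gains compound, which a two-line check makes obvious:

```python
gain_per_gen = 1.15  # a "mere" 15% uplift per generation

# Compounding: five such generations roughly double performance.
for gens in (1, 2, 4, 5):
    print(f"after {gens} generation(s): {gain_per_gen ** gens:.2f}x")
```

Five "unimpressive" 15% steps multiply to about 2.01x, i.e. a doubling.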
  • @diebygaming8015
    @diebygaming8015 5 months ago +31

    You never mentioned the actual reason they don't just make arbitrarily large chips: increasing the size of the chip decreases the yield

    • @JeremyPickett
      @JeremyPickett 5 months ago +2

      That is true, and it isn't debatable for mainstream, consumer, prosumer, and even professional use (like dev machines or content creation). But there are some startups angling for the "you want an enormous chip? Hold my beer" market. 🙃 Cerebras, I seem to recall, just uses the whole wafer - which takes waaay more guts than I have. But the real question... can it run Crysis? (I'll see myself out)

    • @vespuccini
      @vespuccini months ago

      This is why M4 has trickled into every product by now. More binning for more products!

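The yield point made in the thread above is usually stated with the classic Poisson defect model, Y = exp(−A·D₀): the fraction of defect-free dies falls off exponentially with die area. A sketch with an illustrative defect density - the D₀ = 0.1 defects/cm² below is an assumption for the example, not TSMC data:

```python
import math

def poisson_yield(die_area_cm2, defect_density=0.1):
    """Fraction of dies expected to have zero defects (Poisson model)."""
    return math.exp(-die_area_cm2 * defect_density)

small = poisson_yield(1.0)   # ~1 cm^2 die
big   = poisson_yield(8.0)   # ~8 cm^2 Ultra-class die

# Making the die 8x larger cuts the good-die fraction roughly in half.
print(f"1 cm^2: {small:.0%} yield, 8 cm^2: {big:.0%} yield")
```

Binning partially defective dies into cheaper SKUs, as the reply notes, is how vendors claw back some of that lost area.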
  • @SpaceTimeAnomaly
    @SpaceTimeAnomaly 5 months ago +163

    The 5nm or 3nm is NOT the transistor size -- it is only the size of the smallest structure.

    • @TheWallReports
      @TheWallReports 5 months ago +4

      🎯💯 Exactly! It's the smallest feature size that the fabrication technology permits.

    • @jadoo16815125390625
      @jadoo16815125390625 5 months ago +15

      No, the line width for the 5 nm node is ~25 nm. "5" is only a marketing number

    • @MysterCannabis
      @MysterCannabis 5 months ago +20

      Not even that. There is nothing physically inside that is 3nm in size. Process nodes are the same size, but their geometry and architecture make them perform like theoretical 3nm regular planar transistors would. It's just a marketing term, made possible by the introduction of FinFET transistors.

    • @koenignero
      @koenignero 5 months ago +3

      Don't tell him, it will blow his mind

    • @Mikri90
      @Mikri90 5 months ago +2

      @@MysterCannabis Yep, the nomenclature lost any sense of relation to physical size a while back. It's basically a useless metric for the general public.

  • @khyleebrahh7
    @khyleebrahh7 5 months ago +111

    Bring back dislike counts, to stop misinformation

    • @IronKore
      @IronKore 5 months ago +2

      Amen!

    • @MuhammadbinYusrat
      @MuhammadbinYusrat 5 months ago +8

      Install the "Return of dislikes" Chrome extension.

    • @echelonrank3927
      @echelonrank3927 5 months ago

      @@MuhammadbinYusrat chrome is spyware

    • @sullivan912
      @sullivan912 4 months ago +2

      Currently 4.2k likes, 4.1k dislikes.

    • @khyleebrahh7
      @khyleebrahh7 4 months ago +2

      @@MuhammadbinYusrat That's all good on PC, but I use YouTube on iPhone, not PC, so I can't use that plugin

  • @amiltonfcjunior
    @amiltonfcjunior 5 months ago +9

    I usually don't dislike videos but this one deserves it. I currently own both an Intel i9-13900K and an Apple M3 Pro, so I know how they are in the real world.

  • @Noobtaco
    @Noobtaco 5 months ago +73

    Who calls it A.R.M? No one. It’s arm. 💪

    • @FlyboyHelosim
      @FlyboyHelosim 5 months ago +14

      Or "E-Light" for Elite. LOL

    • @jinchoung
      @jinchoung 5 months ago +3

      srsly. wtaf

    • @saurabh_tanwar
      @saurabh_tanwar 5 months ago +5

      Cause he's never seen a tech video talking about this thing before making the video

    • @vernearase3044
      @vernearase3044 4 months ago +1

      It originally stood for Acorn RISC Machine.

    • @williamsjm100
      @williamsjm100 2 months ago

      ❤ Acorn!! I learned to program on the BBC Micro and remember watching when ARM released the RISC … BTW this video is hot garbage. But there you go!!

  • @Suqrat400
    @Suqrat400 5 months ago +237

    He doesn't know anything about chip design or manufacturing. Very bad info in this video. I am an electronics engineer with 15 years of experience in the chip design field.

    • @parthpatel8532
      @parthpatel8532 5 months ago +16

      Ah yes, people can't lie on the internet. I am the lead engineer for Apple and this guy is right

    • @gnstallientood5007
      @gnstallientood5007 5 months ago +5

      20 years experience criticizing content here; feels great to be right and superior

    • @Mark_Williams.
      @Mark_Williams. 5 months ago +16

      @@parthpatel8532 Don't call people liars when you obviously aren't aware enough about the topic to instead agree with them. You end up looking the fool instead.

    • @cozimo64
      @cozimo64 5 months ago +6

      @@parthpatel8532 Ah yes, genuine people can't exist on the internet, specialists are a myth and everyone who makes content that validates my preferred narrative are correct.

    • @TheJmac82
      @TheJmac82 5 months ago +3

      @@parthpatel8532 He might be full of it, but he isn't wrong. I mean, almost everything in this video was incorrect. The one that sticks out the most to me was that 3nm is the size of the transistor. "Actually writing code for ARM is much easier due to fewer instructions"... I think that one takes the cake, with "A - R - M" computers coming in a close third.

  • @epicsynthwave
    @epicsynthwave 5 months ago +34

    It's "eleeete" not "e-light" lol

    • @daan3298
      @daan3298 3 months ago

      It is across the pond.

  • @gambaloni
    @gambaloni 5 months ago +25

    One thing you got wrong is the idea that a smaller nm process correlates to the size of the processor. It used to, back around the switch from 14nm to 7nm (give or take), but now it just means a more efficient production process, where the improvements come from the lithography (laser improvements, cleaner etching on the wafer, fewer defects, etc.). The winner in this nm production race is TSMC, the kings of EUV and FinFET production.
    However, there's still one more hurrah for Moore's law: the Gate-All-Around Field-Effect Transistor (GAAFET). This is why Intel, Samsung and TSMC are all pushing so hard to build foundries based on it; it will be a 'reset' of sorts, where they once again compete to take clients by trying to be the first to produce it. Intel is also pushing for backside power delivery (which they market as PowerVia), which might actually push them into the lead for processors. TSMC is also working on backside power delivery, and I think Samsung is looking into it.
    Considering Apple's desire to be first on a new node, it should be interesting (and possible) if they have some future A/M chips made by Intel (Intel's CEO is very interested in making Apple a customer). We'll see what happens in 2025, when we expect GAAFET to show up.

    • @echelonrank3927
      @echelonrank3927 5 months ago

      i don't think it's just power delivery - backside *everything* delivery. it should help by attaching the whole working surface of the chip more directly to cooling,
      and therefore help increase the capacity for bloatware and computational waste

  • @tutacat
    @tutacat 5 months ago +13

    It's not TSMC's fault that quantum tunnelling/leakage exists.

  • @itstehgamer
    @itstehgamer 5 months ago +29

    Arthur *Whiner* seems like a pretty fitting name tbh

  • @dpptd30
    @dpptd30 5 months ago +21

    Correction: the X Elite laptops are NOT the first ARM Windows laptops. There were dozens of Windows laptops with an ARM SoC before the X Elite, all the way back to the Surface RT; this is just ANOTHER Microsoft attempt to bring Windows to ARM, and so far they still haven't delivered. It still has multiple app-compatibility issues - most Adobe professional apps, for instance, just aren't even available. Calling this new competition is very misleading; it'd be like calling the Surface Pro X from just a few years ago a competitor to the M1 MacBooks. They've only caught up in performance; they haven't caught up in the thing that has actually been the problem for Windows on ARM for years: compatibility.

  • @BrentLeVasseur
    @BrentLeVasseur 5 months ago +148

    I’m watching this video on my new M4 iPad Pro. It plays YouTube super fast! I was able to watch this video in half the time!😂

    • @LazyGrayF0x
      @LazyGrayF0x 5 months ago +8

      I whip out my Intel Mac during the Super Bowl so I don't miss half of the halftime show like I would watching it on my M1

    • @Hart-en-Ziel
      @Hart-en-Ziel 5 months ago +1

      Even watching it 50% of the time was a waste of time

    • @LazyGrayF0x
      @LazyGrayF0x 5 months ago

      @@Hart-en-Ziel word. 2000’s were the bomb. Even dilly dilly was great. Now, ehh

    • @rickwookie
      @rickwookie 3 months ago +2

      You only wasted half as much of your life as the rest of us did, then.

  • @patrickkayser
    @patrickkayser 4 months ago +4

    The descriptions of the technology are incorrect... do more research

  • @olampros321
    @olampros321 4 months ago +12

    You have no idea what you're talking about. It's sad.

  • @rajkarayadan8080
    @rajkarayadan8080 5 months ago +4

    "competition is catching up faster" - yes, but so? Before the M series, Apple was using Intel chips just like their competition.

  • @HeavenSevenWorld
    @HeavenSevenWorld 5 months ago +65

    Snapdragon X chips aren't anywhere close to the M3 in perf/W, both at idle and especially under heavy load - research first instead of making assumptions based on marketing materials. Also, "almost no loss in performance" when executing x86 apps on ARM under Windows is utter bullshit that is misleading your viewers and may lead them to make the wrong choices.

    • @mrrolandlawrence
      @mrrolandlawrence 5 months ago

      The biggest issue is MS; they are the kings of making things overly complicated. I'd wager emulated mode used 50% more energy too, & that the selected programs were ones that did well on perf - not a fair sample of the market.

    • @sergioyichiong7269
      @sergioyichiong7269 5 months ago +4

      Do research yourself instead of repeating info you read somewhere, with no personal evidence that the Snapdragons are not faster.
      You just can't run a test yourself, so don't talk about data you don't know is real.
      You don't know what research is.
      Have you checked for yourself, in a serious test, that they're slower? I'm confident the answer is NO.
      Research is not watching MaxTech or Marques.

    • @chidorirasenganz
      @chidorirasenganz 5 months ago

      @@sergioyichiong7269 they’re slower. Deal with it

    • @DimitarBerberu
      @DimitarBerberu 5 months ago

      Snapdragon X Elite focuses on AI (multiprocessing & more memory, as needed for AI). M3 is for past-gen single processing. Emotions will not save Apple. Huawei is already getting ahead with its complete HarmonyOS NEXT solution.

    • @HeavenSevenWorld
      @HeavenSevenWorld 5 months ago

      @@DimitarBerberu The Oryon cores in the Snapdragon X Elite were designed for a server chip (by engineers who literally stole the recipe for a fast ARM core from Apple), so it's power-hungry (by ARM standards) and gets crushed in every way by the M3 Max, which can be configured with up to 128GB RAM and has a much better GPU for AI use cases in general. So get your facts straight; especially in the case of HarmonyOS, which no one cares about.

  • @popquizzz
    @popquizzz 5 months ago +7

    No, no, NO! At 6:37: the 3nm process does not mean that the transistor is 3nm in size. That is inherently wrong and deceiving. In fact, a transistor in a gate array and a transistor used in memory storage could both use the same 3nm process yet be very different in size.

  • @gustavinus
    @gustavinus 5 months ago +5

    ARM is just about as old as x86. It is becoming king because of SoCs: since it draws little power and produces little heat, it is better suited for embedding in SoCs.

  • @froschfreak1699
    @froschfreak1699 5 months ago +28

    I think we shouldn’t expect a revolution with every minor processor update. The M2 gave me Nanite support in Unreal Engine 5; the M3 added hardware ray tracing. Those are important steps forward for a lot of users. I am anything but disappointed.

    • @newyorkcityabductschild
      @newyorkcityabductschild 5 months ago +5

      these non-tech-savvy YouTubers have no clue about the actual advancements; they just expect raw power, not refinement.

    • @mrrolandlawrence
      @mrrolandlawrence 5 months ago +2

      Totally. But I'd also say the perf bumps are actually quite good; recalling the previous Intel Mac speed bumps, I can't remember them being too impressive. I do remember sub-2hr battery life when editing video on my Intel MBP. I will get the M4 so I can run local LLMs for analysis of financial datasets; otherwise I'm still stoked with my M1s.

  • @LukaPetrovic84
    @LukaPetrovic84 5 months ago +21

    Maybe check into how 'Elite' is pronounced...

    • @mattelder1971
      @mattelder1971 5 months ago +3

      Glad I'm not the only one annoyed by his pronunciation.

    • @SolarLantern424
      @SolarLantern424 5 months ago +2

      Maybe just play Elite for a while; it would be a good start.

    • @DimitarBerberu
      @DimitarBerberu 5 months ago

      Most words in English are not pronounced as written. Crappy spelling ;)

    • @LukaPetrovic84
      @LukaPetrovic84 5 months ago +1

      @@DimitarBerberu My native language is Serbian; we read exactly as written and pronounce it the same way. I know exactly what you mean...

    • @DimitarBerberu
      @DimitarBerberu 5 months ago

      @@LukaPetrovic84 I speak all Yugoslav languages + Aromanian & Esperanto - all phonetic (why stay stubborn & complicate spelling ;)

  • @rmzzz76
    @rmzzz76 4 months ago +4

    @10:37 "TSMC can not go lower than 3nm". Not true. Fact: trial production of 2nm chips exclusive to Apple's m series already underway, with production expected second half of 2025
    This little snippet of information, invalidates much of the point you're trying to make. Flagging.

    • @_IMNNO
      @_IMNNO 2 months ago

      Exactly. The M5 is rumored to use N3P, and the M6 will use N2.

  • @bujin5455
    @bujin5455 5 months ago +43

    The analysis of what Apple is up against is pretty solid. But the analysis regarding how close the Snapdragon is, is pretty optimistic to say the least. I also think the industry at large seriously underestimates just how hard it is to do a complete architecture switch. Apple is the only company on earth to do it successfully, and they've done it successfully multiple times, and at this point, they make it look easy.
    Microsoft on the other hand has tried many times, and has yet to do it successfully. Largely this is because of mixed incentives. Microsoft doesn't control the whole stack, one of their largest areas of strength is their legacy code base, and vast software compatibility, especially for legacy systems, which M$ caters to more than almost any other company.
    Also, these Snapdragon chips do not provide Apple Silicon like performance while providing AS like power efficiency (which was the game changer), they're actually as bad as x86, while not providing x86 performance for all that x86 software out there, and MANY software houses just aren't going to be incentivized to update their code, assuming they're even still around to do that. Additionally, there's loads of important things they can't run, many video games for instance (which is a forcing function in the industry), and other productivity titles as well.
    This is going to be a VERY difficult transition, because you don't have a single hardware/software stake holder who can manage the whole thing, and force the move forward. The real question isn't whether Apple can maintain their lead, it's can the PC industry actually manage to switch? And of course, AMD and Intel will be doing nothing to help push the industry in that direction, in fact they're hugely vested in making sure it doesn't happen. It's quite possible that vanilla ARM PCs are going to fail to gain traction, though Apple's success with the M-series chips does provide a measuring stick which will help incentivize people to try to make it happen, so maybe, but I don't think Apple has to "worry." After all, what Apple really got out of the move, was being the master of their own destiny, and being able to bring all of their software under a single architecture, and all of the strategic and economic advantages of scaling their own silicon. Kicking the x86's butt was just the cherry, not the icing, let alone the cake.

    • @SteelyEyedH
      @SteelyEyedH 5 หลายเดือนก่อน +4

      Thanks. Good summary.

    • @DoublePlus-Ungood
      @DoublePlus-Ungood 5 หลายเดือนก่อน +3

      As much as MS seems to hate legacy they ARE legacy. Of course they get weak in the knees at the thought. Apple can do it cuz Apple can put out a new $2400 microwave that doesn't even fit a pizza slice with the wrong plug and people would wait in line to buy it.

    • @DimitarBerberu
      @DimitarBerberu 5 หลายเดือนก่อน

      MS is S/W company & much better than Apple in that. Apple is niche Hardware co & much better on that. Huawei is coming on top of this marketing jungle with better Human Capital & Asia behind their back. Watch for HarmonyOS Next ;)

  • @shantooobeg
    @shantooobeg 5 หลายเดือนก่อน +5

    This is what happens when a farmer tries to be a pilot for a day. That's the kind of information he is providing in this video.

  • @rozetked
    @rozetked 5 หลายเดือนก่อน +14

    I don't get it

    • @TimssTims
      @TimssTims 5 หลายเดือนก่อน

      hahahaha

    • @Name-tn3md
      @Name-tn3md 5 หลายเดือนก่อน

      Learn English

    • @xivxvi263
      @xivxvi263 5 หลายเดือนก่อน +1

      If it weren't for this comment, I would have already thought bloggers get templates for their videos from somewhere.

    • @rupertgrimsby
      @rupertgrimsby 3 หลายเดือนก่อน

      The guy got "inspired" 🥴

  • @gw7624
    @gw7624 หลายเดือนก่อน +1

    The fact that this guy's complaining that the jump from M1 to M2 wasn't as big as the jump from Intel to M1 tells me everything I need to know about this video.

  • @rsdotscot
    @rsdotscot 5 หลายเดือนก่อน +4

    The transistors are not 3nm, it's just called the '3nm process'. Anything smaller than ~7nm and you begin running into quantum tunnelling errors.

  • @Hardwaregeekx
    @Hardwaregeekx 5 หลายเดือนก่อน +5

    Personally, the M1 works just fine for me. For me, long battery life, low heat, low power consumption is where it is at in a notebook. The increasing power consumption and heat to the point where you actually need a fan is a real turn off for me.

    • @YannBOYERDev
      @YannBOYERDev หลายเดือนก่อน

      What? Yeah, the M1 is good, but a proper cooling solution is required no matter what if you want full performance. Even the M1 needs a fan; the MacBook Air M1 doesn't have one, so the CPU throttles (reduces performance and power consumption) on heavy workloads to prevent it from burning. I know, I have a MacBook Air M1. If a CPU uses 65W you'll need a fan, even at 15W. Smartphone CPUs consume 2-6W and still throttle because of the lack of fans.

    • @Hardwaregeekx
      @Hardwaregeekx หลายเดือนก่อน

      @@YannBOYERDev What do you need the performance for?

  • @hyksos2
    @hyksos2 4 หลายเดือนก่อน +10

    Arthur, you really need some help with 'tech' proofing; many things you said are wrong:
    - ARM vs x86: modern x86 processors no longer have dedicated circuitry for each instruction; they decode it into simpler instructions. Btw, please pronounce it "Arm", like a single word. And the reason Arm is superior to x86 is mostly the way it handles memory access.
    - Process density: TSMC N5, N3, N2, etc. are marketing names; they do not represent the real transistor size and can't be used to deduce transistor density. But between each major number reduction there is a technology leap: a new type of gate, an extra patterning mask, etc. The reason the M3 was made on the N3 process is that Apple did a deal with TSMC to take 100% of that process's production; TSMC was struggling with it and wanted to reduce the number of clients on it to optimize the process for a single client. Apple took the deal because it still gave better results than N5 and was good for marketing.
    - AI TOPS: it is not the data set that is a different size, it is each data element. This is a HW implementation detail, and is why the previous-generation M chips can't double their performance by using a smaller data size. Btw, not all AI tasks can use such a small data size, so it is still pertinent to compare FP16 performance: simply divide the X Elite and M4 numbers by 2. But for AI tasks that can work with int8, the X Elite really is 2.5x better than the M3 (and 4x the M1). I really don't understand why Apple did not put the A17's NPU in the M3; my guess is that the M3 chip was designed before the A17 (even if it was released after) to run production tests on the N3 process and de-risk A17 production.
    I know that 99% of your listeners won't notice any of these errors, but they can irritate many technical people :)
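The TOPS normalization described in the comment above can be sketched numerically. A minimal sketch, assuming the commonly quoted marketing figures of roughly 18 FP16 TOPS for the M3's Neural Engine and 45 INT8 TOPS for the Snapdragon X Elite (these figures and the helper name are assumptions for illustration, not measurements):

```python
# Sketch: normalizing NPU TOPS figures quoted at different element widths.
# A TOPS number quoted for INT8 counts twice as many operations as the
# same engine would deliver at FP16, so quotes must be normalized first.

def to_fp16_tops(tops: float, dtype: str) -> float:
    """Convert a quoted TOPS figure to an FP16-equivalent number."""
    scale = {"fp16": 1.0, "int8": 0.5}  # halve INT8 quotes, per the comment
    return tops * scale[dtype]

# Commonly cited marketing values (assumptions, not measurements):
chips = {
    "Apple M3 NPU": (18.0, "fp16"),
    "Snapdragon X Elite NPU": (45.0, "int8"),
}

for name, (tops, dtype) in chips.items():
    print(f"{name}: quoted {tops} {dtype.upper()} TOPS"
          f" = {to_fp16_tops(tops, dtype):.1f} FP16-equivalent TOPS")
```

On these assumed figures the FP16 gap is 22.5/18 = 1.25x, while for INT8-capable workloads (which, per the comment, the M3's engine cannot run at double rate) the quoted 45/18 = 2.5x holds.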

  • @Jenny_Digital
    @Jenny_Digital 2 หลายเดือนก่อน +1

    Sir, I don't believe you know what you're talking about. The AMD x86-64 architecture is also on TSMC's leading edge, Intel about crapped itself, and Qualcomm is using too much power while crippled by a lack of goodwill and decent devkits. It was noted that clever design allowed AMD to pack more cache into a smaller space by altering the TSVs and changing their routing, so new innovations could conceivably help Apple despite hitting TSMC's limits.

  • @kimeraevent
    @kimeraevent 5 หลายเดือนก่อน +16

    Comparing the base M series SoC to Qualcomm's top-tier Elite ARM SoC is wild. Do you hear yourself? You may as well compare a Ryzen 3 to a Ryzen 9 or an Intel i9. You're comparing the weakest version of the SoC in a Mac to the strongest version of the competitor's SoC. What are you going to say next, that Rockchip SoCs are competition for the Snapdragon 8s?
    The Qualcomm X Plus is barely comparable to the M1 Pro from 3 years ago. The top-end X Elite is on par with the M3 Pro, and that SoC is all performance cores. There is no actual understanding of the comparisons in this video.

    • @nikhilt497
      @nikhilt497 5 หลายเดือนก่อน +5

      The price segment is what matters, x elite and plus target the base m3 segment

    • @rainmannoodles
      @rainmannoodles 5 หลายเดือนก่อน

      It’s also true that even though the X Elite has good CPU performance (at least compared to the MacBook Air base model) its GPU is really weak. The M chips are more well balanced.
      I’m glad to see competition, but it just shows how far the Windows PC market has to go. Apple still has a pretty significant lead.

    • @Filtersloth
      @Filtersloth 5 หลายเดือนก่อน

      @@nikhilt497 the price might matter most to you, but not to everyone or every business.
      If I want the equivalent of an M2 Ultra in a Snapdragon chip, which one would I buy?
      Is there a roadmap from Qualcomm for a chip that will suit the needs of high-end video editing?
      There are a lot of businesses with money that will pay for equipment that suits their needs. They pay for it because that's what they need.

    • @Filtersloth
      @Filtersloth 5 หลายเดือนก่อน

      @@rainmannoodles actually I think the single-core performance of the M3 is better than the Snapdragon X Elite's.
      But everyone is doing benchmarks vs an M3 MacBook Air, which only has 4 performance and 4 efficiency cores, while the X Elite has 10 performance cores, I think.

    • @iamwisdomsky
      @iamwisdomsky 5 หลายเดือนก่อน

      @@nikhilt497 what's the price for if you can't even use it for 100% of things? There are a lot of apps right now that do not work with Windows on ARM, even with Prism.
      On a Mac, on the other hand, you are guaranteed that everything works.
      I'd rather stick to my Mac for my peace of mind.

  • @MStoica
    @MStoica 4 หลายเดือนก่อน +2

    I’ve only clicked on this video thumbnail because I had some leftover popcorn. But halfway through it I am amazed that the author hasn’t pulled it down yet… he can’t be serious 😂

  • @IOOISqAR
    @IOOISqAR 5 หลายเดือนก่อน +4

    Those Qualcomm chips each have 12 performance cores; you can't compare those to the baseline M3.

    • @BenjaminSchollnick
      @BenjaminSchollnick 3 หลายเดือนก่อน +1

      That's been my biggest complaint with those reviewing the Snapdragon Elite X and the M series chips: they are comparing apples to oranges. Even when they take the time to match core counts, the Elite X doesn't have efficiency cores, so the M series device is going to be slower. If it's not, then there's something seriously wrong with the Elite X.
      So when I've seen benchmarks showing them fairly close, I'm surprised reviewers don't realize that means a significantly slower M series processor just trounced something with a third more processing power.

  • @lundsweden
    @lundsweden 3 หลายเดือนก่อน +1

    Microsoft actually tried using ARM chips in portable devices more than a decade ago. It was a total failure, so Apple certainly wasn't the first to try using ARM chips.

  • @TransCanadaPhil
    @TransCanadaPhil 5 หลายเดือนก่อน +5

    I've grown out of caring about minutiae like this. Still using an Intel i5 iMac from 2015 to edit my Final Cut videos; works fine. I wonder about this new class of tech enthusiasts who seem more interested in shopping for a shiny new object every year rather than really learning and using their gear. I own a piano that's 40 years old; works fine. I don't salivate about replacing it every year or claim (as tech journalists often do) that Steinway or Yamaha is "finished" or "dead" because people aren't rushing out to replace their perfectly good pianos every year. The computer industry needs to become more like every other good: long-lasting products are considered "good quality" that people want to keep and not constantly replace. There's just this odd "immaturity" that pervades the tech journalism sector; a lack of life experience and long-term maturity. It's always like listening to a 6-year-old child opine that the latest piece of candy is the "greatest ever" and he must have it yesterday.

    • @psyker4321
      @psyker4321 5 หลายเดือนก่อน

      Yep, no reason for them not to continue with AMD dedicated GPUs. Now their OS lags like hell on laptops and even the M2 Mac mini, meanwhile my 2017 MacBook Pro is smooth and more usable.

    • @top0657
      @top0657 3 หลายเดือนก่อน

      I do agree that getting a new computer too often is a huge waste of money for the majority of regular consumers, but your piano analogy is just terrible. Sure, I own a skillet that's over 60 years old and an axe that's closer to a century old. Why? Because those are simple products that have not really evolved in decades and provide the exact same performance as their 2024 counterparts. Computers are way different, and there are actual (and very noticeable) measurable gains in performance being made quite rapidly that SOME people can benefit from greatly, mainly in professional use. For example, my previous work computer was a 2019 Intel MacBook with a maxed-out i9 processor, and the development environment I work with took 20 minutes to boot up, with the code test suites taking over 2 hours to run. Now that my computer has been upgraded to an M3 Max laptop, the dev environment boots up in under 2 minutes and the test suite runs in about 20 minutes. I'd estimate that on average I save around 2-3 hours daily that I previously spent twiddling my thumbs, time I can now spend on actual coding. Hopefully this helps you understand why your piano analogy is not relevant at all. You surely won't learn to play the piano any faster on a brand-new piano vs. a 40-year-old one.

    • @CraigCruden
      @CraigCruden 2 หลายเดือนก่อน

      I moved to an M1 Max - and that obliterates the last Intel based Mac laptop... Though if you are happy sitting there waiting and have nothing that really needs to be done quickly - then there is no reason to upgrade. I won't be upgrading my laptop anytime soon either now since I am failing to fully utilize what I have most of the time (the one I had before was just frying itself to keep up to the workload... no problem with that these days). On the other hand, I went from an iPhone 4S to iPhone 11 and won't be upgrading that until at least iPhone 18 because I don't use it for much anyway.

  • @C-u8c
    @C-u8c 3 หลายเดือนก่อน +1

    17:31 - "dubdub-you-see" / "feeve-tin" - the pronunciation is hilarious every now and then

  • @RaniRani-zt2tr
    @RaniRani-zt2tr 5 หลายเดือนก่อน +32

    The M1 MacBook Air is still impressive and I'm trying to get one new now

    • @mavfan1
      @mavfan1 5 หลายเดือนก่อน +2

      what reasons are there that you have not succeeded?

    • @RaniRani-zt2tr
      @RaniRani-zt2tr 5 หลายเดือนก่อน +4

      @@mavfan1 money reasons🤣😂

    • @yourlocalriri123
      @yourlocalriri123 5 หลายเดือนก่อน +4

      700 dollars new from Walmart is a steal!

    • @Juanisheremdfk
      @Juanisheremdfk 5 หลายเดือนก่อน +1

      It is still the best for its price-to-benefit ratio. Still using it now.

    • @RaniRani-zt2tr
      @RaniRani-zt2tr 5 หลายเดือนก่อน

      @@yourlocalriri123 I know bro

  • @Elkarlo77
    @Elkarlo77 5 หลายเดือนก่อน +2

    A few things:
    1) x86 processors have been RISC processors since 1992, with a CISC interpreter up front. The last development of the x86 stage was in 2011 with the SSE4.2 iteration of those chips, which is for media processing. That makes them much more complicated to program in Assembler, but that's what compilers and higher-level languages are for.
    2) The problem Apple faces is the point of diminishing returns from shrinking. Down to 10nm everything still profits. But going down to 5nm, only compute units gain 60% more efficiency; memory cells only 40%, I/O parts only 20%. Going even lower, this gets more and more pronounced. That's the reason AMD and now Intel are producing chiplets: they keep some parts at 6nm and 12nm to keep them cheap, while other parts are produced at 7/5nm and now 3nm. The ARM architecture depends massively on cache memory in its pipelines for good performance, and that's the problem the M chips now face: the performance boost the M chip saw came from a restructuring and a lot of cache. That was a brilliant move, but the performance relies on the cache in the chip, and as cache doesn't improve much from shrinking, it is the bottleneck, so Apple needs to put in more cache to get the performance improvement they want. But memory is one thing: slow and hot. And Apple is at the balancing point where increasing the cache costs more and more power with less and less gain, so more and more needs to be done to balance these problems out.

  • @ronkemperful
    @ronkemperful 5 หลายเดือนก่อน +52

    Great review. Eventually the laws of physics will be the limiting factor for chip manufacturing regardless of who makes them. The next step will have to be rewriting code for operating systems in general. I remember when a graphical operating system was able to run on just 4 MB of RAM; now 4 gigabytes of RAM is the minimum for Windows 11. Features keep being added to every OS, but in reality a lot of deadwood and bloat has also been added. Computers have increased in speed and bandwidth since the Mac came out in 1984 and Windows 3.0, but only so much can be improved without running into the laws of physics... atoms cannot be made smaller, but operating systems can.

    • @jeffersonmp4
      @jeffersonmp4 5 หลายเดือนก่อน +3

      Quantum computing maybe?

    • @jimtipton8888
      @jimtipton8888 5 หลายเดือนก่อน +6

      What a great comment! What would it be like if the industry focused on operating systems and software?

    • @Pipe_RS91
      @Pipe_RS91 5 หลายเดือนก่อน +1

      ​@@jeffersonmp4That is quite far from consumer computers right now.

    • @minddrug709
      @minddrug709 5 หลายเดือนก่อน +2

      Wait until we go subatomic

    • @axlrose357
      @axlrose357 5 หลายเดือนก่อน +1

      Yeah, mediocre VGA was 640x480 with 8-bit color. Now it's 4K with 24-bit, so computers need a lot more memory to deal with it.

  • @emonahmmed9192
    @emonahmmed9192 5 หลายเดือนก่อน

    Brother, could you please tell me whether I should use a screen protector on my MacBook M3 Pro? I mean, does it really help protect the screen? Please lmk 🙏🏻

  • @PhoenixNL72-DEGA-
    @PhoenixNL72-DEGA- 5 หลายเดือนก่อน +4

    I remember reading news on Acorn starting their design on the Acorn Risc Machine for use in their Archimedes line of home computers back in the 80s. ARM has come a long way since then...

  • @jurry_murry
    @jurry_murry หลายเดือนก่อน +1

    This aged like milk 😂 M4 benchmarks blow the competition away

  • @talldarkstrangerpr
    @talldarkstrangerpr 5 หลายเดือนก่อน +10

    The M1 MacBook Pro Max is still kicking butt. I wouldn't take Apple out of the equation yet. It took the competition four years to catch up with them. We'll see.

    • @newyorkcityabductschild
      @newyorkcityabductschild 5 หลายเดือนก่อน +5

      well 4 years and they did not quite catch up, they just threw more cores at it

    • @yayinternets
      @yayinternets 5 หลายเดือนก่อน

      Agreed. I have the last version of the MBP M1 Max and it’s still great. Will keep it for a long time just like I have my previous ones. Easily get 2-3x more life from these than I would a PC laptop.

    • @psyker4321
      @psyker4321 5 หลายเดือนก่อน

      Does the OS lag like hell like my M2 mac mini?

    • @talldarkstrangerpr
      @talldarkstrangerpr 5 หลายเดือนก่อน

      @@psyker4321 not at all. How much memory yours has?

    • @psyker4321
      @psyker4321 5 หลายเดือนก่อน

      @@talldarkstrangerpr 16GB but cannot scroll smoothly on a 4k monitor, so i just use it for cpu-intensive build tasks.

  • @namd3
    @namd3 5 หลายเดือนก่อน +2

    Fun Fact: Not all of the transistors on the chip will be 3nm

  • @alonsolugo2974
    @alonsolugo2974 5 หลายเดือนก่อน +3

    When you go lower than 3 nm you start entering the realm of quantum effects, and that stuff is messy

  • @MeinDeutschkurs
    @MeinDeutschkurs 5 หลายเดือนก่อน +1

    Are the TOPS at the apple chips based on ANE (Apple Neural Engine)? I just use GPU on M2 Ultra.

    • @CraigCruden
      @CraigCruden 2 หลายเดือนก่อน

      TOPS are based on the Neural Engine, I believe, and the M3 TOPS benchmark != Snapdragon's (single- vs. two-byte elements), so the M3 numbers would have to be multiplied by 2 for a fairer comparison, if I am correct.

    • @MeinDeutschkurs
      @MeinDeutschkurs 2 หลายเดือนก่อน

      @@CraigCruden not only, but also. You can produce inference with the GPU, NPU/ANE, and even the CPU. I found this out about a month ago.

  • @jeffchastain2977
    @jeffchastain2977 5 หลายเดือนก่อน +16

    The difference between the Intel chips and the M1 was always going to be a huge jump. But when you are working within the same class of architecture, gains are going to slow. Anyone who upgrades at every new release is an idiot. But my M1 MacBook Pro was a big jump over my Intel i9 MacBook Pro, and my M3 Pro MacBook Pro is a big jump ahead of my M1 MacBook Pro. To call Apple "done for" is ridiculous. If Snapdragon lives up to its hype/specs, and they put it into something more durable than the crappy, fragile Windows-based computers that make up that market today, and Microsoft can make its operating system into something actually as great as macOS, then they might give Apple a run for its money. Until then I will stick with Macs.

    • @cogmission1
      @cogmission1 5 หลายเดือนก่อน +1

      I also commented about this. Using innovative ARM chips to run a Windows operating system is like putting lipstick on a pig. 🙂

  • @FougaFrancois
    @FougaFrancois 4 หลายเดือนก่อน +2

    Full of inaccuracies and misunderstandings of the CPU/SoC market.

  • @rommellagera8543
    @rommellagera8543 5 หลายเดือนก่อน +11

    An old dev here. Last year I bought my son a Mac Mini M2 8GB/512GB at 43,000 PHP.
    I recently bought a Beelink SER7 PC with a 7840HS, 32GB/1TB, at 30,000 PHP; much smaller, quiet (I can hardly hear the fan), and probably as powerful as the M2 for most use cases.
    As an added bonus, years down the road I have the option to upgrade the memory to 64GB or use a 2TB or 4TB SSD. Apple should have made the Mini upgradable since it does have a bigger chassis.

    • @mrrolandlawrence
      @mrrolandlawrence 5 หลายเดือนก่อน +1

      Apple spent ages on the Mac Pro but could not get DIMM memory to work fast enough to compete with the on-chip DRAM. Distance matters.

    • @chidorirasenganz
      @chidorirasenganz 5 หลายเดือนก่อน

      The GPU is 50% slower in raw compute and 3d rendering even more. In CPU tasks they are mostly on par. M4 will be significantly faster though

  • @JoTokutora
    @JoTokutora 5 หลายเดือนก่อน +2

    I'm still fine with my 2020 iMac with the 10900 and the RX 5700. No need to upgrade.

  • @francescoatria1086
    @francescoatria1086 5 หลายเดือนก่อน +4

    I watched half of this video and it gave me a headache.
    1. You clearly don't understand instruction sets.
    2. You talk as if the X Elite chips were even remotely comparable to the Apple M series, when literally every tech reviewer who didn't make sponsored content has been shitting on this chip left and right (left being Microsoft and right being Qualcomm).
    3. You talk about tech limitations as if they affect or will affect only Apple, and that is obviously not true, since TSMC has nearly 60% of the chip production market and a technological lead over its competitors.
    4. What Apple has done with the M series affected the industry as a whole, making AMD and Intel seriously care about efficiency.
    5. Even if the chips and the translation layers (Rosetta for Apple, Prism for Microsoft) were equal in performance, which they absolutely are not, the M1 devkit came out one year before launch; the X Elite one came AFTER the product launched (and no, the devkit from two years ago was a lot different, so it's not comparable).
    Anyway, congrats: this is the first time ever (10+ years) that I've left a comment without finishing the video.

    • @mattelder1971
      @mattelder1971 5 หลายเดือนก่อน

      You missed one headache inducing part: his idiotic pronunciation of "elite".

    • @francescoatria1086
      @francescoatria1086 5 หลายเดือนก่อน +1

      @@mattelder1971 i'm not a native english speaker myself and my pronunciation is horrible so i can't really judge others

    • @mattelder1971
      @mattelder1971 5 หลายเดือนก่อน +1

      @@francescoatria1086 That's a fair point, but if someone is going to all of the trouble of researching the topic, and producing a video IN ENGLISH on a subject, you'd think they would at least make an attempt to pronounce one of the main topics of discussion correctly. But given that so much else was wrong in this video, I'm thinking he's just someone reading a script that someone else wrote for him, and probably doesn't even realize how incorrect everything he's saying is.

    • @francescoatria1086
      @francescoatria1086 5 หลายเดือนก่อน

      @@mattelder1971
      Let's put it this way: when a video is this factually incorrect, a pronunciation error is so far in the background that you can't catch it without binoculars, and certainly not ones built with the """knowledge""" in this video 😂😂😂

  • @EnriqueRivera-sz2ph
    @EnriqueRivera-sz2ph 4 หลายเดือนก่อน

    I showed this to my EE-292L professor and he laughed pretty heartily. He was slightly bothered that over 100k people saw this and potentially believe the speaker knew what he was talking about.

  • @KC-bv9kf
    @KC-bv9kf 5 หลายเดือนก่อน +9

    This made it easy to block the channel

  • @patrickcallahan2885
    @patrickcallahan2885 2 หลายเดือนก่อน

    I have the M3 Pro, 11/14, 18gb, 1T SSD and completely happy with it

  • @JAFOpty
    @JAFOpty 5 หลายเดือนก่อน +5

    I'd phrase it like: "They went with 3nm to be the first AND to have something to show in the keynote with a higher number." I seriously doubt most Mac users know or care about the technical aspects; they just see M3 > M2.

    • @davidbiagini9048
      @davidbiagini9048 5 หลายเดือนก่อน +3

      A big part of Apple's show is for Wall Street, not the users. Apple fanboys and fangirls will buy anything Apple releases - it's Wall Street that really matters to Apple.

  • @MackGuffen
    @MackGuffen 5 หลายเดือนก่อน +1

    My Ryzen 5900 became a lamp stand once I received my M1 MacBook Pro. Even if the updates are only 15%, which I don't need yet, that's still pretty good; plus, Macs just run, period!

  • @roccociccone597
    @roccociccone597 5 หลายเดือนก่อน +3

    If you've tested the X Elite laptops you know that under actual load they last about as long as intel or AMD chips. So they're really not that much more efficient and lag a lot behind Apple doing the same heavy tasks. And besides Windows on ARM is still a crap show. Most reviewers didn't show off the jank, but when your device literally randomly black screens and has to be rebooted, or the keyboard magically stops working, it's quite obvious that these devices aren't market ready and were rushed to market before intel and AMD had a chance to release their new chips.
    Anything that uses vector extensions so any variant of the AVX instructions is not going to work and won't work for a while, effectively translating vector instructions is very difficult.
    Those AI bloat features don't do anything to enhance your experience and are gimmicks at their very best.

  • @PaskalS
    @PaskalS 3 หลายเดือนก่อน +1

    My guy, you realize that Qualcomm and other chip designers also use TSMC's foundries, right? They all have the same manufacturing restrictions. Samsung and Intel have some similar capabilities, but nowhere near the volume that TSMC has.

  • @thewelder3538
    @thewelder3538 5 หลายเดือนก่อน +6

    You can tell you're a Mac user when you start talking about instruction sets without having the slightest idea what you're talking about.
    x86 processors don't have a legacy instruction set. They simply have an instruction set. The only difference between earlier x86 processors and the latest ones is that the later ones have additional instructions added to the base instruction set, stuff like SSE, SSE2, AVX, etc.
    ARM processors have exactly the same thing, just not to the same extent. The main difference between x86 and RISC-based processors is how they work, not their instruction sets. On RISC everything is done on chip, so you get loads of registers and access to memory itself is somewhat limited; whereas with a CISC processor like x86, access to memory is relatively quick and so you get many fewer registers.

    • @vhateg
      @vhateg 7 วันที่ผ่านมา

      The goofy part is the concept of microcode, the thing that translates x86 instructions into the actual instructions implemented in the CPU.
      For decades, x86 CPUs have been just RISC CPUs (like ARM CPUs) with microcode translating the instructions for them. Something like an integrated hardware-level Rosetta, let's say.
      The instruction set can make a difference, but it is nowhere near as important as the guy claims.

  • @JustXavier
    @JustXavier 5 หลายเดือนก่อน +4

    This is the first time I think that I’ve ever been actually interested in the sponsor of the video. Board mix actually does look interesting and I actually think I’ll try it this weekend. 😅

  • @Holycurative9610
    @Holycurative9610 5 หลายเดือนก่อน +1

    M1 to M3 was a 25% increase and you think that's not impressive from a new producer of CPUs? If the M1 had been equal to a 2nd-gen Core 2 Duo I would understand your POV, but it wasn't, and getting a 25% jump in only 2 generations is nothing short of miraculous. I don't use Apple because of their anti-repair practices, but that doesn't mean I don't appreciate good stuff.

  • @aravinddnivara803
    @aravinddnivara803 5 หลายเดือนก่อน +6

    Windows runs on chips and motherboards designed to accept thousands of accessories and parts manufacturers. Apple manufactures chips suited exactly to its OS, an OS targeting only Apple-specific apps.

  • @JM_2019
    @JM_2019 5 หลายเดือนก่อน +2

    There is no real need to make CPUs faster every couple of months. That might be nice vendor competition, but it will not decide what people buy.

  • @richieb74
    @richieb74 5 หลายเดือนก่อน +4

    The M3 was a bust. That didn't stop them from charging full price for it. I'm really annoyed that they dialed back the M3 on purpose and the M4 will be a big upgrade. Like, what the heck.

    • @newyorkcityabductschild
      @newyorkcityabductschild 5 หลายเดือนก่อน

      they wanted the M4's 3-nanometer tech for the M3 but it was not viable at the time; TSMC needed to refine the fab processes first. So yes, the M3 was a stop-gap chip, but it gave us ray tracing

    • @mrrolandlawrence
      @mrrolandlawrence 5 หลายเดือนก่อน

      Yeah, it was a TSMC thing. They were late, and then there was higher-than-hoped-for defect wastage. They did update, though, and now with the M4 we are back on track. Compare that to the years Intel was stuck on a node. Intel would also release a new set of extensions to patch the outdated ones: great for benchmarks, terrible for people making programs.

    • @andyH_England
      @andyH_England 5 หลายเดือนก่อน +1

      The M4 was six months behind, and Apple needed to upgrade the GPU and co-processors of the M2. They had every right to use the M3 as a filler regarding architecture as they needed to upgrade other package parts. This also meant they did not rush the M4, and instead, we will see a massive upgrade. I intentionally avoided the M3 as I understood the issues and what was happening. Hopefully, other techies also followed that guideline.

  • @billraty14
    @billraty14 5 หลายเดือนก่อน +1

    Clock speed isn't current, but current draw is related to clock speed. Clock speed is how often the chip changes state. The relation to current draw and heat is a byproduct of needing to successively charge and discharge transistor gates, which in CMOS act like capacitors.
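The charge/discharge picture in the comment above is the standard CMOS dynamic-power model, P ≈ α·C·V²·f. A minimal sketch with illustrative values (every number below is an assumption, not any real chip's figure):

```python
# CMOS dynamic power: energy is spent charging/discharging gate
# capacitance on every clock edge, so power scales linearly with
# frequency and quadratically with supply voltage.
alpha = 0.2    # activity factor: fraction of gates switching per cycle
C = 2e-9       # total effective switched capacitance, farads
V = 1.0        # supply voltage, volts
f = 3.5e9      # clock frequency, hertz

p_dynamic = alpha * C * V**2 * f
print(f"dynamic power ~ {p_dynamic:.2f} W")
```

This is why raising the clock (and the voltage needed to sustain it) drives current draw and heat, as the comment says.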

  • @pyramidetrader
    @pyramidetrader 5 หลายเดือนก่อน +3

    I just bought an M3 Pro yesterday and then I saw this lol 😂

    • @ForbiddenWellness
      @ForbiddenWellness 5 months ago +1

      At least you have time to return it unlike somebody who bought it 4 months ago😪

    • @pyramidetrader
      @pyramidetrader 5 months ago +1

      @@ForbiddenWellness I got to keep it; my 2017 MacBook Pro is too fking slow. No time to wait for the M4 lol

    • @ForbiddenWellness
      @ForbiddenWellness 5 months ago

      @@pyramidetrader feel that

    • @ForbiddenWellness
      @ForbiddenWellness 5 months ago

      @@pyramidetrader the M3 Max 16-inch is a beast. Just shitty if there really will be a significant jump. We'll be alright lol.

  • @chambre466
    @chambre466 4 months ago +1

    we can talk about hardware and benchmarks for hours; remember this, I migrated to Mac for the OS, which is much more intuitive and less glitchy

  • @MYNAME_ABC
    @MYNAME_ABC 5 months ago +5

    5 nm to 3 nm is not 5/3 the number of transistors per area, but (5/3)^2, obviously!
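The comment's arithmetic as a quick sanity check (idealized: commercial node names like "5 nm" and "3 nm" are marketing labels, so real density gains differ from this textbook scaling):

```python
# Idealized transistor-density scaling: if every linear feature shrinks
# by a factor of old/new, the area per transistor shrinks by (old/new)^2,
# so transistors per unit area grow by that squared factor --
# not by the linear ratio.
def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

linear_ratio = 5 / 3          # ~1.67x
area_ratio = density_gain(5, 3)  # ~2.78x
print(round(linear_ratio, 2), round(area_ratio, 2))
```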

  • @krzysztofpelon5633
    @krzysztofpelon5633 5 months ago +1

    6:35 The smallest dimension in the die structure is 3 nm; a single transistor is much bigger.
    7:44 4. Optimisation within the current architecture

  • @danielalba7651
    @danielalba7651 5 months ago +7

    AI stuff is bloatware. Microsoft is shooting itself in the foot. Apple will as well if they keep that Apple Intelligence stuff. Why run bad machine-learning models on underpowered chips when you can do it in a browser? It's beyond my mind. An M1 will do everything you want to do on the go. An actual desktop PC will do the rest.

    • @roccociccone597
      @roccociccone597 5 months ago +3

      why do it to begin with, it's not like it really does anything mind blowing anyway.

    • @danielalba7651
      @danielalba7651 5 months ago

      @@roccociccone597 These companies want to appear like they are doing something innovative so the red line goes up and investors are satisfied, the same way it was done for crypto and blockchain a year ago, but more invasive and unnecessary

    • @GraveUypo
      @GraveUypo 2 months ago

      i'd go farther and just call it malware.
      windows 11 is unusable.

  • @frankwalder3608
    @frankwalder3608 5 months ago +1

    That is a very impressive chart in your video at 6:04 showing the evolution of CPU power. Where did you find that? You did the opposite of many creators on YouTube: they make their 10-minute video 20 minutes long; you made your 30-minute video 19 minutes long. The only problem with that is I had to frequently stop and replay segments of the video, with multiple pauses to absorb the material you presented. You over-achieved on your production engineering. BTW, my daily driver is an M1 Mini with a Cinema Display, which I used to watch this video.

  • @katinasoutherland7693
    @katinasoutherland7693 5 months ago +3

    Thank you for this video. It was very informative. I seriously appreciate this. I'm hoping to see more like this.

  • @giorgos7six
    @giorgos7six 5 months ago

    Do you believe that a 16" MacBook Pro M1 Max is the best bargain compared to the rest of the M chips to work with (design apps) nowadays? Should I just keep it, or replace it with a different Apple M-chip laptop?

    • @nicolasstrawberry4148
      @nicolasstrawberry4148 4 months ago +1

      It all depends on what you're doing on your Mac. It might be overpowered for what you do, or it might not be. Are you doing audio production? Then you might not benefit from getting a new Mac, because you might not be using all of its power. Are you doing stuff like 3D modeling and animation, or After Effects work? Then you would benefit from having more GPU cores and more power. It really depends on what you do.

    • @giorgos7six
      @giorgos7six 4 months ago

      Mostly working with Adobe CC apps like Illustrator, Photoshop and InDesign.

    • @nicolasstrawberry4148
      @nicolasstrawberry4148 4 months ago +1

      @@giorgos7six That's definitely a good Mac for doing that kind of stuff as long as you have enough RAM; I'm hoping you bought it with at least 16 GB of memory. How does it feel to you, though? Does it feel like you need something with more performance? If you bought an M2 or M3 Max there would be performance gains, but not as much as going from an Intel Mac to an M1 Max. Honestly, almost everyone I know or have talked to online is never in a rush to upgrade from their M1 Max; Apple just made them too good for most people to feel the need to upgrade anytime soon.

    • @giorgos7six
      @giorgos7six 4 months ago

      @@nicolasstrawberry4148 I don't feel like I need more power, but this CPU race Apple is in, making new chips every year, kinda makes us buyers feel inadequate about the power we have. But I guess it's just a mind game.

  • @bfjoutdoors
    @bfjoutdoors 3 months ago

    I'm a PC guy and even I think this video is a literal argument for blocking misinformation on the internet

  • @mactek6033
    @mactek6033 3 months ago

    You are forgetting one thing. Apple has full control of the board real estate.

  • @asadanik5987
    @asadanik5987 5 months ago +1

    Still, I am not feeling bad with my Intel MacBook Pro 13-inch with 16 GB of RAM, and I believe it's worth it for 2024/2025/2026. Nothing said about hype; it's real. I am using it as my main daily machine as a software engineer.

  • @citywitt3202
    @citywitt3202 5 months ago +1

    Man, so many thoughts; here are the top three.
    1. You're a good presenter, but you need to focus on getting stuff right over it looking right.
    2. Yeah, the M3 was filler and it's clear they aren't sticking with it, but the iMac has always used laptop components since at least as far back as the aluminium Intel iMac days in 2007; I don't know about earlier Intel models, and definitely not the G5. With that in mind, it makes complete sense that it uses the M3.
    3. Completely wild how Apple builds an insane chip with insane graphics, but I get perfectly respectable performance on my AMD 5700G with integrated graphics at less than ¼ the cost. This battle will not be won on specs. It will be won on real-world stuff like battery life, whether my games run well enough for me, and what happens when it breaks.
    Bonus point: what happens when my PC or non-Mac laptop breaks is I get new parts and fit them for under £100. When my Mac breaks, it's time for a new machine at a cost of over £4000 to match the spec.

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 5 months ago

    The reason I immediately bought an M1 Mac was because of how amazing the iPad Pro was. It was much more responsive and faster than my desktop Windows machine for most things. Now with the same chip designers we have new Qualcomm chips and Windows finally has decent laptops. It was definitely worth it.

  • @xzs432
    @xzs432 5 months ago

    What is the difference in the commands, the architecture type, and the bit width: x86, x64, ARM, 32-bit, 64-bit?

  • @KubanKevin
    @KubanKevin 3 months ago +1

    You have absolutely no idea what you're talking about. The M-series chips have turned the Mac into something that stands out once again. Nothing comes close to matching its performance at the same power draw. It made me, an avid hater of Macs, come to respect them. You say that 15-30% isn't much of an improvement? What are you babbling about? Intel was on a roll with 10% or less year-over-year improvements from their 3rd gen until the 9th gen, when AMD started waxing them in nearly every category. And their response was making a chip that literally cooks itself to death just to slightly outperform the competition, albeit at almost double the power draw.

  • @garryhall9660
    @garryhall9660 4 months ago

    You totally miss one massive point: the OS. Apple has truly the most user-friendly one and does not charge month-on-month subscriptions for Office. Apple gives you a new OS with new features every year which just works, while Microsoft gives you bloatware even after you delete the bloatware, and the articles no one wants in its search box. Apple's strength is definitely its OS.

  • @tub0ne
    @tub0ne 3 months ago +2

    Why is YouTube suggesting this garbage video? Talking about chips without even mentioning ASML

  • @AmitVerma-cw9pv
    @AmitVerma-cw9pv 26 days ago

    I watched the full ad to support you. Nice video

  • @LV-ii7bi
    @LV-ii7bi 5 months ago +1

    18 minutes to convey absolutely no criticism at all, as if their shit was flawless

  • @jimcabezola3051
    @jimcabezola3051 5 months ago +1

    Elite is pronounced "ee-LEET." It's not pronounced "ee-LIGHT." French is hard, but...not that hard.

  • @TheDenix8
    @TheDenix8 2 months ago

    As magical as the M1 seems, it primarily fed on "throwing away legacy and backwards compatibility". Of course, also on having everything under your control to tweak. But you can only do that once.

  • @clintmiller88
    @clintmiller88 5 months ago +2

    The M4 is the first real 3 nm chip