I'd certainly like to hear more about DPC latency with Apple silicon. I imagine that the link between audio glitches in systems with the T2 security chip and USB2 audio devices would have shown itself in DPC latency results.
@@coolinmac Millions of people throughout the world make music. There are more people making music and owning a home studio than people editing videos. So... no one cares? Really?
The artifacting mentioned at 4:16 also occurs exactly the same way on my Ryzen + RTX 3080 machine running Windows. Anecdotal, but it may not be exclusive to the M1.
@@josejuanandrade4439 Seems to be a DX12 issue. My guild had to pause a glory run because it made some of the orbs needed for the Halondrus achievement disappear.
Essentially the main thing bogging down Apple Silicon is the lack of third-party support. Really hoping that Apple starts incentivizing devs to expedite M1 apps, because Rosetta isn't nearly as fast as Apple claims.
Apple's plan to incentivize app devs isn't likely to change from what it's more or less been since the iPhone first released (possibly earlier): push their stuff onto as many consumers as possible and sign exclusivity deals, so app devs have to support it if they want to reach the largest possible user base. The reason iPhones are as common as they are is the way Apple pushed them onto the masses by making it more affordable to get their fancy phone (by paying monthly) when other manufacturers still expected people to buy their phones outright (which I always do). Of course now everyone does it this way, but that is, from my understanding, a large part of how they got such a huge market share (they also used other forms of psychological warfare, like making SMS an ugly green and encouraging that exclusivity (there's gotta be a better word) type of mindset).
The problem is that every time devs get used to a particular framework, Apple goes and changes everything on a whim. They have absolutely zero regard for backward compatibility and it's nearly impossible to keep up. This is how they lost the scientific computing market: 10-15 years back you'd go to a conference and everyone would have MacBooks, but over time, with Ubuntu becoming more user friendly and WSL 2 being genuinely very good, people just got fed up with accidentally letting the OS update and finding all their open source software broken. Apple have been steadily painting themselves into a corner, saved only by their brand image and an almost militant approach to exclusivity, and that doesn't mix well with genuinely productive third-party relationships.
I think so as well, for all the ML stuff. PyTorch recently added GPU support on M1, which is really nice because of all that unified memory; you could prototype some deep learning really nicely on the M1.
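For anyone curious what that looks like in practice, here is a minimal sketch of using PyTorch's MPS (Metal) backend on Apple silicon; it assumes a recent PyTorch build with MPS support, and the layer sizes are just placeholders:

import torch

# Fall back to CPU if the Metal (MPS) backend isn't available
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Toy model and data, just to show the workflow; with unified memory,
# moving tensors to the GPU device is cheap compared to a discrete card.
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(64, 4096, device=device)

y = model(x)  # runs on the M1 GPU when MPS is available
print(y.shape, device)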
@@dzjuben2794 Depends a lot on the workload. If you're doing tasks that are very bandwidth heavy, the massive bandwidth that the M1 Max/Ultra have for the GPU alone is very impressive, and if you're using a build of NumPy/SciPy that uses the AMX units then you also have a LOT of extra compute: up to 8 TFLOPs of FP32 and 4 TFLOPs of FP64 (and I think 1 TFLOP of FP80) for matrix ops. The massive memory bandwidth and rather large on-die cache can make a massive difference in data-sci workflows where one is very commonly matching hashes, joining and filtering; things where the raw compute cycles are a tiny fraction of the work compared to pumping data through the pipe.
Right now MATLAB and Simulink run very poorly, especially Simulink. MATLAB should be getting an Apple silicon update in 2023a, but Simulink probably won't get updated for quite some time. SolidWorks runs fine on my M1 Pro via Parallels, but not noticeably different from the M1 MacBook Air. Fusion should be getting an AS update soon, so that should be interesting to see.
@@alexstubbings_ For sure, apps that have not been updated in this space, like MATLAB etc., are not good. But the Python data-sci space is now finally a fully native stack and is looking very good.
So wonderfully thorough... way to go. Loved that you figured out the one benchmark was faster because of the drive speed difference. I wish there was more coverage of SSD speed differences between current Mac models, and the size differences too, since Apple only talks about the 8TB models and most reviewers spec out the base model only.
Not only that, but I think it should have been compared to compile times measured in a Linux distro rather than in Windows. The Windows kernel is not really good for I/O performance, even more so with a lot of small files.
As a real world test, editing content in FCP is insanely fast and efficient, my short form verticals export in mere seconds. It’s nutty for video creators
The only people that would need to spend $4K on a productivity machine: graphics people... maybe, but they could get by with less. Everyday people who might use a Mac laptop of some sort, just pushing Excel sheets around, won't need to spend this much. So basically, they've made a machine with potentially/theoretically the power to run games as well, but by being dumb about how they support stuff, they've basically made a machine only video editors should buy. How does Apple even exist.
My work laptop is an HP ZBook with an RTX 3000, versus my personal M1 Pro laptop, which honestly is a bit more modern and more expensive. The main difference is that the Mac has none of the weird hitches the Windows laptop shows without any clear performance bottleneck; they both behave differently even when I'm doing very low-impact stuff like editing lower-res files in Photoshop. The Mac is of course pushing more pixels with the internal display too, and I've never once turned the speakers up to override fan noise.
Developer here with the angry (not really) comment. My primary complaint/issue would be comparing the M1 Ultra against the 12900K instead of the 5950X. Which is actually kinda the complaint I'd have in general. Per LTT's own review, the 5950X was the stronger overall "productivity" CPU. Given the focus of the M1 Max & Ultra, it then seems odd to use the 12900K instead of the 5950X.
I think it is because Apple used the 12900K as a comparison for the M1 Ultra. But yeah, here it should be "fastest $4K PC vs M1 Ultra", so a 5950X would make sense.
I want a full AMD build comparison. While Intel's CPU might be slightly faster, AMD's is much more efficient; power usage would show its true potential. Also, I have yet to see benchmarks for AMD's HIP support in Blender.
Don't agree. With its lower idle power, and being better in almost everything compared to the 5950X (performance, boards, PCIe), the 12900K is for sure the better value for developers. But in the end both are in the same league. Just as the Ultra is, except for the price league, where it is unbelievably almost three times as expensive, and 2500 against 7000 euros hurts.
4:14 - That's an issue with the game, not the system. At certain camera positions large props like that tower can disappear. I can only assume why it happens, but I can tell you how to reliably reproduce it if you want. It happens on my system all the time (3600X, 2070S, B450 MB).
They brushed Anthony's eyebrows, hit him with some blush and warmed his face. Some rosiness on his lips might be a little much, but I like his hair a lot better now. Overall 8/10, I'd smash.
Anthony is so freaking talented but I can’t stop worrying about his health when I see his videos. He’s on his way to an early grave and that would be a great loss.
Imagine something like GTA 6 getting ported to the Apple ecosystem. There's some serious performance in those CPU-flash-GPU packages, and their semi-native software stack in iOS/macOS (Swift and the Metal API) is snazzy. With someone like Rockstar, who spend years and do many optimisations, you could have XB1-level on an A11/iPhone Max, XB1X-level on an A12Z/iPad Pro, XsS-level on an M1/MacBook Air, XsX-level on an M1 Pro/MacBook Pro, and lastly gaming-PC-level with the M1 Max and M1 Ultra chipsets on the Mac Studio. Just imagine taking all your next-gen gaming with you wherever you are, without needing to cloud stream it, and it scaling its graphical fidelity up and down depending on the thermal profile of your device. That would be so cool.
Regarding the CPU bottleneck in Dolphin: that’s normal. In Dolphin, the only demanding tasks that can run well in parallel are the CPU, GPU and DSP. Breaking up any of these into smaller tasks just to run it on more cores is likely to just make it run slower, so they don’t.
As said before, Apple's silicon is a great breakthrough without a doubt. However, what makes them great is also their weakness: having a computer that I use for work with a single point of failure for RAM, CPU and GPU is a no-go for my needs (other people's mileage may vary). Same for the Mac-only SSD.
But doesn't the fact that it's an SoC mostly prevent such elements (which are no longer separate components) from failing? Like, when did your phone or tablet's CPU, GPU and RAM last fail you?
Not only that, the silicon itself is kinda too specific. For anything outside those targeted cases, even with minimal changes, it will depend on its CPU cores, which are kinda weak for general purposes.
Would love to see M1 benchmarks for industry level post-production software such as Avid Media Composer and Pro Tools. Don't think a single channel on YouTube has done this yet.
They haven't, and none will, unfortunately. It's realistically because no one who does YouTube will ever touch Avid Media Composer (I can't speak to Pro Tools though). Media Composer is pretty strictly only ever used at the industry level like you said. No one on YouTube is operating at unscripted TV/scripted TV/feature film level, and any who would are young enough that they came up on FCPX, Premiere, and Resolve. I've used Media Composer on my M1 Pro MacBook Pro though, and as long as you have enough RAM (it still runs through Rosetta 2; Avid hasn't coded it natively for Apple Silicon yet) it runs almost perfectly fine. You very easily forget it's running through emulation. Avid did a big rewrite of Media Composer almost from the ground up with Media Composer 2020, so it's more streamlined to be able to run through emulation. Avid themselves say that if your system only has 16 GB of RAM you should turn off certain features like the background phonetic indexer, since running through Rosetta 2 has a RAM utilization cost, but otherwise it runs great, especially if you have 32 GB+ of RAM.
Usually, on the industry-level software, you go through the corporate-level or enterprise customer service to ask for those numbers and benchmark on hardware.
I'm glad you mentioned the suitability for hot climates because this is often glossed over in the reviews. Living in Brazil, an Intel notebook in a small room with second display and intensive usage means that I need AC on almost all the time if I want to be comfortable in the room, while using a M1 machine is as if it was nothing. This was one of the main reasons why I switched from a Dell XPS 13 to a Macbook Air M1
Same. Here in Phoenix, AZ where it has already hit 114F(44.5C) this summer, my gaming PC will heat up an entire room noticeably hotter than the rest of the house and besides the PCs own power consumption, the AC runs harder further raising the power bill. Yet, with my M1 Max I can literally work on it outside*. My heavier workloads consist of running Altium(EDA software for PCB design) or PathWave ADS/EMpro for EM field solving, in Windows 11(Parallels VM). *Why work outside? When my kids are swimming/playing in the yard.
Such a useful review/comparison. I came to the same conclusion that you did, a really loaded 14” M1 Max MacBook Pro makes the most sense between all the “pro” M1 machines, and the super low power draw pays off the most in that application, too. An astonishing display makes it even better.
I think it depends on whether you need the mobility of a laptop. If you only ever use it at a desk then I personally wouldn't buy a laptop for that, but other people's mileage may vary here.
LTT, I have said it before, I love me some Anthony performance breakdown vids. It is fun, calming (linus) and informative. Can't wait for the new vids from the new department!
Not gonna lie Anthony, you just educated me on a bunch of different things: the difference between SSD speed and CPU speed, thermal modules, names of different platforms to test your own PC, copper cooling, heat output, and power supply testing. Thanks again. Once again, your calm voice has made me understand benchmarks I wouldn't be able to understand by myself. Thanks.
Something few people have mentioned: the M1 Ultra has 114 billion transistors, while the RTX 3090 has 28.3 billion and a Ryzen 5950X has 19.2 billion (the transistor count of the 12900K is not known). What Apple's doing is kinda trading transistor count for power efficiency, especially in targeted workloads such as encoding.
It's an absolutely HUGH MONGUS chip. When you get a reduced instruction set, some fixed-function hardware, and are 100% in control of both HW and OS, you can do these kinds of things on low power budgets.
Apple prefers to increase the TDP of their chips for higher-TDP products (products with bigger thermal envelopes) by going wider with silicon, instead of increasing frequency on the same die (the M1 Ultra's total die surface area is ~920mm2). That is what affords them that efficiency. For the past 10 years they've always chosen to keep frequency low and silicon size big. The M1 Ultra's performance cores boost to a mere 3.23GHz, and yet their performance is roughly in line with Zen 3 cores clocked at ~5GHz. The GPU cores boost to only 1.3GHz. Their IPC on all fronts is off the charts. Granted, as Anthony says, the prices match what they're offering. And yet, it seems like they're not stopping with the Ultra, as a chip that has exactly the hardware of 4 M2 Max dies (codenamed Rhodes-4C) is rumored for the Apple silicon Mac Pro, and if we assume linear transistor scaling from M2->M2 Pro->Max, that would have *270 billion* transistors. At this point it seems like Apple will be the first chip maker to hit the 1 trillion transistor mark.
@@utubekullanicisi IDK which approach is better: more transistors (cost) and lower wattage, or fewer transistors and more wattage. Some of my friends in electrical and computer engineering think the Apple M1 is "wasting sand" LOL
@@marisakirisame8543 With Apple's approach you will get more efficiency but higher price, with most others' approach you will get lower efficiency but lower price. Apple's motto has always been "don't care about the BoM (bill of materials), but build the best thing that the current technology allows", so at least they're doing something that match their motto. I'd say neither approach is "the best", but depending on what you care about the most, one of them can be better for you. In any case, I would try to be open minded about both approaches.
@@SpartanArmy117 How is their approach "build the most average thing in the market and charge the most you can" when the newest MacBook Pros have class-leading miniLED displays that have more dimming zones, brightness, contrast ratio, etc. than any other laptop display, class-leading speakers that have more dynamic range than any other laptop speaker, one of the best if not the best webcam hardware with the best image processing thanks to the M-series chips deriving the class-leading ISP from the A-series chips for the iPhone, and a class-leading trackpad? I could go on and on about the hardware advantages that Apple has in various product platforms that don't necessarily show up in spec sheets. They might have higher profit margins on their products than anybody else (though Apple's reported profit margins that are supposedly around 30% seemingly suggest otherwise), but I definitely disagree that their approach is "build the most average thing and charge as much as you can"; it's more like "build the most over-the-top thing you can with unnecessarily expensive components and charge as much as you can for it".
Would be great to see TensorFlow benchmarks: CPU mode, GPU mode and CUDA. A lot of us deep learning engineers like to run small runs on our Macs before we push them onto the cloud.
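A minimal sketch of the kind of quick check meant here, assuming TensorFlow is installed (with the tensorflow-metal plugin on a Mac, or CUDA on a PC); the matrix size and iteration count are arbitrary:

import time
import tensorflow as tf

print(tf.config.list_physical_devices())  # shows whether a GPU (Metal or CUDA) is visible

a = tf.random.normal((4096, 4096))
b = tf.random.normal((4096, 4096))

start = time.perf_counter()
for _ in range(20):
    c = tf.matmul(a, b)   # runs on the GPU if one is available, otherwise CPU
_ = c.numpy()             # pull the result back to the host so the timing is real
print(f"20 matmuls took {time.perf_counter() - start:.2f}s")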
For video encode you probably want to test Intel's QuickSync too, because it should handle weird stuff like 2:3 a lot better than the notoriously constrained NVENC.
@@Prophes0r Nope, a lot of businesses and people use them for VODs, for example YouTube. It's all hardware. Yes, CPU encoding is better (by far less than the 95% you quoted, probably more like 20-30%; also you are comparing H265 and H264 at the same QP values, which are different for every encoder, which is bullsh*t) and should be preferred, but that only holds true as long as you have the time and the resources (more like a Netflix use case, where you don't have that much content but everything is being watched a lot). If you are in a hurry or have terabytes of content, it's basically all you've got. Hell, Intel just announced a GPU aimed at CDNs, for example, called Arctic Sound M, and Google builds similar hardware themselves for YouTube. Similarly, many content creators, especially on the go, use QuickSync to not wait for ages. (Remember, especially with QuickSync there are parameters and presets you can tweak to make it look better than the default settings.)
I don't think NVENC was really ever meant to be more than a decent way to stream content consumed on the same PC at a relatively low cost. For offline rendering it doesn't really seem worth it to sacrifice the image quality (unless that's somehow different, I never really used NVENC for that purpose).
@@MLWJ1993 Well that is only a fair point as long as you don't have any pressure on time (certain delivery dates for example) in that case you just crank up the bitrate by a few percent and you're fine (for offline usage the tradeoff isn't quality usually but just size).
@@cromefire_ It's not like CPU encoding takes enormous amounts of time either (unless you really go ham on both the quality preset & bitrate). Maybe relevant if you own a dated CPU though.
@@MLWJ1993 Well, if you'd go for x264 fast or so (when it needs to be fast), hardware might actually give better quality, and if you'd go for slower x264 presets you could also use hardware H.265 instead. Of course you could also use x265 then, but x265 is way slower (and everything is amplified on mobile). The same logic applies to VP9 and (soon, once Arc is out one day) AV1, which will get you even better quality at even lower speed; software AV1 runs at under 1 fps with 16 cores and a bunch of RAM.
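To make the software-vs-hardware tradeoff concrete, here is a rough sketch comparing a slow x264 software encode with an NVENC HEVC hardware encode at the same bitrate via ffmpeg; it assumes ffmpeg is on PATH and built with libx264 and hevc_nvenc, and input.mp4 is just a placeholder:

import subprocess

# Software H.264: slow preset, better quality per bit, but CPU-bound and much slower
subprocess.run(["ffmpeg", "-y", "-i", "input.mp4",
                "-c:v", "libx264", "-preset", "slow", "-b:v", "8M",
                "sw_x264.mp4"], check=True)

# Hardware H.265 via NVENC: far faster, typically needs a bit more bitrate for similar quality
subprocess.run(["ffmpeg", "-y", "-i", "input.mp4",
                "-c:v", "hevc_nvenc", "-b:v", "8M",
                "hw_nvenc.mp4"], check=True)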
Always awesome, Anthony. I'm glad to see you tested Macs in environments with software that they are used for, or at least have been known to be used for. Intensive audio processing in multitrack studio applications or just high-resolution Wacom Cintiq digital painting are where my semi-pro interests lie.
Thanks for taking on this difficult task! I was super nervous when the ultra came out that my fully built 5950X and 3080 Ti was obsolete. This makes me feel much better about my render machine's solid performance... Just wish adobe would optimize for Zen 3.
For that price, would it make sense to compare it to a Threadripper 5000 series? It would be interesting to see how ARM compares to both x86 vendors. Also, H.265 encoding is quite performant on Radeon cards, so it would also be interesting to compare both GPU vendors.
Yeah, configuring something similar with a Threadripper clearly is the fairer test. BUT Apple claims to be faster than the 12900K, so they've gotta test it against that.
Seeing how close the 16" M1 Max MBP is to these, I'm very happy with my purchase of the 16" M1 Max MBP :) These things are incredibly powerful for their power consumption levels though. Cannot get over the fact that it's basically an LED bulb at 'idle'.
Same here! I contemplated the Ultra for future-proofing but I'm so glad I went with the M1 Max for the Studio. I would 100% go with the 16" M1 Max MBP if I was doing more mobile work but glad to hear you're happy with your purchase!
Anthony! You can't directly compare compilation for x86/AMD64 and ARM, because the compiler does completely different work. x86 has a much more complicated instruction set, the compiler has to do a lot more, and far more optimizations exist for x86! ARM machine code also has a different size than x86. To compare, you would have to build x86 Chrome by cross-compiling on an M1 chip.
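A minimal sketch of that idea, timing a native arm64 build against a cross-compile targeting x86_64 from the same Apple silicon machine; it assumes clang is installed, the --target triple is supported by the local SDK, and hello.c is a placeholder source file:

import subprocess, time

def timed_build(target_args, out_name):
    start = time.perf_counter()
    # clang's --target flag selects the architecture to generate code for
    subprocess.run(["clang", "-O2", *target_args, "hello.c", "-o", out_name], check=True)
    return time.perf_counter() - start

native = timed_build([], "hello_arm64")                                  # native arm64 build
cross = timed_build(["--target=x86_64-apple-macos12"], "hello_x86_64")   # cross-compile to x86_64
print(f"native: {native:.3f}s, cross: {cross:.3f}s")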
I'd personally put the M1 against the 5950X for the compile test; that thing is consumer and a beast for compiling in my experience. Then again, if you're talking about prosumer or professional workstations, why not test it against a Threadripper?
I'd be curious to see an underclocked PC vs Mac on power draw. If Intel have shown us anything, it's that half of your power is going into the last few percentage points of performance.
This, all day, every day. The 12900K is definitely a quick chip, but its power budget and money budget are way in excess of what most would need. A lower-priced, lower-power chip could yield some deeply interesting stuff. Halve the price of the PC, run it stock, and see where you stand; probably 85% of the performance for half the cost. It is all well and good, and makes good videos, when you test the crazy expensive against the crazy expensive, but most cannot come close to affording that. The priority of entertainment over information needs to change. I'd be very happy to see more realistic stuff with a conclusion saying: this is twice the price of that, but you'd be mental to buy the more expensive one, as you can get most of the performance and buy a second one for giggles for the same price.
True. A 12700K would be more than enough: still 8 P-cores, just half the E-cores (4), and I'm not sure how useful those 8 E-cores are anyway. Most of the power budget goes to turbo boosting from ~4.3 to 5 and 5.2 GHz. With careful settings in the BIOS, you can have most if not all of the performance and way lower power consumption. But, again, they compare CPUs as they come from the factory, so, meh. The M1 would still win in power draw, but the difference wouldn't be so dramatic. For the GPU, though, there is not much they can do...
Hardware itself seems really nice, just still waiting on software support/optimization on a lot of things. Hopefully developers of all softwares pick up the pace on that.
@@marcogenovesi8570 Unreal Engine and Unity already fully support Apple silicon. But the market for Mac gamers is so tiny most devs still don't bother, especially if they use an in-house engine. From what I can tell there are literally two games that are both macOS native and Apple silicon native.
@@marcogenovesi8570 If all you're focused on is gaming, then sure. But for pretty much any other software, support is either already there, coming soon, or being worked on.
Apple has a media engine which does well in video editing. Since most YouTubers are video editors, it's no surprise there's hype on YouTube. But video editing is just a fraction of the productivity tasks in the actual computing world. In any graphical computing task they will never match the ray-tracing cores embedded in modern graphics cards.
Mr. Sebastian, can we get a vid where some of the engineers at LMG are tasked with taking a console/NUC and improving its dimensions, cooling and performance? Just saw a vid of DIY Perks' slim PlayStation 5, and I think you guys could do the same thing! Great vid btw, props to Anthony for being great as always.
I truly cannot overstate how utterly incredible that power draw is. Especially in a world with fast increasing electricity costs. I’m a video editor who uses Final Cut Pro and the Adobe suite, so i’m SO pumped for Apple Silicon.
I bought the M1 Ultra Mac Studio today because I needed to upgrade my 14 year old Mac trashcan. I only run a few things, but often use all 64GB of memory, and all 8 cores on the trashcan (really, all 16 hyperthreads), so I'm hoping this new machine is faster for those workloads. I'm expecting to have to (after more than 15 years!) recompile a bunch of utilities I wrote myself, and reinstall lots and lots and lots of software, so it may be a while before I actually try to move over to the new box.
The 'trash can' Mac Pro is from 2013. That is 9 years ago, not 14. I type this on 14 year old Mac Pro. That is the 2008 model aka the Mac Pro 3,1. Is that what you meant?
Anthony is just awesome. Intelligent, deeply knowledgeable and his reviews are very fair/unbiased. I honestly love his content and personality - great guy overall and a good engineer too!
Same, music production is the least talked-about tech-related field and it just gets ignored constantly, lmfao. Speaking of Ableton though, given the way it assigns threads per track, it'd probably do really well, but I don't see it out-performing heavily hyperthreaded processors like the 5900X/5950X or the 12700K/12900K, as the extra threads really count when you're loading up as many tracks as possible. I'd love to see that comparison.
Ableton is so light I'm not sure how you'd benchmark it without dozens and dozens of VSTs. My '08 Mac Pro is my music workstation, and it has never stuttered or gone above 20% CPU, even on full-album projects with 50+ tracks. Even my 2GB MBP runs Live extremely well.
@@YearsOfLeadPoisoning Yeah, I think the best way to test CPU performance would be large numbers of high-voice-count, full-polyphony synths like Phase Plant, Serum, or Omnisphere. I've got a Ryzen 5 3600-based system and I can easily get 64+ tracks with moderate polyphony, but it'll cap out if I take Pigments and increase the granular voices to 256, lmao. It's not a perfect methodology, but the bones are there for a solid test of raw CPU grunt.
That would be nice, and a cleaner comparison. Would also like to see a complex program with both memory and cpu bottlenecks at different parts of the simulation (e.g. weather model like WRF or ocean model like GETM). That could pinpoint some limitations and/or strengths of the platform.
I came to the same conclusion about the Mac Studio with the Ultra. It was overkill for anything but the most intensive 3D modelling/rendering workloads. This is why I went with the Mac Studio with the M1 Max. Saved a bunch of money, added the OWC Gemini which gave me 24TB of RAID 0 and more Thunderbolt/USB and DisplayPort expansion. Didn't buy this for games but for content creation, YouTube and Zoom. The software does have to catch up though; I find many cores not being utilized, so we sadly have unused resources when I'm running Resolve, Photoshop and Lightroom all at once (with email, browser, etc. running as well). But I love this little thing. Simple, elegant and does the job. Forget gaming, it's a Mac, not a gaming rig. For that, gimme a good PC with NVIDIA graphics and I'm a happy camper. Good review and I agree on many points here!
Wait, couldn't you also make compile benchmarks compiling for the same architecture? I mean otherwise this lead could be architecture specific using different optimization flags or even ignoring parts of the code because of architecture specific macros.
They're running Intel-native code on the Mac and the Intel machine is still getting thrashed at 10% of the power draw. In 3 more M-chip iterations, it'll be worth it.
I'm both a Mac and Windows/PC user. I use my Mac (Mac mini and MacBook Pro) for productivity/work since I'm a software engineer and I like having a proper terminal and shell environment. There are many aspects of macOS I prefer over Windows; Finder isn't one of those. Also, the virtualization support on PC is still much better. Yes, I use QEMU/UTM on M1, but Vagrant support is basically broken at this point; Parallels works, but good luck finding an arm64 Vagrant box. Docker was (and still is) a bit of a mess on M1. It basically didn't work at all for about a year, and now that it does, you are still stuck with arm64 images. Things are getting better, and the iOS + macOS integrations are downright magical. If you need a system to do Xcode development on, then an M1 Mac is awesome. It is also my preferred video editing system since I'm using DaVinci Resolve. I've always kept a PC build for virtualization and gaming.
Hopefully the M1 MacBook Pros will be better than the 16" Intel i9 ones regarding thermals. Software engineer here, too, and after 1 year, all of my colleagues experience burnt fingers and total freezes & reboots when building the bigger project (~5 minutes). This can happen several times a day. It is horrible. Sometimes Docker won't start Oracle anymore because it gets corrupted files from this :/ It's client-provided, so no thermal paste change for us :)))
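On the arm64-images point above: many amd64-only images can still be run on Apple silicon through emulation, it's just noticeably slower. A minimal sketch, assuming Docker Desktop is installed and the alpine image is just an example:

import subprocess

# Ask Docker for the amd64 variant explicitly; on Apple silicon this runs
# under qemu emulation, which works but is slower than a native arm64 image.
subprocess.run(["docker", "run", "--rm", "--platform", "linux/amd64",
                "alpine", "uname", "-m"], check=True)  # prints x86_64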
Frankly, I'm only interested in Apple Silicon as a powerful ARM machine available at a relatively affordable price (compared to something like an Ampere Altra machine). So, as it's more of a curiosity than a solution for me, I'll very likely wait till something like the Max Studio is available on the second-hand market for a more reasonable price. By then, Asahi Linux will likely have matured nicely as well.
Ubuntu on Nvidia Orin (12 ARM cores + 2048 CUDA cores) is more powerful than M1 Ultra (20 ARM cores + 1024 GPU cores) with smaller form factor. Also, Orin has the same 60 W TDP as M1 Ultra
@@Haskellerz I hadn't heard about this. I would be inclined to wait for benchmarks (I can't find any general computing benchmarks ATM) to see how they stack up, but I'll definitely also keep an eye on the Orin as a potential option. Shame my options are Nvidia and Apple, two historically... shall we say, consumer-unfriendly companies. Edit: And again, I'm more interested in these as a curiosity, so I likely wouldn't spend over $1k in either case.
@@daedalusspacegames There are benchmarks of the Nvidia Orin on Medium. Orin AGX (model name: FPS):
inception_v4: 1008.721131
vgg19_N2: 948.223214
super_resolution_bsd500: 711.692852
unet-segmentation: 482.814855
pose_estimation: 1354.188814
yolov3-tiny-416: 2443.601094
ResNet50_224x224: 3507.409752
It seems machine learning tasks on the GPU are faster than on the M1 Ultra, but the M1 Ultra's CPU might be faster than the Nvidia Orin's.
The power usage when the M1 Max is in the 16" MacBook is the key: no loss of performance when not plugged in, and at least twice the battery life of any PC laptop, in some cases three times. And as a result of the low power there is no fan noise and it stays cool on your lap, even when editing and rendering 4K video. It's movie-making heaven.
I wish there was an easy way to differentiate the "Higher is better" vs "Lower is better" graphs. By the time I look down to check which one I'm looking at, we're already onto the next bullet point.
Hm, I wonder if you could make them left/right bound respectively. Very obviously different, and you could read them the same way at a quick glance. I.e., no matter if higher or lower is better, the "better" bar is the one with the endpoint further to the left (or right, depending on which one you make left/right bound).
Yeah... But when you factor in the fact that the air cooler for the desktop is about 50% the size of the entire mac mini, it puts things into perspective.
No it doesn't. If you want a fast computer, get a fast computer. If you want a small computer get a small computer. And power draw is mostly irrelevant here for a desktop computer. You don't need to worry about battery life and if you're worried about the cost of electricity, don't buy a freaking 4k computer
@@StuermischeTage True... But a 5800X or a 5700X is fine for most 2D/3D tasks I know of (in mid-range freelancing, or other pro work at a studio). I.e., a 5600X - *Edit*: I meant 5700X - (same as the 5600 in single core) is already as fast in Photoshop (which is mostly single core) as an Intel i9-10900K, and it really uses very little energy (quite a bit less than Intel Alder Lake), around 65 watts. Of course, not the prodigy of low consumption that the Mac Studio is, but one can get a decent PC work machine for around 1200-1400 euros. How many years would one need to recover the cost difference with a 5400-euro Mac, even if saving a lot on electricity? Even more: getting a laptop reduces energy consumption by a lot. If your tasks are not super heavy, getting a (Windows) 12700 Intel laptop, or one of the latest Ryzen 6800H laptops, will get you low consumption yet a very usable desktop replacement (if connected to an external monitor). Of course, not ideal for heavy tasks that need many cores for a long time (a laptop has worse cooling and it throttles; even a MacBook Pro has been reported to throttle), like 3D rendering, but fine for pretty much anything else. It'd depend on each person's type of workload; mostly on your professional profile, indeed. I know digital 2D artists (I'm one) can do very well with a good laptop + a good external monitor.
I just bought the Final Fantasy vii remake and my old ancient homebuilt PC runs it at max settings without a single hiccup. There are others with newer RTX 2000 series cards complaining about stutter and lost frames. I'm still using a gtx 1080Ti, i7-8700k, AsRock Z 370 Extreme4, 34" Alienware monitor using the builtin overclock settings. The 1080Ti does come with a Aorus software app to change from silent to overclock gaming modes. No stutter, no glitches, everything looks great.
Thanks for a great review with attention to detail and depth of knowledge. A fair and square battle between the titans PC & Mac. Really enjoyed it. I expected to see faster read/write speeds for the SSDs on the 12th gen Intel PC, approx. 20% faster. Perhaps the SSD class was a bit low for this config.
Apple's M1 Ultra chip in a rack mount version would be a game-changer. As a tech enthusiast, I've been eagerly waiting for a powerful and compact solution that can seamlessly integrate with my Unify network equipment, and this seems to be the perfect fit. The M1 Ultra's performance and efficiency are off the charts, and having it in a rack mount form factor allows for easy installation and management alongside my existing infrastructure. Kudos to Apple for continuing to push boundaries and deliver innovative products that cater to the needs of professionals. Can't wait to get my hands on one and take my setup to the next level...
Code compilation likes memory bandwidth (source: Hardware Unboxed) more than core count. Intel currently wins against AMD and loses against Apple. If Zen 4 supports quad-channel memory for all platforms, it will beat Intel (if they keep dual channel for the Core series).
I don't know why people claim Ryzen has more cores than an i9 when all they really did was merge a high-end Ryzen with their Epyc chips. Threadripper is not a gaming CPU, and has WAY more capability than ANY workstation user would need... it literally outperforms their own Epyc server chips in pure performance, and only lacks some minimal features such as cache and allowable RAM (though 2TB of RAM seems plenty for any workstation user). Beyond that, there are rumors that next-gen Threadrippers will support dual-CPU setups, meaning the workstation user, 99% of whom are NOT simulating the physics of our solar system, will have the abilities of a small supercomputer. TL;DR: stop comparing Threadripper, a server CPU they purposely put in the Ryzen family for shits and gigs, to an i9, a properly named CPU for the categories of work it's able to do.
As a developer that builds embedded code with GCC and the Rust compiler and also runs SfM software that requires a large amount of memory, I can say that the more CPU cores, GPU cores, memory bandwidth, and memory they throw into these things, the happier I get. I am using the MacBook Pro M1 Max, but I would definitely be able to make use of the Ultra. That being said, I can't imagine buying yet another expensive M1 chip since I already have one and it works great. I am excited for more software to get ported natively to Apple Silicon, especially games.
As a developer that has both a Mac mini and a Windows laptop, I can say that I like the quietness of the Mac, but there are some features on there that require downloading through its kernel or through the Visual Studio terminal, which is fine. But I prefer grabbing my code wherever I go rather than leaving it at home and coming back to continue it; who knows, maybe I have an idea I wanna do. I just seem to use my laptop more, and there's not much difference between the platforms in compile speed (as a front-end and Java developer).
On the other hand it keeps its value better, so upgrading while selling the old machine is not as painful. But yeah, this makes the time in between upgrades longer for the Apple stuff. That's a valid criticism.
@@JZWPikaNozna Yeah, but selling a used PC on eBay carries risk; even if you write "no refunds", sometimes people play games and they grant a refund anyway, even with the item working fully.
@@Ultrajamz Yup, there's that. Still, that's people being terrible; it's not Apple's fault. All in all, Macs make up for it a little in the financial department, by what I mentioned.
To Apple's credit, they've made a great attempt over the years to ensure that older devices are kept supported for as long as their hardware will allow for it. This means that, compared to a PC which might need to upgrade its specs after a few years following numerous Windows updates, Macs can continue to run updated software even if they are a few years old.
By the way! the WoW visual artifacting on that fight occurs on windows too, it's a pretty major bug introduced in a recent patch somehow, and kills you a decent amount in that raid fight!
I've seen WoW do the flicker thing on Windows occasionally as well. Apple silicon has big potential, but as the owner of an M1, I'm thinking M3/M4 and better software support before I pull the trigger on Apple silicon again.
I’m glad I read this. Been playing wow since Vanilla. Been wanting a MacBook but so far it’s been like getting into an ice bath lol. I’m waiting to see what M3 has to offer.
@@Crowski I too have been playing WoW since vanilla lol. Anyways, WoW runs awesome on M1 because Blizz decided to support macOS/ARM fully. It's buttery smooth on max settings. Unfortunately, the vast majority of games out there don't run well or don't work at all, and you have to work around it with Parallels or something similar. Hence the second part of my post, "better software support". I'm only really considering M3/M4 because M2 is really just an improvement of M1, sorta like what Intel was doing with its processors (tick-tock). Still, enormous potential if Apple would just pull its head out of its ass with the platform and open it up a bit.
I love the LTT reviews! And how there is no bias toward the products you all review. But one thing that does bother me is how people in the music industry get kinda ignored in all of these reviews. Music production actually takes a lot of processing power, due to having 2 to 9 tracks per instrument (if you are recording) and tons of plugins that strain the crap out of the computer. I used to work at a music store and you'd be surprised at how many people buy computers for this application. I'd LOVE to see this somehow added to the benchmarks. I know it's not easy, but it would be super helpful (and fun) to see what would be optimal for this application.
LTT reviews always neg on Macs, no matter what model. This review complains about the performance of Apple Silicon when they admit the software used is not optimized for M1. Then they finish off with the tired old "PCs are cheaper" argument. Mac users don't buy Macs because they are cheap. They buy Macs because they do the job needed without all the nerd crap PCs wallow in.
So the Mac is a somewhat better option for my editing station, albeit with varying results from different tasks. But if you do any gaming, PC is the only option. Apple are really limiting their demographic there.
Yeah, that's the sad part. PC is very flexible and markets itself to a lot of targets: gamers, creators and so on, while Apple only targets the creative and business markets. Hopefully one day they can make something for gamers and other parts of the tech community.
We have a couple of these at work, and it's inconsistent to say the least. Sometimes it absolutely crushes it on render times, while other times it pales in comparison to a 2017 iMac Pro. Really wish Windows were compatible with Apple ProRes, then I wouldn't be forced to use Apple.
Video editor here. Picked up a spec'd-out Studio M2 Max for about $2k a few days ago. It's nearly as fast as my Windows animation machine and runs Premiere even better because of native silicon updates. If you run FCPX on an Intel machine, it will be night and day on any M chips.
When the time difference is low, you use percentages, like Apple does, to make it seem that the M1 Ultra is way ahead. 40 or so seconds turns into "25% faster for the Ultra". No consistency.
So if going from 1 to 2TB of storage is $500, could you just use a Thunderbolt port to add an external 2TB NVMe drive for half the cost? Or will it not recognize any external high-speed storage?
Anthony, I love and appreciate the detail in your reviews. Thank you! Especially that excellent point about ProRes and the storage speed. But next time, please settle for an Eyebrows Max instead of an Eyebrows Ultra ;p
I really wish you guys would start doing benchmarks for the audio crowd. Testing DAW performance. Edit: but also thank you for everything you guys already do!
We have plans, but we haven't had the opportunity to put something together yet. It's probably going to be a job for the Labs to develop. -AY
How the hell does a reply get more likes than the actual comment-
@@viper627 🤫🤫🤫
For audio it mostly boils down to this: Can you afford a Mac? If yes it's your best bet - their drivers are just that good. (and that's coming from someone who doesn't like Apple)
@@LinusTechTips Woo! It would be nice to see discussion of MADI PCI-E interfaces, and edge use cases (Endless Analog's CLASP tape machine syncing, for example) and how they work with 2020s machines.
Hiring Anthony was the best thing Linus ever did. Love his reviews!
@Christian Michael huh?
For sure
Agreed, I like his no nonsense tech nerd style.
👍 agreed
Here for this comment!!! I remember when this dude was so shy Linus had to push him in front of a camera, and he was so smart but so awkward. But finally, here he is; better late than never.
Anthony is easily my favorite host of LTT. He just seems like such a chill, friendly guy who loves what he does. Keep it up!
I am no Mac fanboy, but _damn_ that power draw is impressive. Makes the rumoured 40 series GPUs look even more ridiculous now.
Except a dedicated GPU is more powerful in general; it does way more. There's a reason dedicated cards are quite large; SoCs haven't hit that capability yet.
Um, no. Maybe if Apple silicon could compare in performance to those GPUs, but we're not even close.
It would be interesting to compare the system power consumption with a lower-tier GPU, something with the performance of the Apple silicon. That i9 is hungry for sure, but the GPU eats a lot too!
We had a massive hike in power prices where I live last July, and the power draw alone is enough for me to not even bother considering a 40 series GPU. I'd be paying more for power than I pay for rent just running that damn thing.
I'd care if it was a laptop 😂. A Prius might get good MPG relative to its 0-60, but it's still slow, and that's not what people who care about 0-60 look at.
I never feel like Anthony is reading a script; he's getting better and better at hosting. Kudos man.
Anthony is really good at reacting on the fly and his overall hardware and software knowledge is good enough that he can improvise as necessary, which is a very useful skill for video hosting. He's also a very authentic personality, like many people at LMG, which makes his videos quite entertaining.
Small suggestion: I know your graphs always include a "lower/higher is better" legend. But if you'd somehow visually represent which is better, it would be way easier to grasp the data quickly. Something like having the "better" side green or whatever.
What, you can't read a graph? lol. That's such a nitpicky thing to worry about.
@@dallinsonntag3160 to make better content, everything matters, small things pile up to make better content
@@itaintitpewds you CANNOT tell me that switching colors would boost more views my friend
@@dallinsonntag3160 It is not about boosting views, it is about making better quality content. Small intuitive things make a lot of difference; you may not think about it, but it does for a lot of people. I watch Linus Tech Tips videos on things I'm not even interested in just because they're fun to watch, like home appliances. Why would I want to watch some guy build his house? Because it's just fun. That's how intuitive it already is, which proves my point that adding small things over time piles up to make better content for everyone.
@@itaintitpewds it sounds like you need to find a hobby my friend
Yay! Anthony video! I feel like I haven't seen him in a while. Probably because he was preparing this review.
Was thinking the same 😊
Was preparing his brows
@@falagarius ☠️☠️☠️
He’s looking fresh like a newly budding flower 🌸
How can you miss him? Lol
Note: the artifacts blinking in and out of existence on that boss in WoW were an issue in one of the patches. It was happening on PC as well.
It's clearly because they're in the area "Shimmering Cliffs" /s
Yeah, I was about to comment this; the Halondrus boss room is buggy af lmao.
wrrr
Yeah I get the same thing on my 3080ti in a few places in Korthia and in SotFO
The biggest issue I have with M1 in general is that programs are either exceptionally performant or wildly behind Windows computers, and it all boils down to whether or not they're designed with M1 in mind. That wouldn't be so bad if not for the fact that the speed at which developers have tried to optimize their programs for M1 has been so slow that, by the time Apple silicon is properly utilized, we'll be at M3 at the very least. Like the iPad Pro - it's a lot of power and not a lot of ways to use it.
Still, the speed of optimizing on Apple is the fastest compared to other platforms.
Now the fans blame devs because the M1 runs poorly; they're not aware that the M1 is overrated. No matter what devs do, it won't run faster.
@@Teluric2 Unrealistic. It's clear with Apple's own video editing software that the new technology can be optimised to get insane performance; editing 16 RAW timelines at once with no lag is insane and mind-blowing. It's in its first gen, but by gen 4 I think optimisation for the chip will be done and the true power can be unleashed.
@@Teluric2 You can see in this video that applications and workloads developed specifically to run on this ARM hardware run exponentially better and faster than their Windows counterparts. It's not the chip/architecture; it is literally just developers lagging behind (as always) in pushing updated app versions (which in some cases is understandable because it may require a complete application rewrite, but still, it's been damn near 3 years). It. Is. Not. The. Chip.
This comment seems really inaccurate? I can never tell if something is running on x86 or arm64, Rosetta does a pretty good job and I've never been able to tell the difference on my Mac.
At ~4:20, I experience the artifacts you are referencing on my PC version of that fight as well. The fight had some recent bugs that I am told are fixed now, but I haven't been able to verify. I have been experiencing those buildings disappearing since day 1 of the raid. This is not just a Mac issue.
Maybe it only showed up for them during the PC testing since they can only comment on what they see and it wouldn't be too surprising given how potty Apple Silicon CPU's integrated GPUs are that they would have weird and bespoke graphical glitching.
Do you have DX12 enabled? I have had some weird artifacting with an Intel + Nvidia system. When I switched back to DX11 it was fixed.
@@flugeldd I have DX12. I'll give DX11 a shot when we get back there.
Nice
Yeah there are a lot of people in my raid team who also saw that pillar in particular flickering in and out and it seemed to be based on camera position and angle
That boss in World of Warcraft was bugged recently with the assets blinking in and out of existence. I experienced it on PC, so can't be sure that it was the Mac's issue.
Yeah, especially with WoW being one of the first M1-native games.
Was going to comment this, it’s still broken now, one of the only fights with this kind of glitch currently in the game, very unlucky LTT, good review as always however!
@David V We get it, Microsoft touched your no no when you were a kid, let it go now
@David V oh you, untrickable David.
@David V Sure you run all those but it doesn't mean you know what a slanted review looks like. - Me: the guy who has a m1 laptop and gaming pc with windows and a linux vm running on it.
Genuinely appreciate the breadth of coverage. A breath of fresh air after seeing everyone just cover video rendering and move on, as if everyone watching was doing just that.
I really hate that part about youtube reviews. I understand that it's their daily bread and what they understand, but it's kind of a bubble that probably not more than 0.01% of the audience is in. Cinebench is cool as a reliable multicore benchmark, but video encoding doesn't really give us a real world usage metric.
Reviewers should give it the Minecraft test.
The assets disappearing in WoW is a common problem; I experience it sometimes too. Doubt it's related to the Mac.
The only time I've noticed it was on setups that were native Linux or Mac systems, or Windows systems with out-of-date drivers. The Linux users even managed to patch the glitching themselves with their witchcraft.
I've never had that issue on Windows, and I use a huge array of PCs: everything from Nvidia to AMD, and I even play on laptops and desktops with iGPUs. Never seen that. I could list all the different PC configurations, but we'd be here all day lol.
have you ever talked to a girl without giving your credit card number first
@@babaganoosh7020 caption of the week 🤣
I'd also like to see how it compares to a fully spec'd-out 2019 Mac Pro.
The 2019 isn't gonna do well 😂
Much, much better for most things... the Intel chips ran super hot and would throttle.
You can extrapolate that with the information already available to you
Literally became obsolete the year it came out. Can't even upgrade the chip.
@@leorawesome9518 Not true, a fully maxed-out $50k Mac Pro will beat it out. Watch Max Tech's video, he has already done this.
Side note for the ProRes encoding section: ProRes still has a significant encoding overhead and is not fully limited by the read/write speed of your media; it is still more limited by CPU/GPU performance. To measure pure/raw render-write performance, render to a TIFF sequence (uncompressed) and you'll see performance finally being limited by the media, not the CPU.
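A quick way to see that split in practice, as a rough sketch rather than anything from the video: time the same source going to ProRes versus an uncompressed TIFF sequence. This assumes ffmpeg is on the PATH, and "input.mov" is a placeholder for your own test clip.

```python
# Sketch: compare an encode that costs CPU/GPU time (ProRes) with one that is
# close to pure write throughput (uncompressed TIFF frames).
import os
import subprocess
import time

def timed_encode(extra_args, label):
    start = time.perf_counter()
    subprocess.run(["ffmpeg", "-y", "-i", "input.mov", *extra_args],
                   check=True, capture_output=True)
    print(f"{label}: {time.perf_counter() - start:.1f} s")

# ProRes 422 HQ still pays a real encoding cost on the CPU/GPU.
timed_encode(["-c:v", "prores_ks", "-profile:v", "3", "prores_out.mov"],
             "ProRes 422 HQ")

# Uncompressed TIFF frames are mostly limited by the target drive.
os.makedirs("frames", exist_ok=True)
timed_encode(["-c:v", "tiff", "frames/out_%06d.tif"], "TIFF sequence")
```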
I'm really missing any testing of DPC latency on these systems. It's such a hidden spec that it only comes into the picture for most people once they start making/playing music or doing anything else that relies on real-time processing of data. And these Macs aren't just targeted at video editors, but also at music producers. 19 ms of round-trip latency is just enough to notice with audio.
I know LTT isn't into making music and only does video stuff. But it's becoming more important as more people start using their computers for multimedia purposes, including streaming. Often wireless (because they can), and that's where the problem comes into play: high DPC latency caused by either GPU or WiFi drivers.
So please, start considering testing laptop and desktop systems on their DPC latencies!
I would also like to see some audio tests, even very simple ones, but for that you have to go to specialized channels; LTT generally only cares about gaming and a bit of video.
Isn't that dependent on what audio interface you use? If you have a $4k computer I assume you will have something like an RME, UAD Apollo, Apogee, etc., i.e. a decent interface with very optimized drivers.
No one cares
DPC latency is more a function of OS optimization and drivers than of the hardware. In the context of DAWs it's used more to troubleshoot issues than to assess performance. About your 19 ms remark: DPC latency numbers are related to audio buffer size, but they are not the same thing; any modern system today can go below 5 ms round-trip latency.
I'd certainly like to hear more about DPC latency with apple silicon. I imagine that the link between audio glitches in systems with the T2 security chip and USB2 audio devices would have shown itself in DPC latency results.
@@coolinmac Millions of people throughout the world make music. There are more people making music and owning a home studio than people editing videos. So... no one cares? Really?
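For the buffer-size point above, the back-of-envelope math looks like this (a sketch with assumed values, not a DPC measurement):

```python
# Buffer latency = buffer size / sample rate. Round trip is roughly the input
# plus output buffer, ignoring converter and driver overhead (a few ms more).
def buffer_latency_ms(samples, sample_rate=48_000):
    return samples / sample_rate * 1000

for size in (64, 128, 256, 512):
    one_way = buffer_latency_ms(size)
    print(f"{size:>4} samples: ~{one_way:.1f} ms one-way, "
          f"~{2 * one_way:.1f} ms round trip")
# 64 samples lands around 2.7 ms round trip; 512 samples is already ~21 ms,
# which is roughly where the "19 ms is noticeable" remark above sits.
```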
The artifacting mentioned at 4:16 also occurs exactly the same way on my Ryzen + RTX 3080 machine running Windows. Anecdotal, but it may not be exclusive to the M1
It's actually a bug on halondrus' fight due to the fight using a phased version of his arena that goes away once the encounter is done.
@@fynale2049 How come I've never seen that bug then?
@@josejuanandrade4439 The bug simply doesn't apply to everyone. It works fine on some computers but glitches out on others.
@@josejuanandrade4439 Seems to be a DX12 issue. My guild had to pause a glory run because it made some of the orbs needed for the Halondrus achievement disappear.
Essentially the main thing bogging down Apple Silicon is the lack of third-party support. Really hoping Apple starts incentivizing devs to expedite native M1 apps, because Rosetta isn't anywhere near as fast as Apple claims.
So all the claims that the M1 would spark a revolution were a bluff. Many companies are not developing for M1: no Autodesk, no CATIA, no NX.
Apple's plan to incentivize app devs isn't likely to change from what it's more or less been since the iPhone first released (possibly sooner): push their stuff onto as many consumers as possible and sign exclusivity deals, so app devs have to support it if they want to reach the largest possible user base. The reason iPhones are as common as they are is that Apple pushed them onto the masses by making it more affordable to get their fancy phone (by paying monthly) when other manufacturers still expected people to buy their phones outright (which I always do). Of course, now everyone does it this way, but that is, from my understanding, a large part of how they got such a huge market share (they also used other forms of psychological warfare, like making SMS an ugly green and encouraging that exclusivity (there's gotta be a better word) type of mindset).
I don't even think 3ds Max runs on a Mac, so game devs won't buy them even if they're good, unless they load Windows onto them.
The problem is that every time devs get used to a particular framework, Apple goes and changes everything on a whim; they have absolutely zero regard for backward compatibility, and it's nearly impossible to keep up. This is how they lost the scientific computing market: 10-15 years back you'd go to a conference and everyone would have MacBooks, but over time, with Ubuntu becoming more user friendly and WSL2 being genuinely very good, people just got fed up with accidentally letting the OS update and finding all their open source software broken.
Apple have been steadily painting themselves into a corner, saved only by their brand image and an almost militant approach to exclusivity, and that doesn't mix well with genuinely productive third-party relationships.
They only claimed it's faster than the intel Mac the Apple Silicon one replaces
I would love to see how engineering workloads like SPICE, MATLAB, NumPy, and CAD perform on the Mac.
I think so as well, all the ML stuff. Recently PyTorch added support for the GPU on M1, which is really nice because of all that unified memory; you can prototype deep learning quite nicely on the M1.
Probably not too well, unfortunately
@@dzjuben2794 Depends a lot on the workload. If you're doing tasks that are very bandwidth-heavy, the massive bandwidth that the M1 Max/Ultra have for the GPU alone is very impressive, and if you're using a build of NumPy/SciPy that uses the AMX units then you also have a LOT of extra compute, up to 8 TFLOPS of FP32 and 4 TFLOPS of FP64 (and I think 1 TFLOP of FP80), for matrix ops.
The massive memory bandwidth and rather large on-die cache can make a massive difference in data-science workflows where one is very commonly matching hashes, joining and filtering: things where the raw compute cycles are a tiny fraction of the work compared to pumping data through the pipe.
Right now matlab and simulink run very poorly, especially simulink. Matlab should be getting an apple silicon update in 2023a, but simulink probably won't get updated for quite some time. Solidworks runs fine on my m1 pro via parallels, but not noticeably different from the m1 macbook air. Fusion should be getting an AS update soon, so that should be interesting to see.
@@alexstubbings_ For sure, apps that have not been updated in this space, like MATLAB etc., are not good. But the Python data-science space finally has a fully native stack and is looking very good.
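For reference on the PyTorch support mentioned above, here is a minimal sketch of selecting the M1 GPU backend, assuming a PyTorch build recent enough to include MPS (1.12 or later); the tiny model is just an illustration.

```python
# Pick the MPS (Metal) device when available, otherwise fall back to CPU.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print("Using:", device)

# Dummy layer and step just to show tensors living in unified memory.
model = torch.nn.Linear(1024, 10).to(device)
x = torch.randn(64, 1024, device=device)
loss = model(x).sum()
loss.backward()
print("Gradient norm:", model.weight.grad.norm().item())
```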
So wonderfully thorough, way to go. Loved that you figured out the one benchmark was faster because of the drive speed difference. I wish there was more coverage of SSD speed differences between current Mac models, and the capacity differences too, since Apple only talks about the 8TB models and most reviewers only spec out the base model.
Not only that, but I think it should have been compared to compile times measured on a Linux distro rather than on Windows. The Windows kernel is not really good at I/O performance, even more so with a lot of small files.
As a real world test, editing content in FCP is insanely fast and efficient, my short form verticals export in mere seconds. It’s nutty for video creators
In the real world people are using FCP?
The only people that would need to spend $4k on a productivity machine.
Graphics people... maybe, but they could get by with less.
Everyday people who might use a Mac laptop of some sort, just pushing Excel sheets around, won't need to spend this much.
So basically, they've made a machine with potentially/theoretically the power to run games as well, but, by being dumb about how they support stuff, they've made a machine only video editors should buy.
How does Apple even exist.
@@mariankallinger7984 It's cheaper than an Adobe licence, so yeah, surprise surprise, people actually use FCP and Logic for professional work.
My work laptop is an HP ZBook with an RTX 3000, versus my personal M1 Pro laptop, which honestly is a bit more modern and more expensive. The main difference is that the Mac has none of the weird hitches the Windows laptop shows without any clear performance bottleneck; they just behave differently, even when I'm doing very low-impact stuff like editing lower-res files in Photoshop. The Mac is of course pushing more pixels with the internal display too, and I've never once had to turn the speakers up to cover fan noise.
I use mine for 3D modeling/rendering and it's amazing.
Developer here with the angry (not really) comment. My primary complaint/issue would be comparing the M1 Ultra against the 12900K instead of the 5950X. Which is actually kinda the complaint I'd have in general. Per LTT's own review, the 5950X was the stronger overall "productivity" CPU. Given the focus of the M1 Max & Ultra, it then seems odd to use the 12900K instead of the 5950X.
I think it is because Apple used the 12900K as a comparison for the M1 Ultra. But yeah, here it should be "fastest $4k PC vs M1 Ultra", so a 5950X would make sense.
It's not called AMD extreme makeover.
I want a full AMD build comparison. While Intel's CPU might be slightly faster, AMD's is much more efficient, and power usage would show its true potential. Also, I have yet to see benchmarks for AMD's HIP support in Blender.
Don't agree. With its lower idle power and being better at almost everything compared to the 5950X (performance, boards, PCIe), the 12900K is for sure the better value for developers. But in the end both are in the same league. Just as the Ultra is, except for the price league, where it is an unbelievable 300% more expensive, and 2500 versus 7000 euros hurts.
I imagine despite the audience of the M1 Max/Ultra equipped machines, the viewers of the channel are predominantly consumers
4:14 - That's an issue with the game, not the system. At certain camera positions large props like that tower can disappear. I can only assume why it happens, but I can tell you how to reliably reproduce it if you want. It happens on my system all the time (3600X, 2070S, B450 MB).
he didn’t say it was an issue with the system
@@dylangarcia9468 He said he only experienced it on the M1 Ultra variant.
Looks like wonky code for object culling to me. Well, what can you expect from such a relic of a codebase that this is likely still running. 😅
@@VectorGaming4080 he said it was the developers fault tho for not having planned it out
Thank you Anthony! Great review! I'm in a cold climate so I'll stick with a PC for the heat and repairability.
Lol the heat
They brushed Anthony's eyebrows, hit him with some blush and warmed his face. Some rosiness on his lips might be a little much, but I like his hair a lot better now. Overall 8/10, I'd smash.
Anthony is so freaking talented but I can’t stop worrying about his health when I see his videos. He’s on his way to an early grave and that would be a great loss.
@@KXKKX Yeah man, hope he is losing weight... at least bit by bit.
Really unfair to ask Firaxis to port their game over when there would be no one to play it. I'm sure Civ VII will have native support.
Hope Civ 7 will be out soon! Really looking forward to it.
Imagine something like GTA 6 getting ported to the Apple Ecosystem.
They've got some serious performance with that CPU-flash-GPU combo, and their semi-native software in iOS/macOS (Swift and the Metal API) is snazzy.
With someone like Rockstar, who spends years and does many optimisations, you could have XB1-level on an A11/iPhone Max, XB1X-level on an A12Z/iPad Pro, XsS-level on an M1/MacBook Air, XsX-level on an M1 Pro/MacBook Pro, and lastly gaming-PC-level with the M1 Max and M1 Ultra chips in the Mac Studio.
Just imagine taking all your next-gen gaming with you wherever you are, without needing to cloud stream it, and it scaling its graphical fidelity depending on the thermal profile of your device. That would be so cool.
Regarding the CPU bottleneck in Dolphin: that’s normal. In Dolphin, the only demanding tasks that can run well in parallel are the CPU, GPU and DSP. Breaking up any of these into smaller tasks just to run it on more cores is likely to just make it run slower, so they don’t.
As said before, Apple's silicon is a great breakthrough without a doubt. However, what makes it great is also its weakness: a computer that I use for work with a single point of failure for RAM, CPU and GPU is a no-go for my needs (other people's mileage may vary). Same for the Mac-only SSD.
Irrelevant argument when you remember laptops exist
@@skyhigheagleer6 Soo do you want it to be the status quo now?
But doesn't the fact that it's an SoC mostly prevent such elements (which are no longer separate components) to fail? Like, when did your phone or tablet's CPU, GPU and RAM last fail you?
@@excarnator You are comparing parts that do not deal with massive heat. ULP parts can't compare to desktop components. Nice terrible analogy.
Not only that. The silicon itself is kinda too specific; for anything outside those targeted workloads, even with minimal changes, it will depend on its CPU cores, which are kinda weak for general purposes.
Would love to see M1 benchmarks for industry-level post-production software such as Avid Media Composer and Pro Tools. Don't think a single channel on YouTube has done this yet.
They haven't, and none will, unfortunately. It's realistically because no one who does YouTube will ever touch Avid Media Composer (I can't speak to Pro Tools though). Media Composer is pretty strictly only ever used at the industry level, like you said. No one on YouTube is operating at unscripted TV/scripted TV/feature film level, and any who would are young enough that they came up on FCPX, Premiere, and Resolve. I've used Media Composer on my M1 Pro MacBook Pro though, and as long as you have enough RAM (it still runs through Rosetta 2; Avid hasn't coded it natively for Apple Silicon yet) it runs almost perfectly fine. You very easily forget it's running through emulation. Avid did a big rewrite of Media Composer almost from the ground up with Media Composer 2020, so it's more streamlined and able to run through emulation. Avid themselves say that if your system only has 16 GB of RAM you should turn off certain features like the background phonetic indexer, since running through Rosetta 2 has a RAM utilization cost, but otherwise it runs great, especially if you have 32 GB+ of RAM.
Usually, on the industry-level software, you go through the corporate-level or enterprise customer service to ask for those numbers and benchmark on hardware.
I'm glad you mentioned the suitability for hot climates because this is often glossed over in the reviews. Living in Brazil, an Intel notebook in a small room with second display and intensive usage means that I need AC on almost all the time if I want to be comfortable in the room, while using a M1 machine is as if it was nothing. This was one of the main reasons why I switched from a Dell XPS 13 to a Macbook Air M1
Same. Here in Phoenix, AZ where it has already hit 114F(44.5C) this summer, my gaming PC will heat up an entire room noticeably hotter than the rest of the house and besides the PCs own power consumption, the AC runs harder further raising the power bill.
Yet, with my M1 Max I can literally work on it outside*. My heavier workloads consist of running Altium(EDA software for PCB design) or PathWave ADS/EMpro for EM field solving, in Windows 11(Parallels VM).
*Why work outside? When my kids are swimming/playing in the yard.
The hard part is buying from Apple with the dollar at 5 reais and Apple selling for almost double the direct-conversion price.
Such a useful review/comparison. I came to the same conclusion that you did, a really loaded 14” M1 Max MacBook Pro makes the most sense between all the “pro” M1 machines, and the super low power draw pays off the most in that application, too. An astonishing display makes it even better.
I think it depends if you need the mobility of a laptop, if you only ever use it at a desk then I personally woudn't buy a laptop for that, but other people's mileage may vary here.
Yeah I love my 14” M1 Max MacBook Pro. Best combination of power and portability.
LTT, I have said it before, I love me some Anthony performance breakdown vids. It is fun, calming (linus) and informative. Can't wait for the new vids from the new department!
Man…
I love having Anthony as a host. He‘s just so calming to listen to.
Not gonna lie Anthony, you just educated me on a bunch of different things. The difference between ssd speed and CPU speed, thermal modules, names of different platforms to test your own PC, Copper cooling, heat output, and power supply testing.
Thanks again.
Once again, your calm voice has made me understand bench marks I wouldn't be able to understand by myself.
Thanks.
I was just going to say the same thing.
Agreed 💯
Can we call him Tony?
@@chrispatterson9689 no. Absolutely not.
Anytime I see Anthony in the Thumbnail, I always click. Never disappoints.
Something few people have mentioned: the M1 Ultra has 114 billion transistors, while the RTX 3090 has 28.3 billion and a Ryzen 5950X has 19.2 billion (the transistor count of the 12900K is not public). What Apple's doing is kinda trading transistor count for power efficiency, especially in targeted workloads such as encoding.
Its an absolutely HUGH MONGUS chip. When you get a reduced instruction set, some fixed function hw and are 100% in control of both HW and OS, you can do these kinda things on low power budgets.
Apple prefers to increase the TDP of their chips for higher TDP-supporting products (products with bigger thermals envelopes) by going wider with silicon, instead of increasing frequency on the same die. (M1 Ultra's total die surface area is ~920mm2). That is what affords them that efficiency. For the past 10 years they've always chosen to keep frequency low and silicon size big. The M1 Ultra's performance cores boost to a mere 3.23GHz, and yet their performance is roughly in line with Zen 3 cores clocked at ~5GHz. The GPU cores boost to only 1.3GHz. Their IPC on all fronts is off the charts. Granted, as Anthony says, the prices match what they're offering.
And yet, seems like they're not stopping with the Ultra as a chip that has exactly the hardware of 4 M2 Max dies (codenamed Rhodes-4C) is rumored for the Apple silicon Mac Pro, and if we assume linear transistor scaling from M2->M2 Pro->Max, that would have *270 billion* transistors. At this point it seems like Apple will be the first chip maker to hit the 1 trillion transistor mark.
@@utubekullanicisi IDK which approach is better: more transistors (cost) and lower wattage, or fewer transistors and more wattage. Some of my friends in electrical and computer engineering think the Apple M1 is "wasting sand" LOL
@@marisakirisame8543 With Apple's approach you will get more efficiency but higher price, with most others' approach you will get lower efficiency but lower price. Apple's motto has always been "don't care about the BoM (bill of materials), but build the best thing that the current technology allows", so at least they're doing something that match their motto. I'd say neither approach is "the best", but depending on what you care about the most, one of them can be better for you.
In any case, I would try to be open minded about both approaches.
@@SpartanArmy117How is their approach "build the most average thing in the market and charge the most you can" when the newest MacBook Pros have class leading miniLED displays that have more dimming zones, brightness, contrast ratio, etc. than any other laptop display, class leading speakers that have more dynamic range than any other laptop speaker, one of the best if not the best webcam hardware with the best image processing thanks to the M-series chips deriving the class leading ISP from the A-series chips for the iPhone, and a class leading trackpad? I could go on and on about the hardware advantages that Apple has in various product platforms that don't necessarily show up in spec sheets. They might have higher profit margins on their products than anybody else (though Apple's reported profit margins that are supposedly around 30% seemingly suggest otherwise), but I definitely disagree that their approach is "build the most average thing and charge as much as you can", it's more like "build the most over the top thing you can with unnecessarily expensive components and charge as much as you can for it".
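The arithmetic behind the rumored figure above is just Apple's published per-die counts scaled up; treat the numbers as approximate and the 4x-Max configuration as a rumor, not a product.

```python
# Rough check of the "~270 billion transistors" guess, using Apple's stated
# counts (approximate values, and the 4x M2 Max part is only rumored).
M1_ULTRA = 114e9   # two M1 Max dies fused together
M2_MAX   = 67e9    # Apple's stated figure for one M2 Max die

rumored = 4 * M2_MAX
print(f"4 x M2 Max ≈ {rumored / 1e9:.0f} billion transistors")   # ≈ 268B
print(f"vs. M1 Ultra: {M1_ULTRA / 1e9:.0f} billion")             # 114B
```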
Would be great to see TensorFlow benchmarks: CPU mode, GPU mode and CUDA. A lot of us deep learning engineers like to run small runs on our Macs before we push them onto the cloud.
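A minimal sketch of the kind of quick check being asked for here, assuming TensorFlow is installed (on Apple silicon that typically means the tensorflow-metal plugin for GPU support; on a PC, a CUDA build):

```python
# List the visible devices and time a large matmul on CPU vs. GPU.
import time
import tensorflow as tf

print("Devices:", tf.config.list_physical_devices())

def time_matmul(device, n=4096, reps=10):
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        tf.matmul(a, b)                      # warm-up
        start = time.perf_counter()
        for _ in range(reps):
            c = tf.matmul(a, b)
        _ = c.numpy()                        # force execution to complete
        return (time.perf_counter() - start) / reps

print("CPU:", time_matmul("/CPU:0"), "s per matmul")
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"), "s per matmul")
```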
Apple: "We don't cherry pick out data we just ignore everything that we don't put our badge on"
In fairness they never said they don’t cherry pick
@@ultraL2 in fact they literally say they do. They call it “selected industry metrics”
@@kalmenbarkin5708 re-read what I wrote
@@ultraL2 I read it right the first time. I’m saying not only do they not say they don’t they literally say they do
"But... but... look at all of our synthetic benchmarks! You can fit so many synthetic benchmarks into this bad boy." *slaps cube*
great work… you’re the GOATS of all things tech. Thanks for all the ridiculously hard work you put in.
These people run three tech channels, posting quality content constantly on all of them, they are amazing.
This video was pretty good but many other LTT reviews need work to be technically on par with some other channels. I hope the lab fixes that.
@@Dionyzos They are an entertainment tech channel rather than a technical tech channel like GN.
Anthony has such a pleasant voice, I really enjoy watching his videos.
For video encode you probably want to test Intel's QuickSync too, because it should handle weird stuff like 2:3 a lot better than the notoriously constrained NVENC.
@@Prophes0r Nope, a lot of businesses and people use them for VoDs, YouTube for example; it's all hardware there. Yes, CPU encoding is better (by far less than the 95% you quoted, more like 20-30%; also, you are comparing H265 and H264 at the same QP values, which are different for every encoder, which is bullsh*t) and should be preferred, but that only holds true as long as you have the time and the resources (more like a Netflix use case, where you don't have that much content but everything gets watched a lot). If you are in a hurry or have terabytes of content, hardware is basically all you've got. Hell, Intel just announced a GPU aimed at CDNs, Arctic Sound M, and Google builds similar hardware themselves for YouTube. Similarly, many content creators, especially on the go, use QuickSync so they don't wait for ages. (Remember, especially with QuickSync there are parameters and presets you can tweak to make it look better than the default settings.)
I don't think NVENC was really ever meant to be more than a decent way to stream content consumed on the same PC at a relatively low cost. For offline rendering it doesn't really seem worth it to sacrifice the image quality (unless that's somehow different, I never really used NVENC for that purpose).
@@MLWJ1993 Well that is only a fair point as long as you don't have any pressure on time (certain delivery dates for example) in that case you just crank up the bitrate by a few percent and you're fine (for offline usage the tradeoff isn't quality usually but just size).
@@cromefire_ It's not like CPU encoding takes enormous amounts of time either (unless you really go ham on both the quality preset & bitrate).
Maybe relevant if you own a dated CPU though.
@@MLWJ1993 Well, if you stick to x264 fast or similar presets (when you need it to be fast), hardware might actually give better quality, and if you would otherwise go for slower x264 presets, you can also use hardware H.265 instead. Of course you could use x265 then, but x265 is way slower (and everything is amplified on mobile). The same logic applies to VP9 and (soon, when Arc is out one day) AV1, which gets you even better quality at even lower speed; software AV1 is < 1 fps with 16 cores and a bunch of RAM.
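For the encoder trade-offs being debated here, a rough sketch of timing the same clip through software x264, NVENC and QuickSync, assuming an ffmpeg build that includes those encoders and using "input.mov" as a placeholder clip; judging quality would still need a metric like VMAF on top of this.

```python
# Time the same source through three H.264 encoders; encoders that are not
# available in the local ffmpeg build simply report a failure.
import subprocess
import time

encoders = {
    "x264":      ["-c:v", "libx264", "-preset", "medium", "-crf", "20"],
    "nvenc":     ["-c:v", "h264_nvenc", "-preset", "p5", "-cq", "20"],
    "quicksync": ["-c:v", "h264_qsv", "-global_quality", "20"],
}

for name, args in encoders.items():
    start = time.perf_counter()
    result = subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mov", *args, f"out_{name}.mp4"],
        capture_output=True)
    status = "ok" if result.returncode == 0 else "encoder not available"
    print(f"{name}: {time.perf_counter() - start:.1f} s ({status})")
```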
Them brows on fleek Anthony! ♥️
Aye was thinking the same thing that's why I'm here 😂😂😂
Is that what changed? :o
Everything is on fleek when Anthony is on camera.
@@reptilez13 trueee
true hhh
Always awesome, Anthony.
I'm glad to see you tested Macs in environments with software that they are used for, or at least have been known to be used for. Intensive audio processing in multitrack studio applications, or just high-resolution Wacom Cintiq digital painting, are where my semi-pro interests lie.
Thanks for taking on this difficult task! I was super nervous when the ultra came out that my fully built 5950X and 3080 Ti was obsolete. This makes me feel much better about my render machine's solid performance... Just wish adobe would optimize for Zen 3.
Just use Davinci and never go back lol
The reasons are entirely self evident and obvious. He literally said "render machine". Plus it's a joke
Just because a newer model comes out doesn't mean your current model suddenly becomes bad.
@@joshuareveles DaVinci doesn't have good support for Ryzen.
@@starrims says who? I’ve seen countless high end editing rigs run flawlessly with Davinci
Would have loved to see an R9 system for comparisons, including power draw. Mostly to satisfy my love of graphs and data but still, other reasons too.
Great deep dive. Thank you Anthony!
Would like to see the comparisons to AMD CPUs for productivity.
I second this. APUs in my case though.
Why??
@@directlinkrexx4409 because AMD fans always wants to see AMD being superior than everything else.
Sure. For some of these loads and at this high price point, I wonder how Threadripper/Threadripper pro compares.
Yeah, i would love to compare this to threadripper pro+quadro cards which are more power efficient (a lot) would he totally different results
We love Anthony!
Always more Anthony!
“We only have one Anthony” -Linus
“Anthony tech tips!!!”
ATT - always there
For that price, would it make sense to compare it to a Threadripper 5000 series? Would be interesting to see how ARM compares to both "x86" vendors.
Also x265 is quite performant on Radeon cards so also interesting to compare both GPU vendors
Yeah, configuring something similar with a Threadripper clearly is the fairer test. BUT Apple claims to be faster than the 12900K, so they've gotta test it against that.
Seeing how close the 16" M1 Max MBP is to these, I'm very happy with my purchase of the 16" M1 Max MBP :)
These things are incredibly powerful for their power consumption levels though. Cannot get over the fact that it's basically an LED bulb at 'idle'.
Same here! I contemplated the Ultra for future-proofing but I'm so glad I went with the M1 Max for the Studio. I would 100% go with the 16" M1 Max MBP if I was doing more mobile work but glad to hear you're happy with your purchase!
i mean at these prices i dont think power consumption is a worry😂
Yeah for them prices it better be impressive. The M1 software incompatibilities really put me off though.
The M1, an LED bulb?... lol, now tell me the M1 will save humankind.
Have you built a Steve Jobs shrine in your house?
Anthony! You can't directly compare compilation for x86/AMD64 and ARM, because the compiler does different work for each. x86 has a much more complicated instruction set, the compiler has to do much more, and many more optimizations exist for x86. ARM machine code is also a different size than x86 code.
To compare fairly, you would have to build x86 Chrome by cross-compiling on the M1 chip.
Just speculation. Can you compile both to run a real life test?
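As a toy illustration of the "compile for both architectures on the same machine" idea raised above (Chromium itself is built with gn/ninja, so this is only a sketch using clang's -target flag on a placeholder hello.c):

```python
# Compile the same source for arm64 and x86_64 on one Mac and time each run.
# "hello.c" is a stand-in for whatever code you actually want to measure.
import subprocess
import time

targets = {
    "arm64":  "arm64-apple-macos12",
    "x86_64": "x86_64-apple-macos12",
}

for name, triple in targets.items():
    start = time.perf_counter()
    subprocess.run(
        ["clang", "-O2", "-target", triple, "hello.c", "-o", f"hello_{name}"],
        check=True)
    print(f"{name}: compiled in {time.perf_counter() - start:.2f} s")
```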
Anthony's eyebrows are looking FIERCE in this video. Idk if there's a new makeup team but A+!
I'd personally put the M1 against the 5950X for the compile test; that thing is consumer and a beast for compiling in my experience. Then again, if you're talking about prosumer or professional workstations, why not test it against a Threadripper?
@@nobodylmportant It's just that they claim it's the fastest computer for $4,000, so you have to test it against the very best that you can get for $4k.
@@TheMotlias a half decent threadripper is gonna cost you $2000+
Yeah... and the M1 Ultra they used in these tests is already $5800.
Because it would make M1 look like trash.
It's great to see more Chip makers. The more the better.
I missed Anthony, glad to see him back! One of my favourite hosts!
I'd be curious to see an underclocked PC vs. Mac on power draw. If Intel have shown us anything, it's that half of your power is going into the last few percentage points of performance.
This, all day, every day. The 12900K is definitely a quick chip, but its power budget and money budget are way in excess of what most would need. A lower-priced, lower-power chip could yield some deeply interesting results. Halve the price of the PC, run it stock, and see where you stand: probably 85% of the performance for half the cost. It is all well and good, and makes good videos, when you test the crazy expensive against the crazy expensive, but most people cannot come close to affording that. The priority of entertainment over information needs to change. I'd be very happy to see more realistic stuff with a conclusion saying: this is twice the price of that, but you'd be mental to buy the more expensive one, as you can get most of the performance and buy a second one for giggles for the same price.
True. A 12700K would be more than enough: still 8 P-cores, just half the E-cores (4), and I'm not sure how useful those 8 E-cores are anyway. Most of the power budget goes to turbo boosting from ~4.3 to 5 and 5.2 GHz. With careful settings in the BIOS, you can have most if not all of the performance at way lower power consumption. But, again, they compare CPUs as they come from the factory, so, meh. The M1 would still win in power draw, but the difference wouldn't be so dramatic. For the GPU, though, there is not much they can do...
Well just go look at pc laptops then.
The hardware itself seems really nice, we're just still waiting on software support/optimization for a lot of things. Hopefully developers of all software pick up the pace on that.
Devs won't do shit lol.
@@marcogenovesi8570 Unreal Engine and Unity already fully support Apple silicon. But the market for Mac gamers is so tiny most devs still don't bother, especially if they use an in-house engine. From what I can tell there are literally two games that are both macOS native and Apple silicon native.
@@marcogenovesi8570 If all you're focused on is gaming, then sure. But I pretty much any other software, support is either already there, coming soon, or being worked on.
@Zack Smith You sound super defensive. We weren't making the comparison, but go off though.
Apple has a media engine which does well in video editing. Since most YouTubers are video editors, the hype on YouTube is no surprise. But video editing is just a fraction of the productivity tasks in the actual computing world. In general graphical computing tasks they will never match the ray tracing cores embedded in modern graphics cards.
Mr. Sebastian, can we get a vid where some of the engineers at LMG are tasked with taking a console/NUC and improving its dimensions, cooling and performance? I just saw a vid of DIY Perks' slim PlayStation 5, and I think you guys could do the same thing! Great vid btw, props to Anthony for being great as always.
I truly cannot overstate how utterly incredible that power draw is. Especially in a world with fast increasing electricity costs. I’m a video editor who uses Final Cut Pro and the Adobe suite, so i’m SO pumped for Apple Silicon.
What did the make-up department do to our beloved Anthony?
I was wondering the same thing
he look gay
I was wondering if it was makeup or different lighting maybe?
Idk but he looks great
His eyebrows are different
Anthony’s skin is glowing. He looks great :)
He looks like a transgender...not great weirdo
I bought the M1-ultra Mac Studio today because I needed to upgrade my 14 year old Mac trashcan. I only run a few things, but often use all 64G of memory, and all 8 cores on the trashcan (really, all 16 hyperthreads), so I'm hoping this new machine is faster for those workloads. I'm expecting to have to (after more than 15 years!) recompile a bunch of utilities I wrote myself, and reinstall lots and lots and lots of software, so it may be a while before I actually try to move over to the new box.
The 'trash can' Mac Pro is from 2013. That is 9 years ago, not 14. I type this on 14 year old Mac Pro. That is the 2008 model aka the Mac Pro 3,1. Is that what you meant?
Not looking great for M1 Ultra when next gen GPUs are soon to be released 👀
Nvidia has a 4-year cycle, so every 4 years they drop a new generation of cards. The same will happen again; we are now at year 2, so 2 years left until release.
@@BrucyJuicy They are releasing the new cards, the 4000 series, this fall. Plus Nvidia releases every 2 years, so your statement was totally wrong.
They don't really compete with each other. Plus of course apple is already working on their next gen products
Anthony is just awesome. Intelligent, deeply knowledgeable and his reviews are very fair/unbiased. I honestly love his content and personality - great guy overall and a good engineer too!
I’d personally love it if you could add music production as a benchmark, whether that’s in ableton or Logic Pro.
Second this. I’d love to see how many plugins/tracks in Logic the M1 Ultra can handle. I am sure it’s absolutely insane.
Same, music production is the least talked-about tech-related field and it just gets ignored constantly, lmfao. Speaking of Ableton though, given the way it assigns threads per track, it'd probably do really well, but I don't see it out-performing heavily hyperthreaded processors like the 5900X/5950X or the 12700K/12900K, as the extra threads really count when you're loading up as many tracks as possible. I'd love to see that comparison.
Ableton is so light I'm not sure how you'd benchmark it without dozens and dozens of VSTs. My '08 Mac Pro is my music workstation, and it has never stuttered or gone above 20% CPU, even on full-album projects with 50+ tracks. Even my 2gb MBP runs Live extremely well.
@@YearsOfLeadPoisoning yeah I think the best way to test cpu performance would be large amounts of high number, full polyphony synths like phase plant, serum, or Omnisphere. I've got a ryzen 5 3600 based system and I can easily get 64+ tracks with moderate polyphony but it'll cap out if I take Pigments and increase the granular voices to 256. Lmao. It's not a perfect methodology but the bones are there for a solid test of raw cpu grunt
M1 max handles digital performer and Omnisphere like a champ
I'd love to see Linux datapoints for the Intel platform, especially for the dev related tests
That would be nice, and a cleaner comparison. Would also like to see a complex program with both memory and cpu bottlenecks at different parts of the simulation (e.g. weather model like WRF or ocean model like GETM). That could pinpoint some limitations and/or strengths of the platform.
Costly experiment, lol.
Yes, especially since windows is much slower when closing active processes.
Yeah, using Intel’s Clear Linux distro.
I came to the same conclusion about the Mac Studio with Ultra.
It was overkill for anything but the most intensive 3D modelling/rendering workloads. This is why I went with the Mac Studio with M1 Max: saved a bunch of money, added the OWC Gemini, which gave me 24TB of RAID 0 and more Thunderbolt/USB and DisplayPort expansion. Didn't buy this for games but for content creation, YouTube and Zoom.
The software does have to catch up though. I find many cores not being utilized, so we sadly have unused resources when I'm running Resolve, Photoshop and Lightroom all at once (with email, browser, etc. running as well). But I love this little thing. Simple, elegant, and does the job. Forget gaming, it's a Mac, not a gaming rig. For that, give me a good PC with NVIDIA graphics and I'm a happy camper.
Good review and I agree on many points here!
Anthony, the walking knowledge base... just incredible... I love his reviews, I never miss an Anthony review.
Wait, couldn't you also make compile benchmarks compiling for the same architecture? I mean otherwise this lead could be architecture specific using different optimization flags or even ignoring parts of the code because of architecture specific macros.
I have no idea. I don’t use Apple products, play no video games, and I am very unfamiliar with this kind of technology.
I’m curious what their compiler flags were
They're running Intel native code on the Mac and the Intel machine is still getting thrashed at 10% of the power draw.
In 3 more M-chip iterations, it'll be worth it.
@@jimemmonstein847 In some applications, sure.
I'm both a Mac and Windows/PC user. I use my Macs (Mac mini and MacBook Pro) for productivity/work since I'm a software engineer and I like having a proper terminal and shell environment. There are many aspects of macOS I prefer over Windows; Finder isn't one of them. Also, virtualization support on PC is still much better. Yes, I use QEMU/UTM on M1, but Vagrant support is basically broken at this point; Parallels works, but good luck finding an arm64 Vagrant box. Docker was (and still is) a bit of a mess on M1. It basically didn't work at all for about a year, and now that it does, you are still stuck with arm64 images. Things are getting better, and the iOS + macOS integrations are downright magical. If you need a system for Xcode development, an M1 Mac is awesome. It is also my preferred video editing system since I'm using DaVinci Resolve. I've always kept a PC build for virtualization and gaming.
Hopefully the M1 MacBook Pros will be better than the 16" Intel i9 ones regarding thermals. Software engineer here too, and after 1 year, all of my colleagues experience burnt fingers and total freezes with reboots when building the bigger project (~5 minutes).
This can happen several times a day.
It is horrible. Sometimes, Docker will start Oracle no more as it would get corrupt files from this :/
It's client-provided, so no thermal paste change for us :)))
Damn, I just read about Docker... too bad :/
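On the arm64-image point a few comments up, a small sketch of the usual workaround: Docker on Apple silicon can run amd64 images under emulation via --platform, at a speed penalty. It assumes Docker Desktop is running, and the alpine image is just an example.

```python
# Run the same image natively (arm64) and under amd64 emulation, and print
# the architecture each container reports.
import subprocess

def run(platform):
    return subprocess.run(
        ["docker", "run", "--rm", "--platform", platform,
         "alpine", "uname", "-m"],
        capture_output=True, text=True).stdout.strip()

print("native  :", run("linux/arm64"))   # expected: aarch64
print("emulated:", run("linux/amd64"))   # expected: x86_64 (via emulation)
```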
Frankly, I'm only interested in Apple Silicon for being a powerful ARM machine available at a relatively affordable price (as compared to something like an Ampere Altra machine). So, as it's more of a curiosity than a solution for me, I'll very likely wait till something like the Mac Studio is available on the second-hand market for a more reasonable price. By then, Asahi Linux will likely have matured nicely as well.
Ubuntu on Nvidia Orin (12 ARM cores + 2048 CUDA cores) is more powerful than M1 Ultra (20 ARM cores + 1024 GPU cores) with smaller form factor.
Also, Orin has the same 60 W TDP as M1 Ultra
@@Haskellerz I hadn't heard about this. I would be inclined to wait for benchmarks (I can't find any general computing benchmarks ATM) to see how they stack up, but I'll definitely also keep an eye on the Orin as a potential option. Shame my options are Nvidia and Apple, two historically... shall we say, consumer-unfriendly companies.
Edit: And again, I'm more interested in these as a curiosity, so I likely wouldn't spend over $1k in either case.
@@daedalusspacegames
There are benchmarks of Nvidia Orin on Medium
Orin AGX results (model: FPS):
inception_v4: 1008.7
vgg19_N2: 948.2
super_resolution_bsd500: 711.7
unet-segmentation: 482.8
pose_estimation: 1354.2
yolov3-tiny-416: 2443.6
ResNet50_224x224: 3507.4
It seems machine learning tasks on the GPU are faster than on the M1 Ultra.
But the M1 Ultra's CPU might be faster than Nvidia Orin's.
@@Haskellerz and Orin devkit I/O and storage is severely lacking compared to Studio at the $2K price point.
Good for Apple users. Impressive, but I'd still not pay that much for an Apple product with limited-to-no upgradeability/repairability.
Yeah that’s the biggest downfall.
The power usage when the M1 Max is in the 16" MacBook is the key: no loss of performance when not plugged in, at least twice the battery life of any PC laptop (in some cases three times), and, as a result of the low power, no fan noise and it stays cool on your lap, even when editing and rendering 4K video. It's movie-making heaven.
Movie making heaven at half speed
I wish there was an easy way to differentiate the "Higher is better" vs "Lower is better" graphs. By the time I look down to check which one I'm looking at, we're already onto the next bullet point.
Maybe an arrowhead overlay on the graph's bars pointing in the direction of better...?
Hm, I wonder if you could make them left/right-bound respectively. Very obviously different, and you could read them the same way at a quick glance: no matter if higher or lower is better, the "better" bar is the one with its endpoint further to the left (or right, depending on which one you make left/right-bound).
So the takeaway is that a $5800 Mac is rarely/sometimes better than a $4000 PC
Yeah... But when you factor in the fact that the air cooler for the desktop is about 50% the size of the entire mac mini, it puts things into perspective.
@@christosbinos8467 Also the power draw will be significantly lower.
No it doesn't. If you want a fast computer, get a fast computer. If you want a small computer get a small computer. And power draw is mostly irrelevant here for a desktop computer. You don't need to worry about battery life and if you're worried about the cost of electricity, don't buy a freaking 4k computer
@@reimusklinsman5876 Cost of electricity does become a factor, when you live in Europe and pay 0.5 €/kWh.
@@StuermischeTage True... But a 5800X or a 5700X is fine for most 2D/3D tasks I know of (in mid-range freelancing, or other pro work at a studio). I.e., a 5700X (same as the 5600 in single core) is already as fast in Photoshop (which is mostly single core) as an Intel i9 10900K, and it uses very little energy (quite a bit less than Intel Alder Lake), around 65 watts. Of course, it's not the prodigy of low consumption that the Mac Studio is, but one can get a decent PC work machine for around 1200-1400 euros. How many years would one need to recover the cost difference with a 5400-euro Mac, even when saving a lot on electricity?
Even more: getting a laptop reduces energy consumption by a lot. If your tasks are not super heavy, a (Windows) Intel 12700 laptop, or one of the latest Ryzen 6800H laptops, gets you low consumption yet a very usable desktop replacement (if connected to an external monitor). Of course, it's not ideal for heavy tasks that need many cores for a long time, like 3D rendering (a laptop has worse cooling and throttles; even a MacBook Pro has been reported to throttle), but it's fine for pretty much anything else. It depends on each person's type of workload, mostly on your professional profile, indeed. I know digital 2D artists (I'm one) can do very well with a good laptop + a good external monitor.
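A back-of-envelope version of that payback question, with every input an assumption to be swapped for your own numbers:

```python
# Payback estimate: how long would the power savings take to cover the price
# gap? All figures below are assumptions, not measurements.
price_kwh     = 0.50   # EUR/kWh (the European price quoted above)
power_delta_w = 250    # assumed extra draw of the PC vs. the Mac under load
hours_per_day = 8
days_per_year = 250    # working days

kwh_per_year    = power_delta_w / 1000 * hours_per_day * days_per_year
saving_per_year = kwh_per_year * price_kwh          # ~250 EUR/year
price_gap       = 5400 - 1400                        # Mac vs. ~1400 EUR PC

print(f"~{saving_per_year:.0f} EUR/year saved on power")
print(f"~{price_gap / saving_per_year:.0f} years to break even")   # ~16 years
```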
I just bought the Final Fantasy vii remake and my old ancient homebuilt PC runs it at max settings without a single hiccup. There are others with newer RTX 2000 series cards complaining about stutter and lost frames. I'm still using a gtx 1080Ti, i7-8700k, AsRock Z 370 Extreme4, 34" Alienware monitor using the builtin overclock settings. The 1080Ti does come with a Aorus software app to change from silent to overclock gaming modes. No stutter, no glitches, everything looks great.
Thanks for a great review with attention to detail and depth of knowledge. A fair and square battle between the titans, PC & Mac. Really enjoyed it. I expected to see faster read/write speeds for the SSDs on the 12th-gen Intel PC, approx. 20% faster. Perhaps the SSD class was a bit low for this config.
You are one of the best electronics reviewers I have watched. Well done!
Apple's M1 Ultra chip in a rack mount version would be a game-changer. As a tech enthusiast, I've been eagerly waiting for a powerful and compact solution that can seamlessly integrate with my Unify network equipment, and this seems to be the perfect fit. The M1 Ultra's performance and efficiency are off the charts, and having it in a rack mount form factor allows for easy installation and management alongside my existing infrastructure. Kudos to Apple for continuing to push boundaries and deliver innovative products that cater to the needs of professionals. Can't wait to get my hands on one and take my setup to the next level...
Curious to see what that chrome compile would be like on a high end Ryzen with more cores than the i9.
Oh yea. That's one benefit of PC they didn't show, other brands of hardware. An R9 5950X would be interesting to see in place of the I9
Code compilation likes more memory bandwidth (source is Hardware Unboxed) than core count. Intel currently wins against AMD and loses against Apple. If Zen 4 supports quad channel memory for all platforms, it will beat intel (if they keep dual channel for the CORE series).
ryzen more core™ meme is dead.
I don't know why people claim Ryzen has more cores than an I9 when all they really did was merge a high end ryzen with their epyc chips. Threadripper is not a gaming cpu, and has WAY more capabilities than ANY workstation user would need... it literally outperforms their own Epyc server chips in pure performance, and only lacks in some minimal features such as cache and allowable ram (though 2tb of ram seems plenty for any workstation user)
Beyond that, there are rumors that next gen threadrippers will support dual cpu setups, meaning the workstation user, 99% of them NOT simulating the physics of our solar system, will have the abilities of a small supercomputer.
Tldr, stop comparing threadripper, a server CPU they purposely put in the ryzen family for shits and gigs, to an I9, a properly named CPU for the categories of work it's able to do.
@@ryanthompson3737 cope harder intel
That wow clip of the flickering building happens on pc too, think it’s more of a wow problem than a pc/mac thing
As a developer that builds embedded code with GCC and the Rust compiler and also runs SfM software that requires a large amount of memory, I can say that the more CPU cores, GPU cores, memory bandwidth, and memory they throw into these things, the happier I get. I am using the MacBook Pro M1 Max, but I would definitely be able to make use of the Ultra. That being said, I can't imagine buying yet another expensive M1 chip since I already have one and it works great. I am excited for more software to get ported natively to Apple Silicon, especially games.
As a developer who has both a Mac mini and a Windows laptop, I can say that I like the quietness of the Mac, but there are some features on it that require installing through its kernel or through the Visual Studio terminal, which is fine. Still, I prefer taking my code wherever I go rather than leaving it at home and coming back to continue it; who knows, maybe I have an idea I want to try. So I just seem to use my laptop more, and there's not much difference between the platforms in compile speed (as a front-end and Java developer).
You had me thinking Apple had silently released an M1 mini ultra with that thumbnail
My favourite thing about your videos is the honest sponsorship plugs.
I can't imagine ever needing one of these, because my Mac Mini does everything I need it to perfectly fine, but damnit I want one.
Big issue is with pc you can upgrade parts more easily, better lifetime value.
On the other hand it keeps value better. So upgrading while selling the old machine is not as painful.
But yeah, this makes the time between upgrades longer for the Apple stuff. That's a valid criticism.
@@JZWPikaNozna Yeah, but selling a used PC on eBay carries risk; even if you write "no refund", sometimes people play games and a refund gets granted anyway, even with the item fully working.
@@Ultrajamz Yup, there's that. Still, this is people being terrible. It's not Apple's fault.
All in all, Macs in general make up for it a little in the financial department, through what I mentioned above.
To Apple's credit, they've made a great attempt over the years to ensure that older devices are kept supported for as long as their hardware will allow for it. This means that, compared to a PC which might need to upgrade its specs after a few years following numerous Windows updates, Macs can continue to run updated software even if they are a few years old.
By the way! the WoW visual artifacting on that fight occurs on windows too, it's a pretty major bug introduced in a recent patch somehow, and kills you a decent amount in that raid fight!
I've seen WoW do the flicker thing on windows occasionally as well.
Apple silicon has big potential, but as the owner of a m1, I'm thinking M3/M4 and better software support before I pull the trigger on apple silicon again.
I’m glad I read this. Been playing wow since Vanilla. Been wanting a MacBook but so far it’s been like getting into an ice bath lol.
I’m waiting to see what M3 has to offer.
@@Crowski I too have been playing WoW since vanilla lol. Anyways, WoW runs awesome on M1 because Blizzard decided to support macOS/ARM fully; it's buttery smooth on max settings. Unfortunately, the vast majority of games out there don't, or don't work at all, and you have to get around it with Parallels or something similar. Hence the second part of my post, "better software support". I'm only really considering M3/M4 because M2 is really just an improvement of M1, sorta like what Intel was doing with its processors (tick/tock). Still, enormous potential if Apple would just pull its head out of its ass with the platform and open it up a bit.
I love the LTT reviews, and how there is no bias toward the products you all review. But one thing that does bother me is how people in the music industry kinda get ignored in all of these reviews. Music production actually takes a lot of processing power, due to having 2 to 9 tracks per instrument (if you are recording) and tons of plugins that strain the crap out of the computer. I used to work at a music store and you'd be surprised how many people buy computers for this application. I'd LOVE to see this somehow added to the benchmarks. I know it's not easy, but it would be super helpful (and fun) to see what would be optimal for this application.
LTT reviews always neg on macs, no matter what model. This review complains about performance of Apple Silicon when they admit the software used is not optimized for M1. Then they finish off with the tired old 'PC's are cheaper' argument. Mac users don't buy macs because they are cheap. They buy macs because they do the job needed without all the nerd crap PC's wallow in.
Andy: I've never played EVE
I knew I liked you for a reason.
I was waiting for this content 😍😍
So the Mac is a somewhat better option for my editing station, albeit with varying results from different tasks. But if you do any gaming, PC is the only option. Apple are really limiting their demographic there.
Yeah, that's the sad part. PC is very flexible and is marketed to a lot of targets: gamers, creators and so on, while Apple only targets the creative and business markets. Hopefully one day they can make something for gamers and other parts of the tech community.
We have a couple of these at work, and it's inconsistent to say the least. Sometimes it absolutely crushes it on render times, while others it pales in comparison to a 2017 iMac pro. Really wish Windows was compatible with Apple ProRes, then I wouldn't be forced to use Apple
@@daizeemi Apple doesn't target the business market either.
@@Furluge Oh it absolutely does, Literally every organisation in Software or IT industry bulk orders MacBooks for their employees.
@@the_crypter Lol, bullshit they do not. XD I've been in IT for years at multiple companies and none of them do that. XD
Video editor here. Picked up a spec’d out Studio M2 max for about 2k a few days ago. It’s nearly as fast as my windows animation machine and runs premiere even better because of native silicon updates. If you run FCPX on an intel machine it will be night and day on any M chips
There's a test where the M1 is trashed by an Intel i7 6-core 14nm Mac in FCP.
@@Teluric2 Send the link, that sounds interesting. My Intel Mac doesn't hold a candle to the M2 Studio.
When the time difference is small, you use percentages, like Apple does, to make it seem that the M1 Ultra is way ahead. 40 or so seconds turns into "25% faster" for the Ultra. No consistency.
One word: OPTIMIZATION. Wow. Can't imagine the Mac Pro performance with M1.
14:44 "That ring on the bottom will never look the same again, unless you're careful!"
#outofcontext Anthony
So if going from 1 to 2 tb of storage is $500, could you just use a thunderbolt port to do an external 2tb nvme drive for half the cost? Or will it not recognize any external high-speed storage?
External SSDs work great. Not quite as fast as internal but still plenty fast.
Great video as always. Never not happy to see Anthony popping up again.
Anthony is okay, but I prefer watching Linus...
How could you know? Didn’t even have time to watch it yet.
@@speedysam0624 It's called being a like beggar.
@@speedysam0624 It's like a sixth sense. If I see Anthony, I know it will be an S-tier video.
At about 4:22, that's not the Mac's fault - that graphical error on that encounter is a known bug and exists on Windows/Linux as well.
Anthony, I love and appreciate the detail in your reviews. Thank you! Especially that excellent point about ProRes and the storage speed.
But next time, please settle for an eyebrows Max instead of an eyebrows Ultra ;p
I thought I was the only one who noticed this 🤣