Hey everyone! This is a fun but less formal video. Hope you like the simple format. We're going to be mixing Alder Lake & other coverage going forward now. We always do a big push that's focused on new architectures for about a week or two, but we've covered a lot of the main topics now. More to come, but expect more content variety the next few days. It's been refreshing to have an actually interesting silicon launch! Thanks for the interest and for making the job fun! Grab a GN Tear-Down Toolkit on back-order now to guarantee you get one in the next run! store.gamersnexus.net/products/gamersnexus-tear-down-toolkit We reviewed the i7-12700K (and KF) here: th-cam.com/video/B14h25fKMpY/w-d-xo.html Alder Lake Windows 11 vs. 10 benchmarks: th-cam.com/video/XBFTSej-yIs/w-d-xo.html We also reviewed the 12600K here: th-cam.com/video/OkHMh8sUSuM/w-d-xo.html
Love the format! - you guys are simply the best at tech video's, simple and easy to jump to any section you want, well thought out and all split up into nice bite size chunks.
Variable bit slug.. what does it do Steve what does it do? Where is it going and why does it always want to get on my porch where it's probably not gonna have the best hydration?
As AMD and Microsoft work on patches, Win11 gets more and more hotfixes needed and benchmarks between AMD and Intel kinda has to be re-tested constantly. Also, it seems that Intel doesn't care that much about 'gamers' but more to the businesses who are scared of ransomware and other security issues.
Would really be interested in mounting pressure maps for the new mounting kits that are gonna be made by the various companies to adapt older coolers. Maybe 1 vid testing multiple kits?
We can do something like that! Not 100% sure if we have enough kits to do anything with yet, but will try to accumulate enough of them to do something if there is enough interest (and upvote the top comment of this thread if you're interested!)
@@GamersNexus +1 The Noctua NH-D15 mounting kit I ordered for my 12900k seems to work well enough - Temps are in the 70s with a quite air constricted case, along with a 3090 not helping the slightest during gaming loads. Even so I'd be interested in seeing if any other coolers, especially Noctua's replacement coolers for the 1700 with broader base plate, rivals that of their original ones with mounting kits.
One important note on this would be that the new Power Limits, "without TAU", only apply to the 'K' SKUs. Non-'K' SKUs will still remain on the old limit (by default (as per Intel spec), but might be able to be enabled in the BIOS).
When I was fiddling around with Windows 10 features at the time, I saw the Memory Integrity feature for the Zen+ CPU I was using at the time as my main system and I thought, "Neat! A new Windows security feature." Turns out that turning on said feature on made my system performance worse with gaming and some basic productivity applications so I had to turn it off. I really wish I had read that article before turning the feature on and not intentionally degrading my performance. VBS is a useful feature in productivity environments where security is essential, especially with malware that utilizes system resources without user intervention. Thanks for the VBS article!
When MS introduces a new security feature, my first thought is always a) "how much performance does it cost" and b) "what will they break". I just dont trust MS not to screw stuff up, particuarly the the Win10 updating process was bordering on the malicious....
@@termitreter6545 That is an excellent assessment when evaluating new Windows features. There are many I have yet to use for my workflow but it will be a little while longer before I can get there.
13:00 On the matter of DRM: 1 year ago Escape from Tarkov was kicking my brother out of online public games for having VM software installed. Naturally, it was a hybrid work/games pc for home so he could not delete this software.
@AlexUsman now imagine the tech support moderators of Tarkov shrugging off the issue like some other devs do :D The internet is a small place, isn't it, Mr. TrickZZter?
Virtual machines are used for cheating in video games, iirc Tarkov had a large issue with that and devs had to blacklist everything what’s related to it. R6S does the same when playing online games
S'like, virtualization & containerization software is so commonly used among developers that I could list hundreds of coworkers/ex-coworkers/acquaintances who would need a dedicated gaming and work setup as a result (not even including those who use Wine instead of Windows). Guilty until proven innocent, eh?
I remember seeing some benchmarks showing no difference between PCIe 3.0 and 4.0 on GTX 3090 with conclusion that 4.0 was needed only for some super fast PCIe based SSDs and to be future-proof for GPUs that do not exist yet. Now PCIe 5 is even faster so its mostly a marketing gimmick at this point but I guess being future-proof doesn't hurt.
You can't use the 16x PCIE 5.0 for stuff like SSDs unless you give up having a discrete GPU tho. So not much in the way of useful future proofness. Maybe RDNA4 or nVidia next-next gen high end GPUs will see a 1% benefit on 5.0 vs 4.0 in 2024+ lol
PCIe 5's wins seem to be mostly a thing for servers, computer graphic design, software development, big data computation, etc. Things where _every_ bottleneck widened will help; you do need as much as you can throw at it. For normal consumers or gamers, it's pretty mostly transition costs for no benefit (potentially faster SSD transfers with next gen SSDs seems to be the only immediately plausible benefit). It is still good to introduce PCIe5 to consumer space, if slowly, for the sake of standards uniformity. But the consumer targeted marketing shouldn't making a big deal out of it.
Generally new desktop CPUs have a limited amount of PCIe channels going to the CPU directly, with the rest running through the chipset. Having faster/newer PCIe generation channels going to the CPU with a matching GPU means you can run a GPU in an 8 channel configuration and use the other channels for another high-bandwidth device without bottlenecking the GPU or having to go to an expensive HEDT platform instead.
@@robertstan298 It doesnt matter, you dont get it. A device can use a set amount of lanes and these lanes are faster with each generation. Using 4 lanes of PCIe 5 is as fast as using 16 in pcie 3.....
Where we may see it making a difference is when direct storage becomes more important. At that point the gpu will be wanting to directly connect with the ssd. We're not there yet however... The other potential benefits are probably going to be on higher end motherboards Where they could offer lots more double the number of nvme slots and still have the same speeds as half the number on a gen4 board by splitting lanes. We'll probably see gen6 in a couple of years or so as its just been ratified I believe, but obviously the enterprise market is the primary aim there.
The price of the PCI specifications varies from $50-$9,800 for “members” and from $2,000-NA for everyone else. It appears that the prices are used to restrict the information to manufacturers.
19:33 For AVX-512, it's important to know the official position of Intel on its support. Currently, their last official statement was to AnandTech, when they said it's not supported and fused off. Which probably means it's not validated and may potentially misbehave, which e.g. will not be a valid reason for RMA. Also, it is possible that Intel releases a firmware that completely blocks AVX-512 for board manufacturers. It would be nice if media pressed Intel for an official clarification for AVX-512 support. Is it validated? Is it supported when Hybrid Technology (E-cores) is disabled? Is it not going to be disabled with a future BIOS update?
I’m just going to say that these draconian DRM solutions, that are so poorly designed as to not anticipate asymmetric multiprocessing topologies (e.g. big-little), when it’s been discussed for years in general purpose computing, and has been a thing in mobile computing for a decade, is just absurd. Furthermore, making profiling assumptions around cpu/core topology in a world of SMT, chip-multiprocessing, MCM’s, NUMA and now asymmetric shared-memory multiprocessing, is freakin stupid.
The actual games can implement actual anti cheat in code, but they just make games and won't do that. These 3rd parties are not legit and mine your data. Some are worse and are root kits. Again not good. In early 2000s my team developed anti cheat in game for half-life 2 mods successfully. Game manufacturers didn't want anything to do with it. We didn't have any hackers.
@@tacticalcenter8658 In the early 2000s, no one, let DRM stop them. I had a PC shop, inside a private club (think: liquor) which was also mine, from, 1998-2008. We used to circumvent Valve's, then STEAM's DRM, until I got tired of arguing ethics with, that greedy tub of lard, Gabe. I eventually quit playing games that had DRM. It got too convoluted, by 2005, or 2006. Just wasn't worth the time, to mess with. (AAA Games started to suck too. I'm not a fan of first person shooters, nor mobas.) I may build a new system, just to see how ridiculous DRM has gotten. Now that Intel is finally pulling their heads out of their asses. I'd really like to see how hard it really is, to stick it to that pos, again. For old time's sake. I figured, by now, the fat bastard would be dead, from a heart attack. For the record: I circumvented DRM, in games, I had purchased, as should by our right. I don't want their crap data mining code, getting it's tentacles entangled, in my registry.
@@tacticalcenter8658 Sadly, it's been that way, for a long time. They're hugely successful too. Western gaming culture seems to have given up. The way I see it, if you don't fight, you deserve what you get.
On the point about Gen5, there is something to be said about its support on the platform as a whole, which is that we can add more Gen3 or Gen4 slots with less lanes used. Some of the Z690 motherboards provide 2x PCIE5 x16 slots running at x8 when both in use. This means you essentially have two full speed x16 4.0 slots, which is very useful to many workstation users who may have multiple GPUs, or need to support blazing faster storage in that extra slot. (Not to mention the increased count of Gen4 NVMe slots on this generation)
It also helps them as they don’t need to be forced to pay extra for HEDT line up which is expensive and core maybe too much for people who only want extra lane
@@dirkmanderin it wouldn't be "magic", 5.0 lanes offer double the bandwidth, so it's a reasonable assumption to think when operating at half speed "x8" it would provide enough bandwidth for 4.0 x16. But apparently this is only possible with some expensive lane switching chips the aren't generally on consumer boards. Regardless, two 4.0 x8 slots is still better than the single 5.0 x16 and 3.0 x4 or x8 most of the boards have.
Intel with 12th Gen is looking like AMD with Zen 1; a lot of growing pains for early adopters. Godspeed y'all. I'm going to be sticking to my current system and Windows 10 for now.
Same... kindda was expected. Also, the 5600x/5800x are incredibly frugal in energy usage in comparison... and I personally care about that a lot (now not only or the environment; in my country the electricity bill is beyond crazy (and we had it a lot cheaper before... so, even for those country with an OK price right now... trust me, u never know... We've seen it go 4x times more expensive... That ends up being a problem).
@@kaapuuu Nice... that matches quite with the benchmarks I've seen... 63W max in full throttle 3D rendering (it's more demanding than gaming, I've tested that many times)... In any case, amazingly frugal. I keep thinking most people don't need Alder Lake level of performance . And if needed, it is even more the case, as while a 12600K (probably the best Alder Lake performance/cost ratio) uses at least not omega crazy amount of power, the 12900K does, while a 5950X is very efficient. But then again, one must pay for what one needs, we have been gotten to used to go to the ultimate beast in processing, while most people don't need it in their workflow, even less in games. As I see it, hte 5600X/5800X, or for office work, a 5600G/5700G, and on intel, the i5s, like 12600 and 12400 (once those are out) are what kind of make sense for the larger user base. Indeed, the 3600 and even a 10400 still have quite some years of functional usage, imo.
Game DRM breaking is 100% the fault of the game publishers who included DRM in the first place. Publishers should remove that crap from their games, or customers should stop being customers and go pirate games instead. Piracy is a service problem, DRM is poor service for paying customers, DRM encourages piracy. The way you reduce piracy is to not have DRM, so that paying customers get a working game and feel confident spending their money on a game. Proof: people buy games from GOG, a DRM free store, all the time. They haven't gone out of business despite how easy it would be it pirate every game on their store. Once again, piracy is a service problem. The more invasive DRM gets and the more games DRM breaks, the more I want to pirate instead of paying for games. Related: pirate the GTA trilogy, don't pay for it. Rockstar is using DRM to block paying customers from playing the game they paid for because Rockstar screwed up and left Hot Coffee in the game. Again. Meanwhile pirates are free to play as much as they like. Once again, the message from the game industry is very clear: don't be a paying customer, be a pirate.
Piracy is a pricing problem. No game is actually worth the price you see in the shops or online these days, period. More so EA's staple FIFA, year after year.
@@buggerlugz6753 Making a good game takes a lot of time, a lot of effort and a lot of money, no, you cannot pay all the developers, QA team, designers, and voice actors with chicken nuggets. And there are absolutely games worth the price. Piracy is a service problem, not price.
@@aceofhearts573 GOG is CDPR though? Things would probably have been better for them if only they kept their promises, and made Cyberpunk 2077 good. Also if they kept DRM off of their DRM free store too, since they still have games with DRM on there.
Everytime we talk about the power consumption of the new intel CPUs, it's such a good trip down memory lane to the times when AMD needed small nuclear reactor to run and could heat your room through the winter. I like that role reversal A LOT! We needed something like this to finally happen.
We just got in 12 gen cpus and we have found some funny issues with windows server and the e cores when vertilizaton turned on. You have to desable the e cores to boot server when vertilizaton is on. Also half the boards only let you go down to 1 e core not all off.
That doesn't make any sense lmao. And that is sad, the 12900K just destroyed the entire Xeon and HEDT lineup from Intel's history. AMD is obliterating Intel on server, and now the Xeon and HEDT lineup suck compared to CORE series. Intel has to make a better interconnect than Ring Bus ASAP.
@@saricubra2867 They haven't used ringbus on their HEDT or Server lineups for quite a while. Bear in mind their Desktop chips are the first of their new generation out the door. We have no idea what same gen same process node HEDT or Server will look like. At the same time it's probably a process node issue. As soon as TSMC could yield desktop CPU chiplets for AMD AMD had their entire lineup ready to go out the door. Intel with monolithic die's needs a much more mature process node before they can yield HEDT or Server CPU's. Still where expecting server chips early next year i believe.
I am not surprised. All of Z690 motherboards and 12th gen CPUs shipped to Gamers Nexus, Hardware Unbox, kitgurutech, and so on we’re scrubbed thoroughly for any bios or hardware bugs. The production 12th gen CPUs and Z690 motherboards in stores will probably have bugs and issues until the manufacturers audit their process.
Most virtualization software and Linux distributions are not setup with alderlake yet. Probably won't be until next year. Would avoid it for this type of work. Not sure if Xeon's and hedt platforms will use big little, but if they do, they will have more incentive to roll it out faster.
I've lost all interest in hobbying in casual PC building. Scalpers have made it impossible to obtain anything remotely affordable for GPUs and now it's slowly leaking into CPUs. Thank you for making these videos so I can at least stay up to date in the latest trends.
*Denuvo DRM* is the problem here with their faulty poorly written DRM, it's not Intel's fault in the slightest. *VBS* should be turned-on for both AMD and Intel.
Thanks for mentioning coolers. I’ve been at a complete loss trying to find information on whether new designs are actually needed because of the larger IHS, or whether we should trust older designs with a new mounting kit. Further complicated by the fact that it seems that most of the ITX boards have clearance issues with VRM height, etc.… Not even sure if there are any valid AIO or air options for ITX. (Open case, so for me total height doesn’t matter, but fitment into the motherboard space does)
Buildzoid has a video today that shows that turning off the e cores improves performance that does not require as many cores as it can get. He shows performance scaling is diminishing returns above 1.15v, which with e cores disabled does not get past 60c on a 12900K.
By disabling e-cores, the p-cores get a little IPC increase because you can enable AVX 512 (in some motherboards), and the L3 cache space that the E-cores use are available for the P-cores. Still the IPC and singlethread speed of the p-cores are overkill. I watched 12900K rekt the 5950X in RPCS3 emulator, 40% difference on FPS, (11900K also beats the 5950X but not by a lot).
I would love a way to turn off the E-cores without using the BIOS, so you get free IPC and more speed for Golden Cove without doing a reset. Adaptative undervolting would be insane for the 12900K at PL2.
New architecture, new problems. Didn't see that DRM issue coming, nice to know. Thanks for these details that many don't find out about until after a purchase.
Great video, I just upgraded to the 12900k and Windows 11 & was wondering why Virtualization was Enabled so I searched it up & found this video. Thanks for making great content you have yourself a new subcriber.
Oooooooohhhhhhhh. VBS is Virtualization-Based Security now. Not Visual Basic Script. Ooooh. Okay. I get it. Phew! For a second there I was afraid people were opening Excel spreadsheets from people they don't trust. Because reusing an acronym that already existed AND had security and performance implications was by no means confusing or anything. :-\ Thanks for thinking THAT ONE through, whoever named it that. :-P
This video couldn’t have come out at a better time for me. Right now my PC is in pieces with a fresh motherboard and i9-12900K install in the works. Thanks, Steve!
The community demands performance coverage of Tourist Bus Simulator! I have a 3900x and GTX1080 water cooled system and am desperately trying to justify throwing it into the trash to upgrade to a system that can handle 300+ FPS in TBS. In all seriousness, J/K, I will cling onto this system until it dies, thanks for all the great coverage!!!
Same here, my 3700x is only a little over a year old, next upgrade will be a Zen 3D. And that should work fine until at least Zen 5 comes out, and the supply shortage will be over by then, and DDR5 will have better supply, faster speeds, lower cost. I'm in no rush to play guinea pig on a new platform.
Digital Restriction Management. It's nothing else anymore, so please don't use word "rights" when talking about it, it has nothing to do with rights. You have rights too and companies don't care about them.
@@tacticalcenter8658 No. It's capitalism. The free market forces see a need that can be fulfilled in exchange for money, and does so without government regulation. This is, by definition, what capitalism is about. "If enough people don't want it, there will cease to be a market need for it." Communism is when the government tells you that you must have this installed on your computer or you'll be taken out behind the chemical shed and shot. Calling unregulated corporate greed "communism" is a level of confusion I will never comprehend....
I think that you are well positioned to actually perform the pressure map testing for coolers for the 12th gen Alder Lake CPUs (since, it appears, no other tech TH-camr has that purchased/invested in that capability). I think that can go a LONG way towards helping people making informed choices about which CPU HSF to buy (or not to buy). Thanks.
I watched the Windows 11 test. I didn't see any indication in that video that VBS was on. It was a source of confusion as your lower results for Windows 11 were without explanation. NOW, they make sense. HWUB, was very clear in their Windows 11 testing to demonstrate VBS on and off.
On my X570 Dark Hero virtualization is OFF in BIOS defaults, so VBS is off after a clean install of Win11 on a 5900x/32GB RAM/formated 1TB NVME. I was confused by why everyone said VBS is on by default but this video shred some more light on the requirements!
Denuvo should be banned altogether for the damage it causes to the gaming ecosystem. I'll keep riding the high seas for more and more games when this malware is implemented into games.
I guess people weren’t kidding when they said this is Intel’s “Zen Moment”. Zen 1 was a freaking mess of issues! Bodes somewhat well for their next generation or maybe “14” if they stick to their naming scheme, but only if they can make further gains in performance or efficiency, preferably both.
Not really, only had issues with XMP profiles. I used standard speeds and had no problems. Zen was actually not that great at the time, but they offered more cores. Now I'm on intel side, no issues either.
@@kanta32100 Zen had scheduling issues too. It took a while for Windows to figure out that it should commit threads that shares memory pages to the same CCD to maximize cache hits.
Didn't the 3090 barely saturate PCI-E Gen 3? I thought that meant only around 50% load on gen4 x16, shouldn't it be good for at least 2 more graphic card generations?
True, but imagine the benefits of motherboard makers splitting off the lanes. If the rtx 4xxx comes with gen5, but can't saturate gen4, we can get full speed out of a pcie 5x8 slot, That could be more thunderbolt, or more USB ports. Maybe 2 more m.2 gen 4 drives with direct Lanes to cpu.
@@MemesDex You only have 16 lanes of 5.0, and 4 lanes of 4.0 from the CPU (576GT/s total bandwidth), I don't know where you are getting 64+ lanes from. If you are talking the other lanes provided by the chipset, then you are very mistaken. They are all connected to the CPU by only 8 lanes of DMI gen 4, for 128GT/s total bandwidth shared between all other devices on the motherboard. So, optimally, especially for anyone who wants to have more high speed nvme drives connected; splitting the pcie5x16 into an x8x4x4 would allow 3 top end nvme drives to be directly connected to the CPU, while not hurting GPU performance at all. And also having the benefit of not having any of your storage taking up the limited lanes running over DMI. Not really a big deal for your average user, but definitely a benefit if you need it.
There are certainly a lot of growing pains with this new architecture. I hope that all the quirks with software are worked out by the time later hybrid x86 CPUs join the market.
Hey Steve, it would have been nice if you also discussed Asus LGA 1200 mounting holes that comes with their Alder Lake motherboards. Maybe do a testing with popular coolers as well such as the Galahad and Arctic. I think this is relevant because I believe a lot of people are thinking ways to save cost before they migrate over to the new platform.
yes, you could use multiple GPUs (that also support the interface) but that's pretty much it, or slot m.2 drives via pcie-m.2 converter. As long as there are no devices supporting pcie5 (aside the cpu), it's useless.
Regarding the PCIe 5 feature for this gen CPU/chipset, the easiest way I found to explain to people why it is currently redundant is by telling them that there is no devices that can use it. In the future, maybe but not now and there is no way to know if it will get used anytime soon.
@@TVAlien totally agree with you. It is more a future proof feature than anything else, PCIe 4 just got mainstream for SSD for example with PS5 enabling its internal slot and most SSD manufacturers offering NVMe Gen4 options.
I'm a bit confused. It seems that Windows 10 VBS has 2 different levels. One is turned on by default whenever the V instructions are enabled in BIOS. This is shown in Windows "System Information" summary as item "Virtualization-based security" - "Running", and at the same time, 'Memory integrity" under "Core isolation" setting can be Off.
In this state (where VBS is on but Memory integrity is still off), some applications like AMD Ryzen Master cannot start up. It seems that the basic VBS still blocks some direct access to CPU hardware.
@@qfan8852 I wonder does that refer to just the Windows Hypervisor platform. That is the backend component that powers Windows WSL2 and Hyper-V by running the operating system itself under hypervisor. It doesn't really do much itself, but it allows more efficient virtualization which is required for VBS. It also breaks some alternative virtualization methods as the hypervisor blocks them.
Only piracy. Not GOG, because publishers don't put their games on GOG, because they want to include DRM. The games will continue to be on Steam, Epic, and/or publisher specific launchers, and will continue to include DRM. So piracy it is.
I have just picked up a new computer at the beginning of the month (May 2022) and the MB / CPU combo was ROG Maximus z690 Hero with i9-12900k with a fresh install of Windows 11 Pro and VBS was not enabled through System Information.
Really awesome coverage of these lesser-explored issues as usual. My question is: will the larger VRM heatsinks on the motherboards interfere with top-town coolers like the Noctua NH-C14S? I'm running an ITX build and worried about getting blocked by those big blocks around the cpu socket.
You can make the thermaltake original 939/775 waterblock kit to fit most sockets , plus its still great cooling. Some of the p500 pumps have been going for decades. I will add that I switched to EK cooling blocks back on socket 3930k (which thermaltake original didn't fit) . The most important thing to know about water cooling is that there is more cooling to be had when you use custom parts to build your own loops.
You're killin it man :) I really appreciate these informative videos. Also thanks for breaking down acronyms for us. Had no idea what IHS was, even though I've watched all your videos with Kingpin. Plus I'm going to record how many times you bash on the 11900k haha, even though you didnt mention it here. It was an impulse buy as I just came back to the PC world. IM SORRY! Keep it up man!
Great piece, I wanted to know, on my Ryzen 5800x, if clean install windows 11 with Virtualization off in bios, it will set VBS disabled, just want to check
*LGA 1200 standoffs* .. my Corsair iCUE H115i cools my Alder Lake i7-12900K perfectly well in the 30° C range and sub-70° C gaming with an RTX 3090 250+ FPS 1440p. Corsair is also sending me replacement LGA 1700 standoffs for about $4 but I didn't want to wait after reading Corsair forums showing successful LGA 1200 standoffs being used. I'll probably change out the 0.8mm shorter LGA 1700 standoffs.
@@masteron481 .. I had purchased M3 screws just in case and then I went to Corsair's forums. You'd think just to avoid all this hysteria that they (Intel or whomever designed the LGA 1700 socket) would have kept the same height and spacing.
@@DJaquithFL I Agree, but at the time of build I had no idea they were different heights till the next day reading. Lucky ours works great with original. I will change down the road when I repaste but right now I am very happy with my temps. Enjoy!
I wish you'd buy a throne to sit on while doing these round-ups. The red and gold would look great in the new set. Then you can make and sell GN goblets.
Can you guys talk about the ASUS Z690 motherboards having LGA 1200 cooler mounts as well as LGA 1700? I just want to know if you guys can find a difference between native lga 1700 bracket vs 1200
The 1200 has a larger gap so if you have a noctua-esq cooler with hard plastic stand-offs then you will have a problem. But if you have a spring tension mount then it will be totally fine.
I have such mobo, but since I don't have LGA 1700 mounts I can't tell anything apart from obvious size difference. Btw cheapest (35 EUR incl. VAT in Europe) CM masterliqud lite 120 does the job pretty well - 12600k OCed to 5 on P/4 on E, keeps it in mid 80ies C under C23 load and 180W draw
LOL when I hear VBS I think Vacation Bible School... Context is everything when using acronyms. For instance: UTI could be Universal Technical Institute or Urinary Tract Infection... which is closer to what Win 11 gives you right now.
I think VBS has been a thing in one form or another going back to around year zero when Steve was spreading "the word", lol. Thanks Steve, Back to you Steve.
Ive got everything to run my 12900k except a cooler. Had one, but wasn’t compatible with my mobo. Now my concern is covering enough space to make sure it’s running optimally. Didn’t realize lga 1700 compatible just meant the mount…. All my fault. 😓
12th gen die is mostly centered and smaller than 11th gen, as long as you have good contact to the IHS, it doesnt need full coverage. A native LGA1700 cooler will be better, but you can absolutely use an LGA1200 cooler if it mounts properly
I had so much stuff related to coolers, I managed to get a arctic liquid freezer II and coolermaster liquid 240 lite(or something like that) to work on my 12600k without any official kits.
You should be able to use process lasso to limit Denuvo games to only the E-Cores without waiting for BIOS or game updates. DRM should not exist though.
@@shukterhousejive depending on crack DRM may potentially still break games. For example if crack only spoofs licensing info etc and not intervenes after all the checks.
I dunno bro but my 12600k is kicking some serious but right now, extremely happy with it and I’ll be getting another. Good luck with the drm and stuff, I don’t use windblows so all that news is just noise. I also use all the cores as needed in linux. I guess it’s just more flexible in that sense. As for cooling, deep cool gmaxx 400 v1 is working great with a simple 1/8 machine nuts and bolts and washers mod.
But... Alder Lake doesn't have proper scheduling on Linux yet. It's completely random what type of core a process gets assigned to... So are you manually adjusting affinity every time you start a program?
11:52 that is the first time I have thought about the Scroll Lock key in a while. Sys Req and Pause Break are also on the list of vestigal keyboard keys
I'm thinking Intel had a look at a keyboard to see if they could find a key that almost no-one use, to map this function to. Not a bad move imo, definitely easier than going into BIOS (UEFI) each time you want to turn e-cores on/off.
could you please make some efficiencies tests? so basically performance/watt over a few CPUs and generations. I don't really want my CPU to use 200W constantly so I would limit the power. energy costs skyrocket currently in Europe, 38c/kWh is just insane. so how would performance of different CPUs look if they are all limited to 65 or 100W?
That would actually be a interesting test. Have the last 2 generations(skipping 11xxx series and going with 10xxx series) from both Intel and AMD. Have each CPU run at: 25 Watt 65 Watt 100 Watt 125 Watt 250 Watt And then note down their performances over 2-3 workstation task and 2-3 gaming task. Will take a long time to do and properly not worth it in man hours. But would be interesting to see who is actually king of efficiency.
From EU here too, and in one of the countries with the worst electricity price hikes. Alder Lake (and....mostly nothing intel, unless talking about a 10400 or the like) is not for us. While a Ryzen 5600X or 5800X is amazingly frugal in power usage (and while consumes more, the level of efficiency of a 5950X is crazy, for all what it is capable of). Whoever is looking for energy efficiency, for now it's AMD. And of course Apple and their M1 solutions, but A) As I am only interested on desktop (for work), the only sensible (on price) solution is a mac mini, and those can't have more than 16GB of RAM (small for graphic work, even in Mac OS). B) The price you save in energy, you are wasting in incredibly inflated price, specially for any memory or disk upgrade. C) Software, workflow and drivers compatibility. Including many 3D apps that are crucial not existing for Mac OS. This is much of a bigger problem in my loved Linux... but still. So, while I am no brand fanboy at all, I'm stuck with AMD for a while... But next year has many releases of a bunch of companies... let's see.
It depends on what your doing if your using a cpu heavy work load (like transcoding video) then it will be closer to the maximum TDP, if your just gaming your going to be closer to the minimum TDP. Also keep in mind AMD's TDP is not very accurate as it is an average across multiple use cases (most of which are cherry picked to get that number lower) I actually wish amd would adopt this TDP advertising model as it is way more transparent than what intel and amd were doing before.
Thanks as always for the great video! ~ Especially for giving us a better understanding of VBS...sigh...guess security will always have a speed/convenience cost somewhere... I hope microsoft can work on fine tuning it so it does not become a forced nuisance/performance nightmare later. Already with windows 11, TPM and drive encryption will have to be enabled for most standard installs...enough security for the time being maybe? lol.
i recently bought a 12900k and msi z690 edge, ethernet while watching youtube has been bad but wifi for some reason good, my friend also reported similar issues with his 12900k, i returned my 12900k for a new one, as well as a new mobo: msi z690 unify, same exact issue occurs, we both use the contact plate, not sure if its related but this is driving me bananas!
Thanks Steve great video, Almost nobody is talking about the limitation of 12 gen Intel processors. Hopefully Intel can work pass theses problems. But There is still the problem of high power use. I am waiting for the 65 watt parts. I hope you can do a video of Window 11 compatibility issues with older platforms 7th gen intel, 8 gen intel, 1 gen ryzen and any older GPU window 11 will not work with. Thanks
historically a windows upgrade rather than a clean install always ends up with a security issue as it uses a weakened security template to facilitate the use of pre installed apps... forcing DRM on the end user at the expense of stability and increased processing overhead is wrong, it should be in the laps of the media providers
@@GamersNexus you should cover the laptop space. Maybe not full on but it is a market that does matter more and more and maybe at least covering one or two each generation with your expertise and methodology would be awesome. Besides, if you don't wind up keeping them they would make great fan giveaways, hint hint
Ahh vbs, another software based hardware supported band aid to fix actual hardware flaws and exploit. It will soon be mitigated by even more advanced malware or newer detected flaws introduced by always active virtualization. More security = more nooks and crannies to hide malicious stuff. Anyone remember the ASLR flaws, which managed to predict where and when a certain process value is in ram via only a few java script functions hosted by a web browser? ASLR was actually meant to prohibit exactly that. Another layer of security and always on virtualization... this begs for new rootkits and exploitation of intels management engine and amds psp.
@@Hexxagone850 : Honestly, if you ask me, I would go the old fashioned way. A proper antivirus software and an up to date web browser, almost all attacks happen through the web browser anyway, so making sure that this one thing has all the latest patches is one part of the equation. Second part is to not trust and download every thing you see on the web. Being sceptic towards most "to good to be true" software solution is another way to go. Limiting your own web traffic and trusting only few websites is another helpful way. Having a proper router which can be configured to block traffic with suspicious websites and hosts goes along. In the end there is no 100% safety, browsing the web and using foreign applications will always be a gamble of trust, no software based security will patch hardware based flaws which run really deep into the core architecture. If something like this VBS sandbox mode actually hinders perfectly legit although drm'd software and or services, it's already a failed security mechanism that impedes and dictates a users behavior as well as forces developers to patch their software or even write it a certain way. Anyone remember the piledriver architecture, amd wanted to force developers to code in a specific for their architecture optimized way. That did so happen... didn't it???
Lmao, the love how i was watching this video on a second screen while playing a pirated game on that list; and because its pirated i have experienced 0 issues, unlike the legit copy XD
Currently apart from some benchmarks and certain GPUs that are x8 with instead of x16, there is hardly any measurable difference between PCIe gen 3 and PCIe gen 4. Thus currently PCIe gen 5 also would have no measurable effect.
Keep in mind that some 3rd gen Ryzen mobile chips don't support virtualization so VBS won't work out of the box and has to be disabled on win 11 for your chip to work. Side note, if you use containers or VM's i would just stay away from Ryzen mobile chips altogether. PS. Alternatively, to a fresh install of an OS, you can upgrade or downgrade to an OS and then run an integrity check on the Windows files through Command Prompt by running "sfc /scannow" (without the quotes) This will scan the OS files and repair and replace any damaged or missing files.
We are finding the VBS is not enable automatically like we were initially told on Intel 11th generation. Microsoft quietly update documentation with a note.
Something I've wondered for a while with all the Spectre and Meltdown stuff. As a gamer with nobody else using my laptop (don't laptop shame me!) is the performance loss from all the patches worth it or would I benefit from disabling them? Obviously I'm thinking in wider terms than just my laptop since I want to get a new build together when I can.
The only way I could see PCIE5 being beneficial today would be if a GPU used PCIE5 x8 instead of x16, freeing up 8 PCIE lanes to something else. But for most users, this would never really be an issue, maybe some edge cases where you want to run a high number of M.2 NVME drives.
I know with cooler alignment and the cold plate, I had got an LGA 1700 kit from Artic for my LF 2 420mm but it was still about a single millimeter too high, which then meant it wasn't mounted right and my 12900k would shoot to 90c in seconds on boot. The thermal paste still had the pattern I applied on the IHS. Had to lower it by a millimeter, which meant not using the metal washers they provided for the standoffs, and instead use the flat, sticky ones turned upside-down. Maybe I didn't mount it right in the first place, but either way lowering the kit by 1mm has worked wonders for me and the temps on my 12900k. I also applied a flat, even spread on the paste this time instead of an X method.
On the cooler I was given a sythe fumu 2 for free it covers the whole CPU so got the adapter to mount it looks like a good fit unfortunately waiting on my ram before I can boot the pc and see if it works well
Hey everyone! This is a fun but less formal video. Hope you like the simple format. We're going to be mixing Alder Lake & other coverage going forward now. We always do a big push that's focused on new architectures for about a week or two, but we've covered a lot of the main topics now. More to come, but expect more content variety the next few days. It's been refreshing to have an actually interesting silicon launch! Thanks for the interest and for making the job fun! Grab a GN Tear-Down Toolkit on back-order now to guarantee you get one in the next run! store.gamersnexus.net/products/gamersnexus-tear-down-toolkit
We reviewed the i7-12700K (and KF) here: th-cam.com/video/B14h25fKMpY/w-d-xo.html
Alder Lake Windows 11 vs. 10 benchmarks: th-cam.com/video/XBFTSej-yIs/w-d-xo.html
We also reviewed the 12600K here: th-cam.com/video/OkHMh8sUSuM/w-d-xo.html
Love the format! - you guys are simply the best at tech video's, simple and easy to jump to any section you want, well thought out and all split up into nice bite size chunks.
Variable bit slug.. what does it do Steve what does it do? Where is it going and why does it always want to get on my porch where it's probably not gonna have the best hydration?
As AMD and Microsoft work on patches, Win11 gets more and more hotfixes needed and benchmarks between AMD and Intel kinda has to be re-tested constantly.
Also, it seems that Intel doesn't care that much about 'gamers' but more to the businesses who are scared of ransomware and other security issues.
@@Souscheff pretty sure amd would've faced the same problems if they released their own big-little cpus. DRM sucks
I don't care. I need to know how an i9 performs with googly eyes on it.
Would really be interested in mounting pressure maps for the new mounting kits that are gonna be made by the various companies to adapt older coolers. Maybe 1 vid testing multiple kits?
We can do something like that! Not 100% sure if we have enough kits to do anything with yet, but will try to accumulate enough of them to do something if there is enough interest (and upvote the top comment of this thread if you're interested!)
Yes, that is something I would like to see as well, both with air coolers and aio water coolers.
Would be intresting!
@@GamersNexus +1 The Noctua NH-D15 mounting kit I ordered for my 12900k seems to work well enough - Temps are in the 70s with a quite air constricted case, along with a 3090 not helping the slightest during gaming loads.
Even so I'd be interested in seeing if any other coolers, especially Noctua's replacement coolers for the 1700 with broader base plate, rivals that of their original ones with mounting kits.
This would be great! I just received my Corsair Lga 1700 adapter kit today for my h115i.
"Quick post-mortem" posts a 27 minute video.
As expected of a GN video.
Great job on the coverage!
29 minutes below is short for a GN video, 30 mins. and above is their forte
Here must be a loooong list of negatives to go through 😜
Real post-mortems in medicine can take 1-2 hours. This is relatively quick.
SponsorBloack says 26:36
God I love it. Way more informative than any techie that only creates clickbait videos that barely hit the 10 minute mark. *cough* linus
One important note on this would be that the new Power Limits, "without TAU", only apply to the 'K' SKUs. Non-'K' SKUs will still remain on the old limit (by default (as per Intel spec), but might be able to be enabled in the BIOS).
When I was fiddling around with Windows 10 features at the time, I saw the Memory Integrity feature for the Zen+ CPU I was using at the time as my main system and I thought, "Neat! A new Windows security feature." Turns out that turning on said feature on made my system performance worse with gaming and some basic productivity applications so I had to turn it off. I really wish I had read that article before turning the feature on and not intentionally degrading my performance. VBS is a useful feature in productivity environments where security is essential, especially with malware that utilizes system resources without user intervention. Thanks for the VBS article!
When MS introduces a new security feature, my first thought is always a) "how much performance does it cost" and b) "what will they break". I just dont trust MS not to screw stuff up, particuarly the the Win10 updating process was bordering on the malicious....
@@termitreter6545 That is an excellent assessment when evaluating new Windows features. There are many I have yet to use for my workflow but it will be a little while longer before I can get there.
13:00 On the matter of DRM: 1 year ago Escape from Tarkov was kicking my brother out of online public games for having VM software installed. Naturally, it was a hybrid work/games pc for home so he could not delete this software.
And these guys are surprised when people pirate their games
@AlexUsman now imagine the tech support moderators of Tarkov shrugging off the issue like some other devs do :D
The internet is a small place, isn't it, Mr. TrickZZter?
Virtual machines are used for cheating in video games, iirc Tarkov had a large issue with that and devs had to blacklist everything what’s related to it. R6S does the same when playing online games
S'like, virtualization & containerization software is so commonly used among developers that I could list hundreds of coworkers/ex-coworkers/acquaintances who would need a dedicated gaming and work setup as a result (not even including those who use Wine instead of Windows). Guilty until proven innocent, eh?
@@MunyuShizumi Yeah sadly this is true.
I remember seeing some benchmarks showing no difference between PCIe 3.0 and 4.0 on GTX 3090 with conclusion that 4.0 was needed only for some super fast PCIe based SSDs and to be future-proof for GPUs that do not exist yet. Now PCIe 5 is even faster so its mostly a marketing gimmick at this point but I guess being future-proof doesn't hurt.
You can't use the 16x PCIE 5.0 for stuff like SSDs unless you give up having a discrete GPU tho. So not much in the way of useful future proofness. Maybe RDNA4 or nVidia next-next gen high end GPUs will see a 1% benefit on 5.0 vs 4.0 in 2024+ lol
PCIe 5's wins seem to be mostly a thing for servers, computer graphic design, software development, big data computation, etc. Things where _every_ bottleneck widened will help; you do need as much as you can throw at it.
For normal consumers or gamers, it's pretty mostly transition costs for no benefit (potentially faster SSD transfers with next gen SSDs seems to be the only immediately plausible benefit).
It is still good to introduce PCIe5 to consumer space, if slowly, for the sake of standards uniformity. But the consumer targeted marketing shouldn't making a big deal out of it.
Generally new desktop CPUs have a limited amount of PCIe channels going to the CPU directly, with the rest running through the chipset. Having faster/newer PCIe generation channels going to the CPU with a matching GPU means you can run a GPU in an 8 channel configuration and use the other channels for another high-bandwidth device without bottlenecking the GPU or having to go to an expensive HEDT platform instead.
@@robertstan298 It doesnt matter, you dont get it. A device can use a set amount of lanes and these lanes are faster with each generation. Using 4 lanes of PCIe 5 is as fast as using 16 in pcie 3.....
Where we may see it making a difference is when direct storage becomes more important. At that point the gpu will be wanting to directly connect with the ssd. We're not there yet however...
The other potential benefits are probably going to be on higher end motherboards Where they could offer lots more double the number of nvme slots and still have the same speeds as half the number on a gen4 board by splitting lanes. We'll probably see gen6 in a couple of years or so as its just been ratified I believe, but obviously the enterprise market is the primary aim there.
The price of the PCI specifications varies from $50-$9,800 for “members” and from $2,000-NA for everyone else. It appears that the prices are used to restrict the information to manufacturers.
19:33 For AVX-512, it's important to know the official position of Intel on its support. Currently, their last official statement was to AnandTech, when they said it's not supported and fused off. Which probably means it's not validated and may potentially misbehave, which e.g. will not be a valid reason for RMA. Also, it is possible that Intel releases a firmware that completely blocks AVX-512 for board manufacturers. It would be nice if media pressed Intel for an official clarification for AVX-512 support. Is it validated? Is it supported when Hybrid Technology (E-cores) is disabled? Is it not going to be disabled with a future BIOS update?
I’m just going to say that these draconian DRM solutions, that are so poorly designed as to not anticipate asymmetric multiprocessing topologies (e.g. big-little), when it’s been discussed for years in general purpose computing, and has been a thing in mobile computing for a decade, is just absurd. Furthermore, making profiling assumptions around cpu/core topology in a world of SMT, chip-multiprocessing, MCM’s, NUMA and now asymmetric shared-memory multiprocessing, is freakin stupid.
Basically boils down to DRM is bad and should get removed at all times at all places.
The actual games can implement actual anti cheat in code, but they just make games and won't do that. These 3rd parties are not legit and mine your data. Some are worse and are root kits. Again not good. In early 2000s my team developed anti cheat in game for half-life 2 mods successfully. Game manufacturers didn't want anything to do with it. We didn't have any hackers.
@@tacticalcenter8658 In the early 2000s, no one, let DRM stop them. I had a PC shop, inside a private club (think: liquor) which was also mine, from, 1998-2008.
We used to circumvent Valve's, then STEAM's DRM, until I got tired of arguing ethics with, that greedy tub of lard, Gabe.
I eventually quit playing games that had DRM. It got too convoluted, by 2005, or 2006. Just wasn't worth the time, to mess with. (AAA Games started to suck too. I'm not a fan of first person shooters, nor mobas.)
I may build a new system, just to see how ridiculous DRM has gotten. Now that Intel is finally pulling their heads out of their asses.
I'd really like to see how hard it really is, to stick it to that pos, again. For old time's sake. I figured, by now, the fat bastard would be dead, from a heart attack.
For the record: I circumvented DRM, in games, I had purchased, as should by our right. I don't want their crap data mining code, getting it's tentacles entangled, in my registry.
@@whyis45stillalive its all about money. They don't care about you. China owns the gaming industry.
@@tacticalcenter8658 Sadly, it's been that way, for a long time. They're hugely successful too.
Western gaming culture seems to have given up. The way I see it, if you don't fight, you deserve what you get.
On the point about Gen5, there is something to be said about its support on the platform as a whole, which is that we can add more Gen3 or Gen4 slots with less lanes used.
Some of the Z690 motherboards provide 2x PCIE5 x16 slots running at x8 when both in use. This means you essentially have two full speed x16 4.0 slots, which is very useful to many workstation users who may have multiple GPUs, or need to support blazing faster storage in that extra slot.
(Not to mention the increased count of Gen4 NVMe slots on this generation)
It also helps them as they don’t need to be forced to pay extra for HEDT line up which is expensive and core maybe too much for people who only want extra lane
Think you're wrong here. Won't a pcie gen 4 device be limited to 8x gen 4 performance even if it's in a gen 5 8x slot?
@@Y0URGRANDMA You are correct. I don't know why people think plugging a 4.0 graphics card in a 5.0 slot magically makes it a 5.0 device.
@@dirkmanderin it wouldn't be "magic", 5.0 lanes offer double the bandwidth, so it's a reasonable assumption to think when operating at half speed "x8" it would provide enough bandwidth for 4.0 x16.
But apparently this is only possible with some expensive lane switching chips the aren't generally on consumer boards.
Regardless, two 4.0 x8 slots is still better than the single 5.0 x16 and 3.0 x4 or x8 most of the boards have.
Intel with 12th Gen is looking like AMD with Zen 1; a lot of growing pains for early adopters. Godspeed y'all. I'm going to be sticking to my current system and Windows 10 for now.
Same... kindda was expected. Also, the 5600x/5800x are incredibly frugal in energy usage in comparison... and I personally care about that a lot (now not only or the environment; in my country the electricity bill is beyond crazy (and we had it a lot cheaper before... so, even for those country with an OK price right now... trust me, u never know... We've seen it go 4x times more expensive... That ends up being a problem).
@@3polygons My 5600X hardly passes 50W during gaming , and thats being overclocked :D
I'm gonna be sticking to my Zen 3 for now!
@@kaapuuu Nice... that matches quite with the benchmarks I've seen... 63W max in full throttle 3D rendering (it's more demanding than gaming, I've tested that many times)... In any case, amazingly frugal. I keep thinking most people don't need Alder Lake level of performance . And if needed, it is even more the case, as while a 12600K (probably the best Alder Lake performance/cost ratio) uses at least not omega crazy amount of power, the 12900K does, while a 5950X is very efficient. But then again, one must pay for what one needs, we have been gotten to used to go to the ultimate beast in processing, while most people don't need it in their workflow, even less in games. As I see it, hte 5600X/5800X, or for office work, a 5600G/5700G, and on intel, the i5s, like 12600 and 12400 (once those are out) are what kind of make sense for the larger user base. Indeed, the 3600 and even a 10400 still have quite some years of functional usage, imo.
Your specs?
Game DRM breaking is 100% the fault of the game publishers who included DRM in the first place. Publishers should remove that crap from their games, or customers should stop being customers and go pirate games instead. Piracy is a service problem, DRM is poor service for paying customers, DRM encourages piracy. The way you reduce piracy is to not have DRM, so that paying customers get a working game and feel confident spending their money on a game. Proof: people buy games from GOG, a DRM free store, all the time. They haven't gone out of business despite how easy it would be it pirate every game on their store. Once again, piracy is a service problem. The more invasive DRM gets and the more games DRM breaks, the more I want to pirate instead of paying for games.
Related: pirate the GTA trilogy, don't pay for it. Rockstar is using DRM to block paying customers from playing the game they paid for because Rockstar screwed up and left Hot Coffee in the game. Again. Meanwhile pirates are free to play as much as they like. Once again, the message from the game industry is very clear: don't be a paying customer, be a pirate.
Piracy is a pricing problem. No game is actually worth the price you see in the shops or online these days, period. More so EA's staple FIFA, year after year.
@@buggerlugz6753 Making a good game takes a lot of time, a lot of effort and a lot of money, no, you cannot pay all the developers, QA team, designers, and voice actors with chicken nuggets. And there are absolutely games worth the price.
Piracy is a service problem, not price.
GOG barely makes money for CDPR. They have said this left and right and a few months ago they laid off some people
@@aceofhearts573 GOG is CDPR though?
Things would probably have been better for them if only they kept their promises, and made Cyberpunk 2077 good. Also if they kept DRM off of their DRM free store too, since they still have games with DRM on there.
Communist China owns the gaming industry. They use DRM for multiple reasons.
Those bots are getting out of hand
It's been going on for a few months...
Mahaps YT should focus on them, rather than the dislike button.
@@rexdink I completely agree
What's do these bots and proper hookers have in common?
As soon as you go to one, you probably have a virus after it. 🤣
Maybe one day TH-cam will prefer catering to the bots rather than human users.
@@rexdink they focus on the dislike button?
I have a feeling this change to ‘PL1 = PL2’ is purely to avoid reviewers waiting for Tau expiry to get a fair comparison
Everytime we talk about the power consumption of the new intel CPUs, it's such a good trip down memory lane to the times when AMD needed small nuclear reactor to run and could heat your room through the winter. I like that role reversal A LOT! We needed something like this to finally happen.
We just got in 12 gen cpus and we have found some funny issues with windows server and the e cores when vertilizaton turned on. You have to desable the e cores to boot server when vertilizaton is on. Also half the boards only let you go down to 1 e core not all off.
That doesn't make any sense lmao.
And that is sad, the 12900K just destroyed the entire Xeon and HEDT lineup from Intel's history.
AMD is obliterating Intel on server, and now the Xeon and HEDT lineup suck compared to CORE series.
Intel has to make a better interconnect than Ring Bus ASAP.
@@saricubra2867 They haven't used ringbus on their HEDT or Server lineups for quite a while. Bear in mind their Desktop chips are the first of their new generation out the door. We have no idea what same gen same process node HEDT or Server will look like. At the same time it's probably a process node issue. As soon as TSMC could yield desktop CPU chiplets for AMD AMD had their entire lineup ready to go out the door. Intel with monolithic die's needs a much more mature process node before they can yield HEDT or Server CPU's. Still where expecting server chips early next year i believe.
I am not surprised. All of Z690 motherboards and 12th gen CPUs shipped to Gamers Nexus, Hardware Unbox, kitgurutech, and so on we’re scrubbed thoroughly for any bios or hardware bugs. The production 12th gen CPUs and Z690 motherboards in stores will probably have bugs and issues until the manufacturers audit their process.
Most virtualization software and Linux distributions are not setup with alderlake yet. Probably won't be until next year. Would avoid it for this type of work. Not sure if Xeon's and hedt platforms will use big little, but if they do, they will have more incentive to roll it out faster.
I've lost all interest in hobbying in casual PC building. Scalpers have made it impossible to obtain anything remotely affordable for GPUs and now it's slowly leaking into CPUs.
Thank you for making these videos so I can at least stay up to date in the latest trends.
💔
*Denuvo DRM* is the problem here with their faulty poorly written DRM, it's not Intel's fault in the slightest. *VBS* should be turned-on for both AMD and Intel.
VBS should be the same for both systems if the aim is to compare apples with apples, and not dragonfruit.
I hope
I would like to see a cooler upgrade test video where you take a few coolers that support the new socket and how well it works with Alder Lake
Definitely, I could find NO native LGA1700 12th gen heatsinks, so they are all sub-optimal cooling solutions
Thanks for mentioning coolers. I’ve been at a complete loss trying to find information on whether new designs are actually needed because of the larger IHS, or whether we should trust older designs with a new mounting kit. Further complicated by the fact that it seems that most of the ITX boards have clearance issues with VRM height, etc.… Not even sure if there are any valid AIO or air options for ITX. (Open case, so for me total height doesn’t matter, but fitment into the motherboard space does)
This is the best tech channel on YT. Thank you for all the hard work and in depth testing. I never miss a video.
Long, brown haired dude starts doing postmortem:
Me: starts hedbanging profusely
Brown for noctua
@@D7mo0o0on actually this time, brown for araya
Got myself used 5950 x and croshair VIII formula for 700 bucks im happy with it hearing this :)
Steve, the real computer guy.
Yeah, Linus is a fucking joke.
Buildzoid has a video today that shows that turning off the e cores improves performance that does not require as many cores as it can get. He shows performance scaling is diminishing returns above 1.15v, which with e cores disabled does not get past 60c on a 12900K.
By disabling e-cores, the p-cores get a little IPC increase because you can enable AVX 512 (in some motherboards), and the L3 cache space that the E-cores use are available for the P-cores.
Still the IPC and singlethread speed of the p-cores are overkill. I watched 12900K rekt the 5950X in RPCS3 emulator, 40% difference on FPS, (11900K also beats the 5950X but not by a lot).
I would love a way to turn off the E-cores without using the BIOS, so you get free IPC and more speed for Golden Cove without doing a reset.
Adaptative undervolting would be insane for the 12900K at PL2.
New architecture, new problems. Didn't see that DRM issue coming, nice to know. Thanks for these details that many don't find out about until after a purchase.
Great video, I just upgraded to the 12900k and Windows 11 & was wondering why Virtualization was Enabled so I searched it up & found this video.
Thanks for making great content you have yourself a new subcriber.
Oooooooohhhhhhhh. VBS is Virtualization-Based Security now. Not Visual Basic Script. Ooooh. Okay. I get it. Phew! For a second there I was afraid people were opening Excel spreadsheets from people they don't trust. Because reusing an acronym that already existed AND had security and performance implications was by no means confusing or anything. :-\ Thanks for thinking THAT ONE through, whoever named it that. :-P
Yeah, and we also have variable block size (vbs) in video encoding.... not a VBS here (very big smile)
@@AndrewTSq stop. I cant take it. 5 acronym at once.
Trip down memory lane. Do you remember the loveletter virus written in visual basic scripting? That what I think about whenever I hear VBS.
This video couldn’t have come out at a better time for me. Right now my PC is in pieces with a fresh motherboard and i9-12900K install in the works. Thanks, Steve!
Of course I've heard of those three letters together, VBS is Visual Basic Script. It's been around since 1996 (according to Wikipedia).
Such an exhaustive investigation into Intel's new Alder Lake backdoors, thank you for your work in this area!
💕
The community demands performance coverage of Tourist Bus Simulator! I have a 3900x and GTX1080 water cooled system and am desperately trying to justify throwing it into the trash to upgrade to a system that can handle 300+ FPS in TBS.
In all seriousness, J/K, I will cling onto this system until it dies, thanks for all the great coverage!!!
Same here, my 3700x is only a little over a year old, next upgrade will be a Zen 3D. And that should work fine until at least Zen 5 comes out, and the supply shortage will be over by then, and DDR5 will have better supply, faster speeds, lower cost. I'm in no rush to play guinea pig on a new platform.
Steve, maybe you could try a cracked game exe file and see if that still has the same DRM issue? ;)
CPU simply not work good..
It's satisfying how the segment bars fill the space between the edge of the screen and the peg board.
Digital Restriction Management. It's nothing else anymore, so please don't use word "rights" when talking about it, it has nothing to do with rights. You have rights too and companies don't care about them.
Its communism.
@@tacticalcenter8658 No. It's capitalism. The free market forces see a need that can be fulfilled in exchange for money, and does so without government regulation. This is, by definition, what capitalism is about. "If enough people don't want it, there will cease to be a market need for it."
Communism is when the government tells you that you must have this installed on your computer or you'll be taken out behind the chemical shed and shot.
Calling unregulated corporate greed "communism" is a level of confusion I will never comprehend....
I think that you are well positioned to actually perform the pressure map testing for coolers for the 12th gen Alder Lake CPUs (since, it appears, no other tech TH-camr has that purchased/invested in that capability).
I think that can go a LONG way towards helping people making informed choices about which CPU HSF to buy (or not to buy).
Thanks.
Thank god the paper towels are hung up properly in the background. The reverse TP thing that was going on was bugging me.
I watched the Windows 11 test. I didn't see any indication in that video that VBS was on. It was a source of confusion as your lower results for Windows 11 were without explanation. NOW, they make sense. HWUB, was very clear in their Windows 11 testing to demonstrate VBS on and off.
Honestly, there is clearly a subtle shot being taken here by both parties, if you watch both channels you can see whats going on.
On my X570 Dark Hero virtualization is OFF in BIOS defaults, so VBS is off after a clean install of Win11 on a 5900x/32GB RAM/formated 1TB NVME. I was confused by why everyone said VBS is on by default but this video shred some more light on the requirements!
Denuvo should be banned altogether for the damage it causes to the gaming ecosystem. I'll keep riding the high seas for more and more games when this malware is implemented into games.
Great marketing, ASUS. "Dark." That's a new one. They must have spent "days" on that.
I guess people weren’t kidding when they said this is Intel’s “Zen Moment”. Zen 1 was a freaking mess of issues! Bodes somewhat well for their next generation or maybe “14” if they stick to their naming scheme, but only if they can make further gains in performance or efficiency, preferably both.
Not really, only had issues with XMP profiles. I used standard speeds and had no problems. Zen was actually not that great at the time, but they offered more cores. Now I'm on intel side, no issues either.
@@kanta32100 Zen had scheduling issues too. It took a while for Windows to figure out that it should commit threads that shares memory pages to the same CCD to maximize cache hits.
Didn't the 3090 barely saturate PCI-E Gen 3? I thought that meant only around 50% load on gen4 x16, shouldn't it be good for at least 2 more graphic card generations?
Who cares we'll still need and spend extra for gen 5
SSD and usb ports
True, but imagine the benefits of motherboard makers splitting off the lanes. If the rtx 4xxx comes with gen5, but can't saturate gen4, we can get full speed out of a pcie 5x8 slot, That could be more thunderbolt, or more USB ports. Maybe 2 more m.2 gen 4 drives with direct Lanes to cpu.
@@tassadarforaiur People who need these things already have 64+ Lanes from the CPU alone...
@@MemesDex You only have 16 lanes of 5.0, and 4 lanes of 4.0 from the CPU (576GT/s total bandwidth), I don't know where you are getting 64+ lanes from.
If you are talking the other lanes provided by the chipset, then you are very mistaken.
They are all connected to the CPU by only 8 lanes of DMI gen 4, for 128GT/s total bandwidth shared between all other devices on the motherboard.
So, optimally, especially for anyone who wants to have more high speed nvme drives connected; splitting the pcie5x16 into an x8x4x4 would allow 3 top end nvme drives to be directly connected to the CPU, while not hurting GPU performance at all.
And also having the benefit of not having any of your storage taking up the limited lanes running over DMI.
Not really a big deal for your average user, but definitely a benefit if you need it.
Thanks for the work on the lights to make the shadows a lot less.
There are certainly a lot of growing pains with this new architecture. I hope that all the quirks with software are worked out by the time later hybrid x86 CPUs join the market.
Hey Steve, it would have been nice if you also discussed Asus LGA 1200 mounting holes that comes with their Alder Lake motherboards. Maybe do a testing with popular coolers as well such as the Galahad and Arctic. I think this is relevant because I believe a lot of people are thinking ways to save cost before they migrate over to the new platform.
Brilliant reporting Steve. Thank you.
Love the video production and set angle. I want the new i5 but waiting on lower priced motherboards.
For 5 seconds I thought I misclicked a video from the times you were filiming in your house. All the vibes for the move in!
Would love to see an update video on all these issues.
I mean, it would still be neat if the GPU supported PCIe gen 5 because then you'd only need x8 and could use the other CPU lanes for something else.
The CPU's PEG lanes go to the first PCIe slot.
yes, you could use multiple GPUs (that also support the interface) but that's pretty much it, or slot m.2 drives via pcie-m.2 converter. As long as there are no devices supporting pcie5 (aside the cpu), it's useless.
Regarding the PCIe 5 feature for this gen CPU/chipset, the easiest way I found to explain to people why it is currently redundant is by telling them that there is no devices that can use it.
In the future, maybe but not now and there is no way to know if it will get used anytime soon.
I mean hardware support has to start somewhere. Gotta have motherboards with it for products to be made for it. It's a nice to have down the road.
@@TVAlien totally agree with you.
It is more a future proof feature than anything else, PCIe 4 just got mainstream for SSD for example with PS5 enabling its internal slot and most SSD manufacturers offering NVMe Gen4 options.
❤
I'm a bit confused. It seems that Windows 10 VBS has 2 different levels. One is turned on by default whenever the V instructions are enabled in BIOS. This is shown in Windows "System Information" summary as item "Virtualization-based security" - "Running", and at the same time, 'Memory integrity" under "Core isolation" setting can be Off.
In this state (where VBS is on but Memory integrity is still off), some applications like AMD Ryzen Master cannot start up. It seems that the basic VBS still blocks some direct access to CPU hardware.
@@qfan8852 I wonder does that refer to just the Windows Hypervisor platform. That is the backend component that powers Windows WSL2 and Hyper-V by running the operating system itself under hypervisor. It doesn't really do much itself, but it allows more efficient virtualization which is required for VBS. It also breaks some alternative virtualization methods as the hypervisor blocks them.
Piracy gonna boom big if drm gets worse, or gog games.
GOG be thriving in this market
And Cracking communities are catching notice of anti-DRM attention.
Only piracy. Not GOG, because publishers don't put their games on GOG, because they want to include DRM. The games will continue to be on Steam, Epic, and/or publisher specific launchers, and will continue to include DRM. So piracy it is.
@@PaperReaper GOG be doing exactly what it was doing in this market, because publishers don't put their games on GOG.
I have just picked up a new computer at the beginning of the month (May 2022) and the MB / CPU combo was ROG Maximus z690 Hero with i9-12900k with a fresh install of Windows 11 Pro and VBS was not enabled through System Information.
Really awesome coverage of these lesser-explored issues as usual. My question is: will the larger VRM heatsinks on the motherboards interfere with top-town coolers like the Noctua NH-C14S? I'm running an ITX build and worried about getting blocked by those big blocks around the cpu socket.
It's ready for a future thing....concise. Nice.
lighting and colors were better in previous videos, but maybe that could be the distinctive difference for "quick informal" formats :)
You can make the thermaltake original 939/775 waterblock kit to fit most sockets , plus its still great cooling. Some of the p500 pumps have been going for decades. I will add that I switched to EK cooling blocks back on socket 3930k (which thermaltake original didn't fit) . The most important thing to know about water cooling is that there is more cooling to be had when you use custom parts to build your own loops.
You're killin it man :) I really appreciate these informative videos. Also thanks for breaking down acronyms for us. Had no idea what IHS was, even though I've watched all your videos with Kingpin. Plus I'm going to record how many times you bash on the 11900k haha, even though you didnt mention it here. It was an impulse buy as I just came back to the PC world. IM SORRY!
Keep it up man!
Great piece, I wanted to know, on my Ryzen 5800x, if clean install windows 11 with Virtualization off in bios, it will set VBS disabled, just want to check
*LGA 1200 standoffs* .. my Corsair iCUE H115i cools my Alder Lake i7-12900K perfectly well in the 30° C range and sub-70° C gaming with an RTX 3090 250+ FPS 1440p. Corsair is also sending me replacement LGA 1700 standoffs for about $4 but I didn't want to wait after reading Corsair forums showing successful LGA 1200 standoffs being used. I'll probably change out the 0.8mm shorter LGA 1700 standoffs.
Exact same for me. Mine is staying 30c max 70's I also have the new standoffs in hand but I am not using unless something happens and I have to.
@@masteron481 .. I had purchased M3 screws just in case and then I went to Corsair's forums. You'd think just to avoid all this hysteria that they (Intel or whomever designed the LGA 1700 socket) would have kept the same height and spacing.
@@DJaquithFL I Agree, but at the time of build I had no idea they were different heights till the next day reading. Lucky ours works great with original. I will change down the road when I repaste but right now I am very happy with my temps. Enjoy!
TIL about VBS, thanks. Just enabled Memory Integrity in Windows.
Hope they fix a lot of them
I wish you'd buy a throne to sit on while doing these round-ups. The red and gold would look great in the new set. Then you can make and sell GN goblets.
Yes Steve who does an upgrade over a clean install. This is by far the best hardware site i've found well done.
Can you guys talk about the ASUS Z690 motherboards having LGA 1200 cooler mounts as well as LGA 1700? I just want to know if you guys can find a difference between native lga 1700 bracket vs 1200
The 1200 has a larger gap so if you have a noctua-esq cooler with hard plastic stand-offs then you will have a problem. But if you have a spring tension mount then it will be totally fine.
I have such mobo, but since I don't have LGA 1700 mounts I can't tell anything apart from obvious size difference. Btw cheapest (35 EUR incl. VAT in Europe) CM masterliqud lite 120 does the job pretty well - 12600k OCed to 5 on P/4 on E, keeps it in mid 80ies C under C23 load and 180W draw
LOL when I hear VBS I think Vacation Bible School... Context is everything when using acronyms. For instance: UTI could be Universal Technical Institute or Urinary Tract Infection... which is closer to what Win 11 gives you right now.
At Freightliner dealerships, it's a Used Truck Inspection. I'm not sure I want to a truck to have a UTI!
I think VBS has been a thing in one form or another going back to around year zero when Steve was spreading "the word", lol.
Thanks Steve, Back to you Steve.
I don’t do clean installs anymore. Only upgrades. It works because the bare-metal system is kept clean and all work is done in VMs.
Ive got everything to run my 12900k except a cooler. Had one, but wasn’t compatible with my mobo. Now my concern is covering enough space to make sure it’s running optimally. Didn’t realize lga 1700 compatible just meant the mount…. All my fault. 😓
12th gen die is mostly centered and smaller than 11th gen, as long as you have good contact to the IHS, it doesnt need full coverage. A native LGA1700 cooler will be better, but you can absolutely use an LGA1200 cooler if it mounts properly
I had so much stuff related to coolers, I managed to get a arctic liquid freezer II and coolermaster liquid 240 lite(or something like that) to work on my 12600k without any official kits.
@@tilburg8683 Are you uusing a Asus Mobo with the lga 1200 mounting hole ?
You should be able to use process lasso to limit Denuvo games to only the E-Cores without waiting for BIOS or game updates. DRM should not exist though.
Or just grab a crack and get some extra frames as a bonus
@@shukterhousejive not even gonna lie, I for almost every game I have bought, I also have the pirated version because they always seem to run better.
I pirated doom eternal, it ran amazing on a i7 2600k
@@ashleyjohansson230 WTF, how???
@@shukterhousejive depending on crack DRM may potentially still break games. For example if crack only spoofs licensing info etc and not intervenes after all the checks.
I dunno bro but my 12600k is kicking some serious but right now, extremely happy with it and I’ll be getting another. Good luck with the drm and stuff, I don’t use windblows so all that news is just noise. I also use all the cores as needed in linux. I guess it’s just more flexible in that sense. As for cooling, deep cool gmaxx 400 v1 is working great with a simple 1/8 machine nuts and bolts and washers mod.
But... Alder Lake doesn't have proper scheduling on Linux yet. It's completely random what type of core a process gets assigned to... So are you manually adjusting affinity every time you start a program?
When I hear VBS I think Visual Basic Script. We’re all nerds just different kinds
That did pop into my mind
11:52 that is the first time I have thought about the Scroll Lock key in a while. Sys Req and Pause Break are also on the list of vestigal keyboard keys
I'm thinking Intel had a look at a keyboard to see if they could find a key that almost no-one use, to map this function to. Not a bad move imo, definitely easier than going into BIOS (UEFI) each time you want to turn e-cores on/off.
Thanks Steve.
could you please make some efficiencies tests?
so basically performance/watt over a few CPUs and generations.
I don't really want my CPU to use 200W constantly so I would limit the power.
energy costs skyrocket currently in Europe, 38c/kWh is just insane.
so how would performance of different CPUs look if they are all limited to 65 or 100W?
That would actually be a interesting test.
Have the last 2 generations(skipping 11xxx series and going with 10xxx series) from both Intel and AMD.
Have each CPU run at:
25 Watt
65 Watt
100 Watt
125 Watt
250 Watt
And then note down their performances over 2-3 workstation task and 2-3 gaming task.
Will take a long time to do and properly not worth it in man hours. But would be interesting to see who is actually king of efficiency.
If power is an issue, go with proven technology with low wattage. I don't even see why your asking. You should avoid this CPU like the plague.
From EU here too, and in one of the countries with the worst electricity price hikes. Alder Lake (and....mostly nothing intel, unless talking about a 10400 or the like) is not for us. While a Ryzen 5600X or 5800X is amazingly frugal in power usage (and while consumes more, the level of efficiency of a 5950X is crazy, for all what it is capable of). Whoever is looking for energy efficiency, for now it's AMD. And of course Apple and their M1 solutions, but A) As I am only interested on desktop (for work), the only sensible (on price) solution is a mac mini, and those can't have more than 16GB of RAM (small for graphic work, even in Mac OS). B) The price you save in energy, you are wasting in incredibly inflated price, specially for any memory or disk upgrade. C) Software, workflow and drivers compatibility. Including many 3D apps that are crucial not existing for Mac OS. This is much of a bigger problem in my loved Linux... but still.
So, while I am no brand fanboy at all, I'm stuck with AMD for a while... But next year has many releases of a bunch of companies... let's see.
It depends on what your doing if your using a cpu heavy work load (like transcoding video) then it will be closer to the maximum TDP, if your just gaming your going to be closer to the minimum TDP. Also keep in mind AMD's TDP is not very accurate as it is an average across multiple use cases (most of which are cherry picked to get that number lower) I actually wish amd would adopt this TDP advertising model as it is way more transparent than what intel and amd were doing before.
Thanks as always for the great video! ~ Especially for giving us a better understanding of VBS...sigh...guess security will always have a speed/convenience cost somewhere... I hope microsoft can work on fine tuning it so it does not become a forced nuisance/performance nightmare later. Already with windows 11, TPM and drive encryption will have to be enabled for most standard installs...enough security for the time being maybe? lol.
i recently bought a 12900k and msi z690 edge, ethernet while watching youtube has been bad but wifi for some reason good, my friend also reported similar issues with his 12900k, i returned my 12900k for a new one, as well as a new mobo: msi z690 unify, same exact issue occurs, we both use the contact plate, not sure if its related but this is driving me bananas!
VBS is enabled by default in a fresh install on my 9 Gen 9900k system.
Thanks Steve great video, Almost nobody is talking about the limitation of 12 gen Intel processors. Hopefully Intel can work pass theses problems. But There is still the problem of high power use. I am waiting for the 65 watt parts. I hope you can do a video of Window 11 compatibility issues with older platforms 7th gen intel, 8 gen intel, 1 gen ryzen and any older GPU window 11 will not work with. Thanks
I'm just here for the moving VLOGs.
historically a windows upgrade rather than a clean install always ends up with a security issue as it uses a weakened security template to facilitate the use of pre installed apps... forcing DRM on the end user at the expense of stability and increased processing overhead is wrong, it should be in the laps of the media providers
Any news on when alder lake will come to laptops?
Not sure yet. We don't follow notebooks too closely, but not aware of a firm date yet.
@@GamersNexus One would think them needing 4 power bricks for the 12th gen CPU alone
@@GamersNexus you should cover the laptop space. Maybe not full on but it is a market that does matter more and more and maybe at least covering one or two each generation with your expertise and methodology would be awesome. Besides, if you don't wind up keeping them they would make great fan giveaways, hint hint
Expected to come at CES in early January.
Thank you for video, really :) !
Ahh vbs, another software based hardware supported band aid to fix actual hardware flaws and exploit. It will soon be mitigated by even more advanced malware or newer detected flaws introduced by always active virtualization. More security = more nooks and crannies to hide malicious stuff. Anyone remember the ASLR flaws, which managed to predict where and when a certain process value is in ram via only a few java script functions hosted by a web browser? ASLR was actually meant to prohibit exactly that. Another layer of security and always on virtualization... this begs for new rootkits and exploitation of intels management engine and amds psp.
So do we turn it off if we just game and browse the internet?
@@Hexxagone850 : Honestly, if you ask me, I would go the old fashioned way. A proper antivirus software and an up to date web browser, almost all attacks happen through the web browser anyway, so making sure that this one thing has all the latest patches is one part of the equation.
Second part is to not trust and download every thing you see on the web. Being sceptic towards most "to good to be true" software solution is another way to go. Limiting your own web traffic and trusting only few websites is another helpful way.
Having a proper router which can be configured to block traffic with suspicious websites and hosts goes along. In the end there is no 100% safety, browsing the web and using foreign applications will always be a gamble of trust, no software based security will patch hardware based flaws which run really deep into the core architecture.
If something like this VBS sandbox mode actually hinders perfectly legit although drm'd software and or services, it's already a failed security mechanism that impedes and dictates a users behavior as well as forces developers to patch their software or even write it a certain way. Anyone remember the piledriver architecture, amd wanted to force developers to code in a specific for their architecture optimized way. That did so happen... didn't it???
Ηow about Asus lga 1200 compatibility?
Using the existing 1200 mount to an Asus m/b isnt better than using a 1700 kit?
Lmao, the love how i was watching this video on a second screen while playing a pirated game on that list; and because its pirated i have experienced 0 issues, unlike the legit copy XD
Currently apart from some benchmarks and certain GPUs that are x8 with instead of x16, there is hardly any measurable difference between PCIe gen 3 and PCIe gen 4. Thus currently PCIe gen 5 also would have no measurable effect.
them si board with the Asus socket cover broke my brain
Keep in mind that some 3rd gen Ryzen mobile chips don't support virtualization so VBS won't work out of the box and has to be disabled on win 11 for your chip to work. Side note, if you use containers or VM's i would just stay away from Ryzen mobile chips altogether.
PS.
Alternatively, to a fresh install of an OS, you can upgrade or downgrade to an OS and then run an integrity check on the Windows files through Command Prompt by running
"sfc /scannow" (without the quotes)
This will scan the OS files and repair and replace any damaged or missing files.
Great video!
We are finding the VBS is not enable automatically like we were initially told on Intel 11th generation. Microsoft quietly update documentation with a note.
Something I've wondered for a while with all the Spectre and Meltdown stuff. As a gamer with nobody else using my laptop (don't laptop shame me!) is the performance loss from all the patches worth it or would I benefit from disabling them? Obviously I'm thinking in wider terms than just my laptop since I want to get a new build together when I can.
The only way I could see PCIE5 being beneficial today would be if a GPU used PCIE5 x8 instead of x16, freeing up 8 PCIE lanes to something else. But for most users, this would never really be an issue, maybe some edge cases where you want to run a high number of M.2 NVME drives.
I know with cooler alignment and the cold plate, I had got an LGA 1700 kit from Artic for my LF 2 420mm but it was still about a single millimeter too high, which then meant it wasn't mounted right and my 12900k would shoot to 90c in seconds on boot. The thermal paste still had the pattern I applied on the IHS. Had to lower it by a millimeter, which meant not using the metal washers they provided for the standoffs, and instead use the flat, sticky ones turned upside-down. Maybe I didn't mount it right in the first place, but either way lowering the kit by 1mm has worked wonders for me and the temps on my 12900k. I also applied a flat, even spread on the paste this time instead of an X method.
Keep being a beta tester.
On the cooler I was given a sythe fumu 2 for free it covers the whole CPU so got the adapter to mount it looks like a good fit unfortunately waiting on my ram before I can boot the pc and see if it works well
does my Does my deepcool L360 ARGB have ful coverage for 12th gen cpu ihs?
how to get bracket for LGA1700?
I wonder if Dell will consider a cooler change for the first time in forever for 12th gen.
Awesome! I Better pay close attention to the $4k CPU I bought six of... (ATOMIC EYE+ROLL)