As an enthusiast, it's disappointing when you can't get a great OC, but as a consumer, it's awesome that everybody is getting close to the best from their chips without having to become an enthusiast.
Yeah honestly that is where the industry went everyone gets all the performance they spend their money on regardless of overclocking skill it is a good thing.
People always jump on me when I say what you did. I agree, my ideal product is one that is well optimized out of the box. I can tinker but others can't and they shouldn't be held back. So all these boosts and various things are awesome. You can lower if you want, but you don't have to worry that your leaving performance on the table because you don't know "the secret handshake". It's a little like comcast. I know the words I have to say to them to get them to lower my cable bill, but it makes me angry that my grandma would be charged greater than twice if she was alive. Doesn't seem right.
@@billschannel1116 Imagine having cars that all go 130kmh, most of them can do 200, if you can OC them properly. That's where we were some 5-10 years ago, and look at us now, you set the gear and PC goes BRRR. :D
Exactly. Overclocking headroom is performance that’s just left of the table, and the only reason anyone ever started overclocking in the first place, was because the products they purchased were capable of much more than stock performance. I personally don’t like fucking around with BIOS settings that can make marginal improvements or actually lower my performance. I also drive a car with an automatic transmission. It’s got paddle shifters if I want to fuck around with shifting manually, but other than having a bit of fun, riding a little closer to redline, the transmission is still smart enough to know it should upshift before bad things happen. So, that’s where I’m at with computers these days. Just let play around within safe limits, and let me trade away some efficiency for completely worthless performance-but most of the time I’ll probably just let it run as efficiently as possible most of the time, and boosting as hard as it can when I’m actually doing stuff with it.
In the several hours I spent trying to squeeze as much performance out of my my 6700K back in the day, I found that HW monitor has a very strange reporting tactic with voltages. The VID voltage was always astronomical and was super frustrtating because no matter what I did for offsets or even going in and setting a static voltage that was supposed to run no matter what, the VId voltage was always super crazy. After several Google searches I found that the VID numbers are apparently what the CPU is requesting, NOT what it is actuially getting. There is a VCORE voltage number thats towards the top of HWmonitor that reports what the CPU is actually recieving for voltage. I would be willing to bet if Jay checked the VCORE voltage it would show the appropriate voltage for the CPU. I know this was long winded but im hoping I just saved at least one person the several day headeache I had trying to figure this out lol
I am confused by your comment. I have set up a sensor that says "CPU core VID (Effective)" and it goes up and down depending on what I set on the bios. I don't know if it's 100% accurate, but considering the offset I set, 1.02v sounds like it's correct. So my confusion with your comment is why do you say it's weird on reporting or it's always crazy?
@@LautaroQ2812 you may be looking at a different sensor read out than the typical VID report. If TH-cam let you post pictures in the comments I could show you what I'm talking about. But the fact that yours says effective after the VID leads me to believe it's a different read out that's more accurate
@@tivtag Can confirm, the radiator in my room is buggered, my 5900x + 3070fe are compensating nicely... But only when gaming, so productivity has dropped somewhat.
Cores start at 0 because most things are 0 indexed on computers. 0 is normally the first element unlike 1 in natural language. Things that don't say 0 for the first core have code to offset it by 1 for humans to consume.
Yea it's because of the binary system. 1 bit can hold two numbers 0 and 1. 2 bits go from 0 to 3 and so on. It's also a bit historic and makes more sense working with computers in general. I feel like Jay would know this though and refers to the fact that the naming there doesn't really make sense.
Out of curiosity I am more interested to see how well the 12700k will overclock up to the 12900k stock numbers while being $200 cheaper considering the 12700k is drawing less power stock and is clocking at a little slower speeds.
Probably not very well. I expect that there's a fair bit of binning involved and many/most 12700k parts won't be able to hit 12900k clocks, at least at Intel specs.
@@Steamrick You may be right but I wonder just how good the binning process is so early in the release of the new cpu's architecture as well. I just thought it was sort of weird that most of the "press kits" did not include the 12700k chip and what benches I have seen so far at stock speeds the 12700k seems to slot in just about where you would figure it would between the 12600k and the 12900k. I have seen oc video's on both the 12600k and 12900k but none yet using the 12700k good or bad.
@@arklight1670 I had pre ordered the 12700k, it arrived yesterday. Still on the fence on mb and ddr4 or ddr5. Leaning towards ddr4 at the present time.
@@Steamrick Well now that depends is a 12700k a 12900k with 4 E's disabled or is it it's own chip? Do we know this yet? Cause if it's the later then there's no reason to think they wouldn't overclock to the same level as a 12900k, and to the OP's point maybe a little more headroom due to less overall power draw/heat.
I've been digging around the internet the last couple of days, I'm fairly certain the best gaming performance will currently be achieved on a 12900k, on Win10, and with e-cores off. People are starting to do benchmarks and comparisons of this and the results are interesting. Only running P-cores of on a 12900k or 12700k will still give you 16 threads! More than enough for gaming. I would not recommend switching off hyper-threading unless you have a very specific use case and are OC'ing for high clocks in that.
Counting from 0 is very common in computing. In this case it saves some space too since by having the first core be 0000 then the 16th core can be 1111. If the first core was 0001 then the 16th core would have to be 10000 and take an extra bit.
Overclock is not worth it anymore Cpu's these days are running very effecient and just under their limit. So its al a waste of time and heat 😀 The gains are so small in the real world. A 1000 points in cinebech is almost nothing
As someone who builds their own machines, but doesn't stay up to date on hardware information until it's time to upgrade, I appreciate videos like this immensely! Thank you, JTC!
What i think might be quite interesting, if you have the time and inclination to do it, would be to UNDERCLOCK the E-cores, if that can be done in the BIOS. Since it's mostly only used for background tasks, do they really need to turbo to 3ghz or higher, for example. (I know that's not strictly the case as they can be used for other stuff too) But if folks could lower those cores, without sacrificing a lot of performance, then that'd help with heat/power usage.
disabling ecores or downclocking them would be the way to go. the CPU package is very thermally limited once the ecores get hammered with work, so I think you could throw that thermal headroom and wattage into the pcores by disabling the ecores.
*How to undervolt on the asus BIOS* For the z690 boards, go to the overclocking tab. Scroll way down to "Cpu core/cache voltage" Change it to adaptive, NOT offset. Then change it to a minus symbol. go down to the cpu and set it to 0.070 and hit enter. All 12700ks should be able to do that so far. Brings temps down allot. that is -70mv btw. It'll add a bunch of zero's when you hit enter, don't worry about it. The new voltage will be shown during all core load, not single core boosts which still hit higher.
When enabling the AIOC on Asus boards you need to also adjust the thermal velocity boost to +1 and you'll get your same numbers rather than a little less. Only takes an extra second or two to change that in bios as well while you're there.
For maximum desktop gaming efficiency and performance, independent voltage would be really helpful. For most modern games, only one core needs to be blazing fast. Even a dual voltage system with all performance cores would be amazing. You could run half of the cores at maximum efficiency and half of the cores at maximum performance. The system could put every thread that is not bottlenecking on to the low voltage cores, so basically only the render thread would need to be running at high voltage. Maximum safe voltage easily uses 4x more energy than a sensible voltage with a reasonably close clock rate.
@@rickvelde7967 Yeah minor problem though, how exactly are you planning on modulating voltage on die without heavy use of VRMs? What about the miss frequencies or synchronization issues such a system would potentially cause? The incredible amount of die space wasted on basically giving every single core an independent voltage rail or regulation module? I'm saying that's not how electricity works because you've effectively just tried to magic away the main problem of such a system without even acknowledging it. Only to come back and try to assert that I said something I didn't as well, lmao
@@TheFriendlyInvader You don't need independent voltage for each core. Having just two voltage zones would go a long way. Independent voltage control for cache vs logic already exists, so this is a solved problem. It would probably be easiest to have half of the cores on one voltage zone and half on another, but since single threaded performance typically only matters for just one single thread, it would be best to have a small number of cores at high voltage with the rest at low voltage.
It's simple, use a sealed vapor pump style refrigeration system to cool all of the heat syncs of the components, with the components isolated in a sealed case with as much pressure removed as possible, using a vacuum pump. It would prevent condensation from forming and prevent the components from burning up.
This is really good info, thank you for the video. I am planning on updating my PC with the 12900k, just waiting on my Z690 Hero MB and some DDR5. Good to know that I won't have to mess around with manual overclocking, especially because I will mainly use the PC for gaming.
Even at stock clocks the 12900K is a beast. The gains he was getting (5-7%) might not be worth the extra heat for gaming. That's of course up to you, just my thoughts on it.
i have one now for 2 days and i can say WOW! in the bios you set 2 buttons to ai overclock and boom cpu p cores to 5.3ghz and e cores to 4.0ghz, with not much heat. it is so easy nowadays, and that thing is a beast with my 6900xt
I love how a $600 motherboard is now considered "mid-range".. WTF! What the hell happened, my Asus Maximus X Hero board only ran me $230 and now MSRP is like double for the same series.
Overclocking has long since died outside of extreme cooling. Back when there was little difference between server chips and consumer chips, you could get tons more out of a chip because they were basically targeting the same TDP. Companies are not stupid and saw all the extra free performance that they were giving users. So instead of releasing the CPU's at their intended TDP's and clock speeds, they added their own overclocking system to sell that performance to the end user instead of giving it away. The base frequency of 3.2Ghz is the real frequency of the 12900K, if it was to be a server class chip or an older chip before all this stock overclocking nonsense. So, it's not a bad overclocker, intel is just doing all the overclocking possible for you in advance. Why do you think motherboards have gotten so expensive? They are basically A.I. overclocking platforms.
I refresh my PC when the arrival of new gen stock at the retailers, causes the superseded components to then bear much more reasonable prices. The Intel Core i9-12900KS CPU, the ASUS ROG STRIX Z690-G GAMING WIFI mobo, and the ZOTAC GAMING GeForce RTX 3070 Twin Edge OC graphics, in my mini tower, are all factory OC'd! I fully approve of Intel's and ZOTAC's overclocking. ASUS' AI Optimizing required failures to recognize its limits - unacceptable by me. The default settings, even some Auto Settings, as was reported by Gamers Nexus, were way too aggressive to be safe to use. A combo of Auto and Manual settings keeps the ASUS mobo under control.
living in australia, i can say with 100% certainty. NZXT IS NOT SELLING PCS DOWN UNDER. tried to get the BLD kit or a prebuilt, not available anywhere except good ol usa
My 5950X gets 29394 at 4425MHz all core in R23, so much the same as the 12900K. The CPU package power is also just over 200 Watts with 1.25V core voltage!
Great video! Ive seen several videos now of people running these chips stock and overclocked on AIO's with all core full load temps in the 60's and 70's. There are some idiots trying to claim that these 12900k's run at temps in the high 90's and low 100's completely stock on custom water loops!
even worse are the idiots who are falling for this misinformation. then proceed to try and talk shit its hilarious. you mean they dont need a custom loop with a chiller lmao
Gigabyte Gaming x 12600k Set: All core 5GHz p core All core 4Ghz e core XMP memory 3200 cas14 gear 1 Off set .05 Cache ratio 40 And not much else (could squeeze more out of it but really no reason to) Runs very well with my old Noctua D14. Basically a new motherboard and CPU and reused every thing else I had. Nice upgrade to see me through a few years.
Thanks very much for this video! Exactly what I was curious about. I see this as a complete win. I'm building a custom loop with a few radiators and EK's new flagship 1700 block. Hopefully, that will be enough cooling power. I just want to get it to run 5.1 on P and 4 on e. That will be a nice improvement over my current 8086k at 5.1 I've got everything I need now. All I need is DDR5 RAM and hopefully luck of the draw with my chip. Fingers crossed. I work in 3D raytracing and modeling, so I want to squeeze what I can.
@Jorge Laughingman I see your point. And I agree with everything you've said. But Intel has done something new here for PCs, and they've paved the way for some new tech. This comes at a time where AMD has dominated the field for years. This is shaking the competition up again. We can already see that with all the price drops AMD has made since 12th gen's launch. Did it beat AMD flat out? No. But it does perform very close for cheaper.... Considerably cheaper. That is a win. No one ever said a flagship had to run cool on a crappy AIO or air cooler. It uses more power... It runs hotter. It runs faster for cheaper... It encourages competition... Do you not understand what a "win" is? Tests I see run under load at 80-90c with an AIO. That is more than reasonable. And it's a flagship. It should be Watercooled. As for power... If you can't afford the extra power bill, a flagship i9 likely isn't for you anyways. How would you define the release? Seriously. I'm not trying to judge your opinion. Would you say it was a fail? A tie? Curious what your thoughts are. Because from my point of view, it is a win, if only because it encourages competition.
@@MasonMenzies I agree. If you are getting a flagship cpu that already costs around $900, you should probably be able to also afford cooling and power for it (which will be much cheaper than the chip itself). Plus, I am interested to see where the idea of the E and P cores will go with both AMD and Intel chips in the future.
@Jorge Laughingman I would certainly define gamers as an average user. Games just aren't that demanding compared to other applications. For me, working behind the scenes of games, the DDR5 and single thread performance is huge. But everyone is entitled to their thoughts.
@Jorge Laughingman and to add to that, although we're likely never to agree, I think that right now AMD is looking at their line up and thinking... Hey, we don't have a definitive performance win. Not taking into consideration the heat or power. And at the end of the day, flagships aren't for average users, and those who aren't average users won't care about the higher power draw or higher temps. I don't at least. And I've always chased performance. And in this case, like it or not, Intel wins in performance by matching AMD, and doing that for a lower price. It'll be hot. But I don't care. I won't have to fire up a frying pan to cook my eggs. There ya go, now the power draw doesn't matter.
9:01 As soon as I saw the first videos drop on this I thought "I wonder if we could undervolt this to get some better temps" and here we are! This should be more common knowledge
5:02 Yo Jay, everything in digital logic design starts from 0, even at the smallest unit. Like, in a byte there are 8 bits, numbered 0-7 (to be more precise, it's [7:0]). Same principle extends to the multiple instances of the core. Cheerio.
Yup was gonna leave a comment saying this kind of numbering behavior is likely an extension of how things are in binary & every programming language (at least that I'm aware of), where numbering/indexing/etc. always starts at 0, not 1.
Omg is that right? WOW. No one else other than you two knew this. Yeah, I'm sure Jay doesn't know this either. /smh Guys, he was just asking why they don't display it starting with 1, he knows why it starts with 0. JUST LIKE WE ALL DO. ffs.
@@MooKyTig not everyone is going to know this. Most people would have absolutely no reason to know this, even computer enthusiasts and people in IT. But being as it's a question Jay brought up (even if he knows the answer, which yeah he probably does), it doesn't hurt to have someone answer in the comments for the people watching who have no reason to work with binary/hexadecimal/programming/etc. to understand this kind of numbering.
@@MooKyTig wow way to feel you have to defend Jay so strongly lol. I guess people are not allowed to give feedback or opinions.. not like he was rude or ill intent in his comments.
@@IanLantz 1) "people in IT" Bull. I've never met anyone in IT that isn't aware of this. 2) And as for most people, well fine, but that's hardly the people that watch channels like this.
They probably start at core #0 due to the way things are indexed in code (the first element in a collection is index 0 , the second is index 1 and so on)
To me, overclocking only ever makes since for the i5/r5 range since the high end stuff usually is boosted pretty high and the mid range usually has more leg room. Plus the higher end stuff pulls way more power and can generate way more heat for not enough gain to make up for it, as we all just saw in this video. I was running my 9600k at 5 ghz all day but as soon as I went to the 9900k I left it at the stock 4.7 and still manage to pull almost the same amount of power that my i5 did OC’d
@@brkbtjunkie I actually do have the kf version as well and at stock it will pull a little over 100w. To get it stable at 5.1ghz I have to jump it up to 1.39v and it’ll pull almost 160w of power
@@saricubra2867 Yes, but in a Xeon vs. Epyc use case, power draw and multi-core performance are much more important than for your average user. When you have tons of enterprise servers under constant use, a difference in power draw can mean thousands of dollars in electricity costs. And the types of usage these machines receive would scale significantly on many cores/threads, but may not require as much performance from each individual core. That's why server chips normally run at 2-3 Ghz, and never reach 4 Ghz. I'd expect Intel's take on Xeon to be a CPU with all E-cores, rather than a hybrid design. It's been shown that those E-cores have insane IPC, and if Intel were to enable hyperthreading for E-cores on Xeon chips, that would make for a very strong contender against Epyc. I doubt that Intel has any plans to design a special lineup of CPUs for HEDT use. With the core count of the mainstream rapidly increasing, the line between desktop and server has been blurred, and Intel's old X series no longer would have the special position it held 5 years ago. They may choose to not release an HEDT platform at all, or they could use binned Xeon chips with higher core counts, just like AMD's take on Threadripper using binned Epyc chips. When I got into computers 8 years ago, I remember being amazed by Intel's 5960X with its 8 cores. But in the current market where even 16 cores is considered mainstream and also found in server chips, the niche that the X-series once filled has vanished.
Setup a new 12900K on an asus Rog formula. Pretty darn powerful chip. Ran some testing and scores nicely. It just has an auto asus overclock applied but has been stable. Definitely runs hot even watercooled. All core workloads still bring it up to 80 and it is water cooled. No doubt some liquid metal or improved thermal paste would help a little but I suspect it is just a case of when it is under full load it is going to be 80° within seconds of the test stopping it goes back to 30° to 35° fast. Definitely a warm chip.
This honestly doesn't sound any hotter than my i7 8700k water cooled when running a benchmark with AVX instruction. I'm assuming you're getting those temps running Cinebench? AVX workloads causing the CPU to hit 80C is actually really reasonable IMO. If you're doing normal workloads and/or gaming, it's probably much less heat.
Stock 12900k uses slightly less power than 5950x overclocked (according to gamersnexus video). This launch is Intel throwing everything into the CPU, only using peak silicon. The margins on this CPU will probably be reduced compared to the last 15 years.
I’m happy to see the E-cores can achieve beyond 4ghz! This new architecture feels like what Sandy Bridge felt like just around a decade ago. There are even still decent budget builds I’ve seen here and there using i5-2500k’s & i7-2600k’s just because of how far ahead of their time those chips were (usually paired with an RX 570 or 1050Ti/1060 6gb nowadays) Then again, AMD is gonna bring the heat with Zen3D(if that’s what they’re calling it?) as well as Zen4. Man what a time to be alive, if only the chip shortage wasn’t a constant issue.
Despite not being the traditional O/c of the Celeron 300 / Core 2 etc era 🥳🥰 …. The new Intel 12 series is bringing a modern take of those good old days with lots of tinkering with O/c’ing (shame we can’t get the 30-50% clock boost of the oooold cpu cores 👍😆)
You were asking in another video what we wanted to see on the channel... well, I quite enjoyed this, actually! You know what could be cool? Overclocking mid-range hardware, since that's what a lot of us are using. See how far you can push a 5600G or something, that could be cool!
Honestly, 12700k and 12600k seem way more interesting. I will never get why people are interested in top tier cpu like 5950x or this 12900k. Not if it's just for gaming.
@@andreabriganti8621 lmaooo. For a quick sec, I was like, wtf... Jay used so many weird emojies. More to your point, i have also never been compelled to grab x900k + tiers. No judgment to others, but the gain above a x700k vs x900k were never worth for me since I mainly care for gaming.
@@nuruddinpeters9491 Yeah. I didn't mean to judge either, it's just that I don't see worth it to pay 200/250 bucks more to go from a 5900x to a 5950x or from a 12700k to a 12900k. So, since this video is about overclock, o.c. a beast like this cpu it's good, in order to test it and understand the efficiency of the cpu itself, however it's almost useless, in a gaming scenario. Expecially if the major boost is on higher temperatures and power consumption. Have a nice evening mate.
it has been like this since 5 or 6 years. For example, back in the day if you wanted "playable" framerates at PUBG or DAYZ, you needed to overclock EVERY SINGLE ELEMENT of the pc. I still remember going from my 1100t to my 2500k and jumping from 15-20 fps to 30 fps in dayz mod. Adding 2133 rams and overclocking the cpu to 4.8 ghz gave me around 40-50 FPS. Basically every HZ gained was valuable. Nowdays? Every game looks the same and can be ran at 150fps + Maybe when they add complex AI overclocking CPUs will have meaning again. Until then, I rather run at 4 ghz low power than having a 500w toaster for web browsing.
Not surprised by the title, I got the feeling Intel are already screwing all they can and pushing this silicone to the limits just to get ahead of AMD. I am interested to see AMD's inevitable response...
I use an I9-12900k at 5.3 all p-core and 4.2 all e-core, max temps with cinebench is 73c @ 1.3 volts, and the vrm frequency does matter for trying to reach lower volts at higher clocks
Forget all-core overclocking. Just use AI tweaker to assess your chip and it will determine which cores are good or not and give you a recommended voltage for each core. You're likely running into a couple bad cores that need more voltage than the others. Most P-cores should be able to get 5.5 ghz. If you set it up correctly you'll be able to break 30K on that 360mm rad.
You mean overclocking the 12900k is futile as it is basically intel redoing the prescott debacle when AMD had good competitive parts and intel had nothing to counter except its existing architecture so they rebuilt it to yes be faster than the competition but at the vast expense of energy and heat. Sound familiar? I look at this chip and while I do see it is the fastest chip, I also see intel flogging a dead horse in blind panic, and the naive lapping it up and applauding like they did 20 years ago. The general mantra of "well if it's just general use you should not see the heat nor power usage" as per your testing, is insufficient to cover for the fact that when it comes to energy the chips are not worth it for high end use. I mean say you have to render for your job, you save 3 minutes per 15 minute render. 9 hour day. That's 1 hour and 8 minutes saved in rendering time for sure, but at increased electrical cost negating much if not all of the saving, indeed the cost per render in terms of energy may be double that of the competitor who will take that extra hour but cost you far less for the work you do overall in energy costs. We do not live in the era of cheap energy any more. That 250yr period ended circa 2008 for the vast majority especially across europe. Now we have depleted all the fuels that are easy to get ahold of or forced to use less damaging alternatives, generating electrical power has become vastly more expensive for consumers be they business or residential. In an age where we need to do the same or more, with less, it feels like a con.
@@AliceC993 My 5950x is currently running at 220w at 85c at 1.24v (which is a limit I set) in my extensive custom loop. Yes and yes. Which of us has a 5950x?
@@user-co6eo5pz7x If you were referring to me I've been just as critical of when AMD has done this, making reference on several occasions to the FX9590. I give not a toss which company I or anyone else uses, my recommendation over 27 years of building computers professionally and personally has always been to go for best bang for your buck. But that does not stop me from being duly critical no matter the company at the time.
This is why I undervolted my 12700k and just used MSI Enhanced Turbo in bios. Keeps all P cores locked at 4.8GHz and keeps temp below 60c at full load. Took a bit of tweaking using voltage override + offset + LLC3, but it's 100% stable, way cooler, and better performance than bone stock settings.
From what I have been reading that sounds like exactly what AMD is working on when Zen 5 comes out. It sounds like Zen5 will be their P-core, and they will use smaller Zen4 cores for their E-cores. I think they are called 4D Cores or something. But then they are also making an Epyc server chip that is made up of 128 4D cores with less cache etc that should be coming out first. Very exciting time for compute enthusiasts.
It occurs to me that you could maybe get a bit more headroom for the P-cores by intentionally underclocking the E-cores, but whether or not that would result in an actual score improvement (and how much) is not as obvious...
Jay: “Overclocking the 12900k is dangerous” DerBauer: “I just got the 12900k, I’m gonna pour ln2 on it for 5 hours” and just broke 3 or 4 world records yesterday
@@koky179 I don't think he's ever seen LN2 before this dude, does he think you can just fill up a milk carton and leave it on the table until you need it? Lol. To be fair to him though Der Bau8er did this ridiculous overclock, that is definitely worthy of a mention. But like you said, no regular user has freezer stores full of liquid fucking nitrogen 🤣
I'm not huge in overclocking...but I play it safe and get it to a limit to where I'm comfortable with that will underpower the wattage to boot... The AMD 65W chips clocked at 4GHz (including the slight bit of overclocking used) can run at 45W under the right settings...
Loving the videos Jay and Phil. An idea for a video could be the different fan curves which are needed for different types of coolers. Air cooling, aio, water coolers etc.. I have an AiO on my CPU but the fans always seem to ramp up even under the lightest of loads. Hope this is a good idea.
thats one thing I like about my corsair aio. It ramps the fans based on liquid temp not cpu temp. So it takes a lot to heat the liquid up so fans never ramp up unless I run a sustained load for like 15 minutes
I had that issue on my pc too but there is a setting in bios that would let me control the delay of the fan speed up and speed down up to 1 second max. There were steps like 0.5, 0,7 or sth. Cant remember the setting , and atm im too lazy to check it but if u can't find it let me know and ill check after work tomorrow. ( MSI x570 gaming and a ryzen 3600x build but i suppose all motherboards would have it).
Chris, this is the wrong guy to be following. lol. Trust me. This guy is for casuals and mainstream PC users and he even screws that up often. If you're a casual, great. He is perfect for you.
Overclocking -the 12900k- any CPU, and even GPU, from the past like 5 years is pointless and dangerous! Well, not quite pointless unless you're expecting anything significant out of it, like more than like 5 - 10% more performance. Doing it for fun, because you want to squeeze out everything from your product, or to lower temps by setting negative offsets which conversely improves performance a bit because of the algos, is the reason to do it nowadays. P.S. If you correct a spelling mistake, and the text is red, YT will remove that part from your comment. Didn't used to happen, but it did now.
the heat and voltage are great in the work itself. in vain the time spent that the currently strongest cpu would be fast for even more mhz. but I understand. it’s all in the job description.
Guys sorry about that, but I see a lot of people testing OC on 3D tasks like CB23 but, 90% of the users are gamers. From a 3D point of view a gain of 5% is ridiculous, not a worth at all. Sometimes I feel like this is an obsession. Am I missing some point on this thing? because it doesn’t make any sense for me spend a lot of time trying to learn the math to get 5% better performance.
You are 100% right, it doesn't make a lot of sense. It was a big thing in 2000s where you could buy cheapest CPU and get in to perform like a top one with a few BIOS tweaks. Saved you a lot of money and performance gain was substantial. These days you either get an unstable system or fry your cpu with excessive voltage for like 10% performance. Not worth at all!
Something I am super curious about is virtualization on this platform, how does a hypervisor make a distinction between the P cores and E cores. Also I looked and it does support VT-x and VT-d thinking about how I am running Unraid on my desktop to split up my Windows and Linux workstations and wondering how an upgrade might affect me.
Just got one of these. Got a Peerless Assassin as a placeholder $50 cooler and WOW- I think it might be permanent until I have money to throw around (maybe never!). I haven't done AVX tests but at stock all cores 100% it doesn't break 85c, no throttling. I was going to get an AIO but I'm going to hold off until I feel I am limited in some way. I can't believe an air cooler does so well. Been gaming all day as my first hard 'Daily' testing of a few games, many hours each (Cyber Punk/Elden Ring/Fortnite) and the hottest any of my cores got was 68c. I'm not going to bother overclocking until I feel I need it- this is GREAT coming from a 4790k! That a 50 Dollar Air cooler can handle this thing is almost as amazing as the performance jump from my old CPU! Things may be different once I get to testing AVX though- I use things that need it rare enough that if it throttles a bit it won't make a big difference.
forgive me while I laugh maniacally at any notion of "standard" cooling for this chip. People are reporting 360 custom loops being unable to cool it enough to prevent thermal throttling at high loads so God only knows what extra expense is needed just to keep the temps down when at high load. Jay says we won't see these kinds of power draw for gaming and general use, what he forgets to say is you may not see that from games today, but in a year or two, with parallel rendering becoming the norm via vulkan and dx12 ultimate allowing for far greater CPU use under multicore systems? I think he is not factoring in the future.
@@Nine-Signs right 360 is what a lot of ppl are using. will a 240 work, will a 280?. will a nh-d 15 work, ect . i am curious as to what the minimum standard will be. i am well aware they do not ship with cooling.
@@666Necropsy I haven't a clue what the minimum will be as even those who don't have heavy use will still need a cooler capable of cooling the chip at its maximum for those occasional times they do a heavy workload. Either that or put up with thermal throttling.
Not sure if it was mentioned yet, but numbers start at zero because that’s the number the computer language starts at. When programming 0 starts as an iteration. So when you count to 5, it would be 01234.
I don't know....XTU has its merits, but really JAY should have stuck too BIOS overclock, regardless of masses. (Guess more peeps gonna pop their cherry's now pilgrim...)
I need to check if i have AI optimization enabled i tried some overclocking on my 10700k but i opted to keep it at defaults for now it's more than powerful enough for gaming.
I would love to see a similar bench of the 5950X vs 12900K using DDR4 then compare that to the 5950X (being limited to DDR4 and all) vs 12900k's DDR5 scores. Would be interesting to see how much the DDR5 actually effects performance with the same processors when comparing.
It's amusing because simply enabling PBO and increasing PPT/TDC/EDC nets you 28k easy on a 5950x - and this is before any curve optimizer tuning while also using DDR4. I hit 30.6k w/ my PBO+CO tuned 5950x and it's not really super hardcore.
Yeah like running an FX-6300 on 5.3 ghz since release just for the sake of it.😂 Took me a year to get a SMALL watercooling for it 😂 before that it was melting like Tschernobyl.but hey. The fun's in the risk right? ;D was even tempted to try 6ghz because it didnt break a sweat on 5.3. No instability and no raised voltages.
@@marvinlauerwald my FX-8350 is still rumbling along peacefully at 4.4ghz on a 280mm aio normally keep around 60c at stock it keeps around 40c (in winter I have no heater and I've seen 0c as temps and yes inside my house was around 26F) Maybe I should see what my 7 year old aio has left in it and try 5.0ghz?
I've copied every single one of your settings down to the decimal. I have a 360 AIO and I've installed the thermal grizzly contact frame, which gave me a boost. I was able to hit 27k that first day after putting the contact frame on but now I can't get anywhere near that number
@@Safetytrousers Wrong the 5950X with it's two chiplets design, dissipates more heat than a 5800X , even so a 5800 is still cooler than a 12900K source : Page 21 techpowerup 12900 review
Ian Cutress was saying that during his testing HWinfo was showing 100 degrees on the CPU, while he was using a cooler with a temp sensor in the block that was saying the CPU was upper sixties. He thinks that the issue is with HWinfo, not the chip.
At stock settings the i9 pulls over 300W (only during stress tests to be fair; Techspot recommended a 360mm water cooling loop). This is pure guesswork but I'm guessing close to 550W. If you had an RTX 3090 you could have power spikes well in excess of 1000W. Performance is great, but at what cost?
I've just ordered a 12700k and be the first time not gonna bother to overclock.ill try under volting when I've done Tests and built my pc. Also watched this video when got released but watching again for the refresh 🙂
My friend ask me why I have a hot spring in the backyard and I showed him the water loop in my computer and he doesnt get it. Thank you intel for providing me a gaming Pc and a hotspring.
my personal thought would be to drop the clocks on the Ecores, and then use that available wattage on the Pcores. the CPU package as a whole seems to be very thermally limited when the ecores are getting hammers, so honestly, disabling them seems like it would be the way to go for maximum overclocks on the pcores.
you've been looking at the wrong voltage stat on hardware monitor. Vid voltage is what the CPUs request in under normal conditions from its micro code. Proper voltage CPU voltage is on CPU vcore at the top of the page on hardware monitor
Great video as always Jay; I just don't see myself getting 12th gen. Come tax season I am getting either a i7 or i9 10th gen. Biggest expense will be the video card. over 1k for that. Unfortunately I have to go with ebay on that one since these days every retailer you could go to is in some way a scalper. But I love your tech stuff; a lot like me, just with better access to the tech.
If you live near microcenter in the US they still better than scalpers. And if your fine with AMD they always seem to be in stock. Went to micro center yesterday. They had 2 3080tis. Some 20 series cards and a few 10 series cards. Then for AMD. They had like all the cards for the 6000 series in stock. Just shows while AMD has pretty much caught up to intel in cpu and people are fine switching back and forth between AMD and Intel for cpu. AMD is no where close to catching NVDIA market wise. Microcenter was in Yonkers New York btw.
@@ltbeefy9054 I live in Oregon...we used to have Fry's, but when they went belly up we lost our one good brick and motor store for tech hardware. Best Buy's website is a no go; always out of stock. So I am stuck either trying to get lucky with Amazon or going on ebay for a scalper and trying to trust they are not going to screw me over and just send me a box.
@@fightingfalconfan yea bestbuy way of doing it sucks. As they stock online. Microcenter does in store only pickup. Which is only reason I bet it ever has any cards in stock and why I was able to grab myself a gpu. No need to compete with bots. Just need to get there early enough in person. And be lucky enough to live reasonably close to a store.
Why? Come tax season the lower-end 12th gen chipsets and chips will be out. Get yourself a 12400, you'll have a nice, fast, cheap 6-core CPU that'll still be faster than a 11600k. Of course there's reasons to get a 12700k or whatever now, but if cost and price/performance is your goal, you could do better than going back to older-gen Intel CPUs.
When Jay was showing the GUI it took me a second to realize they just recorded the screen with the camera and it made me question the quality of my screen 😂.
They start on core #0 because basically all computer programming uses 0 indexed arrays. For a 4 bit binary number, 0000 binary is 0 decimal, 1111 binary is 15 decimal.
Also depends on the work load, on DXO even with just post processing one image from a Nikon Z7. The load has been distributed too all cores, which is great. But it runs only at 4.4 ghz for P cores and I can't remeber thr E cores, way low with what it is capable of and there is tons of thermal headroom. Even just doing a 5.2 all p core and 4.0 E core would be a DRAMATIC improvement over the stock logic and settings.
Well if I was to buy a 12900K, I don't actually think I'll be interested in overclocking it. So if I was going to do so, this video wouldn't change my mind. But regardless it's good to know what margin I do have for overclocking if I decide to overclock. By the way absolutely love your videos Jay!!! ❤️
I got a 5950X stock running low 60C underload with a Liquid Freezer 280... I am not a fanboy but this feels even more like Intel has something good but like Ryzen had on first release. I am going to wait, naturally, probably 2 cycles before I even consider committing to anything new.
The same folks who hated the FX9590 for doing the same, the same folks that loved the prescott chip for doing the same. When any CPU manufacturer releases their next design to answer the competition, and that design is just an absolute massive power hog, you know the company who built it merely shat the bed at the competition and said "fuck it, ramp the power draw to oblivion to get ahead while we work on something new". And the seals do clap.
Someone should modify the PC cooler spot into a pot to heat up water and cook food with that amount of heat from intel 12th gen we should not waste them 😂 (just kidding)
The reason the voltage goes above 1.4 is because the CPU Current Capability is set to Auto in the BIOS by default, which in turn sets it to 140%. Same goes if the LLC is set to Auto, it auto-sets it to level 2. At least it does on my Maximus.
As an enthusiast, it's disappointing when you can't get a great OC, but as a consumer, it's awesome that everybody is getting close to the best from their chips without having to become an enthusiast.
Yeah honestly that is where the industry went everyone gets all the performance they spend their money on regardless of overclocking skill it is a good thing.
People always jump on me when I say what you did. I agree, my ideal product is one that is well optimized out of the box. I can tinker but others can't and they shouldn't be held back. So all these boosts and various things are awesome. You can lower if you want, but you don't have to worry that your leaving performance on the table because you don't know "the secret handshake". It's a little like comcast. I know the words I have to say to them to get them to lower my cable bill, but it makes me angry that my grandma would be charged greater than twice if she was alive. Doesn't seem right.
You are right but the truth is they do this by tighter segmentation so they could cash out more. Both blue and red team do this.
@@billschannel1116 Imagine having cars that all go 130kmh, most of them can do 200, if you can OC them properly.
That's where we were some 5-10 years ago, and look at us now, you set the gear and PC goes BRRR. :D
Exactly. Overclocking headroom is performance that’s just left of the table, and the only reason anyone ever started overclocking in the first place, was because the products they purchased were capable of much more than stock performance.
I personally don’t like fucking around with BIOS settings that can make marginal improvements or actually lower my performance. I also drive a car with an automatic transmission. It’s got paddle shifters if I want to fuck around with shifting manually, but other than having a bit of fun, riding a little closer to redline, the transmission is still smart enough to know it should upshift before bad things happen. So, that’s where I’m at with computers these days. Just let play around within safe limits, and let me trade away some efficiency for completely worthless performance-but most of the time I’ll probably just let it run as efficiently as possible most of the time, and boosting as hard as it can when I’m actually doing stuff with it.
In the several hours I spent trying to squeeze as much performance out of my my 6700K back in the day, I found that HW monitor has a very strange reporting tactic with voltages. The VID voltage was always astronomical and was super frustrtating because no matter what I did for offsets or even going in and setting a static voltage that was supposed to run no matter what, the VId voltage was always super crazy. After several Google searches I found that the VID numbers are apparently what the CPU is requesting, NOT what it is actuially getting. There is a VCORE voltage number thats towards the top of HWmonitor that reports what the CPU is actually recieving for voltage. I would be willing to bet if Jay checked the VCORE voltage it would show the appropriate voltage for the CPU. I know this was long winded but im hoping I just saved at least one person the several day headeache I had trying to figure this out lol
this vid needs more detailed vcore reporting
yep I can't belive jay in all his years of cpu overclocking was looking at only VID.....
CPU-Z should show your Vcore. Even though it's inaccurate given it's precision. But it at least tries to show the actual Vcore, not the VID... ;-)
I am confused by your comment. I have set up a sensor that says "CPU core VID (Effective)" and it goes up and down depending on what I set on the bios. I don't know if it's 100% accurate, but considering the offset I set, 1.02v sounds like it's correct.
So my confusion with your comment is why do you say it's weird on reporting or it's always crazy?
@@LautaroQ2812 you may be looking at a different sensor read out than the typical VID report. If TH-cam let you post pictures in the comments I could show you what I'm talking about. But the fact that yours says effective after the VID leads me to believe it's a different read out that's more accurate
I hear that we should expect a cold winter ahead so this was the perfect time for these cpus to have come out lol
The 5950x easily gets above the temperatures the voltage offset on this CPU produces.
I know this is a joke but people will actually use this argument believe it or not...
Both AMD and Intel gaming rigs make a good heater in winter ❄️
@@tivtag Can confirm, the radiator in my room is buggered, my 5900x + 3070fe are compensating nicely... But only when gaming, so productivity has dropped somewhat.
It's a dark winter, come on man.
Cores start at 0 because most things are 0 indexed on computers. 0 is normally the first element unlike 1 in natural language. Things that don't say 0 for the first core have code to offset it by 1 for humans to consume.
its because computers are binary , but yeah what u said pretty much
@@gamersplaygroundliquidm3th526 I always figured it was because 0 is technically the first number
Yea it's because of the binary system. 1 bit can hold two numbers 0 and 1. 2 bits go from 0 to 3 and so on. It's also a bit historic and makes more sense working with computers in general. I feel like Jay would know this though and refers to the fact that the naming there doesn't really make sense.
Finally! Another code person that understands this!
Decimal system has 10 values, 0 is the first. In old times they decided we count to 10 with our fingers. Who can blame them now lol
"Push it to the limit!"
Cue 80s montage!
Crash the gaaates!
*whip clap, whip clap*
First thing that came to mind!! ❤🤣
Talk about a missed opportunity...🤣
Cue 80°C montage!!
"Jayz shows 12900K" ... Say hello to my LITTLE friend !
Walk along the razors edge!
Out of curiosity I am more interested to see how well the 12700k will overclock up to the 12900k stock numbers while being $200 cheaper
considering the 12700k is drawing less power stock and is clocking at a little slower speeds.
Probably not very well. I expect that there's a fair bit of binning involved and many/most 12700k parts won't be able to hit 12900k clocks, at least at Intel specs.
@@Steamrick You may be right but I wonder just how good the binning process is so early in the release of the new cpu's architecture as well.
I just thought it was sort of weird that most of the "press kits" did not include the 12700k chip and what benches I have seen so far at stock speeds the 12700k seems to slot in just about where you would figure it would between the 12600k and the 12900k.
I have seen oc video's on both the 12600k and 12900k but none yet using the 12700k good or bad.
@@billwiley7216 if you put 12700/k/kf in youtube you will find what your looking for. from what i have seen that's the one to get. me running 5800x.
@@arklight1670 I had pre ordered the 12700k, it arrived yesterday. Still on the fence on mb and ddr4 or ddr5.
Leaning towards ddr4 at the present time.
@@Steamrick Well now that depends is a 12700k a 12900k with 4 E's disabled or is it it's own chip? Do we know this yet? Cause if it's the later then there's no reason to think they wouldn't overclock to the same level as a 12900k, and to the OP's point maybe a little more headroom due to less overall power draw/heat.
Your focus on windows 10 in the first video 3 days ago was appreciated. I think there's a lot of us that aren't entertaining win11 yet.
Hell nah don't even get close to me with win11
I've been digging around the internet the last couple of days, I'm fairly certain the best gaming performance will currently be achieved on a 12900k, on Win10, and with e-cores off. People are starting to do benchmarks and comparisons of this and the results are interesting. Only running P-cores of on a 12900k or 12700k will still give you 16 threads! More than enough for gaming. I would not recommend switching off hyper-threading unless you have a very specific use case and are OC'ing for high clocks in that.
Counting from 0 is very common in computing. In this case it saves some space too since by having the first core be 0000 then the 16th core can be 1111. If the first core was 0001 then the 16th core would have to be 10000 and take an extra bit.
3:50 Jay motioned it went to 1K watts in the GUI and 4K in the bios. I'm waiting for the board that does 1.21 gigawatts.
4 kW? It would trip my breaker lol, I have a limit of 3 kW for the whole house
You too can have your own "elephant's foot" in the basement! No need to visit Chernobyl tovarich!
What was I thinking 1.21 Gigawatts?
Nice reference, nerd ;) lol
@@t0aster_b4th one of my favorite movies and I couldn't resist!
VID voltage is what the cores are asking for, not what there running at. These chips are good for 5.5-5.7.
The edits in this show is very well done. Kudos to your team.
Overclock is not worth it anymore
Cpu's these days are running very effecient and just under their limit. So its al a waste of time and heat 😀
The gains are so small in the real world. A 1000 points in cinebech is almost nothing
Intel needs to start packaging their CPU's with AIO's now.
And the aio have to be a good one or else ....
@@annguyenlehoang7779 120mm AIO 😁
They need to start using better TIM either liquid metal or a fusible aluminium alloy like AMD uses
custom sounds more reasonable !
All you need is a decent air cooler....dunno why everyone things you need a 360mm AIO to cool things.
As someone who builds their own machines, but doesn't stay up to date on hardware information until it's time to upgrade, I appreciate videos like this immensely! Thank you, JTC!
What i think might be quite interesting, if you have the time and inclination to do it, would be to UNDERCLOCK the E-cores, if that can be done in the BIOS.
Since it's mostly only used for background tasks, do they really need to turbo to 3ghz or higher, for example. (I know that's not strictly the case as they can be used for other stuff too)
But if folks could lower those cores, without sacrificing a lot of performance, then that'd help with heat/power usage.
The smarter thing is likely to just set negative voltage offsets for both types of cores.
disabling ecores or downclocking them would be the way to go. the CPU package is very thermally limited once the ecores get hammered with work, so I think you could throw that thermal headroom and wattage into the pcores by disabling the ecores.
*How to undervolt on the asus BIOS*
For the z690 boards, go to the overclocking tab. Scroll way down to "Cpu core/cache voltage"
Change it to adaptive, NOT offset.
Then change it to a minus symbol.
go down to the cpu and set it to 0.070 and hit enter. All 12700ks should be able to do that so far. Brings temps down allot.
that is -70mv btw. It'll add a bunch of zero's when you hit enter, don't worry about it.
The new voltage will be shown during all core load, not single core boosts which still hit higher.
The coil whine an that motherboard shows how much stress the CPU is doing to it, crazy... 13:45 to 13:48 just crazy man...
When enabling the AIOC on Asus boards you need to also adjust the thermal velocity boost to +1 and you'll get your same numbers rather than a little less. Only takes an extra second or two to change that in bios as well while you're there.
For maximum desktop gaming efficiency and performance, independent voltage would be really helpful. For most modern games, only one core needs to be blazing fast. Even a dual voltage system with all performance cores would be amazing. You could run half of the cores at maximum efficiency and half of the cores at maximum performance. The system could put every thread that is not bottlenecking on to the low voltage cores, so basically only the render thread would need to be running at high voltage. Maximum safe voltage easily uses 4x more energy than a sensible voltage with a reasonably close clock rate.
This is not how electricity works ;_;
@@TheFriendlyInvader Are you asserting that transistor efficiency is not dependent on voltage? If so, you should do some more research...
@@rickvelde7967 Yeah minor problem though, how exactly are you planning on modulating voltage on die without heavy use of VRMs? What about the miss frequencies or synchronization issues such a system would potentially cause? The incredible amount of die space wasted on basically giving every single core an independent voltage rail or regulation module?
I'm saying that's not how electricity works because you've effectively just tried to magic away the main problem of such a system without even acknowledging it.
Only to come back and try to assert that I said something I didn't as well, lmao
@@TheFriendlyInvader You don't need independent voltage for each core. Having just two voltage zones would go a long way. Independent voltage control for cache vs logic already exists, so this is a solved problem. It would probably be easiest to have half of the cores on one voltage zone and half on another, but since single threaded performance typically only matters for just one single thread, it would be best to have a small number of cores at high voltage with the rest at low voltage.
It's simple, use a sealed vapor pump style refrigeration system to cool all of the heat syncs of the components, with the components isolated in a sealed case with as much pressure removed as possible, using a vacuum pump. It would prevent condensation from forming and prevent the components from burning up.
This is really good info, thank you for the video. I am planning on updating my PC with the 12900k, just waiting on my Z690 Hero MB and some DDR5. Good to know that I won't have to mess around with manual overclocking, especially because I will mainly use the PC for gaming.
Even at stock clocks the 12900K is a beast. The gains he was getting (5-7%) might not be worth the extra heat for gaming. That's of course up to you, just my thoughts on it.
i have one now for 2 days and i can say WOW! in the bios you set 2 buttons to ai overclock and boom
cpu p cores to 5.3ghz and e cores to 4.0ghz, with not much heat.
it is so easy nowadays, and that thing is a beast with my 6900xt
@@Poppaai welke ram gebruik je
@@glitchinthematrix9306 ddr4 4000mhz cl16
@@Poppaai nice speel je toevallig warzone? Welke fps haal je daar
I love how a $600 motherboard is now considered "mid-range".. WTF! What the hell happened, my Asus Maximus X Hero board only ran me $230 and now MSRP is like double for the same series.
My maximus IX was $300. I just had to buy the Asus z690-E for $450. Not spending $600 on a mobo when I spent $650 on the 12900k
just get the master its amazing board
@@naor9792 does it have 18+2 power phases?
@@donnyboi1990 I decided to go with the maximus formula. Hasn't arrived yet, but looking forward to seeing what that thing can do.
@@donnyboi1990 remember the mobo is the foundation
Overclocking has long since died outside of extreme cooling. Back when there was little difference between server chips and consumer chips, you could get tons more out of a chip because they were basically targeting the same TDP. Companies are not stupid and saw all the extra free performance that they were giving users. So instead of releasing the CPU's at their intended TDP's and clock speeds, they added their own overclocking system to sell that performance to the end user instead of giving it away.
The base frequency of 3.2Ghz is the real frequency of the 12900K, if it was to be a server class chip or an older chip before all this stock overclocking nonsense. So, it's not a bad overclocker, intel is just doing all the overclocking possible for you in advance. Why do you think motherboards have gotten so expensive? They are basically A.I. overclocking platforms.
I refresh my PC when the arrival of new gen stock at the retailers, causes the superseded components to then bear much more reasonable prices. The Intel Core i9-12900KS CPU, the ASUS ROG STRIX Z690-G GAMING WIFI mobo, and the ZOTAC GAMING GeForce RTX 3070 Twin Edge OC graphics, in my mini tower, are all factory OC'd! I fully approve of Intel's and ZOTAC's overclocking. ASUS' AI Optimizing required failures to recognize its limits - unacceptable by me. The default settings, even some Auto Settings, as was reported by Gamers Nexus, were way too aggressive to be safe to use. A combo of Auto and Manual settings keeps the ASUS mobo under control.
living in australia, i can say with 100% certainty. NZXT IS NOT SELLING PCS DOWN UNDER. tried to get the BLD kit or a prebuilt, not available anywhere except good ol usa
My 5950X gets 29394 at 4425MHz all core in R23, so much the same as the 12900K. The CPU package power is also just over 200 Watts with 1.25V core voltage!
Great video! Ive seen several videos now of people running these chips stock and overclocked on AIO's with all core full load temps in the 60's and 70's. There are some idiots trying to claim that these 12900k's run at temps in the high 90's and low 100's completely stock on custom water loops!
even worse are the idiots who are falling for this misinformation. then proceed to try and talk shit its hilarious. you mean they dont need a custom loop with a chiller lmao
I'm getting 5.4GHz on the two favored cores on my 11900k, and I'm in heaven with gaming and Solidworks. No real reason for me to upgrade right now...
holy silicon lottery batman.... yes robin, it's like I always say when Alfred makes pancakes, newer is not always batter.
Gigabyte Gaming x
12600k
Set:
All core 5GHz p core
All core 4Ghz e core
XMP memory 3200 cas14 gear 1
Off set .05
Cache ratio 40
And not much else (could squeeze more out of it but really no reason to)
Runs very well with my old Noctua D14.
Basically a new motherboard and CPU and reused every thing else I had.
Nice upgrade to see me through a few years.
Thanks very much for this video! Exactly what I was curious about. I see this as a complete win. I'm building a custom loop with a few radiators and EK's new flagship 1700 block. Hopefully, that will be enough cooling power. I just want to get it to run 5.1 on P and 4 on e. That will be a nice improvement over my current 8086k at 5.1
I've got everything I need now. All I need is DDR5 RAM and hopefully luck of the draw with my chip. Fingers crossed. I work in 3D raytracing and modeling, so I want to squeeze what I can.
Good luck with your rig man! Sounds like you’ll be putting it to some good use 👍.
@Jorge Laughingman I see your point. And I agree with everything you've said. But Intel has done something new here for PCs, and they've paved the way for some new tech. This comes at a time where AMD has dominated the field for years. This is shaking the competition up again. We can already see that with all the price drops AMD has made since 12th gen's launch. Did it beat AMD flat out? No. But it does perform very close for cheaper.... Considerably cheaper. That is a win. No one ever said a flagship had to run cool on a crappy AIO or air cooler.
It uses more power... It runs hotter. It runs faster for cheaper... It encourages competition...
Do you not understand what a "win" is?
Tests I see run under load at 80-90c with an AIO. That is more than reasonable. And it's a flagship. It should be Watercooled.
As for power... If you can't afford the extra power bill, a flagship i9 likely isn't for you anyways.
How would you define the release? Seriously. I'm not trying to judge your opinion. Would you say it was a fail? A tie? Curious what your thoughts are. Because from my point of view, it is a win, if only because it encourages competition.
@@MasonMenzies I agree. If you are getting a flagship cpu that already costs around $900, you should probably be able to also afford cooling and power for it (which will be much cheaper than the chip itself). Plus, I am interested to see where the idea of the E and P cores will go with both AMD and Intel chips in the future.
@Jorge Laughingman I would certainly define gamers as an average user. Games just aren't that demanding compared to other applications. For me, working behind the scenes of games, the DDR5 and single thread performance is huge. But everyone is entitled to their thoughts.
@Jorge Laughingman and to add to that, although we're likely never to agree, I think that right now AMD is looking at their line up and thinking... Hey, we don't have a definitive performance win. Not taking into consideration the heat or power. And at the end of the day, flagships aren't for average users, and those who aren't average users won't care about the higher power draw or higher temps. I don't at least. And I've always chased performance. And in this case, like it or not, Intel wins in performance by matching AMD, and doing that for a lower price.
It'll be hot. But I don't care. I won't have to fire up a frying pan to cook my eggs. There ya go, now the power draw doesn't matter.
9:01 As soon as I saw the first videos drop on this I thought "I wonder if we could undervolt this to get some better temps" and here we are! This should be more common knowledge
5:02 Yo Jay, everything in digital logic design starts from 0, even at the smallest unit.
Like, in a byte there are 8 bits, numbered 0-7 (to be more precise, it's [7:0]). Same principle extends to the multiple instances of the core.
Cheerio.
Yup was gonna leave a comment saying this kind of numbering behavior is likely an extension of how things are in binary & every programming language (at least that I'm aware of), where numbering/indexing/etc. always starts at 0, not 1.
Omg is that right? WOW. No one else other than you two knew this. Yeah, I'm sure Jay doesn't know this either. /smh Guys, he was just asking why they don't display it starting with 1, he knows why it starts with 0. JUST LIKE WE ALL DO. ffs.
@@MooKyTig not everyone is going to know this. Most people would have absolutely no reason to know this, even computer enthusiasts and people in IT. But being as it's a question Jay brought up (even if he knows the answer, which yeah he probably does), it doesn't hurt to have someone answer in the comments for the people watching who have no reason to work with binary/hexadecimal/programming/etc. to understand this kind of numbering.
@@MooKyTig wow way to feel you have to defend Jay so strongly lol. I guess people are not allowed to give feedback or opinions.. not like he was rude or ill intent in his comments.
@@IanLantz 1) "people in IT" Bull. I've never met anyone in IT that isn't aware of this. 2) And as for most people, well fine, but that's hardly the people that watch channels like this.
They probably start at core #0 due to the way things are indexed in code (the first element in a collection is index 0 , the second is index 1 and so on)
To me, overclocking only ever makes since for the i5/r5 range since the high end stuff usually is boosted pretty high and the mid range usually has more leg room. Plus the higher end stuff pulls way more power and can generate way more heat for not enough gain to make up for it, as we all just saw in this video. I was running my 9600k at 5 ghz all day but as soon as I went to the 9900k I left it at the stock 4.7 and still manage to pull almost the same amount of power that my i5 did OC’d
My 9900kf goes to 5.2ghz all core on a 280mm radiator and it’s thermally limited at 1.34v. Pulls 130-150w during games.
@@brkbtjunkie I actually do have the kf version as well and at stock it will pull a little over 100w. To get it stable at 5.1ghz I have to jump it up to 1.39v and it’ll pull almost 160w of power
The OC framework has been screwed up to the point of humor. I remember when overclocking meant resoldering a faster crystal and adding a heatsink.
Can you please talk about UPS', AVRs, Surge Protectors, PDUs.
I don't see much content about these power-related peripherals.
Agreed I'd love to see a hybrid Xeon take on Threadripper.
And is going to use a hell more power, for trying to keep up. The 12900K here is using 255W so get that.
@@SamrasNeela While that 12900K has like 60-70% more IPC or singlethread speed than a 2990WX threadripper.
@@saricubra2867 Yes, but in a Xeon vs. Epyc use case, power draw and multi-core performance are much more important than for your average user. When you have tons of enterprise servers under constant use, a difference in power draw can mean thousands of dollars in electricity costs. And the types of usage these machines receive would scale significantly on many cores/threads, but may not require as much performance from each individual core. That's why server chips normally run at 2-3 Ghz, and never reach 4 Ghz.
I'd expect Intel's take on Xeon to be a CPU with all E-cores, rather than a hybrid design. It's been shown that those E-cores have insane IPC, and if Intel were to enable hyperthreading for E-cores on Xeon chips, that would make for a very strong contender against Epyc.
I doubt that Intel has any plans to design a special lineup of CPUs for HEDT use. With the core count of the mainstream rapidly increasing, the line between desktop and server has been blurred, and Intel's old X series no longer would have the special position it held 5 years ago. They may choose to not release an HEDT platform at all, or they could use binned Xeon chips with higher core counts, just like AMD's take on Threadripper using binned Epyc chips.
When I got into computers 8 years ago, I remember being amazed by Intel's 5960X with its 8 cores. But in the current market where even 16 cores is considered mainstream and also found in server chips, the niche that the X-series once filled has vanished.
Setup a new 12900K on an asus Rog formula. Pretty darn powerful chip. Ran some testing and scores nicely. It just has an auto asus overclock applied but has been stable. Definitely runs hot even watercooled. All core workloads still bring it up to 80 and it is water cooled. No doubt some liquid metal or improved thermal paste would help a little but I suspect it is just a case of when it is under full load it is going to be 80° within seconds of the test stopping it goes back to 30° to 35° fast.
Definitely a warm chip.
This honestly doesn't sound any hotter than my i7 8700k water cooled when running a benchmark with AVX instruction. I'm assuming you're getting those temps running Cinebench? AVX workloads causing the CPU to hit 80C is actually really reasonable IMO. If you're doing normal workloads and/or gaming, it's probably much less heat.
Tbh the 12900k already is overclocked out of the box.
;_; so ... A stock intel 12th is like a amd oc ?
That's the best way to describe it, reeks of desperation from Intel they gave it everything they have this time
Stock 12900k uses slightly less power than 5950x overclocked (according to gamersnexus video). This launch is Intel throwing everything into the CPU, only using peak silicon. The margins on this CPU will probably be reduced compared to the last 15 years.
@@annguyenlehoang7779 no my 5950x oc smashes the 12900k stock.
@@uniktbrukernavn Doubtful, the 5950x would have to use more than double it's stock voltage to be using more than 12900k.
I’m happy to see the E-cores can achieve beyond 4ghz! This new architecture feels like what Sandy Bridge felt like just around a decade ago.
There are even still decent budget builds I’ve seen here and there using i5-2500k’s & i7-2600k’s just because of how far ahead of their time those chips were (usually paired with an RX 570 or 1050Ti/1060 6gb nowadays)
Then again, AMD is gonna bring the heat with Zen3D(if that’s what they’re calling it?) as well as Zen4. Man what a time to be alive, if only the chip shortage wasn’t a constant issue.
And the gpu shortage, but hopefully that will improve with the 4000 series
Despite not being the traditional O/c of the Celeron 300 / Core 2 etc era 🥳🥰 …. The new Intel 12 series is bringing a modern take of those good old days with lots of tinkering with O/c’ing (shame we can’t get the 30-50% clock boost of the oooold cpu cores 👍😆)
This iPhone its a scam right?
@@candidosilva7755 Obviously.
@@candidosilva7755 yes
Intel is already pushing it to the max
@@candidosilva7755 most definitely !
You were asking in another video what we wanted to see on the channel... well, I quite enjoyed this, actually! You know what could be cool? Overclocking mid-range hardware, since that's what a lot of us are using. See how far you can push a 5600G or something, that could be cool!
Honestly, 12700k and 12600k seem way more interesting. I will never get why people are interested in top tier cpu like 5950x or this 12900k. Not if it's just for gaming.
@WhatsApp ➕1⑨⑤⑨⑨①⓪⓪⓪②② Really? Thanks I appreciate it but don't need, give it to someone that could need that.
@@andreabriganti8621 lmaooo. For a quick sec, I was like, wtf... Jay used so many weird emojies.
More to your point, i have also never been compelled to grab x900k + tiers. No judgment to others, but the gain above a x700k vs x900k were never worth for me since I mainly care for gaming.
@@nuruddinpeters9491 Yeah. I didn't mean to judge either, it's just that I don't see worth it to pay 200/250 bucks more to go from a 5900x to a 5950x or from a 12700k to a 12900k. So, since this video is about overclock, o.c. a beast like this cpu it's good, in order to test it and understand the efficiency of the cpu itself, however it's almost useless, in a gaming scenario. Expecially if the major boost is on higher temperatures and power consumption. Have a nice evening mate.
For gaming it's indeed pointless, but many people use computers for more than having
@@d.sherman8563 You're right. In fact my whole argument was about people that buy cpu like this, just for gaming and nothing else.
it has been like this since 5 or 6 years.
For example, back in the day if you wanted "playable" framerates at PUBG or DAYZ, you needed to overclock EVERY SINGLE ELEMENT of the pc.
I still remember going from my 1100t to my 2500k and jumping from 15-20 fps to 30 fps in dayz mod. Adding 2133 rams and overclocking the cpu to 4.8 ghz gave me around 40-50 FPS.
Basically every HZ gained was valuable.
Nowdays? Every game looks the same and can be ran at 150fps +
Maybe when they add complex AI overclocking CPUs will have meaning again. Until then, I rather run at 4 ghz low power than having a 500w toaster for web browsing.
Not surprised by the title, I got the feeling Intel are already screwing all they can and pushing this silicone to the limits just to get ahead of AMD. I am interested to see AMD's inevitable response...
AMD 4D lol
I use an I9-12900k at 5.3 all p-core and 4.2 all e-core, max temps with cinebench is 73c @ 1.3 volts, and the vrm frequency does matter for trying to reach lower volts at higher clocks
Forget all-core overclocking. Just use AI tweaker to assess your chip and it will determine which cores are good or not and give you a recommended voltage for each core. You're likely running into a couple bad cores that need more voltage than the others. Most P-cores should be able to get 5.5 ghz. If you set it up correctly you'll be able to break 30K on that 360mm rad.
That's exactly what he did in this video. He was selecting cores the entire time.
You mean overclocking the 12900k is futile as it is basically intel redoing the prescott debacle when AMD had good competitive parts and intel had nothing to counter except its existing architecture so they rebuilt it to yes be faster than the competition but at the vast expense of energy and heat. Sound familiar?
I look at this chip and while I do see it is the fastest chip, I also see intel flogging a dead horse in blind panic, and the naive lapping it up and applauding like they did 20 years ago. The general mantra of "well if it's just general use you should not see the heat nor power usage" as per your testing, is insufficient to cover for the fact that when it comes to energy the chips are not worth it for high end use. I mean say you have to render for your job, you save 3 minutes per 15 minute render. 9 hour day. That's 1 hour and 8 minutes saved in rendering time for sure, but at increased electrical cost negating much if not all of the saving, indeed the cost per render in terms of energy may be double that of the competitor who will take that extra hour but cost you far less for the work you do overall in energy costs.
We do not live in the era of cheap energy any more. That 250yr period ended circa 2008 for the vast majority especially across europe. Now we have depleted all the fuels that are easy to get ahold of or forced to use less damaging alternatives, generating electrical power has become vastly more expensive for consumers be they business or residential. In an age where we need to do the same or more, with less, it feels like a con.
The 5950x is just as hot and power hungry.
@@Safetytrousers No and no.
@@AliceC993 My 5950x is currently running at 220w at 85c at 1.24v (which is a limit I set) in my extensive custom loop. Yes and yes.
Which of us has a 5950x?
@@user-co6eo5pz7x If you were referring to me I've been just as critical of when AMD has done this, making reference on several occasions to the FX9590. I give not a toss which company I or anyone else uses, my recommendation over 27 years of building computers professionally and personally has always been to go for best bang for your buck. But that does not stop me from being duly critical no matter the company at the time.
@@Safetytrousers your "extensive" custom loop sounds like literal hot garbage if you're running at 85 C considering an NH-D15 can do better.
This is why I undervolted my 12700k and just used MSI Enhanced Turbo in bios. Keeps all P cores locked at 4.8GHz and keeps temp below 60c at full load. Took a bit of tweaking using voltage override + offset + LLC3, but it's 100% stable, way cooler, and better performance than bone stock settings.
If 4 E-cores fit in the silicon size of 1 P-core, I wonder what a 40 core E-core only chip would be like using a 12900 die size.
A xenon
From what I have been reading that sounds like exactly what AMD is working on when Zen 5 comes out. It sounds like Zen5 will be their P-core, and they will use smaller Zen4 cores for their E-cores. I think they are called 4D Cores or something. But then they are also making an Epyc server chip that is made up of 128 4D cores with less cache etc that should be coming out first. Very exciting time for compute enthusiasts.
Would be great. Windows has had trouble with lots of threads. E cores are pretty damn good for power and area used.
Arrow Lake (2024) has been leaked to have as many as 40 cores. So in 3 years you'll nearly double your cores.
@@DJaquithFL Yeah! Im not upgrading my 11900KF till Nova Lake. 17th Gen. I think Avery Arrow!
It occurs to me that you could maybe get a bit more headroom for the P-cores by intentionally underclocking the E-cores, but whether or not that would result in an actual score improvement (and how much) is not as obvious...
Jay: “Overclocking the 12900k is dangerous”
DerBauer: “I just got the 12900k, I’m gonna pour ln2 on it for 5 hours” and just broke 3 or 4 world records yesterday
He's a savage
@@t0aster_b4th based circle a
Jays right using LN2 with this crappy CPU is a nightmare, only Der Bau8er and expert types should be doing that, us lot will just wreck the place :D
He is focused more for normal consumer bro. . No one with normal mind will use every day cpu with ln2 just to clock it. Come on.
@@koky179 I don't think he's ever seen LN2 before this dude, does he think you can just fill up a milk carton and leave it on the table until you need it? Lol.
To be fair to him though Der Bau8er did this ridiculous overclock, that is definitely worthy of a mention. But like you said, no regular user has freezer stores full of liquid fucking nitrogen 🤣
I'm not huge in overclocking...but I play it safe and get it to a limit to
where I'm comfortable with that will underpower the wattage to boot... The AMD 65W chips clocked at 4GHz (including the slight bit of overclocking used) can run at 45W under the right settings...
Loving the videos Jay and Phil. An idea for a video could be the different fan curves which are needed for different types of coolers. Air cooling, aio, water coolers etc.. I have an AiO on my CPU but the fans always seem to ramp up even under the lightest of loads. Hope this is a good idea.
thats one thing I like about my corsair aio. It ramps the fans based on liquid temp not cpu temp. So it takes a lot to heat the liquid up so fans never ramp up unless I run a sustained load for like 15 minutes
I had that issue on my pc too but there is a setting in bios that would let me control the delay of the fan speed up and speed down up to 1 second max. There were steps like 0.5, 0,7 or sth. Cant remember the setting , and atm im too lazy to check it but if u can't find it let me know and ill check after work tomorrow. ( MSI x570 gaming and a ryzen 3600x build but i suppose all motherboards would have it).
Chris, this is the wrong guy to be following. lol. Trust me. This guy is for casuals and mainstream PC users and he even screws that up often. If you're a casual, great. He is perfect for you.
when you said "push it to the limit" first thing to pop into my head was the Scarface movie montage
Overclocking -the 12900k- any CPU, and even GPU, from the past like 5 years is pointless and dangerous!
Well, not quite pointless unless you're expecting anything significant out of it, like more than like 5 - 10% more performance.
Doing it for fun, because you want to squeeze out everything from your product, or to lower temps by setting negative offsets which conversely improves performance a bit because of the algos, is the reason to do it nowadays.
P.S. If you correct a spelling mistake, and the text is red, YT will remove that part from your comment. Didn't used to happen, but it did now.
the heat and voltage are great in the work itself. in vain the time spent that the currently strongest cpu would be fast for even more mhz. but I understand. it’s all in the job description.
I miss Phils laugh when Jay said Blenchmark sadge
same
Where is Phil?
Guys sorry about that, but I see a lot of people testing OC on 3D tasks like CB23 but, 90% of the users are gamers. From a 3D point of view a gain of 5% is ridiculous, not a worth at all. Sometimes I feel like this is an obsession. Am I missing some point on this thing? because it doesn’t make any sense for me spend a lot of time trying to learn the math to get 5% better performance.
You are 100% right, it doesn't make a lot of sense. It was a big thing in 2000s where you could buy cheapest CPU and get in to perform like a top one with a few BIOS tweaks. Saved you a lot of money and performance gain was substantial. These days you either get an unstable system or fry your cpu with excessive voltage for like 10% performance. Not worth at all!
Something I am super curious about is virtualization on this platform, how does a hypervisor make a distinction between the P cores and E cores. Also I looked and it does support VT-x and VT-d thinking about how I am running Unraid on my desktop to split up my Windows and Linux workstations and wondering how an upgrade might affect me.
Great question. I am also wondering the same re core distinction. Does anyone know where I can find more info ?
I'll be honest. I got a 3070 in February and used it for mining and now I have $1887.73 with it. All hail Bitcoin
Thanks for covering this jay!
Just got one of these. Got a Peerless Assassin as a placeholder $50 cooler and WOW- I think it might be permanent until I have money to throw around (maybe never!). I haven't done AVX tests but at stock all cores 100% it doesn't break 85c, no throttling. I was going to get an AIO but I'm going to hold off until I feel I am limited in some way. I can't believe an air cooler does so well. Been gaming all day as my first hard 'Daily' testing of a few games, many hours each (Cyber Punk/Elden Ring/Fortnite) and the hottest any of my cores got was 68c. I'm not going to bother overclocking until I feel I need it- this is GREAT coming from a 4790k! That a 50 Dollar Air cooler can handle this thing is almost as amazing as the performance jump from my old CPU! Things may be different once I get to testing AVX though- I use things that need it rare enough that if it throttles a bit it won't make a big difference.
i will be interested in seeing what kind of standard cooling will be used with these chips.
forgive me while I laugh maniacally at any notion of "standard" cooling for this chip. People are reporting 360 custom loops being unable to cool it enough to prevent thermal throttling at high loads so God only knows what extra expense is needed just to keep the temps down when at high load. Jay says we won't see these kinds of power draw for gaming and general use, what he forgets to say is you may not see that from games today, but in a year or two, with parallel rendering becoming the norm via vulkan and dx12 ultimate allowing for far greater CPU use under multicore systems? I think he is not factoring in the future.
A trained butterfly in a cage on top of the CPU; flap-or-burn little buddy.
It'll be called Atomic Blizzard for marketing reasons.
Intel does not ship any kind of "standard" coolers with K series CPUs.
@@Nine-Signs right 360 is what a lot of ppl are using. will a 240 work, will a 280?. will a nh-d 15 work, ect . i am curious as to what the minimum standard will be. i am well aware they do not ship with cooling.
@@666Necropsy I haven't a clue what the minimum will be as even those who don't have heavy use will still need a cooler capable of cooling the chip at its maximum for those occasional times they do a heavy workload. Either that or put up with thermal throttling.
Not sure if it was mentioned yet, but numbers start at zero because that’s the number the computer language starts at. When programming 0 starts as an iteration. So when you count to 5, it would be 01234.
I don't know....XTU has its merits, but really JAY should have stuck too BIOS overclock, regardless of masses. (Guess more peeps gonna pop their cherry's now pilgrim...)
As someone who hasnt opened my chipset box, I thank you Jay for this vid. Gonna keep it default, much love~
I need to check if i have AI optimization enabled i tried some overclocking on my 10700k but i opted to keep it at defaults for now it's more than powerful enough for gaming.
I would love to see a similar bench of the 5950X vs 12900K using DDR4 then compare that to the 5950X (being limited to DDR4 and all) vs 12900k's DDR5 scores. Would be interesting to see how much the DDR5 actually effects performance with the same processors when comparing.
It's amusing because simply enabling PBO and increasing PPT/TDC/EDC nets you 28k easy on a 5950x - and this is before any curve optimizer tuning while also using DDR4. I hit 30.6k w/ my PBO+CO tuned 5950x and it's not really super hardcore.
"Pointless & Dangerous" is my overclocking nickname.
I see you assume the buildzoid philosophy to OC (oh no it died to to resurrect it and kill it again)
Yeah like running an FX-6300 on 5.3 ghz since release just for the sake of it.😂 Took me a year to get a SMALL watercooling for it 😂 before that it was melting like Tschernobyl.but hey. The fun's in the risk right? ;D was even tempted to try 6ghz because it didnt break a sweat on 5.3. No instability and no raised voltages.
@@marvinlauerwald my FX-8350 is still rumbling along peacefully at 4.4ghz on a 280mm aio normally keep around 60c at stock it keeps around 40c (in winter I have no heater and I've seen 0c as temps and yes inside my house was around 26F) Maybe I should see what my 7 year old aio has left in it and try 5.0ghz?
@@RadarLeon do it. My guess is, it will also even reach 5.4 as most 8350's tend to clock high
I've copied every single one of your settings down to the decimal. I have a 360 AIO and I've installed the thermal grizzly contact frame, which gave me a boost. I was able to hit 27k that first day after putting the contact frame on but now I can't get anywhere near that number
Winter is coming, buy an Intel CPU for heating the whole house
3070 does a reasonable job for the living room.
6:06 "Blenchmark" ~Jayz2¢ Twonty-Twone
10:10 Phil, you killed me! :D
18:34 But can it run Flight Simulator?
12900K: How to turn a water cooler into a water boiler
Bro . just drop some eggs and potatoes and we got a meal right there . cant wait to cook my food and playing game at the same time . how convenient
The 5950x gets hotter than this CPU with the voltage offset.
@@annguyenlehoang7779 Gaming peaks at about 65c and stays almost always in the 50s.
@@Safetytrousers i know im just kidding 🤣
@@Safetytrousers Wrong the 5950X with it's two chiplets design, dissipates more heat than a 5800X , even so a 5800 is still cooler than a 12900K
source : Page 21 techpowerup 12900 review
Ian Cutress was saying that during his testing HWinfo was showing 100 degrees on the CPU, while he was using a cooler with a temp sensor in the block that was saying the CPU was upper sixties. He thinks that the issue is with HWinfo, not the chip.
Could have been a bad mount, or insufficient coverage.
How much power does the CPU pull on those OC settings?
At stock settings the i9 pulls over 300W (only during stress tests to be fair; Techspot recommended a 360mm water cooling loop).
This is pure guesswork but I'm guessing close to 550W. If you had an RTX 3090 you could have power spikes well in excess of 1000W.
Performance is great, but at what cost?
Same here, lost over 150 points in CB R20 and some points in TS with 10900K on W11
Upgrading from 10 to 11 shows reduced performance compared to a clean 11 install from my experience. Not much, but still a shame for most.
It is generally never advised to upgrade an os, clean install is by far the way to go
its due to VBS
Some ppl might complain but making over clocking CPU/GPU so easy for the customer over the years is a great thing for the average person
when are you doing a video about 12700k thats the most interesting CPU to be honest
I've just ordered a 12700k and be the first time not gonna bother to overclock.ill try under volting when I've done Tests and built my pc. Also watched this video when got released but watching again for the refresh 🙂
My friend ask me why I have a hot spring in the backyard and I showed him the water loop in my computer and he doesnt get it. Thank you intel for providing me a gaming Pc and a hotspring.
what cpu do you have and what are youre temps?
my personal thought would be to drop the clocks on the Ecores, and then use that available wattage on the Pcores. the CPU package as a whole seems to be very thermally limited when the ecores are getting hammers, so honestly, disabling them seems like it would be the way to go for maximum overclocks on the pcores.
Pointless and dangerous. My cup of tea
Haha same 🤣🤣
Sounds like someone's using their cooling loop drain port to make the tea with.
I appreciate the fact that Jay is showing us his Pcorez
Great video Jay still curious to see what amd is going to come out with before I upgrade
you've been looking at the wrong voltage stat on hardware monitor. Vid voltage is what the CPUs request in under normal conditions from its micro code. Proper voltage CPU voltage is on CPU vcore at the top of the page on hardware monitor
Great video as always Jay; I just don't see myself getting 12th gen. Come tax season I am getting either a i7 or i9 10th gen. Biggest expense will be the video card. over 1k for that. Unfortunately I have to go with ebay on that one since these days every retailer you could go to is in some way a scalper. But I love your tech stuff; a lot like me, just with better access to the tech.
Same here. Then again, I just got my 10900k a few months back.
If you live near microcenter in the US they still better than scalpers.
And if your fine with AMD they always seem to be in stock.
Went to micro center yesterday. They had 2 3080tis. Some 20 series cards and a few 10 series cards.
Then for AMD. They had like all the cards for the 6000 series in stock.
Just shows while AMD has pretty much caught up to intel in cpu and people are fine switching back and forth between AMD and Intel for cpu.
AMD is no where close to catching NVDIA market wise.
Microcenter was in Yonkers New York btw.
@@ltbeefy9054 I live in Oregon...we used to have Fry's, but when they went belly up we lost our one good brick and motor store for tech hardware. Best Buy's website is a no go; always out of stock. So I am stuck either trying to get lucky with Amazon or going on ebay for a scalper and trying to trust they are not going to screw me over and just send me a box.
@@fightingfalconfan yea bestbuy way of doing it sucks. As they stock online.
Microcenter does in store only pickup. Which is only reason I bet it ever has any cards in stock and why I was able to grab myself a gpu.
No need to compete with bots. Just need to get there early enough in person. And be lucky enough to live reasonably close to a store.
Why? Come tax season the lower-end 12th gen chipsets and chips will be out. Get yourself a 12400, you'll have a nice, fast, cheap 6-core CPU that'll still be faster than a 11600k. Of course there's reasons to get a 12700k or whatever now, but if cost and price/performance is your goal, you could do better than going back to older-gen Intel CPUs.
MELT IT DOWN! MELT IT DOWN! Everyone chant with me. MELT IT DOWN! MELT IT DOWN!
When Jay was showing the GUI it took me a second to realize they just recorded the screen with the camera and it made me question the quality of my screen 😂.
They start on core #0 because basically all computer programming uses 0 indexed arrays. For a 4 bit binary number, 0000 binary is 0 decimal, 1111 binary is 15 decimal.
12:57
The 3970X is only 32 cores and 64 threads. The 64 core SKU is the 3990X.
"only"
5:01 in IT you very often start countin from Zero ;)
Also depends on the work load, on DXO even with just post processing one image from a Nikon Z7. The load has been distributed too all cores, which is great. But it runs only at 4.4 ghz for P cores and I can't remeber thr E cores, way low with what it is capable of and there is tons of thermal headroom.
Even just doing a 5.2 all p core and 4.0 E core would be a DRAMATIC improvement over the stock logic and settings.
I read somewhere you shouldn't keep tires indoors under fluorescent lights as the rubber will release ozone.
What about LED lighting? I’d expect the studio lighting to be LED.
Jay just has a stack of tires chilling in the background
Well if I was to buy a 12900K, I don't actually think I'll be interested in overclocking it. So if I was going to do so, this video wouldn't change my mind. But regardless it's good to know what margin I do have for overclocking if I decide to overclock.
By the way absolutely love your videos Jay!!! ❤️
I got a 5950X stock running low 60C underload with a Liquid Freezer 280... I am not a fanboy but this feels even more like Intel has something good but like Ryzen had on first release.
I am going to wait, naturally, probably 2 cycles before I even consider committing to anything new.
Intel fanboys: 110 degrees is perfectly acceptable for 5.2ghz
If you can't cook on it, it's not hot enough
The same folks who hated the FX9590 for doing the same, the same folks that loved the prescott chip for doing the same.
When any CPU manufacturer releases their next design to answer the competition, and that design is just an absolute massive power hog, you know the company who built it merely shat the bed at the competition and said "fuck it, ramp the power draw to oblivion to get ahead while we work on something new".
And the seals do clap.
Someone should modify the PC cooler spot into a pot to heat up water and cook food with that amount of heat from intel 12th gen we should not waste them 😂 (just kidding)
@@Nine-Signs The 5950x is very similar in that respect.
Not a intel fanboy but did you even watch the whole video?
The reason the voltage goes above 1.4 is because the CPU Current Capability is set to Auto in the BIOS by default, which in turn sets it to 140%. Same goes if the LLC is set to Auto, it auto-sets it to level 2. At least it does on my Maximus.
to me this sounds like the K option is just not necessary anymore
Finally I've missed the Overclocking videos, YES!