Never had a issue with my set up because i am locked 5.8Ghz across the board but i do appreciate the fact you are willing to show people how to fix the issue my hats off to you Sir.......
And the issue he is describing could 100% be fixed in microcode if it were true, for "new" CPUs that weren't already degraded. Also, the idea that an i9 CPU shouldn't just work and be stable out of the box is silly.
@@libertyprime9307 99% of consumers don't know how to make adjustments in the Bios when overclocking. I'm having no problems with either my 13900KS or my 14900KS. Both are overclocked, but have to make adjustments in the Bios also to keep it stable and that's where people screw up and get frustrated.
Brother… you are a little brainwashed. Saying “no one should buy an i9 unless you are an enthusiast” is pure copium. They sell the product with stock settings so that ANYONE can use the product with no issues. Is that a hard concept to understand? You should not have to be an enthusiast level person in order to use an i9 at stock settings with no errors. Your outlook is ridiculous.
When he has invested money in Intel stocks and it Simply just doesn't go up. They go down actually. Protecting the investments. If he's not he's a total hypocrite and it shouldn't be watched just because he has knowledge . His knowledge îs not trustworthy when he's rallying for this a corporations. 17:50 yeah accidentale they invented The 3d chips. Wich turn out to be the best way a cpu can work in our gen in gaming
"Consumers should of done the research" Wtf. Intel said it works. Why tf would you expect it...not to? Calling consumers idiots for believe the product specs were legit is so misplaced.😊
You're so wrong this time! Its not only i9 problem, there alot of i7s also in servers. Check that chart that was published... And you cant tell ppl that i9 should be bought only by enthusiast. If someone has money with 0 knowledge he should be able to buy shit and run it at defaults. Intel screwed badly just to compete with amd. And you're right about one thing, they only should offered you a job to fix their cpus to work out of the box for gamers :)
@@GONTE_YT I believe if you just use in the process for gaming a R7 7800x3D will do but if you want to go for something that is good for gaming and streaming go with the R9 7900x3d.
Wendell specifically mentioned that one of the game server providers he talked to ran the same validation tests on CPU's when they were new and at some point later and they would fail the second time around. I've worked in datacenters and I've never seen that kind of behavior from a CPU running within spec (which supermicro W680 boards do) so how can that possibly be the customers fault? Even more problematic is there was no way to 'research' this immediately when the 13900k came out, yes now there is a wealth of information out there but most users are going to at most enable XMP and go with the defaults. Then you have a game developer being very specific with the issues they saw with 13th/14th gen, there isn't much more on the ground than that. You should not need to tweak out of the box to have a working CPU. For best/most optimal performance, sure, but not to have a CPU not potentially die in a few years with out of the box settings, that's ridiculous.
Blame the people that designed the problematic chip. Not the tech tuber, not the competitor, none of that. Sure, AMD may have started with the single core boost, but their chips didn't BREAK while doing so. They didn't put the consumer at risk. Intel did, as shown by the fact that ryzen 3/4 don't have the same issues OOB. It's YOUR JOB as the designer of the product to not push it too far, and to back your product completely. This sure doesn't sound like it. The reviewers can say whatever they want, but when your head hits the pillow, those voltage tables were designed by INTEL and nobody else. A lot of these stupid excuses for the company sound stupid af, and risk invalidating your opinions. Stop blaming people for testing their product and releasing the numbers. Stop blaming AMD for INTEL chip failures. Stop blaming users for buying the chips. Stop blaming motherboard manufacturers for INTEL problems. The mf chip should NEVER be able to suicide itself. PERIOD. When you as a consumer buy a chip, you don't expect to have to configure the thing. The fact that anyone defends intel in ANY WAY shows they are nothing more than a shill for a company that gives no shits about them whatsoever.
Dude is talking about broad telemetry data and you're saying that data.... Is ridiculous? Love your info and enthusiasm but your anti- generic people position is doing you a disservice. You can dismiss Gamer Nexus as a generic TH-cam but the idea they are idiots and not useful seems misguided
The reason the 14900k was put in servers is because intel dropped xeon and said this is your replacement. Then. - the server board doesn't overclock bro. It uses very mild performance pushes. The used it because intel told them.this is your chip for that need. Calling server guys idiots for using what they were told to use is misplaced.
Dude..what normie is gonna do all that, when they dont even know how to open the bios. All that shit is up to intel unless they became a hardware enthusiast only company..
@@notwhatitwasbefore what's the problem with someone who is not tech savvy to buy the top of the line chips? the one who has to make sure the cpu works is the company not the consumer, that's why you pay money to them. the vast majority of people who use pcs don't even know what a bios is, but some of them can afford top of the line and will pay to have the best experience.
@@faus2417 Those people are the biggest problems. It's the same thing when rich kids get fast cars and then they crash them because their driving skills are non-existent.
If you bought a 13900k when it was new, it is entirely likely you degraded it before any "research" existed for an enthusiast to find. If the thing can bork itself at stock settings, that is primarily Intel's fault and nobody else's. Trying to blame the consumer for not doing research is a poorly considered position. I appreciate the info you've released as a workaround. I've used it on systems my family has to prevent issues in the future. But while that's fine for the layman user, because they probably won't notice the difference much anyway, you've essentially enforced a lower than stock limit on low-core boosting which means in certain ways it performs less than was advertised. They bought a $800 cpu for best in class single core speed and many cores for multi-threading, only for someone (your channel) to tell them to slow it down to make it work. I can agree that spending money on an i9 is a waste if you're not an enthusiast, but I don't think we should endorse a position that the only way to reliably run a CPU at its reported stock values without killing it is to go all the way to direct die cooling. I think Intel should be called out for unrealistic stock settings in that case.
Bro, this is not a new stuff. Already with 10th gen and before, after putting the CPU in the mobo, firstt thing ppl were doing is turn off all the turbo/speedstep/wahtever shit, set fixed vcore to 1.30-1.35V with regular cooling or maybe 1.4V with custom loop and/or delid, put some nice LLC on it and FAFO for whatever core/cache ratios you can hit. It was that simple for i9's and i7's from the 1st gen Core-i processors, just different safe voltage for different gens.
“Stock” settings vary for different motherboards and even across the same board but different bios revisions! Intel needs to reign in the Wild West cloud/spectrum of ‘default’ settings and have default settings followed by all. Motherboard vendors for years have been pumping more voltage than necessary into the cpu.
I agree. regardless of knowledge level, a company should NEVER release a product assuming the average consumer will magically decide to tinker in the BIOS. First time seeing this channel and very disappointment by the bias of this dude..
7950X3D didn't ship with a scheduler and relies on Windows GameBar + Chipset drivers to schedule games to the X3D chiplet. Problem is that gamers uninstall GameBar because it's bloat / an extra overlay and many don't install Chipset drivers either.
there's two issues. the original "baseline default" shit was partly Intel for not being strict enough on bios defaults, and partly on the board partners for their ridiculous overclocks being made bios defaults. The new issue is probably a design fault and that's only intel's fault.
If a company releases a product that doesn't work, or dies over time they are at fault, nobody else, displacing issues to others because they had to degrade the part to remain relevant rather than take the L and develop something better later is a sign of management issues.
lol you love to make intel excuses. enthusiasts only. you know people who play games but are still hobby level still want the best they can reasonably get. plenty of people burning their chips out on custom loops. shouldn't need to delid a chip to make it last more than a couple years
You should probably stop assuming you already know the answer to this issue. It's not just simple silicon degradation. Wendel's data clearly shows that there is a design flaw with all 16 e-core raptor lake chips. Intel wont say what it is though which is concerning. It's like bumpgate with the the ps3 all over again.
@@JoeLegionTV It's not only the K sku's and while the numbers are a lot lower 'it' seems to affect some 13700's as well. There is enough data out there now that it seems pretty probable that there is some issue with the higher clocked chips in general, not just the K versions.
@@JoeLegionTV Most of these hysterics are coming from people that don't even own Intel hardware. It doesn't help that reckless game devs have decided to join in with the gaslighting.
Legend giving people the fix you better respect this man for what he gives you priceless knowledge literally this could save Intel millions of dollars and like he says he's going to lose a lot of money from consults but he's doing it to help the people with the truth.
100% on the manufacturer they determined the specs and performance of their product - Intel and AMD should limit and lock CPUs from 'dangerous' voltages and currents. You are completely off the track, safety and fair use is a legal requirement for all manufacturers irrespective of the industry. What would happen if a car manufacturer acted like Intel/AMD, your car can go 100 miles per hour but if you reach that speed more than 10 times than your motor is wrecked, it's your fault?
Ok, so Intel sells a product, doesn't provide ANY special guidance to the customer on installing and running said product (last I checked there's nothing on the box saying "for use with liquid cooling only" or "you must manually configure this chip in BIOS to not boost too high to kill itself"), and yet you think the customer is automatically to blame when their chip dies prematurely? Sorry dude, that's a shit take and you know it. CPU's have had built in safeguards to protect themselves from overheating (to the point of shutting down if they simply can't downclock low enough to keep it under control) since the AMD64 days! This protection is a feature that consumers have come to expect for over a decade now. On top of that, it's a pretty basic expectation that the chip maker (be it AMD or Intel) isn't going to default their product to running in a state (voltage, temp, or otherwise) that they know will degrade it quickly over time. That'd be the equivalent of an automaker selling 100,000 cars where the engine will run well past the redline for 30 second bursts when at 100% throttle, then blaming the customer for using it when the motor blows up in 10,000 miles. There would be mass outrage and congressional hearings if they tried to pull something like that, but somehow we should expect less when it comes to CPU's? Nah, that excuse doesn't fly here. Now obviously all of this comes with the giant caveat that all bets are off if the customer has overclocked or otherwise run the chip out of the bounds intel provides. If a customer disables CET, raises the single core boost multiplier past what Intel intended, and/or ups the voltage beyond safe limits then obviously intel isn't on the hook for that. And it seems the motherboard manufacturers have been DEFAULTING them to these out of bounds state for many of their boards, trying to gain an unfair advantage over each other in benchmarks, in which case they share at least some of the blame (*glares at ASUS). It also appears Intel has provided little or very loose guidance to the motherboard manufacturers, knowing they would do this and that this practice would benefit them in benchmarks against their competitor, so there's likely blame to be had both with Intel and the mobo manufacturers. But considering Wendell clearly showed in the data that these chips are failing even when they lived their entire life in a W680 board that doesn't run the chip out of bounds, we can't conclude it's only because of mobo makers running the chips incorrectly. As of now though, it appears that Intel is simply configuring them from the factory to run at boost clocks in single threaded workloads that require too much voltage to remain stable and not degrade prematurely, and if so then that's squarely on them. And the excuse that tech tubers are somehow at fault for all this because they run Cinebench single thread benchmarks? Seriously? That excuse reeks of Intel fanboyism. Single-threaded workloads are a very common and entirely reasonable workload that users will encounter everyday, so if the chipmakers are opportunistically boosting one or two cores when under low thread workloads to take advantage of the power and thermal headroom to extract more performance, then running Cinebench to test that scenario is entirely reasonable. And this isn't even a new feature like you implied that AMD started. This has been a thing on Intel chips since the x79 days! Old school Mac Pro's with the 1680 v2 chip were doing this in 2013, with staggered core multipliers for 1 core all the way to 8 core workloads (multis were 39/38/37/35/34/34/34/34). This has been a thing for a long time. If intel is defaulting those 1 or 2 thread multipliers to a level that causes their chips to request a VID that's beyond what the chip can handle without increased degradation, then that's on them. In short, unless the customer is going into the BIOS and willingly configuring the CPU to run in a dangerous state and removing the built in safeguards, then the blame lies with Intel and/or the manufacturer of the specfic motherboard.
Intel really shot themselves in the foot with the "14th gen" rebrand. If it were just "13th gen" having problems people might not be as alarmed, but with "13th and 14th gen", now it's a pattern. Nobody will be buying 15th gen. It's hard to feel bad for them. They've been getting slandered by the tech press for 5 years, but the best course of action would have been to ignore it and keep making good products. We could have got a 10 core Raptor Lake instead of an overvolted cinebench machine.
Wasn't Wendle saying these were data centre machines that were undervolted / downclocked? And that the problem was with the CPU itself, not power or voltage draw?
Intel needs to answer for its shenanigans. If they don’t make raptor lake customers whole, then Lawsuit. My 13900k has been fine since day 1. I set a single core boost to 6 GHz. Undervolted. The only thing I had to do for stability was step my ddr5 ram down from 7200 to 7000. Otherwise no issues. But for those facing issues, Intel needs to make them whole!
It's hilarious how similarly we think... All your tips are things I do by default every time I build a computer for myself or others over the last 20 years... Never use default settings in the MB... Ironically, I fixed this issue day 1 for myself when I noticed the high default v-core voltages on my own computer when I upgraded to a 14900k and have never had any issue's...
You havs to change factory settings to make it last. Unacceptable dude. It doesn't just work. Otherwise you could install and set to defaults and be done.
@@adamtajhassam9188 yea same, i loved tuning intel chips, been doing it since 9900k but it has gotten absolutely dogshit to do since 12th gen, absolute waste of my free time
This is a thing you buy to make your games go fast. Single core performance is a valid metric because it (mostly) tells you how fast your game is going to go. If you buy a sports car based on its 0-to-60 and its engine explodes when it goes 60 that is the manufacturer's fault, you can't expect every buyer to go find out exactly how the engine of every car they test drive works when they just want to drive fast.
13900k here, and I do believe I'm starting to run into stability issues. I am on air, and I never expected to get full boosting out of my chip. I tune for the best I can get for stability. For me, I set my peaks to 5.4Ghz, 1.25v(tested for this), and I've been running fairly good for awhile. I'm getting occasional, and more frequent reboots however. I also have a couple of 13700k's and they are clocked lower 5.2ghz. We'll see if they just take longer to be impacted. The part of this story that I find of interest is the CPU's they are using for servers, using server MB's that DON'T Boost too all hell and back. They are degrading as well, and with the volumes they use, they can collate the data much better than users that have 1, maybe 2 systems.
I usually take a few hours to tune my new intel and amd chips. Out of box settings either apply too much voltage or the PBO/Turbo settings are all over the place.
I have an i7 14700k.Zero troubles.Maybe being a simple janitor and not having money saved me from a i9 trouble.I also have to admit i undervolted my cpu just when I got it.I have to apologize because my rusty English.Super great channel.Congrats from Spain.Nice work Jules.
no shame in getting a 14700K! it's basically 90% of the i9 variants in terms of multicore without perhaps quite as much problems with stock behavior because of the lower clock demands. i have one myself. it's been an absolute beast
It depends what batch you got, the problem is that intel is doing really cheap and bad silicon for the newer batches of 700k, 900k on 13th and 14th gen and those can't handle proper stability, if you got one of the older batch you could be fine but who knows if you'd ever get a problem down the line and you get bricked on crashes and slow downs.
I have an i9 13900k and i had 90C in games at every time and i have an 360 corsair aio. I limited the watts but limiting the amps did the trick. Thx for ur videos!
Not everyone is Framechasers with all that knowledge, most just turn on and enable XMP and call it a day. And most don't have an extra $500 for consulting.
@@valentin3186or just buy amd i just bought a 7800x3d after having intel all my life its silky smooth there is less dips and driver installation was so much easier
@@TheAcadianGuy Well I don't buy framechasers kit. I use P-54 E-43 R-48 for my 13900KF. Higher frequencies is not better even for performance. I can feel it in my game.
Jumped of team blue i was dealing w a faulty 14900k that would crash every 5-12 hours, just waiting 4 the am5 mobo to come in Cant be bothered w a poor quality made chip. I was a loyal buyer too for 30 yrs. The gamers net base crashes were alot less when they switched to AMD so i dont agree its entirely the users fault.
Interesting...so many different theories...ring degradation, design flaw...heat dissipation issue, too much voltage to preferred cores...All I know is what I experienced. Bought a 14900K and Asus Z790 Maximus Hero board. Cooled via 360 AIO. Started having stability issues rather quickly...not 6 months down the road, more like 1 month. Didn't know any better, 1st Intel CPU I've had in a DIY PC in years. Was running the Asus AI OC with optimized BIOS defaults. Saw 6.1 and even 6.2 frequencies, was like hot damn this CPU is fast! Lol. But the temps were definitely too high for my liking. I finally stopped using the OC profile, started doing research. That's when all this shit came into focus. So, I set Intel limits on the long- and short-term power...seemed to get more stable at first, but then the crashes kept coming. Finally, it got so bad that the PC was almost unusable. I think it was too little too late and that my chip had already degraded. Of course, at the time, I didn't even think about increasing the voltages to compensate for the degradation. But then, I also have never done a CPU delid, either so even if I had upped the voltages, my AIO wouldn't have been able to keep up. Opened an RMA with Intel...took a little bit of back and forth and some delays, but finally got it approved. However, I went ahead and bought a new 14900k in the interim, because I wasn't sure how long it was going to take to get the replacement. This one I factory defaulted from the start, locked the cores to 5.7, and set the amp limit to 307 (IIRC). This new CPU has been trouble-free, and the AIO cools it just fine. Finally got my RMA replacement and sold it to recoup some of the cost for the 2nd CPU. FAFO, boys, FAFO...
Are you using APO on the 14900K? I was impressed by the fps and the power consumption numbers when i watched Hardware Unboxed's video. It turns the i9 into the most power efficient CPU ever made.
CPU shouldn't degrade that fast, we used to abuse on them for years and never have issues, now we are at points where even at stock we are degrading at points that will be measure even before i buy a new pc. As much as i like intel, the average user will be better off with a 5800X3D / 7800X3D / 98000X3D and just let it boost based on your cooling.
@@FrameChasersThe only reason that i own a 12700K and not anything higher or better is because i avoid +5GHz chips with the voltages that they reach and the worse perfomance per watt. I'm using a Deepcool Assassin 3 with my 12700K and is 67-71 degrees max and 20C iddle, you can't do that with a current i9/i7.
Intel can compete but by targeting perfomance per watt and software optimizations, they have the monolithic Big-Little. Instead of software optimizations for the 14900K with those impressive fps and watts results (like Hardware Unboxed showed with APO), intel doesn't care and only targets Cinebench and overclocks the heck out of those chips.
When the 14900K has APO, it can destroy the 7800X3D on games easily by 30% more fps. The 14900K also gets a 50% reduction in power draw. *Intel can't just take the clockspeed of AMD and adapt it to their chips* , *they are using monolithic Big-Little, their architecture is adapted for IPC and low latency not clocks* .
Its not just the high end skus either. Undervolted 14600k I purchased in Feb. for my htpc kicked the bucket. Frequent BSODs, and two ssd corruption errors.
I have a 13900ks, when i first started my pc the voltage went to 1.472v and I did not like the voltage so I locked my cores to 5.8ghz and set a max of 1.35v have not had a single issue with my cpu, now im kinda happy I did so instead of running it at stock.
The good thing about this debacle is that in a couple months many OEM prebuilts that are used by companies and such will be sold for their instability issue. You could probably get a second hand dell optiplex with a 13900K for cheap.
It is not just a i9 problem. I got a 13700kf at launch and immediately locked it to 5.2ghz all cores. Undervolted it and power limited. Still getting random sluggish behaviour and IO errors. Sometimes opening a folder on nvme drives takes a minute randomly.
Like he said unless you’re an Enthusiast and know how to manipulate the bios and frequency for stability then don’t buy the Chip. Intel should have been more clear on that point, because the out of the box spec for instance on the 14900KS is 6.2 gigahertz. I have a very good system made for overclocking but could not run this chip anywhere near 6.2. Crash, crash crash…. So I started to back off on the frequency to where it was stable at 5.8 letting the 2 cores boost to 5.9. 5.9 all core would eventually crash also. But 5.8 locked or letting it boost will work all day long. Also for Gaming I lower the E-Cores to 3.8.
@@papasmurf5598 Going back to 12900KS, those didn't fail after a year with out of the box settings. Yeah, 14900K/S and 13900K/S are enthusiast products. That doesn't mean it's OK for them to fail after being installed in a default configuration. In fact, the default configuration should maybe limit them a bit. That's how it was for 11900K, 10900K and previous generations.
@@eye776 Remember Intel specifically said the cooling solution had to be top rated water cooling to hit the target 6.2 gig out of the box spec. AIO will not do the job. Thats right on the information that comes with the chip. I have my 14900KS running at an all core 5.8 boosting to 5.9 on best two cores on a Kraken elite 360mm cooler and it never crashes. If I had water cooling with a large radiator I could run this chip at 6.2 but I dont have that set up. There are several guys online with big coolers and they love this chip. I like it too because 5.8 all core and boosting to 5.9 isnt bad. I can run the chip at 6gig but it will eventually heat up my cooler and crash so I lower the frequency back to 5.8.
@@papasmurf5598Intel told Xeon customers to buy this chip as it was their only option. They ran at extremely modest levels of power. Some people also literally use these for their job. Your take is extremely misguided.
The problem is motherboard makers are disabling the thermal velicity boost temperature limit of 70c and allowing all cores to boost to the top bin. This was the default on my msi board. Left at default and the default ac loadline, you see 1.45-1.5v rammed into the cpu.
the gold mine of information that i've learned from you from die setts in ram to the Core differences between intel generations and what ones to look for, what amd chips we should be looking at for gaming, i overclock all my stuff i got a 12700kf i7 with a 4070 ti from Colorful maxxed out 100+ fps in games when it doesn't crash (thanks intel) had a 5800x3d that i gave to my little brother with a 6750xt and hes right behind me in frames. wouldnt of been able to do any of it without your help its so hard to find people who aren't trying to sell something.
So what you are saying is intel engineers should have joined your discord before they created the product and this would have prevented intel spreading misinformation on how the product should behave. Intel demos these products before the tech tubers and gives them ndas on how they should "benchmark" it. You talk about a fix but what you have a preventative workaround. Now your "fix" is fantastic but if build 10 system all same components and run the same test with solid works and 3 of them fail within 30minutes of system being built. Then i apply your "Fix" those 3 outliers become at that point in time stable those chips just get advanced RMA because if they are becoming that problematic that quickly then eventually they will fail.
Actually, indium which is what Intel uses for its sTIM has better thermal conductivity than liquid metal which is an alloy of gallium and indium. The reason why it performs better when you use liquid metal over the stock sTIM is because the sTIM that Intel applies is so much thicker. STIM on Intel chips is usually between 0.3 mm and 0.4mm while a typical application of liquid metal is between 0.02 and 0.04mm thick so it creates much less resistance for the heat to travel through.
TLDR, If you want i9 to just work out of the box, make sure on day one you delid, use Liquid Metal, lap your IHS, lock frequency, lock voltage and disable boost and it will just work. Everybody that is enthusiast knows this is what you have to do if you don’t want your CPU to destroy itself. If you are not in the discord channel you’re not an enthusiast. The instructions above are also clearly posted prominently on Intel’s website and in the CPU manual. Again Intel just works
In terms of whose fault it is, I personally think it's very clear that the blame lies squarely with Intel (and AMD on their part, respectively) for playing along in this stupid benchmarking game after it got out of hand. They're responsible for shipping something that works to the consumer, and it appears that they now do not. And I think that is also clearly the stance that tech-tubers have adopted; I don't think they are necessarily lazy or ignorant, but they work under the assumption that Intel (and AMD) have shipped something that actually works (which *should* have been a valid assumption!), so they test the product it as-is, because that's what the typical buyer will do. The manufacturers should not be selling these products running in some self-destruction mode by default, period. But, for the sake of argument, if they desperately want to do so, they should at the very least provide clear documentation about what the buyer (and tech-tuber as well, by extension) is expected to do to have their product not self-destruct.
@@LordVader1887 Well, AMD clearly isn't responsible for Intel's mess. But the claim in the video is that AMD has/had the same kind of problem going on (possibly fixed later with an AGESA update?), their own problem is then AMD's fault.
@@LordVader1887 Fair enough. The AMD aspect as raised by me is pretty much a "and AMD as well, to whatever extent they are causing the same kind of problems as claimed in the video", the main point from my point of view is just that each manufacturer is responsible for their own products working right.
This B.S. is why I keep paying Xeon tax - both in cost and single threaded performance. Maximum turbos of around 3.7-4.3ghz never seem to have problems and 4-6 memory channels with ecc are reasonably fast and very reliable. Yes, I used to overclock and even ran CFD models on an overclocked system that I validated as stable. But, as you indicate, the modern boost nonsense has gotten out of hand and pushes silicon in ways that makes an old overclocker uncomfortable.
Man my 13900k degraded so badly on defaults, never let it get over 90c with tweaked voltages, bought it on launch, degraded January this year, I couldn't even install a GPU driver, it wouldn't see my 4090 when running the Nvidia driver install, but I could manually install using device manager to force the driver install. Luckily I got in before it hit the fan, because Intel RMA'd it send me a new one, tweaked this one too, but WAY more than the first CPU, we know a lot more now so my voltages/core etc tweaking is different.
I use my i9 for work, so instead of OCing it I undervolted it, limited the amperage to 400A and disabled some of the boost technology Also put it on default power limits as well.... It runs BF2042 with the same FPS at a lower clock and lower temps with an AIO.... go figure So far I am very happy with this CPU and I think it might be OK since it is taking the undervolt very well
Even on the server mobos the voltage was to high? I thought Wendell said they didn't clock over 5.2 or something like that are the using suicide voltages at 5.2
Running the two sacrificial cores at suicidal voltage and frequency is exactly how they are degrading. Those game servers Wendell analysed also run high clocks and voltage when CPU loads are light. Just limit preferred cores to 5.3x. Job done. FrameChasers deserve way more subscribers. Everyone else just too afraid to say what is now becoming obvious. But massive kudos for not beating around the bush trying to be PC
Asus IA OC hit My 13900ks at 6.9 all P core 5.9 all E core 8000mhz ddr4 on My 4133 ddr4... Took me a month with random 1 BSOD weekly till i Saw the almost 1.9v cpu... Build at april 2023 24th december cpu was a total brick barely booted to desktop to watch movies till january to Replacedl it... Constantly gave memory errors in browser discord self closing on start.. BSOD even trying disabling cores downclocking to 3ghz etc... Currently no issues besides having to enabled the new "intel default" for that stupid Gray Zone Warfare that auto crashes even if i Just change pcie gen on the gpu or nvme from auto to manual at what auto is at... That is some next level shit
The data Wendell analyzed was from 13th/14th gen *servers* running at *extremely conservative settings* where stability was probably their first, second, and third priority. They were operating at *very low temperatures* and a whole batch of them were even undervolted. Until Intel comes clean, we don't know for sure what the actual root cause is, but "boosting way too high" clearly isn't it, although it probably doesn't help either. The rapid silicon degradation actually looked *much worse* in the conservatively set servers than in the so-called "pushed beyond their limits" clients. Alderon Gaming CEO said that they were starting to experience "near 100% failure rate" on their 13th/14th gen Intel servers after three months. Things being much worse on conservatively set servers than "way too high out of the box" clients may be a key clue in eventually pinpointing the actual root cause. Whatever it may be, it's looking exaggerated on servers (which means "boosting too high" isn't it). It may take longer than it does on servers, but those 13th/14th gen client kits may catch up with you yet.
@@Jacob-dw4vd For Wendell's data, the exasperated server techs trying to get the 13th/14th gen Intel machines not to die at 450000+% the rate of AMD machines and 12th gen Intel machines would be the ones limiting the single core boost, not the game devs.
@@stevenliu1377 One source with improper server settings doesn't constitute the whole of raptor lake being faulty. Scientific method wouldn't even allow you to reach that conclusion with so few samples. The intel boost algorithm has been running unchecked because of improper motherboard settings. A core hitting 1.6v for a duration is going to degrade. This doesn't make the cpu architecture faulty, it just means people haven't been using it right.
@@Jacob-dw4vd There's been many sources. Wendell's data is just especially convincing and Alderon Gaming's case especially extreme. Limiting boost would be one of the first solutions server techs faced with CPU instability would try. Watch the video and look the recorded peak temperature. Pushing too much power to the CPU for too long is clearly not why this is happening. Scientific skepticism does allow for the possibility that the earth may still be flat and server techs exasperated by CPU instability doesn't limit the boost clock, but the burden of proof for such outlandish claims is certainly on you, not on me. Even if we were to overlook that and assume we lived on flat earth where server techs whose jobs are stability, stability, and stability were somehow worse at configuring rigs for stability after encountering constant crashes than my 9-year-old kid, you would still be faced with the problem that these same techs were having no such gaming-breaking issues with Intel 12th gen or AMD rigs in the same data sets. The evidence overwhelmingly points to there being something seriously wrong with Intel 13th/14th gen Intel CPUs. No amount of sophistry is going to change that.
I would highly recommend not getting at 13 or 14 gen i9 but get a 12th gen i9 instead so you don't have to deal with this problem by under volt or lock the cause.
You keep saying Intel and AMD have to stop single core boosts. AMD however does not have single core or dual core boost. All cores boost to the same frequency, and theres no raised limits for single core workloads.
The interesting question for me is why non-K SKUs are not listed as having crashes/failures. Is there something to K SKU CPUs that's causing the issue or are non-K SKUs just not bought/used for those workloads? Also 14700K SKUs seem to be affected as well, even though their stock max boost is up to 5.7GHz. What's the theory on those?
I did that to my 12900ks, 1.2 Vcore, all core lock and whatever I could get for that was good enough. Yes I left some performance on the table, so be it. I'll be doing the same with my 13900KS.
I've had a 13900K on an ASUS Z790 Hero board sunce launch, and it's fine. Shortly after I built the system, ASUS released a new BIOS that had a 90°C MCE and I have been using that one ever since instead of the 100°C MCE setting. I gave up 1K points in CBR23 multi core and wasn't cooking my chip.
I have seen errors like this for a lot of reasons, bad sub timings on ram can do this too! It will cause lag on some software and throw every error under the sun, but never throw a memory error!
👎 thumbs 👎 for the tiktok ad. Hard to believe degrading is an issue. I run mine at 1.71V but tvb 70C... Meaning the temp never exceed 70C but I can enjoy all core sync 5.6 ghz sync when not rendering monkey heads. It's been 4.5 years
Thanks for your take. My 12900KF & 12700KF are stable on Nobara. Haven't toyed with the 12400F. Not sure what to think but am gaming on them fine. Mostly seems like a 13th/14th issue, not really sure what to believe.
@@JoeLegionTV Depends on your overall bin, and in particular ring and MC quality of each particular chip. Delidding at least takes thermals out of the equation as much as possible (with a MoRa 420 or chiller). You must have a pretty good chip to run those speeds on a 360 AIO. My 13900K could run 5.8 but only 7200 RAM speed because I'm using a Z790 Hero as I wanted the iGPU to be usable, on what was then custom loop, but it had a lot of fan noise at full load with Noctua 'industrial' fans.
I have a 13900kf running all cores at 5.7 all ecores 4.5 ring 5.0 with 64gb ram oc to 6800 and never ever had any issues, i think this teck tubers are using 240 AIO on their i9 and are killing it
@@StormKhan-s7q I'm using a Iceman direct water block, alphacool 1260mm rad, my GPU is also with WB on same loop with 1 D5 pump and gskills 64gb 6400XMP ram
Ok, so you limit your cpu to 5.6Ghz or whatever and now it works. Great! But you know what is the problem? You paid 6.2 Ghz with a lot of money because this is advertised speed. Right? You see the problem here?
If this comment gets 1000 likes I will release an in depth step by step fix on the next video
bet
Never had a issue with my set up because i am locked 5.8Ghz across the board but i do appreciate the fact you are willing to show people how to fix the issue my hats off to you Sir.......
salam brotha.
Yeah mate, that would he pretty good.
Crossing my fingers my 13700K stays healthy, I have e-cores off and all cores boosting to 5.7ghz and it runs smooth as butter.
It's Intels fault. It should be stable out of the box.
And the issue he is describing could 100% be fixed in microcode if it were true, for "new" CPUs that weren't already degraded. Also, the idea that an i9 CPU shouldn't just work and be stable out of the box is silly.
um no not with anything high performance tuning equiptment always requires testing n tuning
It's funny cause for years you've said if you want "your shit to just work than buy Intel" how times have changed!
Amd timmy
Then not Than
But yes
That remains true, if you tune your chip Intel is better and more predictable. Only AMD rival is X3D but it's extremely situational.
No, it's 100% Intel's fault, 0% consumer fault.
pretty shure its intels ai´´s fault and the oversight by intel not realising how much bs ais are spewing out
@@libertyprime9307 99% of consumers don't know how to make adjustments in the Bios when overclocking. I'm having no problems with either my 13900KS or my 14900KS. Both are overclocked, but have to make adjustments in the Bios also to keep it stable and that's where people screw up and get frustrated.
Brother… you are a little brainwashed. Saying “no one should buy an i9 unless you are an enthusiast” is pure copium. They sell the product with stock settings so that ANYONE can use the product with no issues. Is that a hard concept to understand? You should not have to be an enthusiast level person in order to use an i9 at stock settings with no errors. Your outlook is ridiculous.
When he has invested money in Intel stocks and it Simply just doesn't go up. They go down actually. Protecting the investments. If he's not he's a total hypocrite and it shouldn't be watched just because he has knowledge . His knowledge îs not trustworthy when he's rallying for this a corporations.
17:50 yeah accidentale they invented The 3d chips. Wich turn out to be the best way a cpu can work in our gen in gaming
This is Intel's fault. Very odd stance to blame consumers.
Damn what brings you to my tiny channel
"Consumers should of done the research"
Wtf. Intel said it works.
Why tf would you expect it...not to?
Calling consumers idiots for believe the product specs were legit is so misplaced.😊
You're so wrong this time! Its not only i9 problem, there alot of i7s also in servers. Check that chart that was published... And you cant tell ppl that i9 should be bought only by enthusiast. If someone has money with 0 knowledge he should be able to buy shit and run it at defaults. Intel screwed badly just to compete with amd.
And you're right about one thing, they only should offered you a job to fix their cpus to work out of the box for gamers :)
Im just so glad I avoided all this by buying AMD X3D
3D V-cache is the GOAT
@@JaymondoGB which one? Which one is good gaming and streaming?
@@GONTE_YT I believe if you just use in the process for gaming a R7 7800x3D will do but if you want to go for something that is good for gaming and streaming go with the R9 7900x3d.
Wendell specifically mentioned that one of the game server providers he talked to ran the same validation tests on CPU's when they were new and at some point later and they would fail the second time around. I've worked in datacenters and I've never seen that kind of behavior from a CPU running within spec (which supermicro W680 boards do) so how can that possibly be the customers fault? Even more problematic is there was no way to 'research' this immediately when the 13900k came out, yes now there is a wealth of information out there but most users are going to at most enable XMP and go with the defaults.
Then you have a game developer being very specific with the issues they saw with 13th/14th gen, there isn't much more on the ground than that. You should not need to tweak out of the box to have a working CPU. For best/most optimal performance, sure, but not to have a CPU not potentially die in a few years with out of the box settings, that's ridiculous.
The buck stops with intel, end of.
I don't care why their defaults are the way they are, they set them that way and then sold the product.
Blame the people that designed the problematic chip. Not the tech tuber, not the competitor, none of that. Sure, AMD may have started with the single core boost, but their chips didn't BREAK while doing so. They didn't put the consumer at risk. Intel did, as shown by the fact that ryzen 3/4 don't have the same issues OOB. It's YOUR JOB as the designer of the product to not push it too far, and to back your product completely. This sure doesn't sound like it. The reviewers can say whatever they want, but when your head hits the pillow, those voltage tables were designed by INTEL and nobody else.
A lot of these stupid excuses for the company sound stupid af, and risk invalidating your opinions. Stop blaming people for testing their product and releasing the numbers. Stop blaming AMD for INTEL chip failures. Stop blaming users for buying the chips. Stop blaming motherboard manufacturers for INTEL problems. The mf chip should NEVER be able to suicide itself. PERIOD. When you as a consumer buy a chip, you don't expect to have to configure the thing. The fact that anyone defends intel in ANY WAY shows they are nothing more than a shill for a company that gives no shits about them whatsoever.
Dude is talking about broad telemetry data and you're saying that data....
Is ridiculous?
Love your info and enthusiasm but your anti- generic people position is doing you a disservice.
You can dismiss Gamer Nexus as a generic TH-cam but the idea they are idiots and not useful seems misguided
The reason the
14900k was put in servers is because intel dropped xeon and said this is your replacement.
Then. - the server board doesn't overclock bro. It uses very mild performance pushes.
The used it because intel told them.this is your chip for that need.
Calling server guys idiots for using what they were told to use is misplaced.
Intel dropped Xeon? When did that happen?
Dude..what normie is gonna do all that, when they dont even know how to open the bios. All that shit is up to intel unless they became a hardware enthusiast only company..
What normie should buy an i9 in the first place? just got the point where he says that in the video as I was typing lol
@@notwhatitwasbefore what's the problem with someone who is not tech savvy to buy the top of the line chips? the one who has to make sure the cpu works is the company not the consumer, that's why you pay money to them.
the vast majority of people who use pcs don't even know what a bios is, but some of them can afford top of the line and will pay to have the best experience.
@@notwhatitwasbefore they buy the prebuild gaming monster ultra pc for 5k and a 3060 in it paired with an i9
@@Greenalex89 I guess they buy prebuilt PC from people who don't know how to build a PC?
@@faus2417 Those people are the biggest problems. It's the same thing when rich kids get fast cars and then they crash them because their driving skills are non-existent.
This guy makes as much sense as joe biden when he speaks
So happy with my 7950X3D....
If you bought a 13900k when it was new, it is entirely likely you degraded it before any "research" existed for an enthusiast to find. If the thing can bork itself at stock settings, that is primarily Intel's fault and nobody else's. Trying to blame the consumer for not doing research is a poorly considered position.
I appreciate the info you've released as a workaround. I've used it on systems my family has to prevent issues in the future. But while that's fine for the layman user, because they probably won't notice the difference much anyway, you've essentially enforced a lower than stock limit on low-core boosting which means in certain ways it performs less than was advertised. They bought a $800 cpu for best in class single core speed and many cores for multi-threading, only for someone (your channel) to tell them to slow it down to make it work.
I can agree that spending money on an i9 is a waste if you're not an enthusiast, but I don't think we should endorse a position that the only way to reliably run a CPU at its reported stock values without killing it is to go all the way to direct die cooling. I think Intel should be called out for unrealistic stock settings in that case.
Great take, most logical so far.
Except in my discord. Each new gen of cpu launch is thoroughly vetted by me. Anything outside of the discord I can’t help
@@FrameChasers if ppl buy one of your i9 bundles, do they get access to the discord and vetting? Or is it possible they left it stock?
Bro, this is not a new stuff. Already with 10th gen and before, after putting the CPU in the mobo, firstt thing ppl were doing is turn off all the turbo/speedstep/wahtever shit, set fixed vcore to 1.30-1.35V with regular cooling or maybe 1.4V with custom loop and/or delid, put some nice LLC on it and FAFO for whatever core/cache ratios you can hit. It was that simple for i9's and i7's from the 1st gen Core-i processors, just different safe voltage for different gens.
“Stock” settings vary for different motherboards and even across the same board but different bios revisions! Intel needs to reign in the Wild West cloud/spectrum of ‘default’ settings and have default settings followed by all. Motherboard vendors for years have been pumping more voltage than necessary into the cpu.
i should've trusted hardware unboxed
No excuse for intel. Shouldn’t have to tweak anything to make it work out of the box. Wasn’t that the problem with the 7959x3d as well? Lol
In this case customers do need it buttt in general these things should not happen.
7950x recent honest vid was they just set case mode own and your fine.
I agree. regardless of knowledge level, a company should NEVER release a product assuming the average consumer will magically decide to tinker in the BIOS. First time seeing this channel and very disappointment by the bias of this dude..
7950X3D didn't ship with a scheduler and relies on Windows GameBar + Chipset drivers to schedule games to the X3D chiplet.
Problem is that gamers uninstall GameBar because it's bloat / an extra overlay and many don't install Chipset drivers either.
there's two issues. the original "baseline default" shit was partly Intel for not being strict enough on bios defaults, and partly on the board partners for their ridiculous overclocks being made bios defaults. The new issue is probably a design fault and that's only intel's fault.
It's a manufacturer issue. Stable, good product should always be priority over marketing bs.
If a company releases a product that doesn't work, or dies over time they are at fault, nobody else, displacing issues to others because they had to degrade the part to remain relevant rather than take the L and develop something better later is a sign of management issues.
lol you love to make intel excuses. enthusiasts only. you know people who play games but are still hobby level still want the best they can reasonably get. plenty of people burning their chips out on custom loops. shouldn't need to delid a chip to make it last more than a couple years
You should probably stop assuming you already know the answer to this issue. It's not just simple silicon degradation. Wendel's data clearly shows that there is a design flaw with all 16 e-core raptor lake chips. Intel wont say what it is though which is concerning. It's like bumpgate with the the ps3 all over again.
@@JoeLegionTV There's a stat shown by warframe developer that shows 13700k and 14700k are also failing. Although, it's quite low, below 5%.
He sells himself as an expert why would he admit he is wrong. Such a stinky person
@@JoeLegionTV correlation does not imply causation :(
@@JoeLegionTV It's not only the K sku's and while the numbers are a lot lower 'it' seems to affect some 13700's as well. There is enough data out there now that it seems pretty probable that there is some issue with the higher clocked chips in general, not just the K versions.
@@JoeLegionTV Most of these hysterics are coming from people that don't even own Intel hardware. It doesn't help that reckless game devs have decided to join in with the gaslighting.
I am at fault because Intel decided to overclocked the CPU to death.. How is that my fault? lol great take as always.
Legend giving people the fix you better respect this man for what he gives you priceless knowledge literally this could save Intel millions of dollars and like he says he's going to lose a lot of money from consults but he's doing it to help the people with the truth.
Thanks for the 5er man 😭😭. I really appreciate the kind words and support ❤️
@@FrameChasersJust seeing this now checking back on the video but no problem you're welcome! 💯🙏❤️
100% on the manufacturer they determined the specs and performance of their product - Intel and AMD should limit and lock CPUs from 'dangerous' voltages and currents.
You are completely off the track, safety and fair use is a legal requirement for all manufacturers irrespective of the industry.
What would happen if a car manufacturer acted like Intel/AMD, your car can go 100 miles per hour but if you reach that speed more than 10 times than your motor is wrecked, it's your fault?
What they have to do is boost IPC, not going the Piledriver route.
Zen 3 and Alder Lake were good, everything launched after both sucks.
EV manufacturers are already doing this? E.g. you can only run the stupid acceleration a max number of times due to battery degredation.
Ok, so Intel sells a product, doesn't provide ANY special guidance to the customer on installing and running said product (last I checked there's nothing on the box saying "for use with liquid cooling only" or "you must manually configure this chip in BIOS to not boost too high to kill itself"), and yet you think the customer is automatically to blame when their chip dies prematurely? Sorry dude, that's a shit take and you know it. CPU's have had built in safeguards to protect themselves from overheating (to the point of shutting down if they simply can't downclock low enough to keep it under control) since the AMD64 days! This protection is a feature that consumers have come to expect for over a decade now. On top of that, it's a pretty basic expectation that the chip maker (be it AMD or Intel) isn't going to default their product to running in a state (voltage, temp, or otherwise) that they know will degrade it quickly over time. That'd be the equivalent of an automaker selling 100,000 cars where the engine will run well past the redline for 30 second bursts when at 100% throttle, then blaming the customer for using it when the motor blows up in 10,000 miles. There would be mass outrage and congressional hearings if they tried to pull something like that, but somehow we should expect less when it comes to CPU's? Nah, that excuse doesn't fly here.
Now obviously all of this comes with the giant caveat that all bets are off if the customer has overclocked or otherwise run the chip out of the bounds intel provides. If a customer disables CET, raises the single core boost multiplier past what Intel intended, and/or ups the voltage beyond safe limits then obviously intel isn't on the hook for that. And it seems the motherboard manufacturers have been DEFAULTING them to these out of bounds state for many of their boards, trying to gain an unfair advantage over each other in benchmarks, in which case they share at least some of the blame (*glares at ASUS). It also appears Intel has provided little or very loose guidance to the motherboard manufacturers, knowing they would do this and that this practice would benefit them in benchmarks against their competitor, so there's likely blame to be had both with Intel and the mobo manufacturers. But considering Wendell clearly showed in the data that these chips are failing even when they lived their entire life in a W680 board that doesn't run the chip out of bounds, we can't conclude it's only because of mobo makers running the chips incorrectly. As of now though, it appears that Intel is simply configuring them from the factory to run at boost clocks in single threaded workloads that require too much voltage to remain stable and not degrade prematurely, and if so then that's squarely on them.
And the excuse that tech tubers are somehow at fault for all this because they run Cinebench single thread benchmarks? Seriously? That excuse reeks of Intel fanboyism. Single-threaded workloads are a very common and entirely reasonable workload that users will encounter everyday, so if the chipmakers are opportunistically boosting one or two cores when under low thread workloads to take advantage of the power and thermal headroom to extract more performance, then running Cinebench to test that scenario is entirely reasonable. And this isn't even a new feature like you implied that AMD started. This has been a thing on Intel chips since the x79 days! Old school Mac Pro's with the 1680 v2 chip were doing this in 2013, with staggered core multipliers for 1 core all the way to 8 core workloads (multis were 39/38/37/35/34/34/34/34). This has been a thing for a long time. If intel is defaulting those 1 or 2 thread multipliers to a level that causes their chips to request a VID that's beyond what the chip can handle without increased degradation, then that's on them.
In short, unless the customer is going into the BIOS and willingly configuring the CPU to run in a dangerous state and removing the built in safeguards, then the blame lies with Intel and/or the manufacturer of the specfic motherboard.
So if you set it wrong it degrades and dies, what a junk..
just like anything
Lock the cores and UP the voltage slightly for the lower all-core frequency ...and possibly slow down the frequency 100 mhz or so if needed.
Feel good not buying Intel, I jumped from i5 6600 to 5800x3d. I was digusted by intel when they introduced the cuckcores.
@@furudoerika6977 Are you the schizo that spams this on /g/ as well?
9900k running since 2019 at 1.33v.
5ghz.
no problems since.
Intel really shot themselves in the foot with the "14th gen" rebrand. If it were just "13th gen" having problems people might not be as alarmed, but with "13th and 14th gen", now it's a pattern. Nobody will be buying 15th gen.
It's hard to feel bad for them. They've been getting slandered by the tech press for 5 years, but the best course of action would have been to ignore it and keep making good products. We could have got a 10 core Raptor Lake instead of an overvolted cinebench machine.
But if Intel launched a 10 P-core only chip people would laugh at it because the productivity would be destroyed by the Ryzen 9s.
The explanation doesn’t matter … imagine having to know all of that so you don’t have crashes, your average person is not gonna have a clue about it…
Wasn't Wendle saying these were data centre machines that were undervolted / downclocked? And that the problem was with the CPU itself, not power or voltage draw?
the problem is fixed by overvolting not undervolting
that why it doesn't matter if it in data centers they still boost two cores
@@BSF-7772 Ah, I wasn't thinking. Makes sense.
Nice walktrough from where the issue started: the battle of single core CineSH1Tbench score.
Intel needs to answer for its shenanigans. If they don’t make raptor lake customers whole, then Lawsuit. My 13900k has been fine since day 1. I set a single core boost to 6 GHz. Undervolted. The only thing I had to do for stability was step my ddr5 ram down from 7200 to 7000. Otherwise no issues. But for those facing issues, Intel needs to make them whole!
It's hilarious how similarly we think... All your tips are things I do by default every time I build a computer for myself or others over the last 20 years... Never use default settings in the MB... Ironically, I fixed this issue day 1 for myself when I noticed the high default v-core voltages on my own computer when I upgraded to a 14900k and have never had any issue's...
You havs to change factory settings to make it last. Unacceptable dude. It doesn't just work. Otherwise you could install and set to defaults and be done.
Single core should always be fine. A CPU should NOT degrade from a few single core benchmarks. The copium is out of this world.
I pretty much stopped watching anyone that mentions intel at this point
i have a camera on intel now switching to AMD 4 the simple reason - stability 1st then extra speed when possible.
Same. Unsubscribed a lot of channels.
@@adamtajhassam9188 yea same, i loved tuning intel chips, been doing it since 9900k but it has gotten absolutely dogshit to do since 12th gen, absolute waste of my free time
This is a thing you buy to make your games go fast. Single core performance is a valid metric because it (mostly) tells you how fast your game is going to go. If you buy a sports car based on its 0-to-60 and its engine explodes when it goes 60 that is the manufacturer's fault, you can't expect every buyer to go find out exactly how the engine of every car they test drive works when they just want to drive fast.
13900k here, and I do believe I'm starting to run into stability issues. I am on air, and I never expected to get full boosting out of my chip. I tune for the best I can get for stability. For me, I set my peaks to 5.4Ghz, 1.25v(tested for this), and I've been running fairly good for awhile. I'm getting occasional, and more frequent reboots however. I also have a couple of 13700k's and they are clocked lower 5.2ghz. We'll see if they just take longer to be impacted. The part of this story that I find of interest is the CPU's they are using for servers, using server MB's that DON'T Boost too all hell and back. They are degrading as well, and with the volumes they use, they can collate the data much better than users that have 1, maybe 2 systems.
Intel just works... Dont mind what others say 😂
I usually take a few hours to tune my new intel and amd chips. Out of box settings either apply too much voltage or the PBO/Turbo settings are all over the place.
I have an i7 14700k.Zero troubles.Maybe being a simple janitor and not having money saved me from a i9 trouble.I also have to admit i undervolted my cpu just when I got it.I have to apologize because my rusty English.Super great channel.Congrats from Spain.Nice work Jules.
no shame in getting a 14700K!
it's basically 90% of the i9 variants in terms of multicore without perhaps quite as much problems with stock behavior because of the lower clock demands. i have one myself. it's been an absolute beast
It depends what batch you got, the problem is that intel is doing really cheap and bad silicon for the newer batches of 700k, 900k on 13th and 14th gen and those can't handle proper stability, if you got one of the older batch you could be fine but who knows if you'd ever get a problem down the line and you get bricked on crashes and slow downs.
@@siphi7583 This is your personal theory, with no basis in reality.
Your English is better than some Americans I know!
@@Jacob-dw4vd Nah it is the truth, if you don't want to believe it go ahead, you'll find in the next couple of months.
I have an i9 13900k and i had 90C in games at every time and i have an 360 corsair aio. I limited the watts but limiting the amps did the trick. Thx for ur videos!
you should undervolt it bro
@@JoeLegionTVMaybe he's running at 1.5 volts and that's why he reaches like 90C.
My i7-12700K is 1.17volts, doesn't reach 65 degrees while gaming
Not everyone is Framechasers with all that knowledge, most just turn on and enable XMP and call it a day. And most don't have an extra $500 for consulting.
12700k/12900k is pretty okay, prices are down, bundles available for a decent price.
13700k to 14900k are more affected from what I've seen
@@valentin3186or just buy amd i just bought a 7800x3d after having intel all my life its silky smooth there is less dips and driver installation was so much easier
They should get non-K CPUs. Or maybe framechasers kit.
@@iceboy1170I generally don't like non-k because of locked sa
@@TheAcadianGuy Well I don't buy framechasers kit. I use P-54 E-43 R-48 for my 13900KF. Higher frequencies is not better even for performance. I can feel it in my game.
Jumped of team blue i was dealing w a faulty 14900k that would crash every 5-12 hours, just waiting 4 the am5 mobo to come in Cant be bothered w a poor quality made chip. I was a loyal buyer too for 30 yrs. The gamers net base crashes were alot less when they switched to AMD so i dont agree its entirely the users fault.
Interesting...so many different theories...ring degradation, design flaw...heat dissipation issue, too much voltage to preferred cores...All I know is what I experienced. Bought a 14900K and Asus Z790 Maximus Hero board. Cooled via 360 AIO. Started having stability issues rather quickly...not 6 months down the road, more like 1 month. Didn't know any better, 1st Intel CPU I've had in a DIY PC in years. Was running the Asus AI OC with optimized BIOS defaults. Saw 6.1 and even 6.2 frequencies, was like hot damn this CPU is fast! Lol. But the temps were definitely too high for my liking. I finally stopped using the OC profile, started doing research. That's when all this shit came into focus. So, I set Intel limits on the long- and short-term power...seemed to get more stable at first, but then the crashes kept coming. Finally, it got so bad that the PC was almost unusable.
I think it was too little too late and that my chip had already degraded. Of course, at the time, I didn't even think about increasing the voltages to compensate for the degradation. But then, I also have never done a CPU delid, either so even if I had upped the voltages, my AIO wouldn't have been able to keep up.
Opened an RMA with Intel...took a little bit of back and forth and some delays, but finally got it approved. However, I went ahead and bought a new 14900k in the interim, because I wasn't sure how long it was going to take to get the replacement. This one I factory defaulted from the start, locked the cores to 5.7, and set the amp limit to 307 (IIRC). This new CPU has been trouble-free, and the AIO cools it just fine. Finally got my RMA replacement and sold it to recoup some of the cost for the 2nd CPU. FAFO, boys, FAFO...
Are you using APO on the 14900K?
I was impressed by the fps and the power consumption numbers when i watched Hardware Unboxed's video.
It turns the i9 into the most power efficient CPU ever made.
What were the temps you were uncomfortable with on the first chip?
CPU shouldn't degrade that fast, we used to abuse on them for years and never have issues, now we are at points where even at stock we are degrading at points that will be measure even before i buy a new pc. As much as i like intel, the average user will be better off with a 5800X3D / 7800X3D / 98000X3D and just let it boost based on your cooling.
That’s because the nodes are so small now and can’t handle the abuse like before
@@FrameChasersThe only reason that i own a 12700K and not anything higher or better is because i avoid +5GHz chips with the voltages that they reach and the worse perfomance per watt.
I'm using a Deepcool Assassin 3 with my 12700K and is 67-71 degrees max and 20C iddle, you can't do that with a current i9/i7.
The reason this happened is that they can't compete with AMD at the same wattages so they have to way overboost just to get competitive.
Intel can compete but by targeting perfomance per watt and software optimizations, they have the monolithic Big-Little.
Instead of software optimizations for the 14900K with those impressive fps and watts results (like Hardware Unboxed showed with APO), intel doesn't care and only targets Cinebench and overclocks the heck out of those chips.
When the 14900K has APO, it can destroy the 7800X3D on games easily by 30% more fps. The 14900K also gets a 50% reduction in power draw.
*Intel can't just take the clockspeed of AMD and adapt it to their chips* , *they are using monolithic Big-Little, their architecture is adapted for IPC and low latency not clocks* .
How the tables have turned for Inlet...oof.
Its not just the high end skus either. Undervolted 14600k I purchased in Feb. for my htpc kicked the bucket. Frequent BSODs, and two ssd corruption errors.
I have a 13900ks, when i first started my pc the voltage went to 1.472v and I did not like the voltage so I locked my cores to 5.8ghz and set a max of 1.35v have not had a single issue with my cpu, now im kinda happy I did so instead of running it at stock.
The good thing about this debacle is that in a couple months many OEM prebuilts that are used by companies and such will be sold for their instability issue.
You could probably get a second hand dell optiplex with a 13900K for cheap.
My 7800X3D just works🙂
your banned, get out of here
@@TRUMP_FOR_2024My 12700K just works as well, i'm banned?
Tell us about the AMD dip, grandpa.
I'm so glad I found this channel. Great info!
It is not just a i9 problem. I got a 13700kf at launch and immediately locked it to 5.2ghz all cores. Undervolted it and power limited. Still getting random sluggish behaviour and IO errors. Sometimes opening a folder on nvme drives takes a minute randomly.
You have bad memory settings. Start over with xmp.
Good advice from this channel.
But doesn’t really excuse intel.
These aren’t Celeron and i3 failures, they’re top dollar product.
Like he said unless you’re an Enthusiast and know how to manipulate the bios and frequency for stability then don’t buy the Chip. Intel should have been more clear on that point, because the out of the box spec for instance on the 14900KS is 6.2 gigahertz. I have a very good system made for overclocking but could not run this chip anywhere near 6.2. Crash, crash crash…. So I started to back off on the frequency to where it was stable at 5.8 letting the 2 cores boost to 5.9. 5.9 all core would eventually crash also. But 5.8 locked or letting it boost will work all day long. Also for Gaming I lower the E-Cores to 3.8.
@@papasmurf5598 ty 4 the insight , still feels intel lied in more ways then 1.
@@papasmurf5598 Going back to 12900KS, those didn't fail after a year with out of the box settings.
Yeah, 14900K/S and 13900K/S are enthusiast products.
That doesn't mean it's OK for them to fail after being installed in a default configuration.
In fact, the default configuration should maybe limit them a bit.
That's how it was for 11900K, 10900K and previous generations.
@@eye776 Remember Intel specifically said the cooling solution had to be top rated water cooling to hit the target 6.2 gig out of the box spec. AIO will not do the job. Thats right on the information that comes with the chip. I have my 14900KS running at an all core 5.8 boosting to 5.9 on best two cores on a Kraken elite 360mm cooler and it never crashes. If I had water cooling with a large radiator I could run this chip at 6.2 but I dont have that set up. There are several guys online with big coolers and they love this chip. I like it too because 5.8 all core and boosting to 5.9 isnt bad. I can run the chip at 6gig but it will eventually heat up my cooler and crash so I lower the frequency back to 5.8.
@@papasmurf5598Intel told Xeon customers to buy this chip as it was their only option. They ran at extremely modest levels of power. Some people also literally use these for their job. Your take is extremely misguided.
How’s that AMDip now?
still unfixable
@@antraxbeta23 He's proven it multiple times.
@@antraxbeta23 I have 5800x3d so I can experience the dip by the worst way possible :(
@@antraxbeta23 no need, just fire up any cpu heavy title and start moving fast, lows are bad
@@antraxbeta23 it kinda does, go to heavy cpu in warzone and your lows will be worse than 13900k´s, its way more expensive tho.
The problem is motherboard makers are disabling the thermal velicity boost temperature limit of 70c and allowing all cores to boost to the top bin. This was the default on my msi board. Left at default and the default ac loadline, you see 1.45-1.5v rammed into the cpu.
the gold mine of information that i've learned from you from die setts in ram to the Core differences between intel generations and what ones to look for, what amd chips we should be looking at for gaming, i overclock all my stuff i got a 12700kf i7 with a 4070 ti from Colorful maxxed out 100+ fps in games when it doesn't crash (thanks intel) had a 5800x3d that i gave to my little brother with a 6750xt and hes right behind me in frames. wouldnt of been able to do any of it without your help its so hard to find people who aren't trying to sell something.
So what you are saying is intel engineers should have joined your discord before they created the product and this would have prevented intel spreading misinformation on how the product should behave. Intel demos these products before the tech tubers and gives them ndas on how they should "benchmark" it. You talk about a fix but what you have a preventative workaround. Now your "fix" is fantastic but if build 10 system all same components and run the same test with solid works and 3 of them fail within 30minutes of system being built. Then i apply your "Fix" those 3 outliers become at that point in time stable those chips just get advanced RMA because if they are becoming that problematic that quickly then eventually they will fail.
Actually, indium which is what Intel uses for its sTIM has better thermal conductivity than liquid metal which is an alloy of gallium and indium.
The reason why it performs better when you use liquid metal over the stock sTIM is because the sTIM that Intel applies is so much thicker. STIM on Intel chips is usually between 0.3 mm and 0.4mm while a typical application of liquid metal is between 0.02 and 0.04mm thick so it creates much less resistance for the heat to travel through.
TLDR, If you want i9 to just work out of the box, make sure on day one you delid, use Liquid Metal, lap your IHS, lock frequency, lock voltage and disable boost and it will just work. Everybody that is enthusiast knows this is what you have to do if you don’t want your CPU to destroy itself. If you are not in the discord channel you’re not an enthusiast. The instructions above are also clearly posted prominently on Intel’s website and in the CPU manual. Again Intel just works
What a great time to be a frame chaser
In terms of whose fault it is, I personally think it's very clear that the blame lies squarely with Intel (and AMD on their part, respectively) for playing along in this stupid benchmarking game after it got out of hand. They're responsible for shipping something that works to the consumer, and it appears that they now do not.
And I think that is also clearly the stance that tech-tubers have adopted; I don't think they are necessarily lazy or ignorant, but they work under the assumption that Intel (and AMD) have shipped something that actually works (which *should* have been a valid assumption!), so they test the product it as-is, because that's what the typical buyer will do.
The manufacturers should not be selling these products running in some self-destruction mode by default, period. But, for the sake of argument, if they desperately want to do so, they should at the very least provide clear documentation about what the buyer (and tech-tuber as well, by extension) is expected to do to have their product not self-destruct.
@@LordVader1887 Well, AMD clearly isn't responsible for Intel's mess.
But the claim in the video is that AMD has/had the same kind of problem going on (possibly fixed later with an AGESA update?), their own problem is then AMD's fault.
@@LordVader1887 Fair enough. The AMD aspect as raised by me is pretty much a "and AMD as well, to whatever extent they are causing the same kind of problems as claimed in the video", the main point from my point of view is just that each manufacturer is responsible for their own products working right.
Fuck! That comparison to Ozempic is spot on!!! Perfect!
This B.S. is why I keep paying Xeon tax - both in cost and single threaded performance.
Maximum turbos of around 3.7-4.3ghz never seem to have problems and 4-6 memory channels with ecc are reasonably fast and very reliable.
Yes, I used to overclock and even ran CFD models on an overclocked system that I validated as stable. But, as you indicate, the modern boost nonsense has gotten out of hand and pushes silicon in ways that makes an old overclocker uncomfortable.
Man my 13900k degraded so badly on defaults, never let it get over 90c with tweaked voltages, bought it on launch, degraded January this year,
I couldn't even install a GPU driver, it wouldn't see my 4090 when running the Nvidia driver install, but I could manually install using device manager to force the driver install.
Luckily I got in before it hit the fan, because Intel RMA'd it send me a new one, tweaked this one too, but WAY more than the first CPU, we know a lot more now so my voltages/core etc tweaking is different.
I use my i9 for work, so instead of OCing it I undervolted it, limited the amperage to 400A and disabled some of the boost technology
Also put it on default power limits as well.... It runs BF2042 with the same FPS at a lower clock and lower temps with an AIO.... go figure
So far I am very happy with this CPU and I think it might be OK since it is taking the undervolt very well
Even on the server mobos the voltage was to high? I thought Wendell said they didn't clock over 5.2 or something like that are the using suicide voltages at 5.2
13700k here...going strong for the last 2 years...no issue.
I'm wondering about the new X3D of the AMD 9000 series, as I believe they said that they will be unlocked for overclocking.
I have a 13900k with DH15 since 27/12/2022 and i have no stability issue yet.. But i have put very early pl1= 125w and pl2=253w .
Running the two sacrificial cores at suicidal voltage and frequency is exactly how they are degrading. Those game servers Wendell analysed also run high clocks and voltage when CPU loads are light. Just limit preferred cores to 5.3x. Job done. FrameChasers deserve way more subscribers. Everyone else just too afraid to say what is now becoming obvious. But massive kudos for not beating around the bush trying to be PC
Asus IA OC hit My 13900ks at 6.9 all P core 5.9 all E core 8000mhz ddr4 on My 4133 ddr4...
Took me a month with random 1 BSOD weekly till i Saw the almost 1.9v cpu...
Build at april 2023 24th december cpu was a total brick barely booted to desktop to watch movies till january to Replacedl it...
Constantly gave memory errors in browser discord self closing on start..
BSOD even trying disabling cores downclocking to 3ghz etc...
Currently no issues besides having to enabled the new "intel default" for that stupid Gray Zone Warfare that auto crashes even if i Just change pcie gen on the gpu or nvme from auto to manual at what auto is at...
That is some next level shit
The data Wendell analyzed was from 13th/14th gen *servers* running at *extremely conservative settings* where stability was probably their first, second, and third priority. They were operating at *very low temperatures* and a whole batch of them were even undervolted. Until Intel comes clean, we don't know for sure what the actual root cause is, but "boosting way too high" clearly isn't it, although it probably doesn't help either.
The rapid silicon degradation actually looked *much worse* in the conservatively set servers than in the so-called "pushed beyond their limits" clients. Alderon Gaming CEO said that they were starting to experience "near 100% failure rate" on their 13th/14th gen Intel servers after three months. Things being much worse on conservatively set servers than "way too high out of the box" clients may be a key clue in eventually pinpointing the actual root cause. Whatever it may be, it's looking exaggerated on servers (which means "boosting too high" isn't it).
It may take longer than it does on servers, but those 13th/14th gen client kits may catch up with you yet.
Server platforms are not limiting single core boost. At least not the game devs complaining about this.
@@Jacob-dw4vd For Wendell's data, the exasperated server techs trying to get the 13th/14th gen Intel machines not to die at 450000+% the rate of AMD machines and 12th gen Intel machines would be the ones limiting the single core boost, not the game devs.
@@stevenliu1377 One source with improper server settings doesn't constitute the whole of raptor lake being faulty. Scientific method wouldn't even allow you to reach that conclusion with so few samples.
The intel boost algorithm has been running unchecked because of improper motherboard settings. A core hitting 1.6v for a duration is going to degrade. This doesn't make the cpu architecture faulty, it just means people haven't been using it right.
@@Jacob-dw4vd There's been many sources. Wendell's data is just especially convincing and Alderon Gaming's case especially extreme.
Limiting boost would be one of the first solutions server techs faced with CPU instability would try. Watch the video and look the recorded peak temperature. Pushing too much power to the CPU for too long is clearly not why this is happening.
Scientific skepticism does allow for the possibility that the earth may still be flat and server techs exasperated by CPU instability doesn't limit the boost clock, but the burden of proof for such outlandish claims is certainly on you, not on me.
Even if we were to overlook that and assume we lived on flat earth where server techs whose jobs are stability, stability, and stability were somehow worse at configuring rigs for stability after encountering constant crashes than my 9-year-old kid, you would still be faced with the problem that these same techs were having no such gaming-breaking issues with Intel 12th gen or AMD rigs in the same data sets.
The evidence overwhelmingly points to there being something seriously wrong with Intel 13th/14th gen Intel CPUs. No amount of sophistry is going to change that.
I would highly recommend not getting at 13 or 14 gen i9 but get a 12th gen i9 instead so you don't have to deal with this problem by under volt or lock the cause.
Eww imagine buying Intel
You keep saying Intel and AMD have to stop single core boosts. AMD however does not have single core or dual core boost. All cores boost to the same frequency, and theres no raised limits for single core workloads.
The interesting question for me is why non-K SKUs are not listed as having crashes/failures. Is there something to K SKU CPUs that's causing the issue or are non-K SKUs just not bought/used for those workloads?
Also 14700K SKUs seem to be affected as well, even though their stock max boost is up to 5.7GHz. What's the theory on those?
cus before the bios was updated. many vendors was still stuffing 1.47volts into stock chips for that said boost.. So the damage has been done there..
You should be hired by Intel
i hope amd's consumer and server side shares increase alot and intel learns a lesson
i think the more common sense is not having to disable half of a CPU you just purchased so it doesn't self destruct.
MY 13900k has been running at 5.8 all core and 6ghz 4 core with a MAX of 1.37v limited and its still as solid as ever..
I did that to my 12900ks, 1.2 Vcore, all core lock and whatever I could get for that was good enough. Yes I left some performance on the table, so be it. I'll be doing the same with my 13900KS.
I think it’s a cache problem. It’s the only thing really different between 12th and 13/14th gen. Only time will tell.
No, domino effect from the high voltages from those absurd clocks, damage the ring bus therefore the rest of the CPU falls apart.
i was watching their vids and kept asking myself why this wasnt an issue with the OC guys, theyre "failure rate" is also ridiculous
I've had a 13900K on an ASUS Z790 Hero board sunce launch, and it's fine. Shortly after I built the system, ASUS released a new BIOS that had a 90°C MCE and I have been using that one ever since instead of the 100°C MCE setting. I gave up 1K points in CBR23 multi core and wasn't cooking my chip.
I just bought a 12900k from Amazon's fire sales. It will go on the shelf in case there is a problem with the 13700k. Love it.
I have seen errors like this for a lot of reasons, bad sub timings on ram can do this too! It will cause lag on some software and throw every error under the sun, but never throw a memory error!
👎 thumbs 👎 for the tiktok ad. Hard to believe degrading is an issue. I run mine at 1.71V but tvb 70C... Meaning the temp never exceed 70C but I can enjoy all core sync 5.6 ghz sync when not rendering monkey heads. It's been 4.5 years
Thanks for your take. My 12900KF & 12700KF are stable on Nobara. Haven't toyed with the 12400F. Not sure what to think but am gaming on them fine. Mostly seems like a 13th/14th issue, not really sure what to believe.
Steve & Wendell should request to do a TH-cam session with you. It will help others in general and credit your efforts.
Ya all this infighting sucks. Helps no one.
Agree on the enthusiast part. De-lid, direct die and 320W limit. No problem for 14900KS on Asus Z790 Hero.
no many delid though most youtubers do that the average person does not. that is a u tuber 4 not really helpping.
@@JoeLegionTV Depends on your overall bin, and in particular ring and MC quality of each particular chip. Delidding at least takes thermals out of the equation as much as possible (with a MoRa 420 or chiller). You must have a pretty good chip to run those speeds on a 360 AIO. My 13900K could run 5.8 but only 7200 RAM speed because I'm using a Z790 Hero as I wanted the iGPU to be usable, on what was then custom loop, but it had a lot of fan noise at full load with Noctua 'industrial' fans.
@@JoeLegionTV I would guess that they do have good V/F curves.
I have a 13900kf running all cores at 5.7 all ecores 4.5 ring 5.0 with 64gb ram oc to 6800 and never ever had any issues, i think this teck tubers are using 240 AIO on their i9 and are killing it
what aio and ram are you using?
and i think he is just scaring people to buy it from him (his pc kits)
@@StormKhan-s7q I'm using a Iceman direct water block, alphacool 1260mm rad, my GPU is also with WB on same loop with 1 D5 pump and gskills 64gb 6400XMP ram
My 13700k is @5700Mhz ingame since almost 1 1/2 year. No crash, it just works.
If i9s and ryzen 9s weren't meant to go in servers then amd/intel need to stop making server Chipsets for them
Ok, so you limit your cpu to 5.6Ghz or whatever and now it works. Great! But you know what is the problem? You paid 6.2 Ghz with a lot of money because this is advertised speed. Right? You see the problem here?
"Intel should be selling these CPUs pre-delidded --with no lid --with no warranty." LOL.
Ok if u dont buy a i9 then what do u recommend? For gaming and for overall cpu ?