I would recommend most people go with the 265K. It is a better chip for most uses and it is much more power efficient. I built a 9900X system and I'm using it right now. I also build systems, and this week I did a very expensive 265K system for a customer. I was very impressed with the power efficiency, especially in normal use, and it would idle under 10W. The Intel will also fully utilise CUDIMMs, and Intel fully implements Thunderbolt on its motherboards. Standards are good, and both Apple and Intel support Thunderbolt. I think this was a good review as far as it went, but there is a lot more to a computer than gaming ... unless you have that 5lb card and demand the ultimate framerates, then go AMD.
For those who don't know, Intel released a statement that there are problems at both the BIOS level and the software level, and that updates are coming. Who knows how it will really turn out in the future.
I saw that too, so I decided to wait to test the 285K until after they release the new firmware ... hopefully that will be soon and hopefully it will provide a decent boost in performance.
I'm in the market for a new PC. I'm running a 4th gen i7, and recently upgraded my GTX 1060 to an RTX 4070 Ti Super and a new PSU. Now I need a new motherboard, CPU, RAM and a modern case. I don't mind purchasing the Core Ultra 7 today, since you can see the last time I upgraded my system (10 years ago!), so getting stuck with 12th-14th gen hardware would suck for the next 10 years. Well, that's my logic anyway.
@@blackbirdpctech I can't wait! I am planning on buying ASUS Rog Strix Z890-E Gaming Wifi with Intel Core Ultra 7 265K, so I would like your opinion about this. You have my subscription, mate. Kudos!
No, for this analysis I just included the cost of the CPU ... you are indeed correct that the price of the RAM is different and will favor the 9900X. I should have made an additional chart with at least the RAM prices included but that said, it simply would have reinforced my recommendation. The overall upgrade cost of the new platform, RAM and CPU makes the Core Ultra CPUs a really tough sell.
Thanks for showing how awesome the 265K is for professional applications. As a "former rocket scientist", can you speak to the NPUs in the future and how professionals can make use of them? I'm seriously thinking of waiting for the 285K to become available at MSRP before building my next professional machine for logistics management applications on an industrial scale.
I don't think I showed "awesome" performance for the 265K in professional apps, but it almost certainly would be great in apps that are tailored for Intel processors, like Adobe products. NPUs, or Neural Processing Units, are supposed to be optimized for neural networks. I use the words "supposed to" because I have not made use of them yet in that capacity, so I can't speak to how much more efficient they are than GPUs. The largest compute cost in AI development is the training of large neural networks on large datasets, and NPUs are supposed to be more efficient (less power usage) than GPUs at doing this training. That's definitely a good thing as the world embraces AI at scale. I'm not sure how it would specifically help you in logistics management, but developing algorithms to help predict failures and optimize inventory might be something you should consider.
I got a Ryzen 9 9900X system...just put it together and very happy with it. It was kind of a tough choice between that and the Intel 265K. I think in the end I didn't need and didn't want to spend the extra on the Z890 board. I got an MSI Pro B650S board which ticked all the boxes and a nice liquid cooler and a Be Quiet Shadow Base 800FX...my once in 10 year upgrade. It is a shame that Intel started off on the wrong foot but each chip release is a big deal and needs to be carefully planned and coordinated with Microsoft...it seems so anyway.
Me too, I bought the 265K, a Z890 AORUS ELITE WIFI7 ICE (Intel LGA 1851 ATX) motherboard, and G.Skill Trident Z5 Royal RGB 32GB (2 x 16GB) DDR5-7200 PC5-57600 CL34, everything for like $500.
8000 CL38 or faster is the way to go. I got a 7200 kit and ended up using a generous bump in voltage to OC it to 8000. The perf difference is huge just in general usage: opening apps, booting up, etc.
Glad my motherboard BIOS lets me choose between 0x113 and 0x114 microcode, because 0x114 dropped FPS in some games and this let me go back to 0x113 without reflashing the BIOS. But I upgraded from an i7 11300K, so it's an improvement no matter what in my situation.
The only thing to be careful of is that 0x114 is the official fix for the voltage issues, so I would caution against using an earlier microcode, even if it means a small decrease in performance.
I am waiting for the new firmware/bios to be released at the end of this month before testing the 285K … hopefully it provides a large performance boost
@@blackbirdpctech I don't have that kind of "PC Enthusiast" disease. I'm fine with my 5900X, NVIDIA RTX 4070, and I'm running The Talos Principle II at the max settings with DLSS Quality and the game is awesome. :) Granted, I don't need more than 60FPS because even if I configure my LG 42" TV in PC mode, set the PC to 120Hz and use HDMI 2.1 cable, I'm just not seeing the difference between 60Hz and 120Hz at 4K. Of course, I only play single-player games.
You got a sub. I have been looking everywhere for level-headed reviewers; I just want the information without all the doom and gloom and overblown thumbnails.
Thank you for the OC settings. I applied the same settings you have and my system is stable as well, and performance increased a bit. Intel APO works fine for me: I tested with and without APO (with your OC settings applied) and average FPS in CS2 is increased by Intel APO. Overall, OC + Intel APO gave me a boost of around 100-120 FPS in CS2 benchmarks. I wanted to ask, since I can't really find any useful info on the internet: have you tried the "unleashed" profile for the Gigabyte board on this CPU? I'm looking to try it and see what it does, but I'm kinda scared.
Perhaps APO works better in some games ... I didn't try it in CS2, but in the games I tried it was noticeably worse. One thing you may wish to try is an undervolt ... when I tried it initially it didn't work, however I did manage to get it working a few days ago, so I plan to incorporate that into the tweak guide for the 285K. I didn't try playing around with different profiles, but I will also check that out for the 285K ... I would say don't worry, but after the Raptor Lake issues that's probably not a great idea.
@@blackbirdpctech Would also love some more in-depth tweaking for the Ultra 7 265K, since I think it's more common than the 285K among gaming users. I can't say I feel the need for an undervolt right now, as I paired it with a Galahad 2 Trinity and a Lancool 3 case with 3 extra bottom fans. The CPU runs very cool; I didn't see anything over 50°C in any games, and the AIO fans aren't reaching sound-making speeds.
@@vanilla1st I expect the tweaks for the 285K to be the same as the ones I used for the 265K, just different magnitudes. The benefit of an undervolt would be increasing your boost frequency or maintaining it longer, so it's definitely worth trying.
I would likely go with the 285K because of the support from these platforms for Intel ... hopefully that will change soon as AMD acquires a bigger market share.
Incredible in-depth, detailed analysis and information, plus tweaking tips. How is this channel not at multi-million subs yet? Keep it up, and I am sure you will be able to gain more views! Maybe mix in some regular tech news discussion other than reviews?
I've thought about how best to do news ... I was thinking about doing it on a weekly live or podcast ... if I am going to do it I would really like to add something unique to the mix ... thanks for the suggestion, will think about it more ... and welcome to the Blackbird PC Tech community!
Did I mess up buying a 265k? I had an i7 11700kf and my system is having issues, so I got parts to build a new system. I didn't want the overheating issues for 14th gen. My current system also had 64GB memory and I wanted my new system to have more using only 2 chips as I heard that's better. So I've got Corsair Vengeance 2x48GB which is only DDR5-5600 CL40. Should I return it and get DDR5-6800 CL34? I'd have to chip in $80 out of my pocket. Will it make that much of a difference? I am getting the 7900XT or XTX video card. I have no intentions to overclock anything. I just want it to run stable and be pretty fast. I'm not concerned being top of the line.
You didn't make a mistake moving to a 265K from an 11700, however by using DDR5-5600 you will leave quite a bit of performance on the table in games, as I showed in the video.
@@chungkingcansuckit6345 265k has other benefits if you need them. I would never build a PC just to play games with. Seems like a waste of money. I'm waiting for the 285K to come back.
Man, I was really hoping for a win from Intel, but these Core 200 series chips are just sad. They're asking for expensive RAM just to barely keep up, and motherboards that could be end-of-life as soon as you buy them. So sad. Nobody with common sense would buy these products; Intel needs to be taught a lesson: don't rush half-baked nonsense and expect us to buy it.
I'm in the middle of acquiring the parts for a new build based on the 265K Core Ultra 7. For memory I currently have 2x16GB (32GB) sticks of Corsair Vengeance at 6400, CL36 if I recall right, and may pick up another set of the same to bump to 64GB. Two NVMe drives from Crucial and an older refurbished HGST 4TB mechanical drive for video asset storage, and I'm looking at Intel for the graphics card, not sure if B580 or higher. Mostly researching, script writing, video editing, and anticipating 4K editing using DaVinci Resolve 19.1, so it'll be interesting to see how it performs when all is said and done. The case, 2 drives and memory arrive today or this week, the PSU on the 17th at the earliest, and a front I/O with SD card reader isn't due until mid-to-late Feb. So far I have the motherboard (Gigabyte Aero G), the main NVMe drive, a Peerless Assassin cooler tower, the contact frame for the CPU (a better-safe-than-sorry/just-in-case situation, and it didn't cost much), a rear exhaust fan for the Fractal Focus G case (red), and the CPU.
Good luck with your build ... I will have a comprehensive step-by-step tuning and optimization guide out next week for you to use once you are done building and ready to extract max performance.
@@blackbirdpctech Thank you. I'm coming from a 7th gen i5 Dell SFF Optiplex, which was an unexpected replacement for a 4th gen of the same thing, and before that, I rocked a 1st gen i7 Dell Studio XPS and its replacement stemmed from a final straw of the hard drive failing, and most everything else on that PC, minus the PSU, memory and CPU had to be replaced due to failures over the decade I had that machine. So this new build should be a HUGE step up from the past few years (2019).
Unfortunately many day one reviewers didn't test the 9000 series CPUs well, so there was widespread misinformation that they don't offer any meaningful performance boost over the 7000 series, which is simply not true. The 7900X is a good CPU, but the 9900X is much better.
@@blackbirdpctech thanks for the fast reply, and that's good to hear; the 9000 series is definitely on my radar as an upgrade. Speaking of Intel, do you know if the 14th gen i5 also has instability issues like the i9 and i7?
Yes, I think it was anything over 60W TDP, but that said the new firmware 0x12B was the final fix from Intel and was supposed to fix the voltage issue.
Hey, I'm looking to buy a whole new PC and I'm still in a bit of doubt about which CPU and GPU I should buy. I want to try out 4K gaming, but 1440p would be fine too, and my budget for the complete build is around $3000 maximum, preferably less. Seeing these performances makes me think the 14700K would be better for gaming, but at the same time the 265K might get a bit better after the firmware updates. I don't know much about overclocking and don't want to mess anything up. TL;DR: I want to know what CPU & GPU combo to buy, preferably without much overclocking, for 1440p/4K gaming.
@23:00 - is the latency delta in the AIDA 64 benchmark between the 265 and other two parts humanly noticeable during normal Windows 11 operation and fast paced productivity tasks? Will a DDR5 6400 v. a DDR5 8400 RAM kit mitigate any perceived issues?
Yes it is noticeable ... Windows doesn't feel as responsive. That said the recent firmware changes that Intel has made do help and with some tuning and optimization you can mitigate it ... take a look at my 285K review that went live earlier today: th-cam.com/video/fNFL2bRIWLM/w-d-xo.html
Your system latency measurement was a big takeaway for me. I didn't realize the change to the chiplet architecture affected this so much. Everyone will feel that when you are simply navigating Windows, like we do with our systems for 75% of the time. For this kind of money, I'd expect big wins in productivity, par or better single thread gaming performance, and I want my system to feel zippy when I'm browsing around. I do think the memory improvement could yield dividends in the long run, if Intel doesn't go under before they can ramp up some of these clock speeds in this architecture. Gosh...I just can't believe this company could let their competitive advantage slip away. It's not like they didn't see this coming. What a massive blunder. My opinion? Intel's new architecture is far enough behind that they need to immediately ramp up the clock speeds by a full generation in order to close the performance gap and drop the prices of the 245k/265k/285k to sub $300, even if it means taking the hit. They can't compete on performance/price ratio like this. They're going to make zero revenue...why would any OEM system builder offer Intel as an option unless it's a niche 3D rendering workstation? It's a loss, man. New bios? You really think it'll close the gap? If so, it should have been the original released bios for the platform. What a blunder! They are getting cooked in these reviews. It's killing Intel's consumer credibility.
I don't disagree however based on recent direct feedback from Intel I do think the new firmware could provide a significant boost ... I don't think the cpu was fully baked at launch and if that's true, a firmware fix could really help. If it doesn't then the only option they have is to reduce the price.
@@blackbirdpctech the latency is inherent to the tile architecture and reduced clock speeds; firmware won't do much in that regard sadly, it is just physics. There is simply more latency when you have to go through an interconnect layer before instructions hit RAM; that and the clock speed regressions are what is producing the latency, and there's no way this can be fixed in a microcode update because, as I said, it's physics. It would be like expecting to send the sun a WiFi signal to control global warming. There were people getting like 180ns latency when loading XMP on high-speed kits and stuff like that though; I am assuming that kind of weird behaviour is what Intel will be targeting, probably some OS/BIOS/hardware combos not working as intended and whatnot. This is the main reason X3Ds do well in games, since a lot of stuff never needs to go to RAM; obviously in some games the cache isn't large enough, but clearly AMD isn't even trying anymore (they continue to give us the same core counts and cache sizes). They do produce custom EPYC CPUs with 3D-stacked HBM4 as L4 cache; if they did that on the desktop they would create an insanely good gaming CPU, they just don't need to.
@@RampageGW2 I stated this in my video: "... with the launch of Arrow Lake, Intel has moved away from monolithic designs, to chiplet based designs, and as a result, the system latency has increased significantly. This is something that you will definitely notice in Windows … " So I'm not sure why you felt the need to explain why these new chips have higher latency. Also, the clock speed change has no meaningful impact on system latency. That said, there are many ways for Intel to improve the performance ... gaming performance is not simply driven by latency. For example, Intel will likely optimize how the load is spread across each core and prioritize the p-cores. The reason that boosting the clocks on the e-cores had such a large impact on performance is that they are carrying too much of the load in games.
Hi, really great video! I need a suggestion: I'm building a PC for 4K multilayer video editing in After Effects/Premiere Pro and AI engineering (using software like PyTorch and TensorFlow). I'm confused between these chips: Ryzen 9 7900X, Ryzen 9 9900X, i7 14700K. I will pair them with either a 4070 Super or a 4070 Ti Super. The 7900X fits my budget; for the 9900X I will have to increase it. Please just tell me which chip I should go with, I'm really confused! Also, can I pair the 7900X/9900X with a Gigabyte X870 Eagle WiFi 7 motherboard? And one more question 😅 Can you tell me if I should wait for the 5070 to launch, or just build my PC now in 10-15 days with a 4070 Super/Ti Super? I would appreciate your response 🙂
If you use a lot of Adobe products then I think Intel would be a better choice ... if you can get a 265K on sale then that might be a good option for you. No one knows when the 5070 will launch and be available so my recommendation is, if you need your new PC now then build it now.
I'm really looking forward to seeing just how much Intel can boost performance with new firmware ... it could actually become a great chip, hopefully the new Bios will be out soon.
At least it overclocked well ... the stock performance is really disappointing ... it's because of lack of competition that the 9800X3D sold out so rapidly and is being scalped, so yes we really need Intel to step up.
@blackbirdpctech Agreed, if it hadn't been competitive in productivity it would have been game over. I'd say the best thing they came up with was their iGPU; they were *almost* on par with AMD, and soon Battlemage will come. I'm crossing my fingers.
I picked up a 265K from Micro Center for $299 yesterday, and a Gigabyte Z890 UD WiFi 6E for $139 after a $70 bundle discount. Upgrading from an i5-8600K; not a bad deal, I guess.
I don't have a Micro Center near me and wanted someone to ship me one, so I ended up paying $90 more. :( What memory did you get? My current system has 64GB and I wanted more, but didn't want more than 2 sticks; I hear your system runs better with only 2 sticks. So I got 2x48GB Corsair Vengeance DDR5-5600. I wonder if I should have gone with 2x32GB DDR5-6800?
The biggest problem is that the numbers Intel published are way beyond the real-world results. They shot themselves in the foot, which seems to be Pat's favourite thing to do.
What I find really crazy is not that they lied about their performance, they all do that, but that even their initial performance projections were disappointing … so they set an expectation and then failed to even meet that. Hopefully they can recover some performance with their coming firmware fix.
@@blackbirdpctech Yes they all fudge the numbers but normally intel is fairly conservative when they do. This time they clearly were not because they were hoping people would just blindly purchase based on the power savings changes
Yes, that definitely changes the value discussion. At $299.99 and assuming that the other chips remain at the same price, then the 265K will offer 38.4% better value in 1% lows and 5% in average performance. The only caution I would add is that you should add the cost of the RAM in to the value discussion as well, something I should have done in the video. The reason I say this is that to extract max performance from a 265K you need to use high-speed RAM, which costs considerably more than a DDR5-6400 EXPO kit and DDR5-7200 XMP kit. So keep that in mind, but this really is a great price point for the 265K.
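For anyone who wants to sanity-check value numbers like these, the math is just performance-per-dollar ratios. Here's a minimal sketch; the FPS and price figures in the example call are hypothetical placeholders, not my benchmark data:

```python
def value_gain_pct(fps_a: float, price_a: float, fps_b: float, price_b: float) -> float:
    """Percent by which chip A's FPS-per-dollar beats chip B's."""
    return ((fps_a / price_a) / (fps_b / price_b) - 1.0) * 100.0

# Hypothetical placeholder numbers, purely to show the formula:
# chip A: 150 FPS at $299.99, chip B: 160 FPS at $429.00
print(f"{value_gain_pct(150, 299.99, 160, 429.00):.1f}% better value")
```

The same formula works for 1% lows or average FPS; just swap in the metric you care about, and remember to fold RAM cost into the price if the chips need different kits.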
Some day-one reviewers used ordinary DDR5, keeping the CUDIMMs for a follow-up. The problems seen involved Windows versions and BIOS, with Intel doing relatively well in synthetic benchmarks but often poorly in application tests. So no injustice there; the price cuts and performance improvements elsewhere, plus a one-shot platform, make Core Ultra S a poor choice.
I agree that it's currently a poor choice, as I stated in the video. My chip was a standard retail CPU, not a review sample, and the OC was mild, so I expect that it will work on most 265K's. It was impressive to see how much extra performance I could get out of it ... let's see what the new bios brings.
@@blackbirdpctech but if Intel were supplying you and knew you'd overclock, they could send you a golden sample. Even at retail, I have seen past evidence suggesting early units tend to be higher quality. It's a basic problem when going outside the specs, and Intel did refuse some RMAs, in one case claiming a ludicrously low DDR5 speed was all that was supported.
@@RobBCactive I purchased my 265K from Amazon ... I currently purchase all of the CPUs I test from either Amazon, Newegg or in-person at a Microcenter.
Not really. Hyperthreading never gave huge boosts except in some very specific apps and workloads; in some scenarios it even cost you. And with the security leaks it enabled, the mitigations that were required gave back a good deal of whatever extra performance it offered.
@@blackbirdpctech Likely. Also, they use a different thing than AMD, called Universal Chiplet Interconnect Express, which Intel alone was not responsible for, so other companies had their parts put in as well as Intel. I never really looked into it, but if the design is not optimised for CPUs specifically (and it includes NPU, TPU, GPU, DPU, etc.) that potentially costs them in performance. This is in addition to using TSMC, which may or may not have required concessions in the design in some ways compared to if Intel had been ready to make that node themselves. Supposedly the design is 99% not dependent upon a specific fab or node, but that number seems awfully optimistic. The tiles also add latency, it's just what it is. AMD has the CCD to the IO die, maybe a 2nd CCD (on desktops anyway), whereas Intel has more tiles from what I saw: GPU tile, SOC tile, IO tile, L3 stuck to the ringbus; it all adds up. On the upside, this probably means memory latency will be less felt. We are talking nanoseconds, not microseconds, but it does cost performance.
Now that there's an Intel/AMD partnership in the x86 advisory group, I think we will see AMD and Intel agree to ditch SMT/Hyperthreading, and probably E/P cores as well on the intel side. Both technologies add a lot of cost and complexity for little to no performance benefit
I have a 17-year-old PC and wish to upgrade to a new PC that will last another 17 years. The games I play are mostly CPU heavy, like X4. I currently have an i7 6800K. Edit: I was thinking of the year 2017 when I bought it, so it's 7 years old.
I still don't know what's wrong with my new 265K build. Huge numbers of skipped frames when trying to play YouTube videos from the iGPU, even at resolutions much lower than 4K. Win 11 Pro 24H2, MS Edge or Chrome, can't tell which one is worse. Crazily, disabling acceleration in the browser settings results in fewer frame drops, about half. It's that bad!! Btw, when using the discrete GPU output on the same system, no issues whatsoever. Has anyone noticed similar behavior?
@@blackbirdpctech It's an ROG Strix 3090 OC. FYI I just installed the latest Intel Graphics Driver 32.0.101.6314, and the issue persists. Just terrible experiences with this new Intel system... while I'm loving my new Mini M4 Pro that just works out of the (tiny) box!!
@@blackbirdpctech For example, I want to maximize the VRAM and the other dGPU resources available for LM Studio and other AI tasks. The 3090 is going to be replaced by a 5090; that's why I built this system. I don't game. Regardless, if this new $500 SoC cannot handle YouTube video playback, thanks but no thanks!! Even the OLED TV that this system is connected to can play 4K HDR videos without any frame drops. None. Any system worth more than $40 should be able to do that nowadays. However, before returning the system, I really would like to know whether the issue is common, or related to something wrong in my system.
Nobody wants to review Intel's Arc A770 LE 16GB on a real processor or a realistic rig, with a real amount of RAM, so we've never seen Deep Link tech at work and we've never seen the A770 using its max potential. Many people would like to see a review like this, I promise you. (The upcoming B780 would be nice to see reviewed on a proper system too.)
A lot of it was Windows schedulers and more not being ready to make it run as well as it is capable of. Level1Techs has a great video on it; he explains why Linux exploits its gains better, for now. Intel/MSFT are working to rectify this. I will be waiting for Panther Lake, as those will show big gains. Only 6-7 months away!
A lot of it was microcode based ... Intel needed more time to get the firmware and software right ... I will cover it in detail in my 285K video that will be released later this week.
@@blackbirdpctech 245K. I don't have any need for more cores and the only game I play is Unreal Tournament. I have been running a 12600K golden sample for almost three years that has a better memory controller than the 14600K I bought recently: the 12600K will run 7600 stable, while the 14600K won't even boot at that setting. Same board and same memory. It was a Gigabyte Z790 Elite X, but it went tits up after a month, so I returned it and I'm using an ASRock Z690 Taichi, and even though it has an 8-layer shielded board I can only get 7000 at higher timings (well, it's a Z690, so OK). I have had a lot of motherboards and this Taichi is the best board I've owned once I got used to the BIOS layout. I don't want to lay out the money for a new Taichi for the 245K, so I bought a MAG Z890 Tomahawk WiFi. I've been playing UT for 23 years; I'm 73 years old. BTW: I subbed your channel. That information you mentioned about the tREFI setting lowered my latency by 10ns! That was jaw-dropping to me, I had never heard of it before...
23 years of UT is dedication! One thing I would focus on for your 245K is the e-cores ... these chips seem to rely on them much more, so boosting the frequency by 100 MHz can make a big difference. And yes, you can tweak a lot of different sub-timings but this is the only one that I've found to have a significant impact on all CPUs without impacting stability. Appreciate your sub and welcome to the Blackbird PC Tech community!
I am new to PC and recently bought one that I mainly use for gaming. It has an Intel Core Ultra 7 265KF, a PNY GeForce RTX 4070 Super 12GB GPU, 32GB of DDR5-6400, and an MSI Pro Z890-S. I've mainly been playing Rust and can get around 140-160 FPS on high or medium settings, but when looking at certain things it drops to 20-40 FPS. I have most of the in-game quality settings turned down but it still drops. I can get 100-200 FPS in less intense games, but I was wondering if the PC just isn't built for more graphically intensive games, or if it's something else.
I'm still on an i9-9900K with an MSI GTX 1060 Gaming X 6GB and a Gigabyte Z390 Aorus Master. I want to skip the 1440p middleman and go straight to 4K, but 4K QD-OLED 240Hz DP 2.1a monitors are so expensive, and I need a whole new build.
You don't need DP2.1 ... you can use DSC. Unfortunately the only cables that are certified DP 2.1 are 3ft cables at the moment ... 6ft cables should be here by the end of Q2.
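To put rough numbers on why DSC covers you, here's a back-of-the-envelope sketch. It counts raw pixel bits only and ignores blanking overhead, and the DP 1.4 payload figure is the commonly cited HBR3 value after link encoding, so treat the results as approximations:

```python
def raw_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int) -> float:
    """Uncompressed video bandwidth in Gbit/s (raw pixels only, no blanking)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

# 4K at 240 Hz with 10-bit RGB color (30 bits per pixel)
need = raw_gbps(3840, 2160, 240, 30)
dp14_payload = 25.92  # approx. DP 1.4 HBR3 payload after 8b/10b encoding, Gbit/s
print(f"raw stream: {need:.1f} Gbit/s vs DP 1.4 payload: {dp14_payload} Gbit/s")
print(f"compression needed: about {need / dp14_payload:.1f}:1, within DSC's ~3:1 range")
```

So even a DP 1.4 port can drive 4K 240Hz once DSC is in the picture, which is why the lack of certified DP 2.1 cables isn't a blocker.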
Of course, the Intel 265K is not bad at all; in fact, this CPU is the best value within the Ultra 200 series. The problem is the new platform, which is only available on top-tier motherboards, while Ryzen fits perfectly even in a common B650M. And its older brother, the 14700K, at its current tempting price, will also keep buyers from stepping into the new 265K.
Agreed, the platform upgrade cost is high, especially if you include CUDIMM RAM, and the prior generation is better value ... not a great position to put yourself into.
Micro Center has the i7 265K marked at $299.00, in-store only. I got 2 for $529.00: one rang up for $229.00 by mistake, so I went back 4 hours later and they fixed the register and charged the extra $70.00, but kept the $299.00 price, even though it's $399.00 everywhere else.
This came up during my 9950X vs 14900K testing and here was my explanation from that video: If we take a look at Call of Duty: Modern Warfare III at 1080P minimum settings we see a significant performance difference between the two CPUs, with the 9950X generating 37% higher average FPS and a whopping 73% higher 1% lows. To understand why this might be happening we need to take a deeper look at the benchmark results. For the 14900K you can clearly see that the CPU is the bottleneck 99% of the time, which means that the processor performance isn't enough to keep up with the game at these settings. As I explained earlier, this is precisely what we want when testing the relative performance of CPUs. However, when we look at the in-game benchmark data for the 9950X, we see that the CPU is not fully loaded, with the GPU being the bottleneck 25% of the time. This means that the processor performance is so good that even at 1080p/minimum settings, the 9950X is able to keep up with the game 25% of the time, thereby shifting the load to the GPU, which in turn results in a significant FPS boost. I tried lowering the render resolution even further but I was unable to fully load the CPU, it's just that damn good. This issue is coming up during my 9800X3D vs 7800X3D testing ... the 9800X3D is so good that it places the load back on to the GPU ... even a 1% shift in load back to the GPU will boost your average FPS, so it can make a weaker CPU look better than it really is relative to the stronger CPU. The way to really see which one is "better" is to look at the 1% lows.
I want to build my PC using an Intel CPU. I don't care about the price, but I'd want a good one for heavy games and things like streaming. What would you recommend?
It significantly improved on 1% lows in games and power efficiency, so no, it's not a bad CPU ... it is however priced poorly with respect to the competition.
Plus a new motherboard for each next gen Intel CPU release. The efficiency is a step in the right direction but the CPU isn't there yet in terms of value and performance. Maybe next gen.
@@JohnWalsh2019 I really don't like the push to change socket frequently ... that's something they should consider changing in their design philosophy.
I assume that you are referring to the Intel Default Settings, and yes I used the Performance power profile. That said, your chip will NOT "destroy itself" if you don't use it, especially if you are running the latest 0x12B microcode ... I covered this extensively in prior videos.
Wondering if you could dive deep into Arrow Lake's Alchemist iGPU with 4 Xe cores. I heard the iGPU's gaming performance sits between a GTX 1050 and a 1050 Ti, and slightly above the Ryzen 7 5700G's iGPU.
The challenge for any content creator is time and deciding what to focus on. This topic would be interesting to investigate however I don't think it would be popular ... especially with all of the new products on the horizon. That said I will add it to my list, but it might be a while before I can get to it.
The latency for a chiplet or tile based design will always be worse when compared with a monolithic design. That said, it's not anywhere near as bad as some of the initial reviews claimed and I expect the new firmware to improve it further.
It's not the RAM, it's the chiplet based design ... Intel moved away from a monolithic design for the first time with the Core Ultra series, so now the speed with which they pass information between the chiplets has a large impact on latency.
@@blackbirdpctech Yes, so they are giving an extra $70 off, so I am getting the 265K + ASRock Z890 SL + 32GB 6000 CL30 for $590, while at MC I can also get a 7950X3D + MSI MAG X670E + 32GB 6000MHz CL36 for around $730. Could you help me choose the right one? Which one should I go with? My primary usage will be gaming + PCVR + some other lightweight stuff.
I'm not super familiar with VR requirements but for gaming you will definitely see more performance with the 7950X3D. To really take advantage of the 265K you need to pair it with fast DDR5 RAM. In addition, there will be one more generation after the 9000 series on AM5, so that will give you a growth path ... not sure if socket 1851 will support another generation.
Hi, I am from another region, and I'm thinking of building a PC for both productivity and gaming on a tight budget of $1.2k-1.5k. I'm torn on the CPU choice between the 13600K and the 7900, paired with a 4070 Super - or any CPU under $300. Do you have any advice?
Well, I'm considering upgrading from a 5800X3D to something with more multicore power, paired with an RTX 4090 at 1440p. I'm already running a 420mm AIO, so heat isn't an issue. I'm always on the latest Windows version, wouldn't disable CCDs or e-cores, and expect a smooth all-around experience and gaming performance (my focus is 0.1% lows). I'm considering the 14700K, 14900K ($150 difference) and 265K ($50 difference), since your testing showed that e-cores had fewer performance downsides compared to a dual-CCD design. I'm most likely not going to benefit from the I/O upgrade since I run an Optane P1600X for my system drive. My preferred motherboards cost around the same. I'd be able to wait another generation or two if you expect noticeable improvements for my use case. I'm glad you mentioned the improved support for 8000 MT/s+ non-CUDIMM RAM - I've been asking that question since day one and nobody talked about it before.
At this point I would recommend waiting to see what the new firmware/bios update coming from Intel does to performance before buying a 265K or 285K. If it boosts performance in a meaningful way, as Intel suggests it will, then these chips might make for a good upgrade.
You need to look at three parameters, not just the CAS latency, to understand what impact memory will have. In addition to bandwidth, the first word latency and RAM latency are the most important parameters that directly impact performance. So a kit of DDR5-7200 CL34 has a first word latency of 23.3ns and a RAM latency of 9.4ns, whereas a kit of DDR5-8000 CL38 has a first word latency of 21ns and a RAM latency of 9ns.
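For anyone who wants to sanity-check figures like these, here is a minimal sketch of the usual rule-of-thumb arithmetic (an approximation, not the exact vendor figures quoted above, which can also include rounding and command overhead): DDR transfers data on both clock edges, so the CAS latency in nanoseconds is 2000 × CL divided by the data rate in MT/s.

```python
# Rule-of-thumb DDR latency arithmetic (a sketch, not an official JEDEC figure):
# DDR transfers data twice per clock, so a DDR5-7200 kit runs a 3600 MHz clock,
# and the CAS latency in nanoseconds is CL clock periods at that frequency.

def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    """True CAS latency in ns: CL cycles of a clock running at data_rate/2 MHz."""
    return 2000 * cl / data_rate_mts

for rate, cl in [(7200, 34), (8000, 38)]:
    print(f"DDR5-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

Run as-is this prints roughly 9.44 ns and 9.50 ns, in line with the ~9.4 ns figure quoted for the 7200 CL34 kit; real modules can differ slightly because the clock period is rounded to the nearest valid tCK step.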
Don't forget, they also appointed two CEO's ... because all companies will run better with two people in charge 😉 More seriously, we really need them to step up and be competitive or prices will inevitably go up. At a minimum they should drop their prices for the Core Ultra series.
@@blackbirdpctech That`s the Intel way i guess, why would you do something with 1 if you can use 2. Indeed, you can already see the AMD prices go up. I`m glad i already have mine, bought it right away, The 9950X is really awesome.
I agree, that’s why I decided to wait to test the 285K … I think it will be much more valuable to show performance after the bios update scheduled for early Dec.
@@blackbirdpctech Because when I adjust the clocks, the cores, and also the ring of the CPU, the CPU DLVRin Vcore/VCCin tends to run about 1.5V or so. I'm just wondering if I should try to adjust the voltage pertaining to that because I don't want the chip to die on me. And I haven't been able to find anything on it.
Anyone have information on CAMM2 memory? At the last tech show MSI teased a motherboard with this type of memory - not sure of the socket. It was a Project Zero board with hidden connectors.
Since you seem so wealthy, can you go back in time and bench an Intel Arc A770 16GB LE with a Z690 board, an i7-13700K and 128GB of DDR4? Nobody wants to review Intel's Arc A770 LE 16GB on a real processor or realistic rig with a real amount of RAM, so we've never seen Deep Link tech at work, and we've never seen the A770 using its max potential. Many people would like to see a review like this, I promise you. (The upcoming B780 would be nice to see reviewed on a proper system too.)
I tested an A770 on a Z690 with a 14700K with 96GB of RAM ... you can watch the video here: th-cam.com/video/3K_kqBsUuOY/w-d-xo.html One very disappointing thing I heard directly from Intel during CES was that they do not plan to release a B770/780 ... not good news because I was looking forward to testing one.
@@drewnewby Pat was fired by Intel and then hired back after the two joint-CEO morons screwed things up … probably a good idea to look into how Intel got where they are. And sure, if someone buys Intel then I agree, but that doesn't change the fact that we need multiple strong competitors, and at the moment that's Intel and AMD.
New Intel CPUs require a new motherboard and a new RAM type to perform their best. A 12th-gen CPU may also need a new motherboard, so that's the cheaper path if you're upgrading from an older CPU, and upgrading from 12th to 13th/14th gen is cheaper still, since a 12th-gen motherboard will support 13th/14th gen. But mainly, the Core Ultra range is a brand-new architecture and will have these teething troubles for a while. For gaming I would hold off unless you really need to replace that Celeron dual-core, and just go safe with 12th gen - if you can find the motherboard and CPU before the stock runs out, of course. Or go AMD, if the hotter-running CPUs and GPUs from them don't bother you.
While the Core Ultra 265K and Ryzen 9900X might seem like a good comparison, if the goal is gaming they are mismatched. The 9900X should compete with Intel's Core i5, since its 6-core CCD matches the i5's P-core count. The Core Ultra 265K still has 8 performance cores, so in gaming it is competing against a 6-core CCD.
There are some games that benefit from the extra cores on CCD1 being enabled, as shown in my 9950X review, so I do think that the 9900X is the correct direct competitor based on actual performance and price. One thing to be aware of with the 265K is that the efficiency cores play an important role in gaming, turn them off and your fps drops significantly.
I'm more than happy w/ my 14700K + 4090 gaming rig. Considering the current climate of games, there is no reason for me to upgrade for a while. I have a large library of great games that I still have not made the time to play so I'm good for at least two more gen of cpu & gpu cycles.
They should make it cheaper. Do what amd did when they were worse in games but better in some multi-core stuff. So 100 dollars below 9700x would make it good.
Intel's Core lineup might and should have been much better, but they chose to make it this way. Hyper-Threading was retired for no reason. When they said they would not use Hyper-Threading, I thought they had found a way to achieve that performance without it, which would have been a good idea. But they just tried everything to make the CPUs worse than AMD's. Still, memory bandwidth performance is much better than AMD's, which means that for all memory-bandwidth-limited scenarios Intel is still the better choice.
I had nearly a dozen customers whose 13th/14th-gen i7s and i9s died within months ... at my tiny scale that's HUGE. Nobody should buy 13th/14th gen; those chips are faulty by design, and I don't believe any amount of microcode will save them.
I feel for you because that must have been a nightmare to deal with. That said, my experience with 13th/14th gen CPUs was very different ... over the 2 years I used them extensively I never had an issue with any of them, and I pushed them pretty hard in bios. I'm reasonably confident that Intel has fixed the issue with their final microcode update ... they took way too long to address this issue properly and push out a solution, but I do think they finally got there.
My friend built a new PC with an Intel Core Ultra 9 285K for me. I know my brother and my old friend prefer Intel PCs, but maybe in the future AMD will release something more powerful for drawing and 3D programs (today AMD is still the most powerful for gaming, you know). That's why I'm taking Intel for drawing and 3D programs only today.
That's great to hear ... I've been getting a few questions related to professional workloads for these chips, so getting feedback from a developer is helpful.
1 P-core = 4 E-cores, which Intel put in to keep up with AMD and get beyond them. E-cores are like 4 donkeys pulling a heavy load: slower, but they do the heavy rendering work, versus 1 P-core horse. All this is just what I think I absorbed, and it won't help me in any way since I don't have this series and won't get it. It's been a gamble/con job since the 9 series - I bought an i7-9700K. They dangle a carrot 🥕 tied to a string ... it's weird how the older CPUs still do better, or nearly as well.
Personally, I think a good amount of your performance improvement when running the higher-capacity RAM is coming from the low CAS latencies on these higher-megahertz kits. You had 6400 at CL32, then 7200 at CL34, which is just going to be faster by default, then 8200 at CL38. So generally speaking you're running at 5ns, 4.72ns, and 4.63ns - basically it's just faster, hence the general 6 and 8 percent improvements, as they are 6 and 8 percent faster.
I don't really understand your point or your numbers. If you look at the DDR5-7200 CL34 kit for example, it has a first word latency of 23.3ns and a RAM latency of 9.4ns ... where do you come up with 4.72? Regardless, Intel systems tend to scale linearly with RAM speed, as mentioned in the video, so the benchmark results should come as no surprise ... the only challenge is running higher speed RAM stable.
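One plausible source of the factor-of-two gap between the 4.72 ns and 9.4 ns figures in this exchange (my interpretation, not something either commenter confirmed): dividing CL by the data rate in MT/s, which counts both clock edges, instead of by the actual clock frequency, gives exactly half the true latency. A quick sketch with the DDR5-7200 CL34 kit:

```python
# Hypothetical reconstruction of the 4.72 ns vs 9.4 ns disagreement for a
# DDR5-7200 CL34 kit: MT/s counts transfers (two per clock), so the real
# clock is 3600 MHz, and dividing CL by the transfer rate halves the result.
data_rate_mts = 7200
cl = 34

per_transfer = cl / data_rate_mts * 1000     # CL / transfer rate -> 4.72 ns
per_clock = cl / (data_rate_mts / 2) * 1000  # CL / actual clock  -> 9.44 ns

print(f"{per_transfer:.2f} ns vs {per_clock:.2f} ns")
```

The second figure matches the ~9.4 ns RAM latency quoted earlier in the thread; the first is exactly half of it.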
@@blackbirdpctech It can give 30% more performance with tuning, like FrameChaser did, but this platform is too difficult to tune; all the previous ones were easy, and tuning doesn't fix the irregular performance either.
I would be careful quoting my friend Jufes on this one ... it's not more difficult to tune than a 14700K, but if you are selling a PC tuning course then sure, it makes sense to push the narrative that it is 😉
@@blackbirdpctech I just don't want to buy the course, but I don't dare buy the platform because of the poor performance. I think I'll wait for the 300 series; the course costs $500. I have the 10700K and it is very, very easy to tune. I am not an expert, but that platform is easy, and I wanted to buy the 200 series - and then this tremendous betrayal on the part of Intel occurred.
😂🤣 Your video and the things coming out of your mouth aren't matching, buddy. The 265 didn't do well even with that super-high-spec DDR5. Don't sugar-coat things, it's bad. Case closed!
Did you watch the video before commenting? This is what I actually said in the video: "... then it becomes clear that the 265K does not offer good value against any of its direct competitors. While it’s great to see Intel make improvements in efficiency, they clearly priced this CPU too high relative to the competition, which even includes their own prior generation chip. If you are looking for a great all around chip, then I would highly recommend the 9900X, it offers excellent performance at an equally great price. If however you are an Intel fan, then the 14700K is currently a much better option." Do better buddy 😉
The same reviewers said the 9700X is crap too, but it’s not. Try taking a look at my content before commenting, or perhaps even watch the video you commented on, you might learn something or see that I recommended the 9900X over the 265K.
I would recommend most people go with the 265K. It is a better chip for most uses and it is much more power efficient. I built a 9900X system and I'm using it right now. I also build systems, and this week I did a very expensive 265K system for a customer. I was very impressed with the power efficiency, especially in normal use - it would idle under 10W. The Intel chip will also fully utilise CUDIMMs, and Intel fully implements Thunderbolt on their motherboards. Standards are good, and both Apple and Intel support Thunderbolt. I think this was a good review as far as it went, but there is a lot more to a computer than gaming ... unless you have that 5lb card and demand the ultimate framerates - then go AMD.
For those who don’t know, Intel released a statement that there are problems at both the BIOS level and the software level, with updates to come. Who knows how it will really be in the future.
I saw that too, so I decided to wait to test the 285K until after they release the new firmware ... hopefully that will be soon and hopefully it will provide a decent boost in performance.
5 year warranty.
👍
I have a 14900K and 14700K with current BIOS. Neither machine has issues. Both are smoking my workloads.
@FMBriggs absolutely, people forget how unstable AM5 was when it first launched ... I couldn't get DDR5-6000 EXPO to run on my 7950X at the time.
I'm in the market for a new PC. I'm running a 4th gen i7, and recently upgraded my GTX 1060 to an RTX 4070 Ti Super & a new PS. Now I need a new MB, CPU, RAM and a modern case. I don't mind purchasing the Core Ultra 7 today, since you can see the last time I upgraded my system (10 years ago!) So to get stuck with 12th-14th gen hardware would suck for the next 10 years. Well that's my logic anyway.
Great video, plain and simple. I got the Core Ultra 7 265K. Building tonight. Excited about my new rig and the latest technology from Intel.
I will have a more comprehensive step-by-step tweak guide out next week.
I will watch it… greetings from Austria
@@blackbirdpctech I can't wait! I am planning on buying ASUS Rog Strix Z890-E Gaming Wifi with Intel Core Ultra 7 265K, so I would like your opinion about this. You have my subscription, mate. Kudos!
I also built a rig on that CPU and ordered 8800 MT/s CUDIMM memory from T-Force. We'll see how it performs.
@@Sworbjorn I ordered Core 9 285K instead.
When you factor the cost per frame are you also including the price of the whole system? The RAM prices are all different.
No, for this analysis I just included the cost of the CPU ... you are indeed correct that the price of the RAM is different and will favor the 9900X. I should have made an additional chart with at least the RAM prices included but that said, it simply would have reinforced my recommendation. The overall upgrade cost of the new platform, RAM and CPU makes the Core Ultra CPUs a really tough sell.
Thanks for showing how awesome the 265K is for professional applications. As a "former rocket scientist", can you speak to the NPUs in the future and how professionals can make use of them? I'm seriously thinking of waiting for the 285K to become available at MSRP before building my next professional machine for logistics management applications on an industrial scale.
I don't think I showed "awesome" performance for the 265K in professional apps, but it almost certainly would be great in apps that are tailored for Intel processors, like Adobe products. NPUs or Neural Processing Units are supposed to be optimized for neural networks. I use the words "supposed to" because I have not made use of them yet in that capacity, so I can't talk to how much more efficient they are than GPUs. The largest compute cost for the development of AI is in the training of large neural networks using large datasets, so NPUs are supposed to be more efficient (less power usage) than GPUs at doing this training. Definitely a good thing as the world embraces AI at scale. I'm not sure how it would specifically help you in logistics management, but developing algorithms to help predict failures and optimize inventory might be something you should consider.
BTW, the i7 14700K is only about $287 US vs $440 for the AMD CPU, as of this typing!
Yes, prices change over time ... it's good to see Intel dropping the prices of their older Raptor Lake CPUs.
I got a Ryzen 9 9900X system...just put it together and very happy with it. It was kind of a tough choice between that and the Intel 265K. I think in the end I didn't need and didn't want to spend the extra on the Z890 board. I got an MSI Pro B650S board which ticked all the boxes and a nice liquid cooler and a Be Quiet Shadow Base 800FX...my once in 10 year upgrade. It is a shame that Intel started off on the wrong foot but each chip release is a big deal and needs to be carefully planned and coordinated with Microsoft...it seems so anyway.
@@Inspectergadget69 I have the Ultra 7, and I get great performance in the games I play.
I got the 265k on Black Friday for $229 from micro center along with a z890m. What is the minimum ram speed you’d recommend?
I would suggest DDR5-7200 as the minimum and DDR5-8000 as ideal.
me too i bought the 265k, Z890 AORUS ELITE WIFI7 ICE Intel LGA 1851 ATX Motherboard, and G.Skill Trident Z5 Royal RGB 32GB (2 x 16GB) DDR5-7200 PC5-57600 CL34 everything for like 500$
@@Michael-go4ix that's a great deal.
@@Michael-go4ix what a steal
8000 CL38 or faster is the way to go. I got a 7200 kit and ended up using gratuitous voltage to OC it to 8000. The perf difference is huge just in general usage opening apps, booting up, etc.
Glad my motherboard BIOS lets me choose between 0x113 and 0x114 microcode, because 0x114 dropped FPS in some games. It allowed me to go back to 0x113 without reflashing the BIOS. But, I upgraded from an i7 11300K, so it's an improvement no matter what, in my situation.
The only thing to be careful of is that 0x114 is the official fix for the voltage issues, so I would caution against using an earlier microcode, even if it means a small decrease in performance.
A lot of precise and useful info , great channel
Thanks!
8700K..9900K..10900K..11900K..12900K....now what? ...my upgrade path got BORKED....thanks Intel.
I am waiting for the new firmware/bios to be released at the end of this month before testing the 285K … hopefully it provides a large performance boost
I can see a few pointless upgrades here 😅😅
Not pointless, it's called the "PC Enthusiast" disease ... I suffer from it too 😉
@@blackbirdpctech I don't have that kind of "PC Enthusiast" disease. I'm fine with my 5900X, NVIDIA RTX 4070, and I'm running The Talos Principle II at the max settings with DLSS Quality and the game is awesome. :) Granted, I don't need more than 60FPS because even if I configure my LG 42" TV in PC mode, set the PC to 120Hz and use HDMI 2.1 cable, I'm just not seeing the difference between 60Hz and 120Hz at 4K. Of course, I only play single-player games.
@@blackbirdpctech I don't know man going from 10900K to 11900k? 😅
You got a sub I have been looking everywhere for level headed reviewers and just want the information without all the doom and gloom and overblown thumbnails.
Welcome to the Blackbird PC Tech community!
Thank you for the OC settings, I've applied same settings you have and my system is stable as well, performance increased a bit. Intel APO works fine for me as I tested with and without APO (with your OC settings applied) and avg FPS in CS2 are increased by Intel APO. Overall OC + intel APO gave me a boost of like 100-120FPS in CS2 benchmarks.
I wanted to ask as I don't really find any useful info on the internet, have you tried the "unleashed" profile for the gigabyte board on this CPU? I'm looking to try it and see what it does, but I'm kinda scared.
Perhaps APO works better in some games ... I didn't try it in CS2, but in the games I tried it was noticeably worse. One thing you may wish to try is to do an undervolt ... when I tried it initially it didn't work however I did manage to get it working a few days ago, so I plan to incorporate that in to the tweak guide for the 285K. I didn't try playing around with different profiles but I will also check it out for the 285K ... I would say don't worry, but after the raptor lake issues it's probably not a great idea.
@@blackbirdpctech Would also love some more in-depth tweaking for the Ultra 7 265K, since I think it's more common than the 285K among gaming users. I can't say I feel the need for an undervolt right now, as I paired it with a Galahad 2 Trinity and a Lancool 3 case with 3 extra bottom fans ... the CPU runs very cool, I didn't see anything over 50C in any games, and the AIO fans never reach sound-making speeds.
@@vanilla1st I expect the tweaks for the 285K to be the same as the ones I used for the 265K, just different magnitudes. The benefit of an undervolt would be increasing your boost frequency or maintaining it longer, so it's definitely worth trying.
For editing and productivity, specifically with davinci resolve, would you go with a 7950x, 9900x, 285k or the 265k?
I would likely go with the 285K because these applications are better optimized for Intel ... hopefully that will change soon as AMD acquires a bigger market share.
Incredible in-depth, detailed analysis/information plus tweaking tips. How is this channel not multi-million subs yet? Keep it up, and I am sure you will be able to gain more views! Maybe mix in some regular tech news discussion other than reviews?
I've thought about how best to do news ... I was thinking about doing it on a weekly live or podcast ... if I am going to do it I would really like to add something unique to the mix ... thanks for the suggestion, will think about it more ... and welcome to the Blackbird PC Tech community!
Did I mess up buying a 265k? I had an i7 11700kf and my system is having issues, so I got parts to build a new system. I didn't want the overheating issues for 14th gen. My current system also had 64GB memory and I wanted my new system to have more using only 2 chips as I heard that's better. So I've got Corsair Vengeance 2x48GB which is only DDR5-5600 CL40. Should I return it and get DDR5-6800 CL34? I'd have to chip in $80 out of my pocket. Will it make that much of a difference?
I am getting the 7900XT or XTX video card. I have no intentions to overclock anything. I just want it to run stable and be pretty fast. I'm not concerned being top of the line.
You didn't make a mistake moving to a 265K from an 11700, however by using DDR5-5600 you will leave quite a bit of performance on the table in games, as I showed in the video.
@@chungkingcansuckit6345 265k has other benefits if you need them. I would never build a PC just to play games with. Seems like a waste of money. I'm waiting for the 285K to come back.
Should I just buy the Core Ultra 7 with 2x16GB 7200 CL34 or 2x32GB 6400 CL32?
Man, I was really hoping for a win from Intel, but these Core 200 series chips are just sad. So they're asking for expensive RAM just to barely keep up, and motherboards that could be end-of-life as soon as you buy them. So sad. Nobody with common sense would buy these products. Intel needs to be taught a lesson: don't rush half-baked nonsense and expect us to buy it.
It's a tough sell for Intel fans ...
microcenter currently has 265k for 300 and additional 70 off when you buy any z890 mobo
That's an awesome deal!
I'm in the middle of acquiring the parts for a new build, based on the Core Ultra 7 265K. Memory is currently 2x16GB (32GB) of Corsair Vengeance at 6400 CL36, if I recall right; I may pick up another set of the same to bump it to 64GB. Two NVMe drives from Crucial and an older refurbished HGST 4TB mechanical drive for video asset storage, and I am looking at Intel for the graphics card. Not sure if B580 or higher.
Mostly researching, script writing, video editing and anticipating 4K editing using DaVinci Resolve 19.1.
So it'll be interesting to see how it performs when all is said and done. Case, 2 drives and memory arrive today or this week, the PSU on the 17th at the earliest, and a front I/O with SD card reader isn't due until mid-to-late Feb.
So far, I have the motherboard (Gigabyte Aero G), the main NVMe drive, a Peerless Assassin cooler tower, the contact frame for the CPU (a better-safe-than-sorry/just-in-case purchase that didn't cost much), an exhaust fan (red, rear) for the Fractal Focus G case, and the CPU.
Good luck with your build ... I will have a comprehensive step-by-step tuning and optimization guide out next week for you to use once you are done building and ready to extract max performance.
@@blackbirdpctech Thank you. I'm coming from a 7th gen i5 Dell SFF Optiplex, which was an unexpected replacement for a 4th gen of the same thing, and before that, I rocked a 1st gen i7 Dell Studio XPS and its replacement stemmed from a final straw of the hard drive failing, and most everything else on that PC, minus the PSU, memory and CPU had to be replaced due to failures over the decade I had that machine.
So this new build should be a HUGE step up from the past few years (2019).
@@johnhpalmer6098 that will be a massive increase in performance.
Very informative video - thanks!
Glad you liked it!
This content is why I watch your channel. Someday you will have a million subs and I can say I was watching from the beginning.
Really appreciate that, thanks for being a supporter from the beginning!
Excellent and well-detailed video. Wouldn't the 7900X also be a good option? From what I saw, it's basically a 9900X but at a lower price.
Unfortunately many day one reviewers didn't test the 9000 series CPUs well, so there was widespread misinformation that they don't offer any meaningful performance boost over the 7000 series, which is simply not true. The 7900X is a good CPU, but the 9900X is much better.
@@blackbirdpctech Thanks for the fast reply, and that's good to hear; the 9000 series is definitely on my radar as an upgrade. Speaking of Intel, do you know if the 14th-gen i5 also has the instability issues like the i9 and i7?
Yes, I think it was anything over 60W TDP, but that said the new firmware 0x12B was the final fix from Intel and was supposed to fix the voltage issue.
Hey, I'm looking to buy a whole new PC and I'm still in a bit of doubt about which CPU and GPU I should buy. I want to try out 4K gaming, but 1440p would be fine too, and my budget for the complete build would be around $3000 maximum, preferably less.
Seeing these performance numbers makes me think the 14700K would be better for gaming. But at the same time the 265K might get a bit better after the firmware updates. I don't know much about overclocking, though, and don't want to mess anything up.
TL;DR
I want to know what CPU & GPU combo to buy, preferably without much overclocking, for 1440p/4K gaming.
Excellent video as always !! Also I can't wait for the 9800X3D vs Arrow Lake video 😁
I'm waiting for the new firmware/bios to be released ... I want to give the 285K a fighting chance 😉
@@blackbirdpctech The 285K is the flagship; it should compete with the 9900X3D - if AMD can fix core parking with the 9900X3D.
@@allenzhang373 I know it's the flagship, but that won't help it compete with the 9800X3D ... the 9800X3D will beat it easily in gaming at the moment.
@23:00 - is the latency delta in the AIDA 64 benchmark between the 265 and other two parts humanly noticeable during normal Windows 11 operation and fast paced productivity tasks? Will a DDR5 6400 v. a DDR5 8400 RAM kit mitigate any perceived issues?
Yes it is noticeable ... Windows doesn't feel as responsive. That said the recent firmware changes that Intel has made do help and with some tuning and optimization you can mitigate it ... take a look at my 285K review that went live earlier today: th-cam.com/video/fNFL2bRIWLM/w-d-xo.html
Micro center has the 9900x for $360 and the 14700k for $315, seems like the 9900x would be the better pick?
Yes, it performs better and the AM5 platform has an upgrade path. If you add the RAM and CPU together you will rapidly close the gap.
@@blackbirdpctech Just picked up the 9900x from micro center, thanks!
Your system latency measurement was a big takeaway for me. I didn't realize the change to the chiplet architecture affected this so much. Everyone will feel that when you are simply navigating Windows, like we do with our systems for 75% of the time. For this kind of money, I'd expect big wins in productivity, par or better single thread gaming performance, and I want my system to feel zippy when I'm browsing around. I do think the memory improvement could yield dividends in the long run, if Intel doesn't go under before they can ramp up some of these clock speeds in this architecture. Gosh...I just can't believe this company could let their competitive advantage slip away. It's not like they didn't see this coming. What a massive blunder. My opinion? Intel's new architecture is far enough behind that they need to immediately ramp up the clock speeds by a full generation in order to close the performance gap and drop the prices of the 245k/265k/285k to sub $300, even if it means taking the hit. They can't compete on performance/price ratio like this. They're going to make zero revenue...why would any OEM system builder offer Intel as an option unless it's a niche 3D rendering workstation? It's a loss, man. New bios? You really think it'll close the gap? If so, it should have been the original released bios for the platform. What a blunder! They are getting cooked in these reviews. It's killing Intel's consumer credibility.
I don't disagree however based on recent direct feedback from Intel I do think the new firmware could provide a significant boost ... I don't think the cpu was fully baked at launch and if that's true, a firmware fix could really help. If it doesn't then the only option they have is to reduce the price.
@@blackbirdpctech The latency is inherent to the tile architecture and reduced clock speeds; firmware won't do much in that regard, sadly - it's just physics. There is simply more latency when you have to go through an interconnect layer before instructions hit RAM. That and the clock speed regressions are what produce the latency, and there's no way this can be fixed in a microcode update because, as I said, it's physics. It would be like expecting to send the sun a wifi signal to control global warming. There were people getting like 180ns latency when loading XMP on high-speed kits and that kind of thing, though; I'm assuming that sort of weird behaviour is what Intel will be targeting - probably some OS/BIOS/hardware combos not working as intended. This is the main reason the X3Ds do well in games, since a lot of stuff never needs to go to RAM. Obviously in some games the cache isn't large enough, but clearly AMD isn't even trying anymore (they continue to give us the same core counts and cache sizes). They do produce custom EPYC CPUs with HBM4 3D-stacked as L4 cache; if they did that on desktop they would create an insanely good gaming CPU - they just don't need to.
@@RampageGW2 I stated this in my video:
"... with the launch of Arrow Lake, Intel has moved away from monolithic designs, to chiplet based designs, and as a result, the system latency has increased significantly. This is something that you will definitely notice in Windows … "
So I'm not sure why you felt the need to explain why these new chips have higher latency. Also, the clock speed change has no meaningful impact on system latency.
That said, there are many ways for Intel to improve the performance ... gaming performance is not simply driven by latency. For example, Intel will likely optimize how the load is spread across each core and prioritize the p-cores. The reason that boosting the clocks on the e-cores had such a large impact on performance is that they are carrying too much of the load in games.
Hi, Really great video!
I need a suggestion. I'm building a PC for 4K multilayer video editing in After Effects/Premiere Pro and AI engineering (using software like PyTorch and TensorFlow).
I'm confused between these chips - Ryzen 9 7900X, Ryzen 9 9900X, i7 14700K - and I will pair them with either a 4070 Super or 4070 Ti Super. The 7900X fits my budget; for the 9900X I would have to increase my budget.
Please just tell me which chip I should go with ... I'm really confused!
Also, can I pair the 7900X/9900X with the Gigabyte X870 Eagle WiFi7 motherboard?
And one more question 😅
Can you tell me if I should wait for the 5070 to launch, or just build my PC now, in 10-15 days, with a 4070 Super/Ti Super?
I would appreciate your response 🙂
If you use a lot of Adobe products then I think Intel would be a better choice ... if you can get a 265K on sale then that might be a good option for you. No one knows when the 5070 will launch and be available so my recommendation is, if you need your new PC now then build it now.
@@blackbirdpctech ok thank you so much for your recommendation! ♥️
I watched so many scary videos blaming Intel for everything.
I wanted to get AMD.
but then I saw the crazy prices.
so I got the 12900.
That's actually a good solution: it avoids the 13/14th gen issues and skips the crazy pricing/availability problems.
Arrow wins in some productivity tasks, but they are rare. AMD even wins in Photoshop these days, and don't get me started about gaming.
I'm really looking forward to seeing just how much Intel can boost performance with new firmware ... it could actually become a great chip, hopefully the new Bios will be out soon.
@blackbirdpctech yeah if they have some ground to catch up, I'm all for it. We need healthy competition.
At least it overclocked well ... the stock performance is really disappointing ... it's because of lack of competition that the 9800X3D sold out so rapidly and is being scalped, so yes we really need Intel to step up.
@@blackbirdpctech Agreed, if it hadn't been competitive in productivity it would have been game over. I'd say the best thing they came up with was their iGPU; they *almost* were on par with AMD, and soon Battlemage will come. I'm crossing my fingers.
i picked up 265k from microcenter for $299 yesterday. and Gigabyte z890 ud wifi6e for $139 after $70 bundle discount. upgrading from i5-8600k, not a bad deal i guess.
A 265K at $299 is a great deal and definitely changes the value discussion.
I don't have a Microcenter near me. I wanted someone to ship me one. I ended up paying $90 more. :( What memory did you get? My current system has 64GB and I wanted more, but didn't want more than 2 sticks. I hear your system runs better with only 2 sticks. So I got 2x48GB Corsair Vengeance DDR5-5600. I wonder if I should have gone with 2x32GB DDR5-6800?
Yes, DDR5-6800 would be better because as shown in the video, Arrow Lake scales well with memory speed.
Biggest problem is that the numbers Intel published are way beyond what the real-world results are. Shot themselves in the foot, which seems to be Pat's favourite thing to do.
What I find really crazy is not that they lied about their performance, they all do that, but that even their initial performance projections were disappointing … so they set an expectation and then failed to even meet that. Hopefully they can recover some performance with their coming firmware fix.
@@blackbirdpctech Yes they all fudge the numbers but normally intel is fairly conservative when they do. This time they clearly were not because they were hoping people would just blindly purchase based on the power savings changes
Microcenter dropped the 265k price to $299. Does that change your opinion at all?
Really? Wow...
Yes, that definitely changes the value discussion. At $299.99, and assuming that the other chips remain at the same price, the 265K will offer 38.4% better value in 1% lows and 5% in average performance. The only caution I would add is that you should add the cost of the RAM into the value discussion as well, something I should have done in the video. The reason I say this is that to extract max performance from a 265K you need to use high-speed RAM, which costs considerably more than a DDR5-6400 EXPO kit or DDR5-7200 XMP kit. So keep that in mind, but this really is a great price point for the 265K.
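For anyone who wants to redo this math at other prices, the perf-per-dollar comparison is simple to reproduce. Here is a minimal sketch; the FPS and price figures below are placeholder assumptions for illustration, not the video's benchmark data:

```python
# Minimal perf-per-dollar sketch. All numbers in the example call are
# placeholders, not the benchmark results from the video.
def value_gain_pct(perf_a: float, price_a: float,
                   perf_b: float, price_b: float) -> float:
    """Percent better performance-per-dollar of CPU A over CPU B."""
    return ((perf_a / price_a) / (perf_b / price_b) - 1) * 100

# Example: identical 1% lows, CPU A at $300 vs CPU B at $400.
print(round(value_gain_pct(100, 300, 100, 400), 2))  # 33.33
```

Note that with equal performance, the value gain is just the price ratio minus one, which is why a price drop alone can swing the recommendation.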
I hope Best Buy or Amazon follows! There is no Micro Center near me. What an awesome production chip for us non-gamers!
@@blackbirdpctech Well, an even better deal was a free Z890M MB with a 265K purchase 😭
Some day-1 reviewers used ordinary DDR5, keeping the CUDIMMs for a follow-up. The problems seen involved Windows versions and BIOS, with Intel doing relatively strongly in synthetic benchmarks but often poorly in application tests.
So no injustice there; the other price cuts and performance improvements, plus the one-shot platform, make Core Ultra S a poor choice.
I must have missed that … I saw none of them mention that DDR5-8000 would work on a 4-DIMM motherboard … that's why I was so shocked that it worked.
@@blackbirdpctech but what do Intel support? I think you'll find a different story. Good results with review silicon is a dangerous path to tread.
I agree that it's currently a poor choice, as I stated in the video. My chip was a standard retail CPU, not a review sample, and the OC was mild, so I expect that it will work on most 265K's. It was impressive to see how much extra performance I could get out of it ... let's see what the new bios brings.
@@blackbirdpctech but if Intel were supplying you and know you'll overclock they can send you a golden sample. Even retail, I have seen past evidence suggesting early units tend to be higher quality.
It's a basic problem when going out of spec, and Intel did refuse some RMAs, in one case claiming a ludicrously low DDR5 speed was all that was supported.
@@RobBCactive I purchased my 265K from Amazon ... I currently purchase all of the CPUs I test from either Amazon, Newegg or in-person at a Microcenter.
People really underestimated Hyper-Threading. Now it's come back to bite Intel with the 'issues'.
I believe Intel knew about it but they preferred to suck intentionally.
I think the issues really stem from the new architecture that is a chiplet based design versus what Intel were used to on monolithic designs.
Not really. Hyper-Threading never gave huge boosts save for some very specific apps and workloads; in some scenarios it even cost you. And with the security leaks it caused, which required mitigations, a good deal of that extra performance was lost.
@@blackbirdpctech Likely. Also they use a different thing than AMD, called Universal Chiplet Interconnect Express, for which Intel alone was not responsible, so other companies had their parts put in as well as Intel. I never looked into it really, but if the design is not optimised for CPUs specifically (and includes NPU, TPU, GPU, DPU, etc.) that potentially costs them in performance.
This is in addition to using TSMC, which may or may not have needed concessions in the design in some ways compared to if Intel had been ready to make that node themselves. Supposedly the design is 99% not dependent upon a specific fab or node, but that number seems awfully optimistic.
The tiles also add latency, it's just what it is. AMD has the CCD to the IO die, maybe a 2nd CCD (on desktop anyway), whereas Intel has more tiles from what I saw: GPU tile, SOC tile, IO tile, L3 stuck to the ringbus. It all adds up. On the upside, this probably means memory latency will be felt less. We are talking nanoseconds not microseconds, but it does cost performance.
Now that there's an Intel/AMD partnership in the x86 advisory group, I think we will see AMD and Intel agree to ditch SMT/Hyperthreading, and probably E/P cores as well on the intel side. Both technologies add a lot of cost and complexity for little to no performance benefit
Where's the 5800x3d/5700x3d?
Relative to the 265K?
yes
That's something I didn't consider but I can include in a future video.
High-end from two gens ago is always worth testing vs today's mid-range.
I have a 17-year-old PC and wish to upgrade to a new PC that will last another 17 years. The games I play are mostly CPU heavy, like X4. I currently have an i7-6800K.
Edit: I was thinking of the year 2017 when I bought it, so it's 7 years old.
Wow, 17 years ... that's impressive longevity ... what GPU does it have?
@@blackbirdpctech I was thinking of the year I bought it, 2017, sorry. I have an RX 550.
@@Josivis That is still impressive longevity
@@blackbirdpctech what would you suggest? I wish to also play Stalker2 when Modders find an optimisation mod.
@@Josivis that really depends on your budget and primary objective, which based on what you said above appears to be gaming.
a bit much for me. and ya i'm very surprised ram never had its own clock built in. oof.
I still don't know what's wrong with my new 265K build. Huge amount of skipped frames when trying to play YouTube videos from the iGPU, even at resolutions much lower than 4K. Win 11 Pro 24H2. MS Edge or Chrome, can't tell which one is worse. Crazily, disabling acceleration in the browser settings results in fewer frame drops, about half. It's that bad!! Btw when using the discrete GPU output on the same system, no issues whatsoever.
Has anyone noticed a similar behavior?
What make/model is your dGPU?
@@blackbirdpctech It's an ROG Strix 3090 OC. FYI I just installed the latest Intel Graphics Driver 32.0.101.6314, and the issue persists. Just terrible experiences with this new Intel system... while I'm loving my new Mini M4 Pro that just works out of the (tiny) box!!
I don't really understand the issue ... why wouldn't you simply use your system with the dGPU driving your tasks?
@@blackbirdpctech For example, I want to maximize the VRAM and the other dGPU resources available for LM Studio and other AI tasks. The 3090 is going to be replaced by a 5090, that's why I built this system. I don't game. Regardless, if this new $500 SOC cannot handle YouTube video playback, thanks but no thanks!! Even the OLED TV that this system is connected to can play 4K HDR videos without any frame drops. None. Any system worth more than $40 should be able to do that nowadays. However, before returning the system, I really would like to know whether the issue is common, or related to something wrong in my system.
Guys, does anyone know how to undervolt an Intel Core Ultra 7 155H processor, other than through the BIOS?
You can do it through Intel XTU as explained in the video.
A lot of it was the Windows scheduler and more not being ready to make it run as well as it's capable of. Level1Techs has a great video on it; he explains why Linux exploits its gains better, for now. Intel/MSFT are working to rectify this. I will be waiting for Panther Lake, as those will show big gains. Only 6-7 months away!
A lot of it was microcode based ... Intel needed more time to get the firmware and software right ... I will cover it in detail in my 285K video that will be released later this week.
I'm building an arrow lake system tomorrow
What CPU are you building with?
@@blackbirdpctech 245K I don't have any need for more cores and the only game I play is Unreal Tournament.
I have been running a 12600K golden sample for almost three years that has a better memory controller than the 14600K I bought recently. The 12600K will run 7600 stable; the 14600K won't even boot at that setting. Same board and same memory. It was a Gigabyte Z790 Elite X, but it went tits up after a month, so I returned it and I'm using an ASRock Z690 Taichi, and even though it has an 8-layer shielded board I can only get 7000 at higher timings (well, it's a Z690, so OK).
I have had a lot of MBs and this Taichi is the best board I have had once I got used to the BIOS layout.
I don't want to lay out the money for a new Taichi for the 245K, so I bought a MAG Z890 Tomahawk WiFi. I've been playing UT for 23 years; I'm 73 years old.
BTW: I subbed your channel, that information you mentioned about the TREFI setting lowered my latency by 10ns! That was jaw dropping to me. I have never heard of that before...
23 years of UT is dedication! One thing I would focus on for your 245K is the e-cores ... these chips seem to rely on them much more, so boosting the frequency by 100 MHz can make a big difference. And yes, you can tweak a lot of different sub-timings but this is the only one that I've found to have a significant impact on all CPUs without impacting stability. Appreciate your sub and welcome to the Blackbird PC Tech community!
@@blackbirdpctech E-cores can be bumped to 5 GHz according to Intel. th-cam.com/video/P2OHRH7221w/w-d-xo.html
I think that depends heavily on silicon quality ... I certainly wasn't that lucky with my CPU.
I am new to PC building and I recently bought one that I mainly use for gaming. It has an Intel Core Ultra 7 265KF, a PNY GeForce RTX 4070 Super 12GB GPU, 32GB of DDR5-6400, and an MSI Pro Z890-S. I've mainly been playing Rust and can get around 140-160 fps on high or medium settings, but when looking at certain things it drops to 20-40 fps. I have most of the in-game quality settings turned down but it still drops. I can get 100-200 fps in less intense games, but I was just wondering if the PC just isn't built for more graphically intensive games, or if it's something else.
I'm still on an i9-9900K with an MSI GTX 1060 Gaming X 6GB and a Gigabyte Aorus Master Z390. I want to skip the 1440p middle man and go straight to 4K, but 4K QD-OLED 240Hz DP 2.1a monitors are so expensive, and I need a whole new build.
You don't need DP2.1 ... you can use DSC. Unfortunately the only cables that are certified DP 2.1 are 3ft cables at the moment ... 6ft cables should be here by the end of Q2.
Of course, the Intel 265K is not bad at all; this CPU is even the best value within the Ultra 200 series. The problem is the new platform, which is only available on top-tier mobos, while Ryzen fits perfectly even in the common B650M.
And its sibling, the 14700K, at its current tempting price will also keep buyers from stepping up to the new 265K.
Agreed, the platform upgrade cost is high, especially if you include CUDIMM RAM, and the prior generation is better value ... not a great position to put yourself into.
Micro Center has the i7 265K marked at $299.00, in store only. I got 2 for $529.00. One rang up for $229.00; I went back 4 hours later and they fixed the register for the extra $70.00, but not the price of $299.00. It should be $399.00 like everywhere else.
That’s a great price for this cpu … hopefully the new bios will boost performance even more.
@@blackbirdpctech I think they have it marked down an extra $100.00 by accident.
Excellent video! I wonder why in MW3 the Intel 1% lows are much lower for both CPUs compared to AMD. 🤔
This came up during my 9950X vs 14900K testing and here was my explanation from that video:
If we take a look at Call of Duty: Modern Warfare III at 1080P minimum settings we see a significant performance difference between the two CPUs, with the 9950X generating 37% higher average FPS and a whopping 73% higher 1% lows. To understand why this might be happening we need to take a deeper look at the benchmark results. For the 14900K you can clearly see that the CPU is the bottleneck 99% of the time, which means that the processor performance isn't enough to keep up with the game at these settings. As I explained earlier, this is precisely what we want when testing the relative performance of CPUs. However, when we look at the in-game benchmark data for the 9950X, we see that the CPU is not fully loaded, with the GPU being the bottleneck 25% of the time. This means that the processor performance is so good that even at 1080p/minimum settings, the 9950X is able to keep up with the game 25% of the time, thereby shifting the load to the GPU, which in turn results in a significant FPS boost. I tried lowering the render resolution even further but I was unable to fully load the CPU, it's just that damn good.
This issue is coming up during my 9800X3D vs 7800X3D testing ... the 9800X3D is so good that it places the load back on to the GPU ... even a 1% shift in load back to the GPU will boost your average FPS, so it can make a weaker CPU look better than it really is relative to the stronger CPU. The way to really see which one is "better" is to look at the 1% lows.
@blackbirdpctech Thank you so much for this thoughtful and comprehensive explanation.
I want to build my pc using Intel cpu and I don’t care about the price but I’d want a good one for heavy games and to do like streaming what would you recommend?
I would recommend the 285K ... my review on it, along with a comprehensive tuning guide, will be released tomorrow.
It’s not a bad processor it’s the cost of the brand new platform that is behind the competition
It’s not a bad processor, it’s just bad value at the current price … hopefully the new firmware/bios updates will improve performance.
it is bad, because it's new and slower.
It significantly improved on 1% lows in games and power efficiency, so no, it's not a bad CPU ... it is however priced poorly with respect to the competition.
Plus a new motherboard for each next gen Intel CPU release. The efficiency is a step in the right direction but the CPU isn't there yet in terms of value and performance. Maybe next gen.
@@JohnWalsh2019 I really don't like the push to change socket frequently ... that's something they should consider changing in their design philosophy.
Do you use the baseline profile for the 14700K or not? If not, the chip is destroying itself.
I assume that you are referring to the Intel Default Settings, and yes I used the Performance power profile. That said, your chip will NOT "destroy itself" if you don't use it, especially if you are running the latest 0x12B microcode ... I covered this extensively in prior videos.
I'll be upgrading my i7-2600K to this CPU; about time I upgrade, built it in 2012.
That will be a significant boost in performance.
Great video, but unfortunately overclocking is not guaranteed, so buying the 265k is still very risky. It's much safer to go with the 9900x.
As stated in the video, the 9900X was tuned as well ... it would be unfair to compare an overclocked CPU to one at stock conditions.
Wondering if you could dive deep into Arrow Lake's Alchemist iGPU with 4 Xe cores. I heard the iGPU's gaming performance sits between a GTX 1050 and a 1050 Ti, and slightly better than the Ryzen 7 5700G's iGPU.
The challenge for any content creator is time and deciding what to focus on. This topic would be interesting to investigate however I don't think it would be popular ... especially with all of the new products on the horizon. That said I will add it to my list, but it might be a while before I can get to it.
Is memory latency still an issue on all Arrow Lake CPUs?
The latency for a chiplet or tile based design will always be worse when compared with a monolithic design. That said, it's not anywhere near as bad as some of the initial reviews claimed and I expect the new firmware to improve it further.
@@blackbirdpctech Fingers cross on the update. Any chance for a revisit on this CPU Post Dec Update?
I plan to include it in future videos, so when the update is released I will be sure to use it.
The 24GB RAM sticks might be the issue with the latency. They might be slower than 16GB kits at the same MT/s.
It's not the RAM, it's the chiplet based design ... Intel moved away from a monolithic design for the first time with the Core Ultra series, so now the speed with which they pass information between the chiplets has a large impact on latency.
@@blackbirdpctech Have you tried a dual 16GB kit at 8200 MT/s? Latency should be lower.
I will do that comparison for the 285K … it was easier to get the 48GB kits stable on Raptor Lake, but it might not matter on Arrow Lake.
It's 10% slower than the 14700K?!? Why wouldn't I just buy a 14700K for a cheaper price point and better performance?
Agreed ... it's tough to look past the 14700k if you are an Intel fan ... better performance at a lower price.
Why not just get the 9900x which is faster than both, and no need for overclocking, and very good power consumption.
@@abaj006 as stated in the video, the 9900X is tuned.
the 265k is 40 dollars cheaper than the 14700k at my local microcenter
@@DavidWilliams-ic1nn that is a great price for the 265K.
Thank you good sir!
You are welcome!
Is the current deal at MC, where we can get a combo of the 265K and a Z890 mobo for around $550, a smart choice?
Absolutely ... I think they have the 265K at $299, which significantly improves its value.
@@blackbirdpctech Yes, so they are giving $70 extra off, so I am getting the 265K + ASRock Z890 SL + 32GB 6000 CL30 for $590. Meanwhile at MC I can also get the 7950X3D + 670E MSI MAG + 32GB 6000 CL36 for around $730.
Could you help me choose which one I should go with? My primary usage will be gaming + PCVR + some other lightweight stuff.
I'm not super familiar with VR requirements but for gaming you will definitely see more performance with the 7950X3D. To really take advantage of the 265K you need to pair it with fast DDR5 RAM. In addition, there will be one more generation after the 9000 series on AM5, so that will give you a growth path ... not sure if socket 1851 will support another generation.
@@blackbirdpctech Yes, makes sense totally. Tough choices haha.
How did you run EXPO on a 6400 kit? The highest EXPO kit I've seen is 6000. Or was this an XMP kit? Thanks.
There are EXPO kits at 6400 and 8000 … you can click on the links in the video description to see the G.Skill EXPO kits offered at Amazon.
Hi, I am from another region, and I'm thinking of building a PC for both productivity and gaming on a tight budget of $1.2k-1.5k. I'm confused on the CPU choice, 13600K or 7900, paired with a 4070S ... or let's say any CPU under $300. Can you give some advice?
I'm also unsure about 13/14th gen, and the prices of the latest 200/9000 series CPUs from Intel and AMD are high here in my region.
The 7900 is a good chip.
What about 13600k …? Does it have any issues?
@@blackbirdpctech yes, and it also has a lower TDP of 65W.
At the 13600K's price the 12700K is also available.
Well, I'm considering upgrading from a 5800X3D to something with more multicore power, paired with an RTX 4090 playing at 1440p. I am already running a 420mm AIO so heat isn't an issue. I am always on the latest Windows version, wouldn't disable CCDs or e-cores, and expect a smooth all-around experience and gaming performance (my focus is 0.1% lows). I am considering the 14700K, 14900K ($150 diff), or 265K ($50 diff), since your testing showed that e-cores had fewer performance downsides compared to the dual-CCD design. I am most likely not going to benefit from the IO upgrade since I run an Optane P1600X for my system drive. My preferred motherboards will cost around the same. I would be able to wait another generation or two if you expect noticeable improvements for my use case. I am glad that you mentioned improved support for 8000MT+ non-CUDIMMs; I was asking that question myself since day one and nobody talked about it before.
If your focus is 0.1% lows, the best CPU is the AMD 9800X3D.
At this point I would recommend waiting to see what the new firmware/bios update coming from Intel does to performance before buying a 265K or 285K. If it boosts performance in a meaningful way, as Intel suggests it will, then these chips might make for a good upgrade.
Money no issue what is the best setup for single cpu thread application?
Take a look at my 9950X vs 14900K video:
th-cam.com/video/H7L_SiIN2KY/w-d-xo.html
It shows the 9950X beating the 14900K in Cinebench Single Core.
The Intel 285K feels better with low-latency DDR5; CL32 7000 kits are great, much better in most scenarios than CL38 8000.
You need to look at three parameters, not just the CAS latency, to understand what impact memory will have. In addition to bandwidth, the first word latency and RAM latency are the most important parameters that will directly impact performance. So a kit of DDR5-7200 CL34 has a first word latency of 23.3ns and RAM latency of 9.4ns, whereas a kit of DDR5-8000 CL38 has a first word latency of 21ns and RAM latency of 9ns.
That's what you get if you plan to sell quad cores for the next century: you fall asleep and then you wake up with a thundering headache called AMD.
Don't forget, they also appointed two CEOs ... because all companies run better with two people in charge 😉
More seriously, we really need them to step up and be competitive or prices will inevitably go up. At a minimum they should drop their prices for the Core Ultra series.
@@blackbirdpctech That's the Intel way I guess, why would you do something with 1 if you can use 2.
Indeed, you can already see the AMD prices go up.
I'm glad I already have mine, bought it right away.
The 9950X is really awesome.
I'm quite sure there will be further improvements with bios and windows updates.
I agree, that’s why I decided to wait to test the 285K … I think it will be much more valuable to show performance after the bios update scheduled for early Dec.
should one be looking at the CPU DLVRin Vcore?
For what objective?
@@blackbirdpctech Because when I adjust the clocks, the cores, and also the ring of the CPU, the CPU DLVRin Vcore/VCCin tends to run about 1.5V or so. I'm just wondering if I should try to adjust the voltage pertaining to that because I don't want the chip to die on me. And I haven't been able to find anything on it.
That's interesting ... I will have to do some research on it ... thanks for bringing this up.
Anyone have information on CAMM2 memory? In reviews from the last tech show, MSI teased a MB with this type of memory, not sure of the socket. It was a Project Zero board with hidden connectors, however.
I expect to see and hear more about it during CES in January ... it was a big deal a few months ago ... not sure what happened.
@ Thanks! Enjoy the weekend!
Since you seem so wealthy, can you go back in time and bench an Intel A770 16GB LE with a Z690 board and an i7-13700K with 128GB DDR4? Nobody wants to review Intel's Arc A770 LE 16GB on a REAL processor or realistic rig, with a REAL amount of RAM, so we've never seen Deep Link tech at work, and we've never seen the A770 using its max potential. Many people would like to see a review like this, I promise you. (The upcoming B780 would be nice to see reviewed on a proper system too.)
I tested an A770 on a Z690 with a 14700K with 96GB of RAM ... you can watch the video here: th-cam.com/video/3K_kqBsUuOY/w-d-xo.html
One very disappointing thing I heard directly from Intel during CES was that they do not plan to release a B770/780 ... not good news because I was looking forward to testing one.
Ah poor, poor Intel. $16.6 billion lost in '24.
They had a rough year but we need strong competition, so hopefully they can recover
Poor?
They are a multi billion dollar company.
They will be fine.
@@drewnewby the problems at Intel arose after Pat was fired … and I don't see how having one x86 CPU maker is healthy for consumers.
@@drewnewby Pat was fired by Intel and then hired back after the two joint-CEO morons screwed things up … it's probably a good idea to look into how Intel got where they are. And sure, if someone buys Intel then I agree, but that doesn't change the fact that we need multiple strong competitors, and at the moment that's Intel and AMD.
New Intel CPUs require a new MB and a new RAM type to perform their best. A 12th gen CPU may also need a new MB, so that's cheaper if you're upgrading from an older CPU, and upgrading from 12th to 13/14th gen is cheaper since a 12th gen MB will support 13/14th gen.
But mainly, the Ultra CPU range is all brand-new architecture and will have these teething troubles for a while. For gaming I would hold off, unless you really need to replace that Celeron dual core, and just go safe with 12th gen - if you can find the MB and CPU before the stock runs out, of course. Or go AMD, if the hotter-running CPUs and GPUs from them don't bother you.
I was with you right up until you said that AMD CPUs run hotter … that’s simply not true.
While the Core Ultra 265K and Ryzen 9900X might seem like a good comparison, if the goal is gaming they are mismatched. The Ryzen 9900X should compete with Intel's Core i5, since the P-core count and per-CCD core count of 6 match there. The Core Ultra 265K still has 8 performance cores, so in gaming it is competing against a 6-core CCD.
There are some games that benefit from the extra cores on CCD1 being enabled, as shown in my 9950X review, so I do think that the 9900X is the correct direct competitor based on actual performance and price. One thing to be aware of with the 265K is that the efficiency cores play an important role in gaming, turn them off and your fps drops significantly.
I'm more than happy w/ my 14700K + 4090 gaming rig. Considering the current climate of games, there is no reason for me to upgrade for a while. I have a large library of great games that I still have not made the time to play so I'm good for at least two more gen of cpu & gpu cycles.
I like the 14700k, with the latest bios it should age well.
They should make it cheaper. Do what AMD did when they were worse in games but better in some multi-core stuff. So $100 below the 9700X would make it good.
I agree ... if you can't compete on performance then at least compete on price.
Intel's Core lineup might and should have been much better, but they chose to make it this way. Hyper-Threading was retired for no reason. When they said they would not use Hyper-Threading, I thought they had found a way to achieve that performance without it, which would have been a good idea. But they just tried everything to make the CPUs worse than AMD's. Still, memory bandwidth performance is much better than AMD's, which means that for all memory-bandwidth-limited scenarios Intel is still the better choice.
It's better to use the 13700K than the Core Ultra 7 265K.
I had nearly a dozen customers whose 13/14th gen i7s and i9s died within months ... at my tiny level that's HUGE. Nobody should buy 13/14th gen; those chips are faulty by design and I don't believe any amount of microcode will save them.
I feel for you because that must have been a nightmare to deal with. That said, my experience with 13th/14th gen CPUs was very different ... over the 2 years I used them extensively I never had an issue with any of them, and I pushed them pretty hard in bios. I'm reasonably confident that Intel has fixed the issue with their final microcode update ... they took way too long to address this issue properly and push out a solution, but I do think they finally got there.
My friend built a new PC with the Intel Core Ultra 9 285K for me; I know my brother and my old friend prefer Intel PCs.
But maybe a new AMD release will be the most powerful for drawing programs or 3D programs in the future (today AMD is still the most powerful for gaming, you know).
That's why I take Intel for drawing programs and 3D programs only today.
I'm a developer and very happy with the Ryzen 9900X.
That's great to hear ... I've been getting a few questions related to professional workloads for these chips, so getting feedback from a developer is helpful.
Music?
Are you asking what the background music is?
@blackbirdpctech yeah, could you please share the name of the track? It is very pleasant to listen to.
@@CombinE54 there are two tracks in the video, one is called "Upbeat Tech Rock" and one is "Inspiring Technology Loop"
@@blackbirdpctech Thank you for replying. May I please ask you to share the authors' names? I can't seem to find the songs.
Hey sorry, I purchased a license for these, so I will need to track down the file to find the author names.
I got a 30-plus FPS increase in Stalker 2 with this CPU....
So.....
Intel XTU unsupported...
Compared to what?
The 265K is supported, just make sure you download the latest version.
Microcenter selling 265K for $299
Yes, someone else in the comments mentioned that deal and it's the kind of price reduction that makes the 265K a good buy.
1 P-core = 4 E-cores, which Intel put in to keep up with AMD and get beyond them.
E-cores are like 4 donkeys pulling a heavy load: slower, but they do the heavy rendering work, compared to 1 P-core horse.
All this is what I think I absorbed, and it won't help me in any way as I don't have this series and won't ... it's been a gamble/con job ever since the 9 series, when I bought the i7-9700K.
They dangle a carrot 🥕 tied-to-a-string effect ... it's weird how the older CPUs still do better, or nearly so.
The donkey comment was funny 😂
If it were a mobile CPU, yes, it would be great for the power/performance ratio.
Desktop? Who the hell needs this thing? 😅
Personally, I think a good amount of your performance improvement when running the higher-capacity RAM comes from the lower clock-cycle latency of the higher-MHz kits. You had 6400 at CL32, but then you had 7200 at CL34, which is just going to be faster by default, and then 8200 at CL38. So generally speaking you're running at 5 ns, 4.72 ns, and 4.63 ns; basically it's just faster, hence the general 6 and 8 percent improvements, since they are roughly 6 and 8 percent faster.
I don't really understand your point or your numbers. If you look at the DDR5-7200 CL34 kit for example, it has a first word latency of 23.3ns and a RAM latency of 9.4ns ... where do you come up with 4.72? Regardless, Intel systems tend to scale linearly with RAM speed, as mentioned in the video, so the benchmark results should come as no surprise ... the only challenge is running higher speed RAM stable.
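For readers trying to follow the two sets of numbers, the discrepancy likely comes down to which clock you divide the CAS latency by: DDR memory transfers twice per clock, so the convention is to divide CL by the real clock (half the MT/s figure), which gives values near the ~9.4 ns quoted above, while dividing by the raw transfer rate gives the ~4.72 ns figures in the comment. A minimal sketch of both calculations, using the three kits discussed:

```python
# CAS latency in nanoseconds for the three kits discussed above.
# DDR ("double data rate") transfers twice per clock, so the real
# clock frequency is half the MT/s figure.
kits = [
    ("DDR5-6400 CL32", 6400, 32),
    ("DDR5-7200 CL34", 7200, 34),
    ("DDR5-8200 CL38", 8200, 38),
]

for name, mts, cl in kits:
    clock_mhz = mts / 2                 # real clock in MHz
    cas_ns = cl / clock_mhz * 1000      # conventional CAS latency
    naive_ns = cl / mts * 1000          # dividing by the transfer rate instead
    print(f"{name}: {cas_ns:.2f} ns (CAS), {naive_ns:.2f} ns vs. MT/s")
```

For DDR5-7200 CL34 this yields 9.44 ns by the usual convention and 4.72 ns by the transfer-rate division, so both commenters' numbers describe the same kits, just measured against different clocks.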
OMG, to use an Intel CPU you need to be an ex rocket scientist 😅
That’s pretty funny 😉
This CPU has really irregular performance. Intel betrayed me with this Core 200 series; I'm waiting for the Core 300 series.
I'm still hopeful that the new firmware will boost performance but yes, it is currently very disappointing.
@@blackbirdpctech It can give 30% more performance when tuned, like frameChaser showed, but this platform is too difficult to tune. All the previous ones were easy, and tuning doesn't fix the irregular performance either.
I would be careful quoting my friend Jufes on this one ... it's not more difficult to tune than a 14700K, but if you are selling a PC tuning course then sure, it makes sense to push the narrative that it is 😉
@@blackbirdpctech I just don't want to buy the course, but I don't dare buy the platform because of the poor performance. I think I'll wait for the 300 series. The course costs 500 dollars. I have the 10700K and it is very, very easy to tune; I'm not an expert, but the platform is easy. I wanted to buy the 200 series, and then this tremendous betrayal on the part of Intel occurred.
Nice video.
Thanks, glad you liked it!
😂🤣
Your video and the things coming out of your mouth aren't matching, buddy.
The 265 didn't do well even with that super high-spec DDR5.
Don't sugarcoat things, it's bad. Case closed!
Did you watch the video before commenting? This is what I actually said in the video:
"... then it becomes clear that the 265K does not offer good value against any of its direct competitors. While it’s great to see Intel make improvements in efficiency, they clearly priced this CPU too high relative to the competition, which even includes their own prior generation chip. If you are looking for a great all around chip, then I would highly recommend the 9900X, it offers excellent performance at an equally great price. If however you are an Intel fan, then the 14700K is currently a much better option."
Do better buddy 😉
My bad.
This is sad: an Intel fanboy trying his best to make this mediocre CPU look good, when all the top people that do this stuff agree it's crap.
The same reviewers said the 9700X is crap too, but it’s not. Try taking a look at my content before commenting, or perhaps even watch the video you commented on, you might learn something or see that I recommended the 9900X over the 265K.
I was responding to the wrong video.
@@york5893 Sure bro 😂
Watch the video first, save yourself the embarrassment.