@@ContradictionDesign Hi! Thank you very much for such a detailed answer! I'm working in UE5, ZBrush, Blender, Substance Painter. My laptop looks really sad when I'm working in UE5 :) Yeah, the 7900X looks good. What do you think about the AMD Ryzen 9 7950X, I mean for 3D editors, with the 4070 Ti Super? Or will this processor (7950X) be overkill for this card? Thank you!
@@SaramLex with creative apps, the CPU and GPU don't really have to match like they can with gaming. More GPU means faster renders and more CPU cores works on other parts of the software. So you can't really balance it per se. But I have the 7950x and rtx 4090. It works great. My CPU runs everything at 5.1 GHz or higher, all core. It's fantastic. The 4070 ti super with a 7900x or 7950x would be a fantastic machine as well.
Thank you for this very informative video. I am thinking about this card for Blender (Cycles) for motion design and product renderings. My current card is an RTX 2080 Super, and sometimes I run out of VRAM. Do you suggest buying this card now (2024 Q4), right before the 5000 series? Thanks
Hey! The 4070 Ti Super is a great GPU. If 16 GB of VRAM is enough and you can handle the price, then it sounds like a good fit. I don't know what the new GPUs will be like. They sound like they may have some decent speed improvements, but I don't think they will be significantly faster at each price level anyway. I would also say that except for holiday deals, there probably won't be many price cuts on current hardware for a long time. So without the ability to see the specs for sure, I would say get what fits your current needs and don't regret it. Tech moves too fast to stay on top of the ladder for more than a few months anyway.
I am very curious how 2x 4060 Ti 16GB, or possibly 2x 4070 Ti Super 16GB, would cope with the test, because I am thinking about building such a computer this year. For now I would buy one card, then buy the second one later. I am thinking about the Asus ProArt models, because they are compact and fit well on motherboards and in cases.
@@MrozuProjekt2 If you use multiple GPUs in Blender, the speed-up is not linear with each new GPU added. So 2 cards will be about 1.6x as fast as one, for example. To get the most out of the cards, you can run separate instances of Blender for each GPU, and give each one a separate frame to render at a time. This works just fine, but you need a higher-end CPU and lots of system RAM, since you would be loading the scene multiple times.
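The per-frame split I described can be sketched roughly like this (a hypothetical helper, not something from the video; the `-b`/`-E`/`-s`/`-e`/`-j`/`-a` flags are Blender's standard CLI options, and pinning each instance to one card with `CUDA_VISIBLE_DEVICES` is my assumption for CUDA/OptiX setups):

```python
def per_gpu_render_commands(blend_file, num_gpus, start=1, end=250):
    """Build one background-render command per GPU. Each instance is pinned
    to a single card via CUDA_VISIBLE_DEVICES and renders every Nth frame,
    so the GPUs never fight over the same frame."""
    jobs = []
    for gpu in range(num_gpus):
        env = {"CUDA_VISIBLE_DEVICES": str(gpu)}  # this instance sees only one card
        cmd = [
            "blender", "-b", blend_file, "-E", "CYCLES",
            "-s", str(start + gpu),  # first frame owned by this GPU
            "-e", str(end),
            "-j", str(num_gpus),     # frame step: skip the other GPUs' frames
            "-a",                    # render the animation with the range above
        ]
        jobs.append((env, cmd))
    return jobs
```

Each `(env, cmd)` pair would then be launched with something like `subprocess.Popen(cmd, env={**os.environ, **env})`. Every instance loads the whole scene, which is where the extra system RAM goes.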
What are your thoughts on the 4070 Super compared to the 4070 Ti Super? I'm currently using a second-hand 3060 and thinking about getting an upgrade, but in my country the 4070 Super is around $650 while the 4070 Ti Super is around $830. Do you think the 4070 Ti Super is worth the extra $180?
@@DemonZVirus I would say for the small speedup in Blender, the upgrade is not worth it. But if you plan on heavy Blender scenes, the 16 GB VRAM is pretty nice. So it will come down to that mostly, and then you would get a small speed boost on top of that.
I have only been paid for rendering a couple of times. I am eventually going to push harder to get clients. For now, I just use the farm for myself and by request
Yeah it will be cheaper next year. We'll see what happens with smaller cards this fall and winter. The 5090 should come out this year and then the lower models after that should push the 4070 ti super down
Hi, my CPU is a Core i5-13400. Is it OK to install a 4070 Ti Super in my system? Will the graphics card give me its best performance with this CPU? Please answer, it is very important. Thank you
They will work great together. Which part of your PC is the bottleneck will depend on your use, but the 4070 Ti Super will run at its max speed with the 13400 in every GPU-bound workload. So it should be just fine!
I think the performance difference between the 4070 ti super and the 4080 super is probably similar to the price difference for 3D rendering. And they are both very fast GPUs. With both having 16 GB VRAM, you could easily go either way and have a good experience.
Great info! Thanks! My 4070 models are mostly lower-end, so I'm not surprised the clock speed catches up slightly. Also, when I record these, I am sure my recording resource usage makes the GPU slightly underperform as well. How does your 3080 12 GB compare to a 3080 Ti in my data?
@@ContradictionDesign I'm getting a little better results than the 3080 Ti, mostly near the 3090. Btw I just rendered the Lone Monk and it took 173 secs. This model comes with a slight OC, and I added an extra 75 MHz with the Nvidia app. I'll probably keep running more of these tests and report back to you in this "thread"😅 since it can't hurt. Let me know if you need some screenshots or specs, but the CPU is a slightly undervolted (so slightly overclocked) 5950X.
@@Drumaier Well that is a really good result from a 3080! Yes feel free to share whatever you would like to. I think I will be recording some server CPU results tonight. You can compare some CPU benchmarks to those for fun too
Hey thanks! Yes most modern GPUs and CPUs will work great together. As far as being optimized, that combination would be fantastic for 3d work or gaming.
I think I know where the difference came from in the Scanlands (aka St Michael) scene, 32s for you on a 4070 Super vs mine on a 4070 Super: probably the difference in RAM speed. Unfortunately I have 1333 MHz DDR3 :( It would make sense to install 2133 or 2400. I fired up Lone Monk, and it would not budge in Blender 4.2. And now the surprise: 102:43s on the 4070 Super for Lone Monk (it stalled in the shading stage, Blender stopped :D hyhy :D
@@ContradictionDesign I have a Dell Precision T7610, 2x Xeon E5-2687W v2, 80 GB DDR3 1333 MHz ;( I am wondering whether going from 1333 to 2133 or 2400 DDR3 would change anything.
@@merkhava I think it would make a minor difference in computation, but probably not a lot. RAM speed is not always a huge factor in computation. I would make sure you have all of the DIMM slots populated, so you run in quad channel instead of dual, if that Dell has quad-channel memory.
@@merkhava Yeah, my T3600 machines should run at PCIe 3.0 x16, but they default to PCIe 2.0 x16, which was slow enough to bottleneck my 4070 and most new GPUs. So I have to force the PC into Gen 3, in the Windows registry I think. Pretty fun haha
I have a question: should I get a 7900 XT or a 4070 Ti Super if I'm only planning to use Blender? Also, I heard a 12-core CPU is the most I will need; is this true? First time building a PC, and Blender looks like fun.
Blender is a blast. I have used it for a few years, and it never fails to be entertaining. 12 cores will allow you to do a ton in Blender. Even my highest-resolution fluid simulations are bottlenecked by CPU speed, not by core count. I have the 7950X with 16 cores. You could get by with 8 cores just fine, and 16 would help a little more, but 12 is a pretty great CPU size for Blender. I would recommend the 7900X, since it has great speed and good discounts now that the 9000 series is out.

Also, I have a 96-core server that I am testing for rendering and fluid simulations. I have never been able to use even half of the cores, and the slower clock speeds hold up the simulations. So massive amounts of cores do not solve very many problems. Avoid Intel for now, as their most recent generations have severe BIOS and microcode problems, which can cause the CPU to degrade.

The extra VRAM on the 7900 XT is nice, but the 4070 Ti Super should beat it in rendering speed. So for a beginner, I would say go for the extra speed. You will not need the extra VRAM as much until you know what kind of complexity your scenes will have. For instance, I have characters that use 16 or more GB of VRAM with nothing else in the scene. So I can only run those on my 24 GB GPUs or on CPU render, which is much, much slower. You can learn to optimize your scenes and use tricks like multi-pass rendering to get around these problems, but that experience will come with time if you get deep into Blender.

Let me know if you have other questions! I really love to help with this stuff
@@bloodoftheunicorns2621 I use 64 GB of DDR5 at 6000 MHz. Get a kit with 2 sticks that has AMD EXPO certification. This will be easily compatible with the 7900X. It is theoretically possible to run 192 GB on the AM5 platform, but there are almost no stable 4-stick RAM kits that will work out of the box. 64 GB has been plenty for me. You can always try to add more later if they make some better kits. RAM is not as straightforward as it used to be, it seems.
@@bloodoftheunicorns2621 I use a lot of Samsung drives. They have a lot of cache and are very reliable, and Gen4 M.2 drives are decently affordable. I use the Samsung 980 Pro in 2 TB or 1 TB. Up to 7 GB/s read speeds, which is absolutely wild.
I'm thinking about buying a new one because of the game ARK Survival Ascended. Either a 4070 Ti Super or a 7900 XTX... but since I'd also like to work with Blender, it's hard for me to decide... and I also use Premiere Pro 19
Yeah, if you are using the GPU for that many different things, the choice is much harder. I think in a few of your uses, the 4070 Ti Super will be faster; examples would be Blender and gaming with heavy ray tracing. For Premiere Pro, it is hard to know which is faster, but the extra VRAM on the 7900 XTX will be nice for editing, and even for Blender. So you have to weigh speed and VRAM size. What I would say is: think about your uses in depth. Can you afford a few less fps to have 8 GB more VRAM? If so, the AMD card may be a better idea, just so you know for sure you have the ability to run bigger scenes. If you only use Blender a little, or don't use large scenes, then 16 GB of VRAM on the 4070 Ti Super will be fine. So it really comes down to the balance of your uses.
Looks like it makes much more sense compared to the 4060 Ti 16GB. When 16GB 60 Ti's were going for $450, the Ti Super offers nearly double the performance for what, 1.75x the price? The lowest I saw was a 1.77x uplift with Classroom; all other scenes sit near or above a 2x uplift. Linearly speaking, you're getting exactly what you're paying for, and a little extra perf depending on the scene. Imagine if AMD had better support for these applications; they have better price/perf and more silicon at the same price point, but I feel like the RDNA 3 RT cores aren't being utilized to the fullest. Damn shame you gotta pay the NVIDIA premium.
Yeah, having things at least stay linear is really nice, and kinda unique compared to some of the prior generations. Yeah, AMD should keep up with Nvidia based on gaming results, but in 3D software they just don't have the optimization. Funny enough, there is a software project called ZLUDA, which tricks AMD GPUs into running CUDA code. AMD GPUs are faster on CUDA in Blender than they are with HIP-RT haha.
@@ContradictionDesign oh really? Could you make a video testing ZLUDA with the 7800 XT? As far as I know, you're the only channel testing AMD GPUs in Blender with actual render times (other techtubers use Open Data and compare sample rates, giving Nvidia cards the win since the benchmark measures burst speed; you've shown that the 7800 XT shines when rendering heavy scenes thanks to its extra silicon).
@@TheUltraMinebox I can try to. I know Nvidia does not like people using it to make money, but I also do not make a lot on ads yet. So it should be fine. I did get it working one day though. Very interesting idea they had to develop that. Interestingly, Intel and AMD were involved in funding the zluda project before Nvidia took some legal actions.
It does stand out as the cheapest relevant Nvidia 16 GB card. The 4090 still looks like a no-brainer to me if you do a lot of rendering or make your money rendering. Maybe if you are building your own render farm and only need 16 GB of VRAM, then you could get 2x 4070 Ti Super instead of 1x 4090, and that would be faster when rendering multiple frames. But if you only have one workstation, the 4090 still looks good to me. For those that got the 4090 when it was new, it is beginning to look like a very good historical GPU. It still looks like the 5000 series is about a year out, nothing is indicating that prices will be lower, and hoping for a card that is twice the speed seems a little optimistic as things look now.😅
The RTX 4090 definitely looks like a good buy for serious rendering. I wish it were more available at its launch price though haha. But yeah, we will see what the 5000 series brings. I am most excited to learn more about the "hardware denoiser" that their presentations have mentioned. Sounds like it could maybe make viewport denoising instantaneous. By the way, I am going to start configuring a CPU rendering farm using Blender's Flamenco. I have ordered a couple of servers to learn with, and I got a 96-core Xeon E7-8890 v4 system with 256 GB of RAM coming in the mail. I know GPUs will be faster, but all of the driver issues, mismatched APIs and settings, and VRAM limits are getting a little old. This CPU farm would let me just click start, and every project would just work. Should be fun.
@@ContradictionDesign The way I guess the hardware denoiser would work is similar to how the encoding and decoding hardware works now. The render would run as it does now, accumulating samples, and in parallel the denoiser, if fast enough, could denoise every frame. (You would then have 2 copies of each frame.) Then denoising would not affect the render speed; even if it is slower than the sampling in some cases and can only run every 5 or 10 samples, this would still be cool because the render speed would be the same as with no denoiser.

The problem if they make it 100% hardware is that if someone comes up with a better denoiser, they are stuck with the hardware they selected. If the render only used the RT cores and the denoiser could use the CUDA cores, tensor cores, NPU, iGPU, or the CPU, then you could select what algorithm you would like to use as your denoiser. Denoisers are still new and, as I understand it, they use neural networks just like DLSS upscaling, and you can always improve a neural network. So if I understand this correctly, I would rather have the denoiser running on the tensor cores of the GPU, with the GPU built so the tensor cores and RT cores can run in parallel without getting in each other's way. And then when NPUs become part of the CPU, I could select whether to denoise on the CPU or the GPU, using whatever neural network denoiser I prefer for the task. But that might be looking a bit into the future.

I know nothing of Flamenco as of yet, and nearly nothing about the E7-8890 v4. But 10x the RAM could open up some options that would not be possible on a GPU system. With a system like that you could make smoke and fluid simulations much larger, like 5-10 times larger than on a consumer system. I would love to see you make some videos about that: a scene you could easily scale in size (a room filling with smoke, or water), scaled to about the max of the 4090 to test the difference in rendering speed, and then scaled to use even more memory just to show the difference in looks. I know it would take a long time to do, but I have not seen anyone do something like that. On the other hand, I don't think the 4090 limits that many hobbyists. I do get the benefit of everything just working every time, but as far as I can tell the RTX cards are very well supported. Can't wait to see what you are going to do with that monster of a PC.
Yeah, the GPUs are easy enough to use, but building many systems and trying to get them all to run on GPU is harder than it should be. Flamenco is a simple local-network render manager made by the Blender team. It defaults to CPU, and some code can force GPU renders, but those are a little harder to work with, and VRAM is limited. I would like to try to run the server on Windows 10 Pro, so running apps like Blender on a workstation would be straightforward. But we'll see. I may just have to send files like fluid simulations to the shared storage, and run the simulation from the command line on the server (not sure how this all works yet). Should be fun, and it gives me a whole new direction to go in.
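For the command-line part, I am imagining something like the sketch below (all hypothetical on my end: `bake_all.py` is a made-up script name, though `-b` and `--python` are real Blender flags, and `bpy.ops.ptcache.bake_all` is the operator that bakes every physics cache in a file):

```python
# What the script passed via --python would contain. It uses bpy, so it only
# runs inside Blender itself, not in a plain Python interpreter.
BAKE_SCRIPT = (
    "import bpy\n"
    "bpy.ops.ptcache.bake_all(bake=True)  # bake every physics cache in the file\n"
    "bpy.ops.wm.save_mainfile()           # save baked caches into the .blend\n"
)

def headless_bake_command(blend_file, script_path="bake_all.py"):
    # -b: run without a GUI (fine over SSH); --python: run the script after load
    return ["blender", "-b", blend_file, "--python", script_path]
```

On the server that would look like `blender -b sim.blend --python bake_all.py`, with the .blend sitting on the shared storage.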
Amazing!!!❤ One quick question: can you render my animations for me? I have a due date for my school major project. As you know, financially I'm not doing that well, and I can't afford to buy a big machine. Whether you can do it, or you know anyone who can do it for me, I'll be happy to contact them. Thanks a lot ❤
God damn! This video is GOLD, man! I was looking for such a video for a week. I was deciding between a 2060 Super and a 1080; this video got me. Thanks!
Awesome! Glad it was helpful!
exactly
good to see you back, great video, can't wait for these cards to drop in price after the 5000 series comes out.
Thank you! Glad to be back! I have some big plans with servers.
And yeah, these will be fantastic if they get down to $500 or less next year.
Watching from my GTX 1650 laptop; it took 2 mins and 46 secs rendering the classroom scene haha. I'm planning to build a PC, that's why I am watching this.
Awesome! Let me know if you need any help!
damn bro I'm in the same situation ahah
@@ContradictionDesign What motherboard do you recommend that is economical to put a Ryzen 9 5950x and a 4070 Ti Super on it? thank you
I had great luck with Gigabyte boards, and really like Asus boards. Get an X570 chipset, because the lanes and slots are better (more lanes, higher speeds).
I am definitely interested in seeing how those servers work out
I have them set up, just need to record some stuff. Should be soon!
Thanks man!!! I really needed this video
I wanted to buy 1660 Super for Blender.
After seeing this video, i have some other options too!
Thanks again brother! ❤
Awesome! You are welcome. Let me know if you have any questions
@@ContradictionDesign will do
Nice! This is the first test I've seen of a blender benchmark! Thanks! I really like how you're comparing the performance using creative software. One thing I'd really like to see is the comparison of high end cpus like the amd threadripper pro across different generations using simulations in software like blender and especially houdini to benchmark performance. It would also be interesting to see how the threadrippers perform again more consumer level cpus like the 7950x etc
Thanks! I hope to be able to test high end CPUs someday, when I can afford to build those systems. I am starting to learn about servers now though. I have a quad CPU xeon server with 96 cores coming in the mail. I can test simulations on that to start off!
@@ContradictionDesign ah nice, I've been playing around with some server settings as well. One thing I've been struggling with is getting IPMI set up on my Supermicro M12SWA-TF motherboard; a step-by-step for that would be amazing!
Hey! I have no idea how to do anything with servers yet, but you can bet I will know quite a bit within a couple weeks! We can talk back and forth to help each other figure it out! I need to figure out a way to run some livestreams so people can collaborate and ask questions in real time. Maybe a discord server would be better...
@@ContradictionDesign I would love to do some collaboration! I'm not terribly savvy when it comes to servers myself, but I'm trying to learn! As for live streaming, why not just use YouTube live streams? I do really like Discord, and I think it's a great way to interact more with your audience and build a community!
@@theshadow6273 I will use livestreams, just need to figure out if I can screen record what I see on the server, or I might need to get my friend to help film for me. It will be fun to build a new set of info for servers though.
People have told me to get a discord server started as well, so maybe it is time.
Really needed content, brother, thanks from the bottom of my heart
You are welcome! I am happy to help!
Just stumbled on your channel as I'm trying to figure out what GPU to upgrade to, since mine is failing. I was thinking of the 4080 Super (nutty, as it's in the $1100-1400 range), but then I decided to look at 4070 variants, and it seems like the 4070 Ti Super is a good deal, as I've seen some in the ~$700 range. This card seems like good bang for your buck. I'm going to pair it with my current CPU, a 12600K, as I don't want to invest in a new platform and 13th/14th gen Intel seems risky. Anyways, thanks for posting this information.
The 4070 Ti Super for $800 is a much better deal than the 4080 Super for $1100+. 16 GB of VRAM is pretty good too.
You are welcome! Let me know if you have questions.
Thanks, this is really helpful for Blender users.
You are welcome. I am glad it's helpful!
Great comparison!
Thank you!
Holy moly that was 800 samples with no denoising on the barbershop scene. I really REALLY need to upgrade my GPU, I'm having trouble rendering stuff quickly with even just 64!
Hahaha yeah they are pretty snappy now. What do you use currently?
@@ContradictionDesign an RX 6700XT, just can't handle this stuff this fast
@@DarkHarpuia Oh yeah for sure a couple tiers slower.
Is the AMD Radeon RX 7900 XTX any good for 3D in Blender, 2D art, 2D animation, 3D animation, and gaming too? lol
It is great for gaming, and good for 2d/3d art. It just is not as fast at ray tracing in 3d software for the same money compared to Nvidia
@@ContradictionDesign k thanks for the help
you r a legend bro💣
@@sherzoderkinov4498 I appreciate that! Thanks!
thanks so much for the info, now would you happen to know if in the long run is better to purchase the 4070 ti super over the 4070 super, in rendering in general, both of those cards perform so good in gaming but im interested if you happen to know if there is a LOT of diff in rendering programs between 4070tisuper and 4070 super. Thanks again for your videos
You are welcome! The 4070 Ti Super is a fair bit quicker in rendering than the 4070 super. I think for longevity though, the extra VRAM is the bigger selling point. As you get more into 3D work, you may find yourself using more and more VRAM and larger files. The extra memory will be the biggest benefit for coming years.
Thanks for all the tests and information, but I got a Zotac 4070 Super Twin: 11.60 OptiX, 10.2 OptiX OC. How come you see 16.6? Don't mislead anybody with this one; just my advice to test again with a different card.
We probably have different settings or you may have faster storage. This sounds like a loading time difference.
How do the other, longer scenes compare to my results, and what version of Blender were you using?
Thanks btw. It is always good to peer review data!
I wish he would respond
@@ContradictionDesign I did it again a few min ago: rendered the scene as-is, nothing changed (OptiX). Zotac Twin OC 4070 Super, default settings: 11.91; 125 mV undervolt: 11.30; limitless: 10.71. DDR4 3200 64 GB CL16, 14700KF, mld 500 GB SSD (3000 MB/s read, 2200 MB/s write), Blender 4.3.
@@penumbra3 Well, I can re-try, but I am running SATA SSDs on this test bench. So that could be part of it, with my loading time hurting me. That is why longer scenes make more sense to compare with, because loading-time error is a small factor in minutes-long renders. But 4 extra seconds in the classroom scene is not that shocking with a slower SSD. And if I were running my scenes from my server and not locally, that would slow down the loading phase as well.
I will do some investigating, and I might as well re-run everything I have on Blender 4.3 so I can confirm or update my numbers.
How long does barbershop take for your GPU?
@@ContradictionDesign Junkshop 50 samples: 5.68, 300 samples: 14.28; Barbershop 800 samples: 54.05.
It would be even more informative to see some 1080 Ti and 2080 Ti results included in the mix. Thank you for your great service!
Hey! You're welcome! I do have my 2080 Ti Kingpin on the list, and a 1080. Should be done on most of the scenes; they are just pretty far down now. A mobile 4060 competes with a 2080 Ti now haha
@@ContradictionDesign My bad, I didn't notice before. I have a 1080 Ti and am looking at upgrade options on a low budget, which is why these kinds of tests are of interest to me. By the way, a moment ago I tested the same files; some results came out a bit better, some a bit worse, but my system is different: 13600K, 48 GB RAM, SSD storage.
Blender 4.0 Rendering Tests (OptiX, Cycles), two runs each:
BarberShop: 365 / 363.80 sec
Lone-Monk: 1040.67 (first run slower due to updating shaders) / 915.87 sec
Classroom: 91.91 / 90.81 sec
Splash 3.3: 361.59 / 359.35 sec
@@omnymisa hey no problem! Just didn't want you to miss seeing the data. I will look at these to see how they look.
If you are looking to upgrade, any RT-enabled GPU will be a good jump, since they are basically a perfect match for rendering in Cycles. Good luck with shopping, and let me know if you have any questions
My 3090 ti had an easy life and now it’s humbled by a mid tier card.
Haha yeah I know. My 3090 is a little annoying now. Good performance but kinda looks really inefficient on power and such.
That's gpu wake up call right there.
Haha yeah my RTX 3060 just beat out my 2080 Ti Kingpin tonight. Just plain sad
I only have the GTX 850M
@@astridphann Try the Turbo Tools add-on if you haven't already. It really is a godsend for lower-end GPUs (especially if you're only rendering stills)
Wow, amazing video man! Thanks for this. I'm building a computer to work with 3D and Unreal Engine. The base GPU is a 4070 Ti Super too.
But I'm really wondering about the processor. Intel's 13th and 14th series don't really inspire me, given the news about those CPUs. My friend has had trouble with his 13900K and its periodic freezes. About a year has passed since the purchase, and for a month now problems have been appearing. He works in Maya and Unreal Engine.
In your opinion, which AMD processor (or maybe the 12700K or 12900K from Intel) is best for work with a 4070 Ti Super (UE5, Maya, and some other 3D)?
I was thinking about the 7700X, but I'm not sure now.
Thank you!
Hi! Glad it was helpful. The 7000 series and 9000 series are great. The 7700X would be good, but more cores can't hurt either. I have seen some really good deals on the 7900X lately too, but if it is out of your budget, then that's a no go. In general you won't go wrong with 7000 or 9000 series though. Just have to pick your required core count.
Are you going to run intense simulations or edit video? If so, you may want a few extra cores.
@@ContradictionDesign Hi!
Thank you very much for such a detailed answer!
I'm working in UE5, ZBrush, Blender, and Substance Painter. My laptop looks really sad when I'm working in UE5 :) Yeah, the 7900X looks good. What do you think about the AMD Ryzen 9 7950X? I mean for 3D editors with the 4070 Ti Super. Or would this processor (7950X) be overkill for this card?
Thank you!
@@SaramLex with creative apps, the CPU and GPU don't really have to match like they can with gaming. More GPU means faster renders and more CPU cores works on other parts of the software. So you can't really balance it per se.
But I have the 7950x and rtx 4090. It works great. My CPU runs everything at 5.1 GHz or higher, all core. It's fantastic.
The 4070 ti super with a 7900x or 7950x would be a fantastic machine as well.
@@ContradictionDesign Thank you so much, man!
You were very helpful in explaining this to me!
And, I subscribed to your channel👍
@@SaramLex I am happy to help! Thanks for being here!
Thank you for this very informative video. I am thinking about this card for Blender (Cycles) for motion design and product renderings.
My current card is an RTX 2080 Super. Sometimes I run out of VRAM. Do you suggest buying this card now (2024 Q4), right before the 5000 series? Thanks
Hey! The 4070 ti super is a great GPU. If 16 GB of VRAM is enough and you can handle the price, then it sounds like a good fit.
I don't know what the new GPUs will be like. They sound like they may have some decent speed improvements, but I don't think they will be significantly faster at each price level anyway.
I would also say except for holiday deals, there probably won't be too many price cuts on current hardware for a long time.
So without the ability to see the specs for sure, I would say get what fits your current needs and don't regret it. Tech moves too fast to stay on top of the ladder for more than a few months anyway.
I am very curious how 2x 4060 Ti 16 GB, or possibly 2x 4070 Ti Super 16 GB, would cope with the test, because I am thinking about building such a computer this year. For now I would buy one card and then buy the second one later. I am thinking about the ASUS ProArt models, because they are compact and fit well on motherboards and in cases.
@@MrozuProjekt2 If you use multiple GPUs in Blender, the speed up is not linear with each new GPU added. So 2 cards will be 1.6x as fast as one, for example.
To get the most out of the cards, you can run separate instances of Blender for each GPU, and give them each a separate frame to render at a time. This works just fine, but you need a higher end CPU and lots of system RAM, since you would be loading the scene multiple times.
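A minimal sketch of that per-instance setup, assuming NVIDIA cards (pinning each Blender process to one GPU via the common `CUDA_VISIBLE_DEVICES` convention); the file name, frame range, and GPU indices are placeholders:

```python
# Hypothetical sketch: split an animation's frame range into one contiguous
# chunk per GPU and build a background-render command line for each instance.
# "scene.blend" is a placeholder; adjust paths and GPU count for your setup.

def split_frames(start, end, n_gpus):
    """Divide the inclusive range [start, end] into n_gpus contiguous chunks."""
    total = end - start + 1
    size, extra = divmod(total, n_gpus)
    chunks, cursor = [], start
    for i in range(n_gpus):
        count = size + (1 if i < extra else 0)
        chunks.append((cursor, cursor + count - 1))
        cursor += count
    return chunks

def build_commands(blend_file, start, end, n_gpus):
    """One command per GPU. Each instance loads the scene itself,
    which is why system RAM use grows with the number of instances."""
    cmds = []
    for gpu, (s, e) in enumerate(split_frames(start, end, n_gpus)):
        cmds.append(
            f"CUDA_VISIBLE_DEVICES={gpu} blender -b {blend_file} -s {s} -e {e} -a"
        )
    return cmds

if __name__ == "__main__":
    for cmd in build_commands("scene.blend", 1, 100, 2):
        print(cmd)
```

Each command renders its own slice of the animation (`-b` for background mode, `-s`/`-e` for start/end frame, `-a` to render the animation range), so two GPUs finish a 100-frame job in roughly half the time, at the cost of loading the scene twice.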
What's your thought on the 4070 Super compared to 4070 Ti Super? I'm currently using a second-hand 3060 and thinking about getting an upgrade, but in my country, the 4070 Super is around $650 while the 4070 Ti Super is around $830, do you think the 4070 Ti Super worth the extra $180?
@@DemonZVirus I would say for the small speedup in Blender, the upgrade is not worth it. But if you plan on heavy Blender scenes, the 16 GB VRAM is pretty nice. So it will come down to that mostly, and then you would get a small speed boost on top of that.
@@ContradictionDesign I see, thanks a lot!
@@DemonZVirus you're welcome!
Where is the 4070 Ti Super on the list? I can see only the 4070 Super
@@ranjithgaddhe9818 It should be listed on the end of the video. If I forgot to show it again let me know haha.
How much money do these cards make you from mining, or how much do render farms pay you?
I have only been paid for rendering a couple of times. I am eventually going to push harder to get clients. For now, I just use the farm for myself and by request
Useful data. I hope that the price of this card will drop next year, because with the current 12 GB, I started to hit the wall more often.
Yeah it will be cheaper next year. We'll see what happens with smaller cards this fall and winter. The 5090 should come out this year and then the lower models after that should push the 4070 ti super down
Thanks for your hard work man, this video is really helpful. Where I see the price-to-performance, the RTX 4070 Super is a really big deal
You're welcome! Yeah the 4070 and 4070 super are pretty good value
Hi, my CPU is a Core i5 13400. Is it OK to install a 4070 Ti Super in my system? Will the graphics card give me its best performance with this CPU? Please answer, it is very important. Thank you
They will work great together. Which part of your PC is the bottleneck will depend on your use, but the 4070 Ti Super will run at its max speed with the 13400 in every GPU-bound workload. So it should be just fine!
Would you recommend getting the 4070 Ti Super or the 4080 Super for Blender? Would I be losing a lot of performance by taking the cheaper one?
I think the performance difference between the 4070 ti super and the 4080 super is probably similar to the price difference for 3D rendering. And they are both very fast GPUs. With both having 16 GB VRAM, you could easily go either way and have a good experience.
@@ContradictionDesign alright thanks. I'll aim for the cheaper option then.
In the RTX 4070 series, is the 4070 Ti Super best for rendering?
Yes, and it has more VRAM. It is significantly faster than the 3090
Reporting that my 3080 12 GB Gaming X Trio with a slight OC is rendering the Scanlands test in 41.62 secs, so almost equal to the 4070.
Great info! Thanks! My 4070 models are mostly lower end, so not surprised the clock speed catches up slightly. Also, when I record these, I am sure my recording resources use makes the GPU slightly underperform as well. How does your 3080 12 GB compare to a 3080 ti from my data?
@@ContradictionDesign I'm getting slightly better results than the 3080 Ti and mostly near the 3090. Btw, I just rendered the Lone Monk and it took 173 secs. This model comes with a slight OC and I added an extra 75 MHz with the Nvidia app. I'll probably keep running more of these tests and report back to you in this "thread"😅 since it can't hurt. Let me know if you need some screenshots or specs, but the CPU is a slightly undervolted (so slightly overclocked) 5950X.
@@Drumaier Well that is a really good result from a 3080! Yes feel free to share whatever you would like to. I think I will be recording some server CPU results tonight. You can compare some CPU benchmarks to those for fun too
Hey, love the vid… but I have a question here: is the RTX 4070 Ti Super compatible with the i9 13900K?
Hey thanks! Yes most modern GPUs and CPUs will work great together. As far as being optimized, that combination would be fantastic for 3d work or gaming.
I think I know where the difference came from in the Scanlands (aka St. Michael) scene: 32 s for you on the 4070 Super vs. me on a 4070 Super. Probably a difference in RAM clocks;
unfortunately I have 1333 MHz DDR3 :( It would make sense to install 2133 or 2400. I fired up Lone Monk and it wouldn't budge in Blender 4.2.
And now a surprise: 102:43 on the 4070 Super for Lone Monk (shading load; Blender stalled :D hehe :D
Interesting. Yeah, slower RAM can change results a bit. Do you also have PCIe 2.0 x16 lanes for the GPU? Gen 2 has slowed down my render tests before.
@@ContradictionDesign I have a Dell Precision T7610, 2x Xeon E5-2687W v2,
80 GB DDR3 1333 MHz ;(
I'm wondering whether going from 1333 to 2133 or 2400 DDR3 would change anything.
PCI Express 3.0
@@merkhava I think it would make a minor difference in computation but probably not a lot. RAM speed is not always a huge factor in computation. I would make sure you have all of the DIMM slots populated, so you run in quad channel instead of dual, if that dell has quad channel memory.
@@merkhava Yeah, my T3600 machines should run at PCIe 3.0 x16, but they default to PCIe 2.0 x16, which was slow enough to bottleneck my 4070 and most new GPUs. So I have to force the PC into Gen 3 in the Windows registry, I think. Pretty fun haha
I have a question: should I get a 7900 XT or a 4070 Ti Super if I'm only planning to use Blender? Also, I heard a 12-core CPU is the most I will need; is this true? First time building a PC, and Blender looks like fun.
Blender is a blast. I have used it for a few years, and it never fails to be entertaining.
12 cores will allow you to do a ton in Blender. Even my highest resolution fluid simulations are bottlenecked by CPU speed, and not by core count. I have the 7950X with 16 cores. You could get by with 8 cores just fine, and 16 would help a little more, but 12 is a pretty great CPU size for Blender. I would recommend the 7900X, since it has great speed and good discounts since the 9000 series came out recently.
Also, I have a 96-core server that I am testing for rendering and fluid simulations. I have never been able to use even half of the cores, and the slower clock speeds hold up the simulations. So massive amounts of cores do not solve very many problems.
Avoid Intel for now, as their most recent generations have severe BIOS and microcode problems, which can cause the CPU to degrade and fail.
The extra VRAM on the 7900 XT is nice, but the 4070 ti super should beat it in rendering speed. So for a beginner, I would say go for the extra speed. You will not need the extra VRAM as much until you know what kind of complexity your scenes will have. For instance, I have characters that use 16 or more GB of VRAM with nothing else in the scene. So I can only run those on my 24 GB GPUs or on CPU render, which is much much slower. You should learn to optimize your scenes and use tricks like multi pass rendering to get around these problems, but that experience will come with time if you get into blender very deep.
Let me know if you have other questions! I really love to help with this stuff
@@ContradictionDesign Thank you for the detailed response. Any suggestions on how much ram I should buy?
@@bloodoftheunicorns2621 I use 64 GB of DDR5 at 6000 MHz. Get a kit with 2 sticks that has AMD EXPO certification. This will be easily compatible with the 7900X. It is theoretically possible to run 192 GB on the AM5 platform, but there are almost no stable 4-stick RAM kits that will work out of the box.
64 GB has been plenty for me. You can always try to add more later if they make some better kits. RAM is not as straightforward as it used to be it seems
@@ContradictionDesign Do you have any recommendations for an SSD?
@@bloodoftheunicorns2621 I use a lot of Samsung. They have a lot of cache and are very reliable. Gen4 M.2 drives are decently affordable. I use the Samsung 980 Pro 2 TB or 1 TB. Up to 7 GB/s read speed, which is absolutely wild
I'm thinking about buying a new one because of the game ARK Survival Ascended. Either a 4070 Ti Super or a 7900 XTX... but since I'd also like to work with Blender, it's hard for me to decide... and I also use Premiere Pro 19
Yeah, if you are using the GPU for that many different things, the choice is much harder. I think in a few of your uses, the 4070 Ti Super will be faster; examples would be Blender and gaming with heavy ray tracing. For Premiere Pro, it is hard to know which is faster, but the extra VRAM on the 7900 XTX will be nice for editing, and even for Blender. So you have to weigh speed and VRAM size.
What I would say, is think about your uses in depth. Can you afford a few less fps to have 8 GB more VRAM? If so, the AMD may be a better idea, just so you know for sure you have the ability to run bigger scenes.
If you only use Blender a little, or don't use large scenes, then the VRAM will be fine at 16 GB for the 4070 Ti Super.
So it really comes down to the balance of your uses.
@@ContradictionDesign mhh.. yea.. thanks..
very helpful 🎉😊
Good! I am glad it helps
Do a test on rx 7700xt
Hey! I will test one sooner or later.
Looks like it makes much more sense compared to the 4060 Ti 16GB. When the 16GB 4060 Tis were going for 450, the Ti Super offers near double the performance for what, 1.75x the price? The lowest I saw was a 1.77x uplift with Classroom; all other scenes sit near or above a 2x uplift. Linearly speaking, you're getting exactly what you're paying for, and a little extra perf depending on the scene. Imagine if AMD had better support for these applications; they have better price/perf and more silicon at the same price point, but I feel like the RDNA 3 RT cores aren't being utilized to the fullest. Damn shame you gotta pay the NVIDIA premium.
Yeah having things at least stay linear is really nice. And kinda unique compared to some of the prior generations.
Yeah, AMD should keep up with Nvidia based on gaming results. But in 3D software, they just don't have the optimization. Funny enough, there is a software layer called ZLUDA, which tricks AMD GPUs into running CUDA code. AMD GPUs are faster running CUDA in Blender than they are with HIP-RT haha.
@@ContradictionDesign Oh really? Could you make a video testing ZLUDA with the 7800 XT? As far as I know, you're the only channel testing AMD GPUs in Blender with actual render times (other techtubers use Open Data and compare sample rates, giving Nvidia cards the win since the benchmark measures burst speed; you've shown that the 7800 XT shines when rendering heavy scenes thanks to its extra silicon)
@@TheUltraMinebox I can try to. I know Nvidia does not like people using it to make money, but I also do not make a lot on ads yet. So it should be fine. I did get it working one day though. Very interesting idea they had to develop that. Interestingly, Intel and AMD were involved in funding the zluda project before Nvidia took some legal actions.
It does stand out as the cheapest Nvidia 16 GB card that is relevant.
The 4090 still looks like a no-brainer to me if you do a lot of rendering or make your money rendering.
Maybe if you are making your own render farm and only need 16 GB VRAM, then you could get 2x 4070 Ti Super instead of 1x 4090, and that would be faster when rendering multiple frames. But if you only have one workstation, the 4090 still looks good to me.
For those that got the 4090 when it was new, it is beginning to look like a very good historical GPU.
It still looks like the 5000 series is about a year out in the future, nothing is indicating that the prices will be lower, and hoping for a card that is twice the speed seems a little optimistic as things look now.😅
The RTX 4090 definitely looks like a good buy for serious rendering. I wish it was more available for its launch price though haha. But yeah we will see what 5000 series brings. I am most excited to learn more about the "hardware denoiser" that their presentations have mentioned. Sounds like it could maybe make viewport denoising instantaneous.
By the way, I am going to start configuring a CPU rendering farm using Blender Flamenco. I have ordered a couple servers to learn with. But I got a 96 core Xeon e7-8890 V4 system with 256 GB of RAM coming in the mail. I know GPUs will be faster, but all of the drivers/ non-matching api and settings, and VRAM limits are getting a little old.
This CPU farm would let me just click start, and every project will just work well.
Should be fun.
@@ContradictionDesign The way I guess the hardware denoiser would work is similar to how the encoding and decoding hardware works now. So the render would just run as it does now, accumulating samples, and in parallel the denoiser, if fast enough, could denoise every frame. (You would then have 2 copies of each frame.) Then denoising would not affect the render speed; even if it is in some cases slower than the samples and can only keep up with every 5th or 10th sample, this would still be cool, because the render speed would be the same as with no denoiser.
The problem if they make it 100% hardware is that if someone comes up with a better denoiser, then they are stuck with the hardware they selected. If the render only used the RT cores, and the denoiser could use the CUDA cores, tensor cores, NPU, iGPU, or the CPU, then you could select what algorithm you would like to use as your denoiser. Denoisers are still new, and as I understand it they use neural networks, just as DLSS upscaling does, and you can always improve the neural network.
So if I understand this correctly, I would rather have the denoiser running on the tensor cores of the GPU, and then make the GPU such that the tensor cores and RT cores can run in parallel without getting in the way of each other. And then when NPUs (tensor cores) become a part of the CPU, I could select whether I want to denoise on the CPU or the GPU, using whatever neural network denoiser I prefer for the task. But that might be looking a bit into the future.
I know nothing of Flamenco as of yet, and nearly nothing about the E7-8890 V4. But 10x the RAM could open up some options that would not be possible on a GPU system.
With a system like that you could make large smoke and fluid simulations much faster. Like 5-10 times faster than a consumer CPU.
I would love to see you make some videos about that. Like making a scene that you could easily scale in size (a room filling with smoke, or water), then scaling it to about the max of the 4090 and testing the difference in rendering speed. And then scaling to use even more memory, just to show the difference in looks. I know it would take a long time to do, but I have not seen anyone do something like that.
On the other hand I don't think the 4090 limits that many hobbies, I do get the benefits of everything just working every time, but as far as I can tell the RTX cards are very well supported.
Can't wait to see what you are going to do with that monster of a PC.
Yeah the GPUs are easy enough to use, but building many systems and trying to get them all to run on GPU is harder than it should be. Flamenco is a simple local network renderer made by Blender. It defaults to CPU, and some code can force GPU renders. But they are a little harder to work with and with limited VRAM.
I would like to try to run the server on Windows 10 Pro, so running apps like Blender on a workstation would be straightforward. But we'll see. I may just have to send files like fluid simulations to the shared storage, and run the simulation from the command line on the server. (not sure how this all works yet)
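For the command-line route, here is a hedged sketch of what a headless bake launch could look like, assuming the .blend sits on shared storage at a made-up path, and assuming the `bpy.ops.fluid.bake_all()` operator exists in the Blender build on the server (verify against your version):

```python
# Hypothetical sketch: build the command for a headless (no-GUI) fluid bake
# on a server. The path is a placeholder, and bpy.ops.fluid.bake_all() is an
# assumption; check the operator name against your Blender version.
import shlex

def bake_command(blend_path):
    """Return the argv list for baking every fluid domain in the file."""
    expr = "import bpy; bpy.ops.fluid.bake_all()"
    return ["blender", "-b", blend_path, "--python-expr", expr]

def as_shell(argv):
    """Quote the argv so it can be pasted into an SSH session."""
    return " ".join(shlex.quote(a) for a in argv)

if __name__ == "__main__":
    print(as_shell(bake_command("/mnt/shared/sims/smoke_v1.blend")))
```

The bake cache would then land next to the .blend on the shared storage, so a workstation could open the same file afterward and read the baked frames back.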
Should be fun, and gives me a whole new direction to go in.
@@ContradictionDesign It sounds like something used in a smaller film production.🙂
@@Petch85 That is the goal eventually! And hopefully to get some studios who need my help too
Amazing!!!❤
One quick question: can you render my animations for me? I have a due date for my school major project. As you know, financially I'm not doing that well, and I can't afford to buy a big machine. Whether you can do it, or anyone you know can do it for me, I'll be happy to contact them.
Thanks a lot ❤
I can probably help. Will you use cycles or Eevee? How many frames?
4080 super 💀🤡
Yeah I will try to get one maybe. People have also requested the 7700 XT, so I have a list now haha.
I CAN WAIT 11 SECONDS FOR HALF THE PRICE
Yep. I completely agree with that feeling.
NICE ! MAN you will get rich
Hey I will need it. Got 4 kids now haha