AMD Rx 7600 XT Blender Rendering Speed and Value Discussion

  • Published Jan 4, 2025

Comments • 27

  • @bezalelsukdulary1511
    @bezalelsukdulary1511 18 days ago +3

    Keep up the work, nice info

    • @ContradictionDesign
      @ContradictionDesign  17 days ago +1

      @@bezalelsukdulary1511 thank you! Glad it's helpful

  • @danilshcherbakov940
    @danilshcherbakov940 17 days ago +2

    From what I found online, the b580 doesn't seem to work that well in Blender right now, but it appears to be because of a driver issue.

    • @moravianlion3108
      @moravianlion3108 17 days ago +3

      According to latest Puget benchmarks, it seems to be just a hair behind A770 in various productivity apps.

    • @ContradictionDesign
      @ContradictionDesign  17 days ago +1

      @@danilshcherbakov940 well I guess we'll see what happens with drivers in the next month or so. I really doubt it will take as long to tune it this time. But who knows anymore with Intel?

    • @ContradictionDesign
      @ContradictionDesign  17 days ago +2

      @@moravianlion3108 Well I hope they figure out drivers then, if that's it. Really though they didn't get that much better of a chip vs the A770 based on specs. So hopefully there will be a B770

  • @uthmanaffan305
    @uthmanaffan305 13 days ago +1

    Thank you for the work! Can you make a CPU benchmark? Simulation, viewport, and sculpting benchmarks? That would be great, since CPU reviewers all focus only on CPU rendering (which we don't actually use).

    • @ContradictionDesign
      @ContradictionDesign  13 days ago

      I actually have a fluid simulation benchmark video. You can run it in vanilla Blender. It has two versions of the same scene at different levels of intensity, and there are links to download the files from Patreon for free. Let me know if you don't find it!

  • @manwithbatsuit
    @manwithbatsuit 13 days ago +1

    Hello sir, I have a question.
    I'd like to build something in between a render farm and a PC, something I can send my Blender files to while I continue working on other projects. I'd also like it to be somewhat modular: easy to tear apart and easy to upgrade. I'd like to start with 2 GPUs and slowly work up to more.
    How would I go about building this?
    Thanks for all your information on this topic.

    • @ContradictionDesign
      @ContradictionDesign  12 days ago

      Well, if you have enough budget, I would start with an AMD Threadripper system. 16 cores would be fine, and get whatever generation you can stand to buy. The newest Threadrippers are in the 7000 series, so the Threadripper Pro 7955WX is a 16-core, or the Threadripper 7960X is a 24-core.
      The motherboards for these use a special socket, so you would need to get one that is compatible.
      These systems give you large motherboards with tons of PCIe x16 slots. Unlike consumer boards, each slot gets full 16-lane bandwidth no matter how many GPUs are attached. They only fit about 4 blower-style GPUs, or maybe more with custom water-cooled GPUs.
      If you want a specific build list, I can put one together. But keep in mind, this route will cost $4k USD or more without a single GPU.
      Let me know if you would like a build list!
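      On the software side, here is a rough sketch of enabling every installed GPU for Cycles (assuming a recent Blender with the Cycles add-on; the "HIP" device type is my assumption for AMD cards, so swap it for whatever you actually install):

      import bpy

      # Minimal sketch: enable every installed GPU for Cycles on a multi-GPU box.
      # "HIP" is for AMD cards; use "CUDA" or "OPTIX" for NVIDIA (adjust to your hardware).
      prefs = bpy.context.preferences.addons["cycles"].preferences
      prefs.compute_device_type = "HIP"
      prefs.get_devices()                      # refresh the detected device list
      for dev in prefs.devices:
          dev.use = (dev.type != "CPU")        # turn on all GPUs, leave the CPU off
      bpy.context.scene.cycles.device = "GPU"  # make the scene render on the GPUs

      Run that once per machine (or at the top of a render script), and Cycles should split work across however many cards are plugged in.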

    • @manwithbatsuit
      @manwithbatsuit 12 days ago

      @@ContradictionDesign A build list would be great.
      Do you also have an alternative with regular desktop CPUs?

  • @404element-dj3jz
    @404element-dj3jz 17 days ago +1

    RX 7600 XT 16 GB for Blender 3D:
    can I use it for learning 3D modeling?

    • @ContradictionDesign
      @ContradictionDesign  16 days ago +1

      Yeah, it will work great for learning. It's a little slow for its price, but lots of VRAM is nice

    • @404element-dj3jz
      @404element-dj3jz 16 days ago +1

      thx :) ​@@ContradictionDesign

    • @ContradictionDesign
      @ContradictionDesign  16 days ago +1

      @@404element-dj3jz you're welcome!

  • @mars2nd898
    @mars2nd898 16 days ago +1

    RX 7700 XT when?

    • @ContradictionDesign
      @ContradictionDesign  13 days ago

      Honestly, I can maybe get to it in January. It has not been requested as much as others, but I can try to get it in the lineup.

  • @Petch85
    @Petch85 18 days ago +2

    OK, the 7600 XT is a little disappointing.
    The A770 also makes me very interested in seeing the B580. The B580 only has 20 RT cores where the A770 has 32. They are faster, but the difference seems large to me.
    Also, the 4070 Ti Super 16 GB card looks OK in these tests. It costs double the 4060 Ti 16 GB, but it is also ~2x the speed. And the 4090 is 2x the cost of the 4070 Ti Super but not 2x the speed.
    So the 4070 Ti Super might actually be the best value, if you have that kind of money to spend on a GPU. The 4090 does have 24 GB of VRAM, so you can make larger scenes with it.
    But I don't think it is a good time to buy the 4090 or the 4070 Ti Super right now. The 5090 will have 32 GB of VRAM, and the 5000 series might be much faster. Also, the 4090 is getting harder and harder to find and more and more expensive. So I don't think it is a good time to buy the top cards right now.
    Interesting testing, and the new lower-end cards seem further away in the future, so the timing might not be as bad for them right now.

    • @ContradictionDesign
      @ContradictionDesign  16 days ago

      Yeah I am really just excited for a 5090. I know most people won't get one, but it looks like it will be stupid fast, and hopefully have some new features that are not just architecture.
      They have mentioned "neural rendering" in the info leaked by Zotac the other day. So the 50 series should be interesting at the very least.
      Yeah I would wait to buy anything at this point, because there have barely even been discounts lately. And everything is selling out anyway.
      I am hoping to have these videos timed well with the new products coming out. Search traffic for GPU topics has been higher lately, which is great

    • @Petch85
      @Petch85 16 days ago +1

      @@ContradictionDesign Well, if the rumors are true and it has 170 RT cores vs the 4090's 128, then it looks to be about 30-50% faster than the 4090. If it is less than 30% I will be a little disappointed; if it is much more than 50% faster I will be surprised.
      I did see the Nvidia dev blog "Real-Time Neural Appearance Models" earlier, this summer I think. They used a 4090, and it could do 75 FPS as the reference and 250 FPS with 2 layers of 16 neurons. But honestly I don't think it looks that good. 3 layers of 64 neurons looks better, but that was about 100 FPS. 33% is not nothing, and maybe the 5090 is much faster at this. But for rendering in Blender I don't think it is worth it; I would prefer the better look of physically calculated path tracing.
      And for games, the training took them about 4-5 hours for each material on the 4090. (This seems insane to me for only 3 layers of 64 neurons in 300k iterations, so I might have misunderstood something. Also, maybe they can train more than one material at a time 🤷‍♂.) That seems like a lot of training time for a modern AAA game.
      I cannot see why this couldn't run on any RTX card, but maybe Nvidia just thinks they are too slow. 🤷‍♂
      Also "neural material" in games will probably take a few years to catch on. Honestly I am not that excited by it.
      They could also be referring to neural texture compression. Again, there is an older blog about it on Nvidia Research: "Random-Access Neural Compression of Material Textures".
      I have seen this idea many times: compress the textures in VRAM and decompress them on the fly in the GPU's rendering pathway. This would reduce the amount of VRAM a game needs quite a bit, since high-resolution textures take up a lot of space. The problem for me, though, is that the user is using high-resolution textures because they make the game look better. If you use lossy compression on those high-resolution textures, you are introducing new artifacts that make the game look worse. So you need to make sure the high-quality textures still look better than the medium-quality textures. But they do go from something like 256 MB to 3.5 MB for a 4K texture, so the compression level is insane (see the rough size math at the end of this comment). And in their example it still looks better than a 1080p version.
      To be fair, Nvidia already seems to have texture compression that is slightly better than AMD's. I do not understand this, because I thought they all used BC1 to BC7 from DirectX. Honestly, I thought they used lossless compression, but BC1-BC7 are not lossless; it looks like they work similarly to display stream compression. 🤷‍♂
      Still, if you can reduce texture VRAM usage to only ~2% and can run the neural network in "real time" to decompress them on the fly, then you could have an insane number of textures on screen at the same time. They say compression only takes about 1-15 minutes per 4K texture on a 4090, so a game studio could probably do this. The decompression is still slower than BC-type compression, but it seems to take about 1-2 ms (BC took 0.5 ms in their test).
      But again, I don't see why this couldn't be used on any RTX card. They are just using the tensor cores. The 2060 has 240 tensor cores, the 4090 has 512, and the 5090 is only rumored to have 680. So, fair enough, it can do it faster, but probably not much faster than 0.5 ms. So I don't see this changing much.
      Honestly, I have no idea what they mean when they say "neural rendering"; maybe it is something completely different. But I don't expect much more than marketing. 😂
      To be fair, if the US adds tariffs on GPUs, it might be a good time to buy now. But the 5090 is so close, and I would expect it to take some time before tariffs get passed and take effect. But maybe scalpers will buy everything; who knows. We are so close to the 5090 that I still think waiting is the better move.
      The GPU market has been tough for people spending less than $500 on a GPU for a while now. If we could get some interesting cards in the $200-400 price range, people would be much more excited about PC gaming. I must say I miss the old days when a new GPU was always clearly the best buy; they were just so much faster, and the price was about the same. I have not been truly impressed with GPUs since the GTX 10xx series. (The RTX 20xx cards were cool with all their new tech, but the price and the lack of game support took the top off the excitement for me. The 4090 was also an incredible card, but the price is out of my reach, and the total power is a little too much for me; 200 W already feels like a lot.) I also liked the 3060 12GB when it was new, but it was unavailable for a year, then expensive the next year, and then the new cards came out 🤦🏼‍♂. At the right price, the 3060 12GB could have been the new GTX 1060.
      We need cheaper cards to bring in new people who find PC hardware interesting and fun to play with. Then people would search more for GPU stuff, watch more videos, etc.
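      As a rough back-of-the-envelope for why a 4K material weighs roughly 256 MB uncompressed (this assumes plain 8-bit RGBA maps and a typical four-map PBR set, which is my guess, not the exact formats from the blog):

      # Hypothetical sketch: uncompressed VRAM footprint of a 4K PBR material.
      BYTES_PER_TEXEL = 4                         # assumed 8-bit RGBA, uncompressed
      RES = 4096                                  # one 4K texture
      per_map_mb = RES * RES * BYTES_PER_TEXEL / 1024**2
      maps = ["albedo", "normal", "roughness", "metallic"]   # assumed typical map set
      total_mb = per_map_mb * len(maps)
      print(f"{per_map_mb:.0f} MB per 4K map, ~{total_mb:.0f} MB for {len(maps)} maps")
      # -> 64 MB per map, ~256 MB total, vs ~3.5 MB per texture after neural compression

      So the ~2% figure above is roughly in line with the blog's 256 MB -> 3.5 MB numbers.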

    • @ContradictionDesign
      @ContradictionDesign  16 days ago

      @@Petch85 Well, you definitely know more than I do about the texture compression. I wonder if that would have much impact on Blender rendering anyway. Really, the BVH building and loading part of rendering animations is not the problem anymore (and if texture VRAM ever did become the problem, Cycles already has a texture size limit; rough sketch below).
      Yeah we'll see. Intel and AMD have a chance to make budget gpus again. Should be lots of fun!
      I appreciate your insights
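      For reference, a minimal sketch of that texture size-limit knob, written from memory, so double-check the property names against your Blender version:

      import bpy

      # Cap texture resolutions via Cycles' Simplify settings, trading texture
      # sharpness for VRAM. A blunt, existing alternative to distance-based swapping.
      scene = bpy.context.scene
      scene.render.use_simplify = True
      scene.cycles.texture_limit_render = '2048'   # max texture size for final renders
      scene.cycles.texture_limit = '1024'          # max texture size in the viewport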

    • @Petch85
      @Petch85 16 days ago

      @@ContradictionDesign I am just speculating. The two blogs are well known and not that hard to read. Also, there are pictures 😂. I can recommend them.
      Also I might be a little more jaded than you are. I have just been tricked into being excited for no reason too many times. But it might just be a case of "A little knowledge is a dangerous thing" 😂
      I don't understand well enough how Blender handles textures. I would have thought that if there was not enough VRAM to hold all the textures, you could just fetch them from system memory at the cost of rendering time. I think that would have worked well for tile-based rendering, since there is a good chance that all the paths in the same tile only hit a subset of the textures. But now, with GPU rendering, the tiles are often so huge that there is only one. Fair enough, you might have a lot of geometry that is not hit by any rays for a given frame, but still. And the times where I run out of VRAM are not due to textures, because I do not use a lot of textures; it is always due to simulations that create a very large amount of geometry. So compressing textures will not help me with that. Fair enough, geometry could also be compressed, but I have never heard of anyone doing this in practice. I expect there is a good reason why games use multiple levels of detail on geometry, mesh shaders, and Nanite instead of compressing the mesh data and decompressing it on the fly. 🤷‍♂
      I have never used mipmaps (swapping textures automatically depending on distance) or level of detail (swapping geometry depending on distance) in Blender, but I am sure that I have seen a video on level of detail and seen mipmaps in the menu somewhere, so I think Blender supports both. So I don't really see the benefit of compressing textures. You could maybe have one high-resolution version and one neural-compressed version of every texture, and then swap them when the total path length to the camera is longer than x. That would probably work well for both specular and diffuse reflections and direct rays. And then you only have two texture levels for each texture in your scene. 🤷‍♂
      But making a 1080p version of a 4K texture would take only a few ms, whereas training the neural network takes 1-15 minutes in that blog post. So I can't really see how we could benefit from this. But maybe someone else can 🤷‍♂.
      I am wondering what exactly is happening during the loading part. The bounding volume hierarchy is probably calculated by the CPU; then you have textures and geometry that need to be loaded from the SSD and passed over PCIe to the VRAM; and you may have shaders that need to be compiled or geometry nodes that need to be calculated. Honestly, it looks to me like it is using every part of the PC except the GPU 😂: CPU, RAM, SSD, and the PCIe connection. But the load time depends a lot on the scene. Next time I have one that takes a while, I might just look at Task Manager to see what is happening. But it is probably loading the geometry from a simulation off the SSD, followed by a huge BVH calculation on the CPU, because I have way too much unnecessary geometry in the scene and it needs to make way too many levels of bounding volumes 😂.
      Well, that got a bit long; mostly just listing all the things I know I don't know 😂.
      Have you seen the channel called Augury (@augur.y)? He made a recursive geometry node setup that I think looks very cool (also an easy way to get a lot of geometry 😂). I think the video is called "How to Make 3D Fractals in Blender with Recursive Instancing". I thought that was a very cool and fun idea.
      I think that might be my next little project in Blender. I need practice in geometry nodes. 🤓

  • @Roid935
    @Roid935 18 days ago +2

    First

    • @Roid935
      @Roid935 18 days ago +2

      Intel Arc A770 is 220 USD on Newegg

    • @ContradictionDesign
      @ContradictionDesign  17 days ago

      @@Roid935 Oh that is a great price actually. That's the 16 GB model?

    • @Roid935
      @Roid935 17 days ago +1

      ​@@ContradictionDesign yes

    • @ContradictionDesign
      @ContradictionDesign  17 days ago

      @@Roid935 yeah I tried to find one, and it was sold out for that price.