Nvidia RTX 4060 Blender and Cinebench 2024 Benchmarks

  • Published Dec 30, 2024

Comments • 19

  • @heirzy · 2 days ago · +2

    Awesome vid man. Any chance you can test whether dual 4070s (Ti/Super) can beat a single 4090 in rendering speed?

    • @ContradictionDesign · 2 days ago · +1

      Hey thanks! If you divide the test results in half, they represent two GPUs each rendering separate frames, which is the fastest way to render in Blender.
      I could maybe test dual GPUs on a single scene together, but they don't scale to 2x speed; there is some loss.
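      A rough sketch of the arithmetic behind that (the frame time and the shared-frame efficiency factor below are hypothetical placeholders, not measured results):

      ```python
      # Compare two ways of using multiple GPUs in Cycles: each GPU rendering
      # its own frames vs. all GPUs sharing one frame with some scaling loss.

      def frames_per_hour(single_gpu_frame_seconds: float, num_gpus: int,
                          shared_frame_efficiency: float = 0.85) -> dict:
          # Separate frames: throughput adds up almost linearly with GPU count.
          separate = num_gpus * 3600 / single_gpu_frame_seconds
          # Shared frame: sub-linear scaling, modeled by a flat efficiency factor.
          shared = num_gpus * shared_frame_efficiency * 3600 / single_gpu_frame_seconds
          return {"separate_frames": separate, "shared_frame": shared}

      if __name__ == "__main__":
          # e.g. a frame that takes 90 s on one card, spread across two cards
          print(frames_per_hour(90.0, num_gpus=2))
      ```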

  • @sum9796 · 2 days ago · +1

    Would love the B580 review.

    • @ContradictionDesign · 2 days ago

      @sum9796 Yeah, I am for sure getting one as soon as they're available. Hopefully they'll be in stock in a couple of weeks.

  • @burnedmemory · 2 days ago · +1

    I'm getting a 4060 tomorrow.

    • @ContradictionDesign · 2 days ago

      @burnedmemory Nice! It is fast for its class. Using it for 3D work?

    • @burnedmemory · 2 days ago · +1

      @ContradictionDesign D5 Render, I had a GTX 1070

    • @ContradictionDesign · 2 days ago

      @burnedmemory awesome! That will be a huge upgrade!

  • @Petch85 · 2 days ago · +1

    It is crazy how much faster the 4060 is than the 7600 XT. 😱

    • @Roid935 · 2 days ago · +1

      Software optimization and CUDA

    • @ContradictionDesign · 2 days ago · +1

      Yeah, it is pretty quick. I was surprised by that. But the VRAM situation makes it hard to recommend, for 3D at least. You can pick a slow 7600 XT with good VRAM, or a fast 4060 with low VRAM. They all have the marketing figured out for this stuff.

    • @ContradictionDesign · 2 days ago · +1

      Yep, CUDA is so much better than HIP-RT that even AMD GPUs run faster through CUDA with the ZLUDA workaround. Pretty funny really.
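      For reference, a minimal sketch of how the Cycles compute backend is switched from Blender's Python API (the backend names are the standard Cycles ones; getting an AMD card to show up under CUDA via ZLUDA is a separate, unofficial setup step not covered here):

      ```python
      # Run inside Blender (scripting tab or --python): pick a Cycles backend
      # and enable the matching GPU devices.
      import bpy

      prefs = bpy.context.preferences.addons["cycles"].preferences
      prefs.compute_device_type = "CUDA"   # or "OPTIX", "HIP", "ONEAPI"
      prefs.get_devices()                  # refresh the device list

      for device in prefs.devices:
          # Enable GPUs for the chosen backend, leave the CPU entry off.
          device.use = device.type != "CPU"
          print(device.name, device.type, "enabled" if device.use else "disabled")

      bpy.context.scene.cycles.device = "GPU"
      ```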

    • @Petch85 · 2 days ago · +1

      @ContradictionDesign Is this true... Is ZLUDA faster than HIP-RT?
      I expected the AMD GPUs to be slower because they don't really have dedicated RT cores. It looks like AMD is trying to avoid having too much specialized hardware in their chips. They do have something they call Ray Accelerators (I think), but those are built into the rest of the architecture rather than being standalone specialized cores. (I think of them as CUDA cores with some ray acceleration attached.)
      So on an Nvidia GPU the RT cores can do the bounding volume hierarchy (BVH) traversal while the CUDA cores handle the more general shading computations (if I understand it correctly). On AMD the compute cores do all the work; they just have special hardware to make BVH traversal faster, but the compute cores are still needed for it. I think AMD's reasoning is that for a given die size you can fit more compute cores and get high utilization of the whole GPU while ray tracing, and when you aren't ray tracing you can still use all of that extra compute area, so the card might even be faster at non-ray-tracing work.
      Nvidia seems to think that having part of the die specialized to do one job extremely fast and efficiently is better, because the specialized area is so much more efficient at its work that it does not matter if "half" of the GPU does nothing for a few ms.
      So if I understand this correctly, it would surprise me very much if translating CUDA through ZLUDA and using Blender's CUDA kernels were faster than using HIP-RT. On the other hand, if it is true, then there is hope that HIP-RT can be improved to make every AMD GPU faster.
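      One way to check this on your own hardware is to render the same scene once per backend from the command line and time it. A sketch under a few assumptions: blender is on PATH, benchmark.blend is a placeholder scene, and the timings include Blender startup and scene load, so they are only a rough comparison:

      ```python
      # Time frame 1 of the same scene on different Cycles backends.
      # "--cycles-device" is a standard Cycles option; everything after "--"
      # is passed to the addon rather than to Blender itself.
      import subprocess
      import time

      BLENDER = "blender"          # assumes blender is on PATH
      SCENE = "benchmark.blend"    # hypothetical test scene

      for device in ("OPTIX", "CUDA", "HIP"):
          start = time.perf_counter()
          result = subprocess.run(
              [BLENDER, "-b", SCENE, "-f", "1", "--", "--cycles-device", device],
              capture_output=True,
              text=True,
          )
          elapsed = time.perf_counter() - start
          status = "ok" if result.returncode == 0 else "failed (backend unavailable?)"
          print(f"{device}: {elapsed:.1f} s, {status}")
      ```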

    • @Petch85 · 2 days ago · +1

      @ContradictionDesign I guess that is why they have the 4060 Ti with 16 GB of VRAM... Just to get you to spend a little more than you wanted to.
      That said, there are a lot of optimizations/compromises you can make in Blender to reduce the amount of VRAM you need (a few are sketched below), so you can still make a lot of cool shit on a 4060 with only 8 GB of VRAM, even though that should not be a thing in 2024. I think 12 GB of VRAM would have made the 4060 much easier to recommend.
      That said, the 4060 feels a lot like a 4050 to me (except for the pricing), making the 4060 Ti the real 60-class card. Nvidia is trying to confuse us and push up pricing with their GPU naming scam. 😂
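      A minimal sketch of a few of those VRAM-saving settings via Blender's Python API (property names as found in recent Blender releases; double-check them in your version, and the values here are only illustrative):

      ```python
      # Trade some quality for lower peak VRAM on an 8 GB card.
      import bpy

      scene = bpy.context.scene

      # Simplify: clamp subdivision levels and downscale oversized textures at render time.
      scene.render.use_simplify = True
      scene.render.simplify_subdivision_render = 2
      scene.cycles.texture_limit_render = "2048"   # textures above 2K get scaled down

      # Persistent data keeps the scene resident between frames; disabling it
      # trades re-upload time for lower peak memory use.
      scene.render.use_persistent_data = False

      # Rendering at a lower resolution percentage also shrinks the frame buffers.
      scene.render.resolution_percentage = 75
      ```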