Two GPUs in a Desktop computer, x8 PCIE Performance.

  • Published 14 Sep 2021
  • #Nvidia #RTX #Blender
    Comparing x8 vs. x16 PCIe performance. Desktop vs. workstation GPU setups.
    Redshift: www.redshift3d.com/
    Attic Scene: drive.google.com/file/d/1zqnU...
    Discord: / discord
    Facebook: / mediamanstudioreview
  • Film & Animation

Comments • 297

  • @vaibhavtailang
    @vaibhavtailang 2 years ago +14

    Just the video I wanted. Thank you very much, sir! You've helped a lot in clearing my doubts. I went with a 5900X setup instead of a Threadripper workstation purely because of the price factor, and because as an independent artist I could afford the performance loss compared to spending a thousand bucks extra for a Threadripper workstation, which I couldn't afford at this point anyway. But I surely get the point that at some point later I will need to shift to a workstation. I also get that, apart from performance, I don't have much hardware flexibility or upgrade path with a desktop, but for now the 5900X setup is just the optimum sweet spot for me.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +2

      Hi vaibhav tailang, I understand your situation, as I myself have a 5900x for the same reasons. The system is great for most work. I edit all my videos on the 5900x and I really like the performance I get for the money.
      Good luck
      Thanks for watching

  • @tojiyev_cg
    @tojiyev_cg 1 year ago +6

    Great video. It is the ONLY video about this topic on YouTube. Thank you guys.

    • @phasepanther4423
      @phasepanther4423 10 months ago

      But why though?
      Just because SLI/NVLink and CrossFire aren't supported anymore, there's no reason to ignore this at all.

    • @tojiyev_cg
      @tojiyev_cg 10 months ago

      @@phasepanther4423 I meant that it is the only video about using dual GPUs on consumer motherboards, which usually have 16 PCIe lanes; in that case you can use 2 GPUs with x8 lanes for each of them. There is no other video about this topic. I didn't say anything about SLI or CrossFire not being supported.

  • @rifatkabirsamrat5237
    @rifatkabirsamrat5237 9 months ago +4

    You guys are lifesavers. Literally the only video on this topic on YouTube. Thank you so much!

  • @DerelictNacho
    @DerelictNacho 1 year ago +1

    I greatly appreciate the way you produced this video. To the point. No clickbait title. Straightforward. Thank you!

  • @guillermocosio7362
    @guillermocosio7362 2 years ago +3

    Hey Mike! Using PCIe Gen4 x8 gives the same bandwidth as PCIe Gen3 x16. A 3090 at full speed will be bottlenecked by about 1% on PCIe 3 x16, so a 3070 shouldn't be bottlenecked at all on PCIe 4 x8. However, using an older system with only PCIe Gen3 (Intel 10th Gen or Ryzen 2000) and dual graphics cards may result in more of a performance difference.
    Also, I believe that during rendering, if the scene fits within the GPU VRAM, the PCIe interface is only used to feed the GPU during loading. Where PCIe speed could really hurt is when scenes don't fit in VRAM and the GPU needs to use the PCIe interface to access system RAM as a cache. So in huge scenes the performance difference may actually be huge as well.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Guillermo Cosio, I totally agree, thanks for sharing this with the channel.
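
    A quick check of the numbers above (a minimal sketch; the per-lane rates are the published PCIe figures after encoding overhead, roughly 1 GB/s per lane for Gen3 and 2 GB/s for Gen4):

      # Approximate usable PCIe bandwidth per lane, in GB/s (after encoding overhead).
      PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

      def pcie_bandwidth(gen: int, lanes: int) -> float:
          """One-direction bandwidth for a link of `lanes` lanes at generation `gen`."""
          return PER_LANE_GBPS[gen] * lanes

      print(pcie_bandwidth(4, 8))   # ~15.8 GB/s: Gen4 x8
      print(pcie_bandwidth(3, 16))  # ~15.8 GB/s: Gen3 x16, the same pipe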

  • @ChrisDallasDualped
    @ChrisDallasDualped 2 years ago +2

    Amazing info you've given us and have subbed to your channel today, you have the best rendering info on the net that I've seen, thank you for this.

  • @itsaman96
    @itsaman96 1 year ago

    Thank you for making this video, it's a really detailed look at a theory that always comes to my mind.

  • @laszloperesztegi
    @laszloperesztegi 1 month ago

    Thank you for the video. I'm here to compare the two methods for video transcoding performance.

  • @clausbohm9807
    @clausbohm9807 1 year ago +3

    These are the questions few reviewers cover, thanks for the effort.

  • @rayornelas8459
    @rayornelas8459 8 months ago

    Really informative. It would be great to note your full specs in your videos.

  • @jinomian01
    @jinomian01 1 year ago

    Thank you for doing this. This helped me understand what the impact is if I go down the desktop dual GPU route. I don't need 2 at the moment (and have never done that on any of my builds) but I would like the option in my next build.

    • @MediamanStudioServices
      @MediamanStudioServices  1 year ago +1

      Two GPUs do work in a desktop if you can fit the cards into the system and MOBO, and for 3D rendering the x8 PCIe speed does not affect the performance that much. It's only about 2-3% in final render time, due to the transfer of the scene to the GPU. So a desktop is a viable solution for dual GPU setups. Thanks for watching

    • @jinomian01
      @jinomian01 1 year ago

      @@MediamanStudioServices Thank you Mike, that is great to know.

  • @kylecook4806
    @kylecook4806 1 year ago +2

    I have a 5950x and was thinking of switching to a workstation with next-gen stuff. This vid though was exactly what I was looking for; I'm getting a second GPU and I'll be staying on my current system for a long while.

    • @xMaFiaKinGz
      @xMaFiaKinGz 1 year ago +1

      You can actually go as low as x4 with PCIe 4.0 without losing performance. I have an RTX 3070 on PCIe 3.0 x8 and it works flawlessly with no performance loss.

  • @elaymondes7205
    @elaymondes7205 4 months ago

    Hi,
    First of all, thank you very much for the great video. I'm currently planning to put together a workstation for Unreal Engine with two RTX 4090 cards and a motherboard for the i9 14900K chip.
    Do you have a good tip on which motherboard can run both cards at full speed without splitting the lanes?
    I've been searching for days, but unfortunately without success, because I'm not that deep into the subject.
    Thank you
    Best regards

  • @peterxyz3541
    @peterxyz3541 1 year ago +2

    THANKS! I'm less confused now. Question: does the MOBO auto-balance the workload? I'm planning to use 2 different Nvidia cards for AI text & image generation.

  • @Afflictionability
    @Afflictionability 9 months ago +2

    thank you I needed this

  • @vinee.mp4
    @vinee.mp4 1 year ago

    Huge thanks for this informative video!

  • @andreybagrichuk5365
    @andreybagrichuk5365 2 years ago +1

    Amazing content, thank U, Mike !!!

  • @clausbohm9807
    @clausbohm9807 2 years ago +1

    Great compares! Great explanation. Great video! Again, please do a 3060 12GB test, since it has 4GB more than the 3070s.

  • @TheArtofSaul
    @TheArtofSaul 2 years ago

    Something to keep in mind, especially in Redshift's case, is how out-of-core rendering works with slower PCIe bandwidth. PCIe speed isn't too much of an issue when everything is loaded at once in a single go and fits in VRAM. But since out-of-core is pretty robust in RS, you can work on much, much larger scenes, and then PCIe speed can make a large difference in the total bandwidth of data moving back and forth from system RAM / scratch disk (worst case, but not bad on M.2) versus a single load into VRAM.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Saul, I am not an expert on out-of-core VRAM usage, but if this video is any indication of how GPUs perform when they run out of memory, then I'd say more VRAM is always better. At the end of this video I render a very large scene with RS, and the RTX A6000 crushed the RTX3090 due to the limited VRAM.
      th-cam.com/video/M2I69uK9WVU/w-d-xo.html
      Thanks for watching

  • @OfficialHankIII
    @OfficialHankIII 1 year ago +1

    THANK YOU FOR YOUR DIRECT-TO-THE-POINT VIDEOS!!!!!

  • @Frederic_walls
    @Frederic_walls 2 years ago

    Excellent! Thank you very much for the information, it is very helpful

  • @3dpixel583
    @3dpixel583 2 years ago

    Hi! Thanks for your comparison. This is exactly what I am looking for!
    I have a question though. Do you think the CPU difference actually contributes to the time difference?

  • @Gettutorials123
    @Gettutorials123 2 years ago +1

    Really informative!

  • @rockyvicky3848
    @rockyvicky3848 2 years ago +2

    Sir, you are really making YouTube useful.

  • @fenghanlin4284
    @fenghanlin4284 2 years ago +1

    Thanks for sharing. I had the same experience. I used a Supermicro 2024US-TRT with two A100-40GB cards for GPU-accelerated computing (simulation). I made a mistake by putting one of the cards into an x8 slot. I could see from nvidia-smi that both cards ran perfectly at 99%, until one day I noticed that one GPU was at x16 and the other at x8. I wondered whether it affects anything, so I loaded tasks individually onto the card at x16 and onto the one at x8, separately. I didn't see any difference at all! Then I wondered if it changes with different loads, by which I mean different memory usage. To confirm that, I tested different loads, from occupying 10GB of memory up to the full 40GB. I didn't see any difference. Later I did the same tests on two 3090 cards (turbo fan) on a 10980XE workstation (a Dell 5820X). Sorry, I can't test the A100s in the workstation because of heat issues. I put one of the RTX3090 cards in an x16 slot and later an x8 slot. This time I could see about a 20% difference in simulation speed. Note that the 5820X only provides PCIe 3.0. Then I wondered if the difference was caused by the difference between GeForce and Tesla, but I can't put the 3090s into the AS-2024-US chassis because the cards are too long (MSI Aero). So I roughly concluded that PCIe 3.0 x16 or PCIe 4.0 x8 may be good enough for my applications, but it may vary subject to different use cases. Thanks a lot for sharing.
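
    For anyone reproducing this kind of check, the negotiated link generation and width can be read out of nvidia-smi; a minimal sketch (the query field names below are standard nvidia-smi ones, but confirm against `nvidia-smi --help-query-gpu` for your driver version):

      # Print each GPU's negotiated PCIe generation and lane width.
      import subprocess

      out = subprocess.run(
          ["nvidia-smi",
           "--query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current",
           "--format=csv,noheader"],
          capture_output=True, text=True, check=True,
      ).stdout

      for line in out.strip().splitlines():
          idx, name, gen, width = (s.strip() for s in line.split(","))
          print(f"GPU {idx} ({name}): PCIe Gen{gen} x{width}")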

  • @sanaksanandan
    @sanaksanandan 2 years ago +2

    Very useful test. I guess with older gen GPUs like RTX 2070 or 1660, the performance drop will be negligible.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      The performance difference should be about the same across GPU generations.
      Thanks for watching

  • @SaladFX
    @SaladFX 10 months ago +3

    Really wanted to know how two different GPUs would work for production workloads, especially rendering in XPU.

  • @Slacker28
    @Slacker28 1 year ago

    I don't know if you can answer this, but I'm rendering with a high-end gaming PC using Stable Diffusion, and I just want to know if x8 PCIe Gen 4 would reduce quality, because I want to put more NVMe drives into the machine and I don't want to do it if it's going to compromise my AI rendering.

  • @Atsolok
    @Atsolok 2 years ago

    I was looking for this answer and you're the only one that could tell me.... Subscribing!

  • @fadegboye
    @fadegboye 2 years ago +2

    Hi Mike! Thanks for the video; it always has lots of useful information. Just one thing: at 4:03 you mention a V-Ray benchmark scene, but that is a Redshift benchmark scene. Is that correct? Cheers.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Yup, my bad, but I cannot change it now. YouTube will not allow any edits after you submit the video; I would have to delete the video and repost it.
      Thanks for pointing that out

    • @fadegboye
      @fadegboye 2 years ago

      @@MediamanStudioServices No problem. Good content.

  • @junaidcg
    @junaidcg 2 years ago

    Hi Mike, another great video from you. Do you think that for heavy rendering there is likely to be a significant difference between a desktop processor and a workstation processor, just like the test you did for the RTX A6000 vs RTX 3090, where the difference was negligible until you tested memory-intensive rendering?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      I don't think so, as the CPUs are close in specs, so there should not be much difference in the test results.

  • @DigitalDesigns1
    @DigitalDesigns1 2 years ago

    Looking to build a machine to help my real-time Cycles rendering in the viewport. I've seen the 3090 nearly render certain scenes in real time; I want this level of performance, but getting a 3090 nowadays is hard. Would 2 3080s outperform 1 3090 in rendering, video editing, etc.?

  • @comodore3684
    @comodore3684 1 year ago +1

    Great channel. For the video, I was expecting a benchmark comparison between one desktop GPU vs. 2 GPUs: is it worth adding a second GPU for rendering, or is it worse, or the same? Maybe you have another video that deals with this question; otherwise it would be nice to do one. Keep up the good work.

    • @MediamanStudioServices
      @MediamanStudioServices  1 year ago

      Hi Comodore, I have a few dual GPU videos on my channel. Check them out. Thanks for watching

  • @georgigazeto4650
    @georgigazeto4650 1 year ago +2

    Are you going to make a new dual GPU video with the 4080 or 4070 Ti?

  • @deadmikehun
    @deadmikehun 1 year ago

    Awesome video, however I have a question. Do SATA drives use any PCIe lanes, or do they use the dedicated x4 link that is needed for the chipset? I have an Asus B450-F Gaming motherboard that I currently use with a GTX 1070. I have ordered an RTX 3080 Ti and I want to use them simultaneously in Blender, but I'm not sure whether I should upgrade my motherboard. The motherboard manual says that in a dual GPU configuration GPU1 would get x8, GPU2 would get x4, and the NVMe SSD would get x4. I'm also using 4 SATA HDDs for storage, but if they use any additional PCIe lanes other than the 4 required for the chipset, I will lose a lot of performance on the GPUs. Is it even worth using a secondary 1070 with a 3080 Ti? It's still CUDA cores though... Thank you!

    • @mikebrown9826
      @mikebrown9826 1 year ago

      SATA uses your chipset and not PCIe lanes. As for using the 1070, there are some advantages, but with only 4 lanes it may be better to just use the 3080 Ti at full speed with the 16 lanes.

  • @DigitalDesigns1
    @DigitalDesigns1 2 years ago

    Love your videos. new subscriber..

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Dewayne Dailey. For video editing, unless you use Resolve Studio (the paid version), most video editing software does not use the GPU for much acceleration. And rendering in the viewport is a single-GPU task; just final 3D rendering can use multi-GPU setups.

  • @lejeffe1663
    @lejeffe1663 2 years ago

    Awesome, been eyeballing a Threadripper Pro / R9 dual-system build :)

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Jeffe, I wish you luck finding a system in this messed up market.
      All the best.
      Thanks for watching

  • @yasherok
    @yasherok 2 years ago +1

    Hi Mike,
    Could you please say what motherboard you use for the desktop machine in this video, and what cooling system? As I see it, you use blower versions of the GPUs in this rig, right?
    Also, what do you think about the GIGABYTE X570S AORUS MASTER motherboard for use with two fan-style GPUs and a Ryzen 5950X? It looks like it has enough room for airflow between two GPUs, and it has decent VRMs for this processor? Trying to figure out the best config for my situation.
    Thank you for your channel, there is a lot of useful and unique information here!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Hi Vlr, I am using the Gigabyte X570 Aorus Pro WiFi.
      www.gigabyte.com/Motherboard/X570-AORUS-PRO-WIFI-rev-10#kf
      For the cooling system I did my own custom loop; check out the videos I did here.
      th-cam.com/video/r_Rl30LhE2M/w-d-xo.html
      th-cam.com/video/dwfn5IQXnIE/w-d-xo.html
      The GIGABYTE X570S AORUS MASTER is a good choice.
      Thanks for watching

  • @nawafalrawachy394
    @nawafalrawachy394 2 years ago

    Hi Mike,
    Thank you for taking the time to provide this valuable information.
    If I may ask, what is the keyboard you use in your videos? Thank you.

  • @pascaljean2333
    @pascaljean2333 1 year ago

    So do the cards share memory? It almost sounds like dual 3060 12GB cards would be a sweet spot.

  • @andreasrichman5628
    @andreasrichman5628 1 year ago +1

    Does it affect the southbridge temperature? Significantly, I mean.

  • @KiraSlith
    @KiraSlith 9 months ago

    It sounds like a lot of that performance reduction comes from switching time loss. I was going to build a gaming system and just add a second GPU later, but I ended up cowering out and just throwing a pair of 2080 Tis into a T7820. $1k and I got a rig that happily does both Blender with RT and AI production work.

  • @DasunGunathilaka
    @DasunGunathilaka 2 years ago

    Hey, thanks for the video.
    I have a question.
    Is there a difference in video quality when you export with hardware acceleration enabled (Nvidia, AMD, QuickSync) compared to just using the CPU for video decoding and encoding?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Hi Dasun Gunathilaka, there is always a difference when using a different encoder, be it hardware or software, but the difference should be very small and the quality should be about the same.
      Thanks for watching

  • @arasdarmawan
    @arasdarmawan 2 years ago

    Hey Mike. Is there a way to stack 6 RTX 3080/3090 blower GPUs on a single workstation motherboard? I've also been looking at mining motherboards (that can fit 6-12 GPUs), but they reduce the PCIe link to x1. Your video here showed very little difference between x16 and x8, so I'm curious if x1 is pushing it way too far for a GPU render farm? Need your advice.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +2

      My question to you is why you would want to do this. Just buy a GPU server for this task; Gigabyte makes a great server for just this style of workflow. Anything over 3-4 GPUs in one workstation requires a server chassis.

  • @santiag0pictures
    @santiag0pictures 9 months ago +1

    Thank you so much! It's hard for many of us investing our savings in hardware, trying to get the best performance at the lowest cost. Have you ever considered power consumption? Having too many machines working 24/7 affects the electricity bill. So, is it cheaper to have many GPUs in one PC vs. many PCs with single GPUs?
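
    One way to reason about that question is that every extra PC pays its CPU/MOBO/PSU overhead again, while extra GPUs in one box only add their own draw. A rough sketch with made-up wattages and an assumed $0.15/kWh rate:

      # Rough 24/7 monthly energy cost: N GPUs in one PC vs. N single-GPU PCs.
      # All wattages and the $/kWh rate are illustrative assumptions.
      RATE = 0.15    # $ per kWh (assumed)
      GPU_W = 300    # per-GPU draw under render load (assumed)
      BASE_W = 150   # CPU/MOBO/PSU overhead per running PC (assumed)

      def monthly_cost(num_pcs: int, gpus_per_pc: int) -> float:
          watts = num_pcs * (BASE_W + gpus_per_pc * GPU_W)
          return watts / 1000 * 24 * 30 * RATE

      print(monthly_cost(1, 4))  # one PC with four GPUs
      print(monthly_cost(4, 1))  # four PCs: pays the 150W overhead four times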

  • @lucapupulin6990
    @lucapupulin6990 2 years ago +1

    Very useful information!
    Thanks for the awesome benchmarks you do!
    It would be interesting to see how much simulation speed (Houdini Pyro or Vellum) is impacted using x16 vs x8 PCIe.

  • @M.D.design
    @M.D.design 2 years ago +1

    Thanks a lot for the review.
    I was planning to build a system with a Ryzen 9 5950X and two 3090s or 3080/80 Tis, mainly depending on what I can find on the market, because GPUs cost a lot where I live (2500€ on average for a custom 3090).
    But obviously I would be more inclined towards the 3090s, also to try NVLink.
    I also considered a workstation platform like the Asus WRX80 with a 3955WX (because I do 80-85% of the work on the GPU and don't need a lot of cores), but it's more expensive.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Man, you could try a Lenovo P620 with the 12 or 16 core. The space inside the case is limited, but I have installed two RTX3080s in the case and it worked. Two RTX3080ti founders cards would fit no problem in the P620, and Lenovo is offering the 3080ti if you can get a local SI to do a special order from Lenovo. Don't buy from the website, as the prices are better from a reseller.
      But a Ryzen 3950X is a great system as well; just pick a good MOBO. The X570 Taichi is a great board with good VRMs.
      www.asrock.com/mb/amd/x570%20taichi/
      You will need a 4-slot-spacing NvLink bridge with this board, but you will want the extra space with the big RTX3090s.
      Or better yet, a custom loop water cooled system is your best choice. Check out my video on bending PETG tubing.
      th-cam.com/video/dwfn5IQXnIE/w-d-xo.html
      Thanks for watching

  • @eric6606
    @eric6606 2 years ago +1

    Would love to see a review of the new Mac Pro/Max vs RTX 3090/A6000. Great content btw.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Hi Eric, not that I am a Mac hater, but there is not much support for GPU rendering on an Apple system, so I have no desire to create a Mac video on 3D rendering performance.
      Thanks for watching

  • @pranavsuthar5864
    @pranavsuthar5864 1 year ago +1

    Hey mate! I am a bit confused about whether to buy a gaming PC or a workstation PC for CAD work. I searched custom-build PC websites and found that the Z690 chipset supports both workstation (Nvidia RTX A2000) and gaming (Nvidia RTX 3080 Ti) graphics. My question is: can I put both these cards on the chipset at the same time, so that while working in CAD software it uses the workstation card (for compatibility as per hardware certification) and while gaming it uses the gaming card?

    • @mikebrown9826
      @mikebrown9826 1 year ago +2

      Why use two GPUs? Just get the 3090 or 4090, as they also work well with CAD applications.

  • @jinron24
    @jinron24 2 years ago

    Here are two questions: one, if two cards at x8 are only 5-9 percent slower, why? And second, is there a way to fit complex scenes on limited-memory cards?

  • @user-rw8kl7xy5n
    @user-rw8kl7xy5n 3 months ago +2

    Thank you for this video. It helped explain a lot. I have a dual GPU setup with two 4070 Supers. My current board has one PCIe x16 5.0 slot and one PCIe x16 4.0 slot. I am in the process of getting a new motherboard with 2 PCIe x16 5.0 slots. Would this improve performance?

    • @wtfskilz
      @wtfskilz 2 months ago

      It would, however it's still going to run at x8.

  • @tay.0
    @tay.0 2 years ago +1

    I think most of the differences you are running into are caused by CPU differences between the two builds, since the Light Cache and some Global Illumination still use the CPU even during GPU renders.

  • @drengot1066
    @drengot1066 2 years ago

    really informative

  • @JohntheNX
    @JohntheNX 1 year ago +2

    Can you make a new test but with PCIe 5.0? I'm planning to build a new rig with a 7950X and an X670 mainboard. I read in the manual that if I use the 2 PCIe 5.0 slots, they will share lanes at x8 each, but I'm not sure if there's any reduction considering it's faster.

    • @jinomian01
      @jinomian01 1 year ago

      I would love to see this as well. Similar build I am considering.

  • @superstite21
    @superstite21 2 years ago

    Great video! How does rendering software handle two GPUs in a non-SLI/CrossFire/NVLink configuration vs. an SLI/CrossFire/NVLink one?
    Thank You

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +4

      When I get some more GPUs I will test this out and do a video.
      Thanks for watching

  • @TheBlueVaron
    @TheBlueVaron 1 year ago +1

    Would DirectX 12 use both GPUs as one for gaming?

  • @anyonecansayeverything
    @anyonecansayeverything 6 months ago

    good point

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 2 years ago

    Interesting information. So the bandwidth of PCIe 4.0 is at this moment hardly used, if an x16 slot and an x8 slot can stay within 10% of each other. And now we already have PCIe 5.0 while PCIe 4.0 is hardly fully used; maybe storage attached to the PCIe bridge makes better use of the available bandwidth.

  • @ahmedsouody1765
    @ahmedsouody1765 1 year ago

    Thanks for the great content. What is the power supply capacity for the desktop?

  • @garyc5245
    @garyc5245 3 months ago

    Well, even though this is 2 years old, it is still very helpful. My situation is using an NVLink bridge on two 3090 Tis for a client that wants the VRAM of an A6000 without the price. I was able to find an NVLink bridge, but it is difficult finding a motherboard with the 81mm spacing I need between the two 5.0 x16 slots.

    • @hareram4233
      @hareram4233 2 months ago

      Did you find one? I also need suggestions for this motherboard.

    • @garyc5245
      @garyc5245 2 months ago

      @@hareram4233 I ended up getting the TRX50 Aero D by Gigabyte; it has 3-slot spacing, and I found a 3-slot NvLink online. Still in the works, but it looks like Linux can support NvLink even if Windows does not... at least I hope so.

  • @TheGamingREZ
    @TheGamingREZ 1 year ago +2

    Guys, could someone help?
    I have a Gigabyte Z690 Aorus Elite DDR4 motherboard with an RTX 3080 in there.
    Is it worth it if I put another one in there (I already have another 3080, and also a 1070 Ti)? I use the PC for gaming / video editing.

    • @mikebrown9826
      @mikebrown9826 1 year ago

      For gaming it's not going to help you, as you cannot SLI/NVLink the GPUs. For video editing, only Resolve can use multiple GPUs.

  • @dan_hale
    @dan_hale 2 years ago

    Thank you very much for this video! I would really like to see this testing done again with RTX 3090s. Also, I'm curious about the responsiveness when updating a scene in the viewport. I would think the increase in bandwidth with x16 would significantly reduce latency, making the viewport much more snappy.
    I am currently running my dual 3090s with a 5800X; however, I'm planning on upgrading to TR Pro after watching this video. Thank you!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Daniel. Only one GPU is used when interacting with the viewport in 3D programs, so if you want to know the performance, just take out one of your GPUs and it will run at full x16 on PCIe Gen 4 in your system. You can then compare against the performance of the two GPUs running at x8.
      Thanks for watching

    • @dan_hale
      @dan_hale 2 years ago

      @@MediamanStudioServices Thanks! I also wonder if the performance difference has more to do with the CPU/platform than with the PCIe lanes. Otherwise, the custom build of Blender that I use utilizes all GPUs even in the viewport (it's quite spectacular). So I didn't even think to try this.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      @@dan_hale The Ryzen 5xxx CPUs are very fast and I do not think the CPU is a limiting factor in my tests; in fact the 5900x is clocked faster than the 16-core TR Pro CPU I used. So I still believe the x8 PCIe is the limiting factor in this setup.
      I would also be very interested in your custom setup, as I know of no system that can use two GPUs in the viewport for interacting with the software like this. Can you supply more information on this?

    • @dan_hale
      @dan_hale 2 years ago

      @@MediamanStudioServices Download the latest Blender experimental Cycles X build. There are also various other builds of Blender that support multi-GPU in the viewport in various ways, such as K-Cycles and E-Cycles. I've switched entirely to rendering in Blender due to its unbelievable leverage of GPU power. It's seriously, seriously fast, and the new versions and custom builds are even faster.
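
    Since this thread is about getting Blender onto every installed card: enabling all CUDA devices for Cycles can be scripted from the Python console; a minimal sketch for recent Blender versions (the preferences API has shifted slightly between releases, so treat the exact calls as approximate):

      # Enable every CUDA device for Cycles rendering (run in Blender's Python console).
      import bpy

      prefs = bpy.context.preferences.addons["cycles"].preferences
      prefs.compute_device_type = "CUDA"      # or "OPTIX" on RTX cards

      for device in prefs.get_devices_for_type("CUDA"):
          device.use = True                   # tick every listed GPU
          print(device.name, "enabled")

      bpy.context.scene.cycles.device = "GPU" # make the scene render on the GPUs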

  • @hardwire666too
    @hardwire666too 1 year ago +1

    GPU prices are way down now, and the used market is starting to flood. I really want a 3090, mainly for the VRAM, but the prices on used GPUs are hard to ignore. I already have a 2070, and used ones are starting to pop up for $250-ish bucks. It may not increase my VRAM or let me worry less about scene size, but if it can shave some significant time off my renders in Blender 3.xx, I'd call it a win. Would be curious to see what you think.

    • @MediamanStudioServices
      @MediamanStudioServices  1 year ago

      Well, I think the 3090 would shave off quite a bit of time vs. the 2070. If you can get a 3080 or 3090 at a good price, then you should get one.
      Thanks for watching

  • @dulichauch
    @dulichauch 2 years ago +1

    That's the info! Thank you so much! I want to build this for rendering and modelling in Blender, with the PSU and case chosen with upgradeability in mind. The 5950X is out of budget, so I hope the 5900X will be able to handle the MOBO and a future 2 GPUs. My list: Ryzen 5900X, X570 Taichi, 32GB Vengeance LPX, RTX 3060 plus a 2nd GPU in the future, Thermaltake Level 20 HT case, Deepcool 360EX, 2x NVMe WD SN750, Seasonic Prime PX-1300. Thank you for any input.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +2

      Hi dulichauch, if you reduce the PSU to 1000 watts you could get some extra RAM; I would suggest 64GB. Even with two RTX3080s at 350 watts each, that still leaves room for the 105-watt CPU.
      All other parts are a good choice. The Taichi is a good choice for two GPUs.

    • @dulichauch
      @dulichauch 2 years ago

      @@MediamanStudioServices Oh, that's surprising! In beQuiet's PSU calculator, adding the water cooler, 2 GPUs, 3 additional fans, 2x M.2 and one SSD, I got 1000W of usage, so I bought the big PSU; I still think of keeping it. For RAM: would it be totally absurd to order 2 separate LPX kits of 2x16GB for 170€, or is it strongly recommended to go with a single 2x32GB kit for 280€?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      @@dulichauch It was just a suggestion to save money and get the much-needed RAM required for creative workloads. More PSU is always better if you can get it.
      And I would get the 2x32GB and leave room for an upgrade later.

  • @ChrisDallasDualped
    @ChrisDallasDualped 2 years ago +1

    Hey Mike, I'm looking to build a PC for workflow mainly in Premiere Pro and Filmora. Is it wiser to go for an RTX 3090 or 2 RTX 3070s; which would benefit me the most? I don't care about gaming, this is strictly for my workflow. Also, what boards and which RAM would you recommend? Obviously faster RAM makes a difference? What about the hard drive? I would prefer ones with at least 3K write speeds. And which water cooling system should I use? I've also narrowed it down to either the Ryzen 5900X or 5950X, which I have access to. I will await your reply, and thanks a million for your hard work.

    • @GuidoGautsch
      @GuidoGautsch 2 years ago +1

      Premiere Pro isn't able to take advantage of GPUs efficiently and is more CPU-limited, is my understanding, so even a single 3070 would be fine. Invest in the 5950X out of those two options. If you were to drop PrePro for Resolve, however, two 3070s would beat out the single 3090 by ~30%.

    • @ChrisDallasDualped
      @ChrisDallasDualped 2 years ago +1

      @@GuidoGautsch Thx for the input. I hate Resolve btw, Adobe is so much easier to work with and so is Filmora. Yeah, the 5950X is what I am aiming for.

    • @GuidoGautsch
      @GuidoGautsch 2 years ago +1

      @@ChrisDallasDualped yeah, I hear you. I prefer PP over Resolve as well for editing - just wish it'd be able to take advantage of GPU grunt as effectively as Resolve does.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      I agree with PP as this is my choice for editing the Mediaman videos. Just wish Adobe would get better GPU support.

  • @antoniosomera
    @antoniosomera 1 year ago +3

    Thanks! Great information and tests! I have one doubt: is it possible to render in V-Ray with RT cores and 2 RTX GPUs from different generations (e.g. RTX 2090 + RTX 3090) or different models (e.g. RTX 3090 + RTX 3080)? Obviously without NVLink or SLI. Well, thanks in advance!

    • @TILK3D
      @TILK3D 1 year ago +2

      @andrew I have the same question, maybe you already got an answer

    • @antoniosomera
      @antoniosomera 1 year ago

      @@TILK3D I haven't solved my specific doubt regarding 2 RTX GPUs from different generations (e.g. RTX 2090 + RTX 3090), but I have found this:
      1. You can render with several Nvidia GPUs using V-Ray, and V-Ray will manage to use the available CUDA cores in the GPUs. (This should work with CUDA cores of the same GPU generation even if they are different model cards, e.g. RTX 3090 + RTX 3080, but I still don't know if different CUDA core generations can work together, e.g. RTX 2090 + RTX 3090. I have an RTX 3090 and a 980 Ti; I need to run some experiments.)
      2. It is not necessary to use SLI or NVLink in order to use all the available CUDA cores of those same-generation cards.
      3. NVLink will help with sharing the GPUs' RAM, but the improvement in rendering time is small (+/- 10%).
      4. I'm deducing (I still don't have evidence) that you can render with V-Ray using the RT cores of several RTX GPUs of the same generation, just like the CUDA cores (e.g. RTX 3090 + RTX 3080), also without NVLink; V-Ray will manage to use the available RT cores.
      5. I don't know if you can render with V-Ray using RT cores from different GPU generations (e.g. RTX 2090 + RTX 3090).
      Hope this is useful to you. Let me check the sources where I found this information and I'll share them here. Please let me know if you have found something else. Best regards.

    • @TILK3D
      @TILK3D 1 year ago +1

      @@antoniosomera Thank you for the information. I will keep looking for more, and if I solve my doubt I will likewise share it. Greetings, success!

    • @Outrider42
      @Outrider42 1 year ago +1

      You can use any combination of supported GPUs. The CUDA cores will stack; the VRAM will not. So if you use a 2070 with a 3090 and the scene exceeds the 2070's VRAM, performance will suffer. But aside from this, you can mix and match however you want, including across generations.
      This should be the case with most CUDA-based renderers.

    • @antoniosomera
      @antoniosomera 1 year ago

      @@Outrider42 Thanks!

  • @ColinCharles3D
    @ColinCharles3D 2 years ago

    Hey Mike, thank you for your videos, they help me a lot concerning my future build.
    Indeed, as a 3D artist, I am looking to upgrade my build that has lasted me through my studies since 2016.
    As I'll work with a workstation at work, I am looking to build a personal desktop PC for 3D rendering (GPU-based Octane Render/C4D & Blender/Cycles), video editing (Adobe suite) and gaming (most likely 1440p, 34" ultrawide).
    My current build plan would be:
    - 12th gen i7 (liquid-cooled AIO)
    - 64GB RAM DDR4 (DDR5 looks a little too expensive)
    - 2TB M.2 NVMe + 500GB SATA SSD
    - RTX 3080 10GB (+ GTX 1060 6GB from current build)
    - 1200W PSU
    Which motherboard would you recommend that can handle 2 GPUs with at least 2x x16 PCIe 4.0 or 5.0?
    There doesn't look to be a lot of choice yet for the Z690 chipset that doesn't break the bank... maybe I should wait a bit?
    And do you think a 1200W PSU is enough for 2 GPUs and the overall build?
    Thanks a lot Mike, just subscribed, your content is priceless!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +3

      Hi Colin, no desktop MOBO will support 2 GPUs at x16; you would need to move up to an HEDT or Threadripper MOBO. But you can still get good performance from an X570 running the GPUs at x8, and I would recommend the ASRock X570 Taichi for running two GPUs. This board has great VRMs to manage the power, and the spacing to house the two GPUs.
      Good luck with the build

  • @aceaquascapes
    @aceaquascapes 2 years ago

    @3:26 the screenshot score is different from what your audio said. Thanks for the great work.
    I finally got a workstation!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Ya, I rushed this video and there are a few mistakes. But thanks for watching.
      What workstation did you get?

  • @lightning4201
    @lightning4201 1 year ago

    Would the results be comparable if it was 3090s instead of the 3070s?

  • @frankbrancato3997
    @frankbrancato3997 1 year ago

    Can you say what motherboard you use on the desktop? Did you have any heat problems with the two cards being so close to each other? I'm running a 3080 and a 3070 Ti and they run at about 70C.

    • @powerso1380
      @powerso1380 1 year ago +1

      After going down the rabbit hole by watching two other videos from this channel: "Water cooling for Content Creators (Part 1)" (box behind) and "How to bend PETG tube, perfect bends every time! Water cooling part 2" (closer look), I guess the motherboard is the X570 AORUS PRO WIFI. Hope this helps
      edit: typo

    • @vucha340
      @vucha340 1 year ago

      Can I use an RTX 2060 and a 3080 working at the same time, and do I see better FPS in games? Like a small increase in FPS, or huge?

    • @zonaeksperimen3449
      @zonaeksperimen3449 1 year ago

      @@vucha340 No, you can't get extra performance; in a non-SLI setup you will get single (main) GPU performance. I used dual RTX 3060s.

  • @ozztheforester
    @ozztheforester 2 years ago

    Thanks, this is very informative! Would it be possible for you to conduct a similar test using TB3 eGPU solutions daisy-chained?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi ozztheforester, if I can get my hands on such equipment I will do a video. Here is a link to some reviews.
      egpu.io/best-egpu-buyers-guide/

  • @alial-shibily5359
    @alial-shibily5359 2 years ago

    Do the VRAMs add up that way to 16GB, whether it's a desktop or a workstation?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      You mean the VRAM on the GPU? It is physically on the GPU card itself, so it's the same no matter what computer you plug it into.

  • @myzingonline
    @myzingonline 1 year ago

    I also want to run multiple GPUs. I have a 1660 Super + 9600 GT + motherboard graphics (UHD 630) with a 650-watt supply... do you think I can run all of them?

    • @constantinosschinas4503
      @constantinosschinas4503 1 year ago +1

      Tried with a GTX 1050 Ti + GTX 970 and it was a nightmare. No matter the setup or drivers, the PC was completely unstable and would not function properly.

  • @Verlaine_FGC
    @Verlaine_FGC 10 months ago

    Does the gen of the PCIe slot factor into this as well?
    Will two GPUs that are meant to work in PCIe 4.0 x16 slots still take a performance hit even if they're on PCIe 5.0 x8 lanes?
    So no matter what, the overall performance of the link is the highest MUTUAL rating between the slot and the GPU that is in it?
    So a GPU that is meant to run in a 4.0 x16 slot will be reduced to 4.0 x8 performance when mounted in a 5.0 x8 slot?

    • @omidnt2992
      @omidnt2992 6 months ago

      If your graphics card is PCIe Gen 4, then putting it in a Gen 5 PCIe slot will not increase your performance, because your card only knows how to talk the "Gen 4" language. Now, if you put two PCIe Gen 4 graphics cards into 2 PCIe Gen 5 slots on a CPU with only 20 or 24 PCIe lanes, you will have two PCIe Gen 4 x8 links.
      But for rendering or AI, that speed reduction won't impact your performance that much.
      And for gaming, you will never want to use 2 graphics cards, because almost no game is coded to work with multiple GPUs. For gaming, maybe you can assign PhysX to one card and everything else to your main graphics card, but as I said, don't do it for gaming: too much heat and electricity for little or no improvement (sometimes the heat will even reduce your gaming FPS).

  • @Andee...
    @Andee... 24 days ago +2

    I wish you included card temps

  • @theWanderer521
    @theWanderer521 2 years ago

    Great content as always! I don't know if this is worth making a video about, but I would love to see a video about NVLinked GPUs: performance, installation, configuring the settings.
    I was creating a scene in Blender the other day and I ran out of VRAM on my 3090 lol... I might have used the wrong settings, not optimized the scene well, or used high-res textures that I shouldn't have. My future plan is to build a workstation with multiple GPUs (currently I have my eyes on the H3 platform, but I might need more VRAM + GPU power to render). I'm using a 3970X with a 3090.
    Thanks!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi theWanderer521, I would love to do a video on two RTX3090s; trouble is that I do not have two 3090s anymore. But if I can get my hands on two GPUs, I will do a video on the setup of NVLink.
      Thanks for watching

    • @rakshithr4230
      @rakshithr4230 2 years ago

      @@MediamanStudioServices Waiting to see this kind of video. Please make one.
      I have planned to build a PC with an i9 12900K and 2x RTX 3090. I want to know what the performance would be with NVLink.

    • @rakshithr4230
      @rakshithr4230 2 years ago

      @@MediamanStudioServices In your experience, tell me: is that a good idea, or should I go with a workstation?

  • @jakobw135
    @jakobw135 18 days ago

    Can you use 2 DIFFERENT VIDEO CARDS (AMD, Nvidia) and ONE monitor?

  • @GuidoGautsch
    @GuidoGautsch 2 years ago +1

    This was just the video I'd hoped you'd make! Super interesting, thank you Mike. It seems that an X570 board would do the trick with relatively minimal sacrifice. There's a pretty significant price difference in Australia between the 3060 Ti (~$1200) and the 3070 (~$1700), so for a dual GPU rig for Blender, two 3060 Tis would be cheaper and faster than a single 3080 ($2300), 3080 Ti ($2500) and even a 3090 ($3300). Yes, GPUs are expensive down under. I was thinking of pairing them with an MSI Tomahawk X570, a Ryzen 5950X and potentially an 850W PSU?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Hi Guido, I was looking at the ASRock Taichi X570 when I built my system, but it was not available when I purchased my equipment. The Tomahawk is also a nice board.
      Good luck on the GPU purchase.
      Thanks for watching.

    • @nawafalrawachy394
      @nawafalrawachy394 2 years ago

      Hi Guido,
      Just a word of advice: if you have triple-slot cards (air-cooled), it is recommended to have at least one empty slot between them. This way, the top card won't be choked of air. I personally recommend the MSI X570 Godlike. Another thing: for dual 3060 Tis, a 1000-watt gold-rated or higher PSU is recommended from a reliable PSU maker like Seasonic or EVGA (EVGA GPUs are bad, but their PSUs are excellent). Generally, using dual GPUs in rasterization-based engines like EEVEE or Unreal Engine (by using multiple instances to speed up rendering) will draw more power than a path-traced engine like Cycles or Octane would, thus tripping the OCP on your PSU. Speaking from experience. Hope this helps.

    • @GuidoGautsch
      @GuidoGautsch 2 years ago

      @@nawafalrawachy394 Thanks, that's really good to know! That motherboard costs $1200 here 😵 more expensive than TRX40 boards; I could even get a WRX80 for that. So maybe I'll stick to a single GPU after all.

    • @GuidoGautsch
      @GuidoGautsch 2 years ago +1

      @@MediamanStudioServices thank you. The Taichi looks solid

    • @nawafalrawachy394
      @nawafalrawachy394 2 years ago

      @@GuidoGautsch It could still work with a regular-layout board if the cards are between 2.7-2.8 slots wide, though the top card will struggle a bit to get air, which will increase its temps. To combat this, I reduce the power limit on the top card to 95% and give it a custom fan curve to keep it from going above 80C.

  • @crobar1
    @crobar1 2 years ago +1

    Interesting video

  • @SubirajSinghFitness
    @SubirajSinghFitness 2 years ago

    Can you please tell me the system requirements for a simple RTX 3060 graphics card? I have a desktop which is quite old, let's say a 5th-gen Intel i5 with an equally old motherboard. Would this graphics card fit in it, or would I need to change my motherboard and processor as well to a 10-11th gen?

    • @seanspliff
      @seanspliff 2 years ago +1

      Idk if it helps, but I use a 3060 Ti with an Intel i7 and a 6-year-old motherboard.

    • @SubirajSinghFitness
      @SubirajSinghFitness 2 years ago

      @@seanspliff Is your i7 processor also 6 years old?

    • @seanspliff
      @seanspliff 2 years ago +1

      @@SubirajSinghFitness Yup, bought it in a prebuilt with the same motherboard. My motherboard is a Gigabyte X99 UD3.

    • @SubirajSinghFitness
      @SubirajSinghFitness 2 years ago +1

      @@seanspliff thanks bro 🍻

  • @georgelevantovskij8593
    @georgelevantovskij8593 2 years ago +1

    This is extremely valuable info! Thank you for your content!
    I have the following setup: Ryzen 5950X, Asus Prime X570-Pro, 64 gigs of RAM (DDR4 Kingston 3200MHz), Asus TUF RTX 3060 (12GB), 750W PSU.
    Do you know if it is possible to add a second GPU (I have a leftover Asus RTX 3060 Ti), and how it would affect my workflow and render times in the following programs: Blender, After Effects, Premiere Pro and also Unreal Engine 5? I was also thinking of getting a bigger PSU (1000W).
    Thank you, sir, for your time and tips! This channel is pure gold!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +4

      Hi George, adding the second GPU will limit the VRAM of the 3060 (12GB) to the same amount of VRAM as the 3060ti (8GB), as 3D render software sends two packages, one for each GPU, and they need to be the same package size. As for After Effects and Premiere Pro, you will not get much benefit, as both of these programs have very little multi-GPU support. Same with UE 5.
      Thanks for watching

    • @georgelevantovskij8593
      @georgelevantovskij8593 2 years ago +1

      @@MediamanStudioServices Thank you. Yes, I decided to build a second backup rig which will help me with Blender rendering.

    • @bobbywiederhold
      @bobbywiederhold 2 years ago

      @@MediamanStudioServices Is this the same for DaVinci Resolve? DaVinci being much more GPU-reliant.

  • @karishmaansari7050
    @karishmaansari7050 2 years ago

    If we use 2x 3070 with 3ds Max, does it have double the viewport performance of 1x 3070? And would the VRAM be double, 8+8?

    • @countdadcula4475
      @countdadcula4475 1 year ago +1

      No. The VRAM does not double, even in SLI, but as long as your scene can be rendered with 8GB of VRAM or less, you will see almost 2x the speed when rendering.

  • @eriuz
    @eriuz 2 years ago +3

    I think you don't see a big hit in performance because on the X570 you have two x8 PCIe 4.0 links, which are equal to two x16 PCIe 3.0 links. The big hit is gonna come when you use two GPUs that are PCIe 3.0, or use a board that doesn't support PCIe 4.0.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Eriuz, I just assumed that most people looking to get two GPUs would be using a current MOBO.
      But you are correct, the older MOBOs use PCIe v3 and have lower bandwidth.
      Thanks for watching

  • @TILK3D
    @TILK3D 1 year ago

    Hello, good video. Sorry, is an SLI bridge needed to join the two graphics cards for 3D rendering, or can they be used simply by connecting the two cards to the board? I have a 3090 Ti and a Zotac 3080.

    • @mikebrown9826
      @mikebrown9826 1 year ago +3

      No SLI or NVLink is required to render on both GPUs.

    • @TILK3D
      @TILK3D 1 year ago

      Thank you very much for answering and clearing my doubt. Greetings and success!

    • @mockingbird1128
      @mockingbird1128 1 year ago

      @@TILK3D So I can't use 2 RTX 2060s?
      They don't support SLI.

  • @mohammedumar2348
    @mohammedumar2348 1 year ago +1

    Can you make a tutorial video on how to set up 2 different graphics cards (RTX 4090 and 4080) in one PC?

    • @MediamanStudioServices
      @MediamanStudioServices  1 year ago +2

      I will be making new content really soon. Thanks for the video idea.

  • @TheMrJackzilla
    @TheMrJackzilla 2 years ago

    Great videos. Can you make one talking about PCs and workstations for architectural work?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      Hi Lucas, this is a great idea for a video. Let me look into the specs required. But I am thinking it will mostly be the same as 3D and VFX computers. But I will do some research to confirm this.
      Thanks for watching.

    • @TheMrJackzilla
      @TheMrJackzilla 2 years ago +1

      @@MediamanStudioServices Thanks, it will be very helpful. I really miss content for small 3D artists. You have been helping me a lot!!!

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      @@TheMrJackzilla Glad to be of assistance. Please share on your FB page to help grow the channel.

  • @jeremiahMndy
    @jeremiahMndy 1 year ago +1

    I'm about to run 2x RTX 3090 FE in an NVLink configuration for a total 48GB VRAM pool. I'll let you know how it goes; would be cool to see this on your channel before I rig it up for Blender and iClone.

    • @TheCool1986vfx
      @TheCool1986vfx 5 months ago

      Aaaaaand how is it going? Thanks

  • @FelixTheAnimator
    @FelixTheAnimator 1 year ago +1

    Does the MOBO HAVE to be CrossFire/SLI ready to run two GPU cards?

    • @mikebrown9826
      @mikebrown9826 1 year ago

      No. It just has to support x16 or x8 PCIe, preferably directly from the CPU.

  • @divyasjj1136
    @divyasjj1136 1 year ago +2

    Hey, I want to know: is it possible to use a 1660 Super and a 3060 Ti in the same PC? I want to use it for Blender rendering; can I render with one graphics card working alongside the other?

    • @8D2BFREE
      @8D2BFREE 11 months ago +1

      You can use two different models working together, but I never tried a 1660.

    • @swdw973
      @swdw973 10 months ago

      Blender will go by the card with the least memory so it doesn't crash. My understanding is that if you have a GPU with 8GB and one with 12GB, it will treat both as 8GB GPUs when assigning rendering. The Blender forums will be able to give you a definite answer.

    • @viniartee
      @viniartee 3 months ago

      @@swdw973 It appears that when you have a GPU with more VRAM, and the less powerful GPU runs out of memory while your system has ample RAM, Blender can utilize the system's RAM to support the less powerful GPU. Consequently, the more powerful GPU can continue to operate at full capacity using your system's RAM. For instance, if you possess both a 3090 and a 3060 Ti GPU, along with a system equipped with 64 GB of RAM, Blender might use the 64 GB of system RAM to match the performance of the 3090. The efficiency of this process is enhanced by faster RAM. Additionally, optimal performance would benefit from a well-configured RAID system with satisfactory HDD speeds. This setup should allow Blender to function smoothly without constraining the performance of the more powerful GPU, correct?

    • @swdw973
      @swdw973 3 months ago

      @@viniartee System RAM will never get anywhere near the speed of VRAM. My answer was based on info from a Blender developer on one of the Blender forums.
      I actually run 2 different GPUs, a 3060 and a 4070, but I made sure they both are the 12GB versions because of what the developer told me.

  • @NarekAvetisyan
    @NarekAvetisyan 2 years ago

    I'm using an RTX 3070 with a Ryzen 2700X on a B450 motherboard. It's running at x8 because I have a second GPU. I've measured about a 2-3% performance drop at x8. I also have it overclocked +200MHz on the core and +1300MHz on the memory with the power limit set to 100%, and I gain a 14% performance boost on my Blender renders.
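
    Measuring a drop that small reliably takes repeated, scripted runs; a rough harness along these lines (assumes the blender binary is on PATH and a hypothetical scene.blend; Blender's -b/-f flags render a single frame in the background):

      # Time repeated background renders of one frame to average out run-to-run noise.
      import statistics
      import subprocess
      import time

      def render_time(blend_file: str, runs: int = 3) -> float:
          times = []
          for _ in range(runs):
              start = time.perf_counter()
              subprocess.run(["blender", "-b", blend_file, "-f", "1"],
                             check=True, capture_output=True)
              times.append(time.perf_counter() - start)
          return statistics.median(times)

      # Run once with the card at x16 and again at x8 (e.g. with the second
      # card installed), then compare the medians.
      print(render_time("scene.blend"))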

  • @IamJeshua
    @IamJeshua 2 years ago +6

    Is there a problem if I use different cards? I have a 3070 Ti but I want to buy a 3080 Ti, all mounted on a Z690 motherboard with an i7 12700F.

    • @studyjourneyofficial
      @studyjourneyofficial 2 years ago

      Following!

    • @KazCanning
      @KazCanning 2 years ago +1

      There shouldn't be any issue... I have an RTX 2080 and a 3080 Ti both running at x8. I was actually shocked that it worked perfectly immediately. I literally didn't have to do anything; I just plugged the card in, turned on my computer and it just worked. Shocking.

    • @studyjourneyofficial
      @studyjourneyofficial 2 years ago

      @@KazCanning I have two 3070 Tis back to back connected to my MOBO. Do you think they're gonna overheat once I put them under load? If so, what should I do to cool them down better? I have a Corsair 680X case. Thank you!

    • @KazCanning
      @KazCanning 2 years ago

      @@studyjourneyofficial Hard to say. I know my 3080 Tis get pretty hot when working on renders. I also have the Founders Edition cards, so they have the pass-through heat fins, basically blowing hot air onto the top card.

    • @ShawarmaBaby
      @ShawarmaBaby 1 year ago

      @@KazCanning Which PSU do you have? I have a 3080 and a 3090 with a 1300W, and when I render my monitors go off.

  • @exarkun8250
    @exarkun8250 1 year ago +3

    If you use one of the GPUs for gaming, will having 2 GPUs affect its performance?

    • @UTFapollomarine7409
      @UTFapollomarine7409 9 months ago

      In some cases yes and no. There are actually maybe 100 games, more or less, that actually have multi-GPU support. Out of those games some are AMD and some are GeForce, while some are both. Thankfully a few of my favorite games use two GPUs from both sides: Fallout 4, Fallout NV, Fallout 3, Deus Ex: Mankind Divided, so I would like to try two GPUs and run them at 8K. I also believe a few of the Metro games work, and Tomb Raider, and also FOX Engine games like Metal Gear, so I've been wanting to aim for two GPUs.

  • @slikjmuzik
    @slikjmuzik 1 year ago +2

    So, I don't do rendering or gaming. I'm trying to run 5 or 6 monitors. I trade stocks, and I find myself wanting more charts and scanners running during the session. I currently have a Zotac 3060 Ti and I'm using all 4 outputs. I have an ASRock X570 Taichi Razer Edition with a 5900X OC'd to 4.4 at 1.25V, 2x32GB Corsair Vengeance CL18 at 3800MHz, an 850W be quiet! Platinum PSU, and a Samsung 980 Pro 1TB. Can I pop in something simple like a Zotac GTX 1660 and get more monitor outputs? I remember seeing that using the next PCIe slot will only give me 3.0, but idk if I need to enable something in the BIOS or anything in order to just run more monitors? Please advise, and thanks in advance :)

    • @youreverydayshorts9283
      @youreverydayshorts9283 1 year ago

      He didn't answer.

    • @bryanc4054
      @bryanc4054 1 year ago +1

      DisplayLink adapters (usually from USB or USB-C to HDMI, DP, etc.) may be worth looking into in your case, as they would be cheaper.
      Otherwise, you should be able to pop in any other GPU to get additional display-out ports with no other configuration needed.

    • @slikjmuzik
      @slikjmuzik 1 year ago

      @@bryanc4054 DisplayLink adapters?

    • @slikjmuzik
      @slikjmuzik 1 year ago

      @@bryanc4054 How is the quality from those? It would really just be for longer-timeframe charts. I'm really trading from the main outputs of my 3060 Ti, but I'd like to move some of the things I have on my current screens off to other screens, and while I am OK losing some resolution, I really don't want it to be VGA quality, ya know?

    • @bryanc4054
      @bryanc4054 1 year ago

      @@slikjmuzik My bad for not seeing your reply until just now. Quite a few adapters offer up to 4K resolution output via DP and HDMI, but keep in mind there will be a CPU utilization hit (albeit, on a 5900X, probably not noticeable at all lmao). Unfortunately, I don't think the DisplayLink adapters will work since your CPU does not have an iGPU, so a different solution may be to get a cheapo GPU as you said. There are a few workstation GPUs out there from Nvidia, AMD, and now Intel that draw power only from the PCIe slot and use PCIe x4 or x8, so those might be a much better option for your use case.

  • @Visokovart
    @Visokovart 2 years ago

    Hey. I have a problem connecting two 3090s through the NVLink bridge. I use Blender 3.0 stable to render my work.
    B550M Pro4 motherboard
    5900X processor
    I cannot find information on the Internet on how to correctly connect the bridge and speed up Blender. Please help.

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Your motherboard needs to support NVLink. But have you tried to run a render with the two GPUs separate? Connecting them with NVLink is not necessarily going to speed up your rendering.

    • @djuansayjones6178
      @djuansayjones6178 2 years ago

      @@MediamanStudioServices Hey, I saw that the 3080 or lower only uses about x8 of bandwidth, but the 3090 uses more and would lose a lot of performance. Have you tested this out yet? Or do you know if it's true?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago +1

      @@djuansayjones6178 I am not sure of the PCIe bandwidth for each card, but I am making a good guess here: the RTX3080 does use more than the bandwidth of just x8 when at full performance, so no, that claim is untrue.

    • @djuansayjones6178
      @djuansayjones6178 2 years ago

      @@MediamanStudioServices ok thanks a lot!!!

  • @seanspliff
    @seanspliff 2 years ago

    Would using a 3060 Ti with a 2060 as a dedicated PhysX card improve or hurt gaming performance?

    • @MediamanStudioServices
      @MediamanStudioServices  2 years ago

      Hi Sean, I don't know; I don't do many gaming configurations. But try it out and share the results with the channel.

    • @bocahdongo7769
      @bocahdongo7769 1 year ago

      If you do gaming, most likely you need SLI or CrossFire support on both the MOBO and the GPUs.
      And for SLI, you need exactly the same GPU on top of that.

  • @miladmasoumi6697
    @miladmasoumi6697 11 months ago +1

    What PSU will be needed for two RTX 3070 GPUs?!

    • @hunlyden5556
      @hunlyden5556 10 months ago +1

      I think a 1000W PSU is better for that, because I use a 3060 Ti + 3070. My RTX 3060 Ti has 2x 8-pin and the 3070 has 1x 8-pin and 1x 6-pin. Better to choose 1000W or above, because the next gen of GPUs will need extra PSU power.

  • @vicx05
    @vicx05 5 months ago

    Is it possible to use 2 different GPUs individually in a dual boot setup? I need to use an RTX 20xx and 40xx in the same system, but not together simultaneously.

    • @Mr__Proxy
      @Mr__Proxy 5 months ago +1

      Yes, it is possible for Blender to utilize multiple GPUs; I'm not sure about other software. Keep in mind that the VRAM will not be joined; however, it will still render faster due to the extra CUDA cores / RT cores.
      (I am not an expert, so I am not sure.)

    • @vicx05
      @vicx05 5 months ago

      @@Mr__Proxy I mean only using one GPU at a time in each boot.

    • @Onouphrius
      @Onouphrius 5 months ago +3

      @@vicx05 What you'd be looking at is using one host OS (say Linux) and then running a VM inside it for the other OS (say Windows), passing one of your GPUs through to the Windows VM. Look up GPU passthrough and VFIO. Doing it this way has a very minimal performance penalty compared to running natively (around -5%).

    • @n64slayer
      @n64slayer 4 months ago

      Is it crazy to use your old GPU alongside a new one every time you upgrade, or is it just a waste of time?

    • @clarencestephen
      @clarencestephen 4 months ago +1

      @@n64slayer If you have them set up correctly, the slower GPU is the bottleneck, because the faster GPU has to wait for it to finish before they can continue working in parallel. Said differently, VRAM is not additive when working in parallel. This is different if you have them set up with the slower one as a worker, say handling data async, and the faster GPU processing/building models. That isn't necessarily additive either, but if you are iterating with the faster GPU on a smaller dataset and then waiting for data updates, you can keep coding while processing the data. In the gaming case, 2 GPUs don't make sense unless you are gaming on one monitor with your new GPU and doing some other lower-VRAM task with your slow GPU on the other. Not worth the brain space: just sell the old one and save up for 2 new ones, or just use the new one if you're only gaming (guessing by your handle?).

  • @ravitejaporapu2362
    @ravitejaporapu2362 1 year ago

    Is it possible to use two different brands of graphics cards?

  • @si1entshotzdtom
    @si1entshotzdtom 3 months ago +1

    I run 2 GPUs in my gaming rig, non-SLI, etc. Bringing the x16 down to x8, I barely notice any performance loss in FPS.
    A 6950 XT for gaming and a 3050 for streaming, with a 5950X and a Dark Hero VIII.
    Core count and a good MOBO help a lot, I am sure.
    But I game at 2K ultra and stream 2K ultra.

    • @itschops
      @itschops 3 months ago +1

      I have a question for you: did you have to make any BIOS changes, or were you able to just plug in and play (stream) with no issues? I noticed that when I plugged in my second GPU (a 1080) there was some lag on the monitors connected to it. I did a clean install of drivers for my 4090 and the 1080 and still noticed some stutters on the monitors connected to the 1080. My monitors aren't all the same Hz, because my gaming monitor is 170Hz while the other 4 are just 60Hz. I can get the stutters to stop when I set my gaming monitor to the same Hz, but who's trying to game at 60Hz when you have a 4090... lol. Any tips?

    • @si1entshotzdtom
      @si1entshotzdtom 3 months ago

      @@itschops I did not, but I didn't plug any monitors into the stream GPU; the stream GPU is just for work. Give that a try; if not, you may have to fresh-install Windows.
      When I upgraded to a 4080S to replace my 6950 XT, it would only read the 3050 out of the box, yet the 4080S would output displays lol.
      I uninstalled all drivers, and same thing. I uninstalled all drivers again and reseated the GPUs, and same thing.
      A fresh install helped, but I couldn't fully test it, as the 4080S is huge and my 6950 XT was liquid-cooled via Alphacool.
      So I am just using the 4080S and had to do a fresh install of Windows. Works flawlessly.

    • @itschops
      @itschops 2 months ago

      Thanks for letting me know. The main reason for wanting the 1080 added to my PC was the extra display connections, since I have 5 monitors, but the stutters were happening too often so I removed it for now. I didn't try a fresh install of Windows, but are you running 2 4080s now? Do you think it's running smoothly because they are the same series of card? @@si1entshotzdtom

  • @johnjr.8824
    @johnjr.8824 2 years ago

    Thanks, I was going insane not knowing why my 3080 was running at PCIe 3.0 x8 instead of x16.
    2 M.2 SSDs and an Intel i9-10900K chip. Can't get it to PCIe 4.0 since the chip doesn't support it.