"sicko mode or mo bamba ? why not both ?" ahh video
fr
Fr
FR
@@lecctron edging rn
crazy reference
Ryzen 4070 mentioned 🗣️🗣️🔥🔥
RAHHH🔥🗣️🗣️🔥🗣️🗣️🗣️🔥🦅🦅🦅🦅🦅🔥🔥🔥🔥🗣️🗣️🗣️🗣️🔥🔥🔥🔥🔥🗣️🔥🗣️🗣️🔥🗣️🔥🔥🔥🦅🦅🦅🦅🦅
ZTT FOR LIFE🗣🔥🦅
RYZEN 4070
the leaks said ryzen 4090 is releasing November 23rd 2025
@@latestcueleak said ryzen 5090 will release in February 30, 2026
Guess you're buying an $1800 MB because you can't leave us on a cliffhanger!
Im gonna pull a spiderverse and drop the sequel years later 😂
@@lecctron we’ll be waiting
@@lecctronbro that last spiderverse movie was a whole cliffhanger
@@lecctron lol it would probably be cheaper to change your cpu + mb
waiting
long story short: adding a second GPU won't help gaming performance much. This is because graphics cards are already so fast that any extra distance between components can actually slow things down. That's why the memory modules on a graphics card always surround the main chip.
Also, you *can* use an AMD and Nvidia card at the same time in the same PC, for example running a game on the AMD card and using the Nvidia card for OBS; they can both do separate tasks at the same time, but not the same task at the same time. But because having two cards installed complicates things, it's honestly better to never worry about running two at once and instead either try overclocking or replace your card entirely if it isn't performing to your needs
@@nepp- no. Combine them to make them run a single game together. Time to Fuzion.
@@arenzricodexd4409 an SLI/Crossfire-compatible GPU paired with another GPU of the same model and vendor would work, but AMD and Nvidia cards were never designed to work together, sadly
The closest you can get to it is getting a second GPU that's dedicated to frame generation (using Lossless Scaling) while the first one does the rendering. Depending on the model you can turn 60 fps into 120, 180 or even 240 fps. I love it, but it's not real "GPU performance combined", rather two GPUs doing different things that benefit the guy playing the game
How about for recording, as in using the NVENC encoder while I game on the radeon cards, does that work or am I missing something here?
@@hideakikovacs2859 yes and no.
Yes you can do it, but it can lower performance due to transferring the image between GPUs. Most GPUs have encoder chips baked onto them, so it's faster to encode with the GPU you're using to game
Fun Fact - back in 2010 there was a short-lived solution to combine AMD and Nvidia - some motherboards like the ASUS Crosshair IV Extreme had a "Hydra" chip on them
i thought it was coming with either directX 11/12 ?
@@stuwilldoo7035 DX12 (and Vulkan) support a thing called explicit multi-adapter that kind of worked as a hardware-agnostic SLI/Crossfire solution - including letting that Intel iGPU in the processor get in on the action
only issue is that only like 5 games supported it, because it is 100% on the developers of the game to implement this feature, and it requires really low-level GPU programming to do
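For anyone curious what "explicit multi-adapter" actually asks of developers, here's a rough C++/D3D12 sketch (not from the video, purely illustrative): the game enumerates every adapter itself and creates an independent device per GPU, and everything after that - splitting the frame, shuttling data between cards - is the developer's problem.

```cpp
// Rough sketch of the DX12 explicit multi-adapter starting point: one D3D12 device per GPU.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"device %u: %s\n", i, desc.Description); // AMD, Nvidia, iGPU - all show up here
            devices.push_back(device); // each GPU gets its own queues, heaps, fences...
        }
    }
    // From here the engine has to schedule work on each device explicitly and copy
    // results between them (e.g. via cross-adapter shared heaps), which is why so few
    // games ever shipped with it.
    return 0;
}
```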
I did this in 2017 to use FreeSync while rendering my games with my GTX 1080
@@sickdisgusting5989 did it work?
I'm doing it right now with 3080 for game rendering and 6750xt for afmf2 and freesync. Works fine.
sucks when someone with a good idea cant carry it out because of money problems. love you bro
happens all the time on this channel lol we got so close 💔
@@lecctron send me the banking details, ill sponsor a dollar.
Easy fix, have more money
Can't carry it out because he's inept, thats all
@@xXRat_BOSSXx ??? Bro money doesnt grow on trees tf.
Fun fact: The current CEOs of AMD and NVIDIA are cousins once removed.
And AMD also provided NVIDIA their AMD Epyc CPUs in DGX A100 workstations.
@@PhonkMachine220YT I remember back in the day people pairing nvidia with radeon for physx
@@FuentesDj some games' PhysX is not updated and the CPU PhysX is extremely unoptimized (Mafia 2 and Mirror's Edge), so having the dual setup still improves that experience. It's probably only a few games though. BL2 PhysX is about the same on a modern CPU as it is on Nvidia GPUs, so it's definitely the version of PhysX in the game. I know this because, iirc, you can swap Mirror's Edge's PhysX files with updated ones and it fixes the issues with CPU PhysX, but this is not possible on Mafia 2.
@@_ch1pset you need to check the old Borderlands games (I have Steam screenshots from AMD and Nvidia cards).
"The slime" physics looked different with AMD cards than with Nvidia cards. It was around the same time FXAA was released. It was more than only the slime... but this effect was extreme.
Today you can switch the files, but back then I'm unsure it was possible.
Was about to write that. It's basically a family business now. So one side can make worse GPUs so the other side makes better ones to make cash, and they change sides when needed.
Random Dude here. Two things you're looking for.
First, a driver from the past, before Nvidia decided to add driver blocks to hybrid GPU setups.
Second, a motherboard that supports bifurcation, allowing the two PCIe slots depicted in the mobo diagram in the manual to run x8/x8, splitting the PCIe lanes. Most GPUs do not saturate an x16 slot and nearly nothing is lost performance-wise.
Good thing is your current motherboard may already support it. Bad thing is finding a driver without the locks in place. Do an old search for "hybrid GPU" and you may find what you're looking for.
Yes, we did used to do this without issue. GL
yeah that's why some GPUs have an NVMe slot these days :D since GPUs don't use the full x16
wow suddenly i'm excited to try this again, thanks so much !
Microsoft: we're going to add stuff in the OS to let people do hybrid multi gpu setups!
Nvidia: middle finger
Not to pull an Umm Akshually, but maybe Linux would be useful here? When it comes down to this sort of crazy stuff Linux lets you do way more than Windows, however since the driver is still Nvidia's that might be a problem. Looking a bit into it (search for PRIME and OPTIMUS on the Arch Wiki - don't worry, the wiki isn't only for Arch, other distributions work too), there does seem to be support for AMD+NVIDIA cards, but it is mainly used on laptops so ymmv. It does seem like using the open source Nvidia driver makes this easier, but makes performance worse.
Oh and also yes, bifurcation is important and mainly available on higher-end boards, but you can definitely check if yours works, preferably using BIOS options to electronically change it. 8 lanes of PCIe should be enough for most people. If it doesn't work there are some workarounds, but that's far, far down the unsupported road. You will face dragons.
I'm also certain that for his second gpu, while it is an x16 slot, it was running at x1 speeds.
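For the Linux route mentioned above, the usual mechanism is PRIME render offload, which is driven entirely by environment variables (DRI_PRIME for Mesa, __NV_PRIME_RENDER_OFFLOAD plus __GLX_VENDOR_LIBRARY_NAME for the proprietary Nvidia driver). Here's a tiny hypothetical launcher just to show which knobs are involved - normally you'd type the variables straight into a shell:

```cpp
// Hypothetical launcher: run a program on the "other" GPU via PRIME render offload (Linux).
#include <cstdlib>
#include <cstdio>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    // Pick ONE of these, depending on which card should render the program.
    // Mesa (AMD/Intel) offload:
    //   setenv("DRI_PRIME", "1", 1);
    // Proprietary Nvidia driver offload:
    setenv("__NV_PRIME_RENDER_OFFLOAD", "1", 1);
    setenv("__GLX_VENDOR_LIBRARY_NAME", "nvidia", 1);

    execvp(argv[1], &argv[1]); // replace this process with the game/app
    std::perror("execvp");     // only reached if launching failed
    return 1;
}
```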
The forbidden combo ahh video
@@6XVK ahhhhhhh
fun fact: AMD's and Nvidia's CEOs are related - Jensen and Lisa are cousins, not joking
@@RTWT990 oh that’s sick
🎉
😂
4:32 It's completely fine to use a GPU at a slower slot.
I use dual gpu for gpu passthrough virtualization, both the GTX 1060 6GB and RTX 3060 are running at their full rated speeds despite the motherboard throttling the pcie speeds to accommodate both cards.
2 Gpus actually make sense for content creators and streamers.
1 card can be used for gaming, the other can be dedicated in OBS for recording/encoding. This way you can still stream/record without losing performance at all and no delay whatsoever.
The card dedicated to the recording doesn't need to be a powerhouse, a simple 1050Ti can record 1080p 60fps flawlessly if there's another card handling the game itself.
Modern hardware encoders have minimal to no effect on gaming performance.
That's what I do.
I run a GTX 1080 as the main card and a Quadro P1000 as secondary to record video.
I record at 100,000 Kbps, which I found seems to be almost 1:1 with what I see on the screen. H265 4:4:4.
The vid we all been waiting for
yoo korvie
Yo its that stone guy
Hi
The reason your second GPU is losing performance is not a chipset bottleneck on your motherboard; the chipset only manages the last PCIe and M.2 slot.
The real issue is that your GPU is plugged into a PCIe slot with only 1 lane, giving you just 1 GB/s of bandwidth.
Also, you don’t need a $1,700 motherboard. First of all, SLI only works with two NVIDIA GPUs, and not in the way you're thinking. If you want to connect a second GPU to the PCIe lanes of the CPU, use your top M.2 slot with a riser. It has 4 PCIe lanes that are directly connected to the CPU.
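To put numbers on the single-lane point: PCIe bandwidth is roughly the per-lane transfer rate, minus encoding overhead, times the lane count. A quick back-of-the-envelope calculation (nominal figures; real-world throughput is a bit lower still):

```cpp
// Back-of-the-envelope PCIe bandwidth: why an x1 slot caps out around 1 GB/s.
#include <cstdio>

int main() {
    struct Gen { const char *name; double gtps; double encoding; };
    const Gen gens[] = {
        {"PCIe 3.0", 8.0,  128.0 / 130.0},  // 8 GT/s per lane, 128b/130b encoding
        {"PCIe 4.0", 16.0, 128.0 / 130.0},  // 16 GT/s per lane
    };
    const int lanes[] = {1, 4, 8, 16};

    for (const Gen &g : gens) {
        for (int l : lanes) {
            // GT/s -> usable GB/s: apply encoding overhead, divide by 8 bits, multiply by lanes
            double gbps = g.gtps * g.encoding / 8.0 * l;
            std::printf("%s x%-2d ~ %5.1f GB/s\n", g.name, l, gbps);
        }
    }
    // Gen3 x1 lands right around 1 GB/s; a normal gen4 x16 slot is ~31.5 GB/s.
    return 0;
}
```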
SLI-capable means it gives x16 slots to both GPUs. So yes, it would help in this case even without actually using an SLI bridge.
that depends on the motherboard, there are cheaper motherboards in the $300 range that can do x8/x8, but the guy who uploaded this video doesn't actually know what he is doing or what he is talking about.
@@thelasthallow exactly. Guy has no idea what he's talking about.
Awesome Vid man. Loving the Croft Manor background music from TB Legend btw.
MSI 870A and P55 Lucid Hydra motherboards supported multi-GPU from AMD and Nvidia 14 yrs ago. While it worked, it was hit and miss on being better than SLI and Crossfire. The only benefit is you could use any GPU from the same brand, so GTX XXX or HD XXXX, or just mix them up. It was killed off due to lack of support, just like SLI and Crossfire themselves.
the GTX 1080 Ti was such an insane beast for its time, I hadn't felt such an overkill ratio of card performance to games since the Voodoo 5 6000 or the 8800 Ultra, just totally unparalleled. I really hope we get an era like that again, but with developers forgoing the art of optimization in favor of AI upscaling it just feels so out of reach. Great video
every gamer's dream is combining an AMD and Nvidia GPU together
I've had my GTX 1080 Ti for a long time and it still performs well in even the most graphics-intensive games in 2024, I love it so much
You can actually use an RTX 30/40 series GPU as the main rendering GPU for games/applications that support DLSS, and an RX 6000/7000 series GPU for AFMF. That's what people actually did to get 200~300 fps on max settings in Cyberpunk when it first released.
what? AFMF came out this year, Cyberpunk 4 years old lol
@Kyanite. You are talking about AFMF 2. The original AFMF has been available since last year.
Also guess Cyberpunk has been around longer than I thought, whoops.
@@taktarak3869 i really need to know how to do this
Bro been here since like 250- subs luv to see you’ve grown so much❤
@@Cxrved I’m here since almost 1k
Know it’s a good day when leccteon post
im a fridge
I'm a apple
Brrrrr.
@@Ateszk000 I’m a freezer😏
Cool, bro.
Hi fridges I am a discount coupon
My dude said "2010s" like it was the 1980s.
me sitting here thinking you were gonna do some witchcraft and connect them both with an SLi bridge and some hardware level hacking.
blud completed mission impossible 😭🙏
I did something similar to this back in the day, I had an R9 Fury, paired with a GTX 750 TI. The Nvidia card did all the rendering for my streams, and the AMD card did all the rendering for my games. It was actually a very harmonious relationship between the two.
Best ending: amd and nvdia unite
I remember having to get a 2D card (ATI Mach64) and a 3D accelerator card (Voodoo 1). That lasted until the Voodoo3 and Riva TNT cards could do both processes, but I think you needed an AGP slot.
Bro mention lunchly which was crazy 😂
Actually you could combine AMD (ATI at the time) with an Nvidia GPU with a technology called "Lucid Hydra" -- it was a feature on my Crosshair IV AMD board. I believe I had a 6870 (primary) paired with a GTX 275 (secondary). Funny enough, as far as I can remember, it didn't really cause any major glitches in games, but many games didn't see much of an improvement, while others saw massive gains. I remember Red Faction Guerrilla getting a major 20 - 30 fps boost. DX & OpenGL would potentially show different results. Great vid btw!
Hmm, I'm not sure you fully understand how games work with GPUs...
@@10Sambo01 you don't know
@Manemyonem I've been an IT professional for almost 30 years. I know.
@@10Sambo01 yup I agree
@ArnoId-Schwarzenegger yes, he does, I wish youtube would boot this misinformation off the platform. Also 1500 fps in Fortnite isn't impossible, in fact I'd be more impressed with getting 900 fps in CS2
@@lockinhinddanger934 Fortnite doesn't even support SLI or CrossfireX, you can even see the GTX staying at 0%
XFX GPUs are especially stable and reliable, they are probably the best vendor for AMD cards, and I'm happy you chose that particular AMD card, putting team red's best foot forward.
2:46 THE NETTSPEND BACKGROUND IM DEAD (i need it pause)
Been using a 1080 for years, and with Upscaling i have no issues running any game. Such a great value
Day two of asking for this video idea... "Using lossless scalings frame generation on the GT 710"
that will be part of the next video stay tuned bro
I use my 3060ti as the main render GPU and my 5600XT as the GPU for Lossless Scaling. I will not spoil much, but it varies. The main bottleneck is mostly my 5600XT and its 6GB VRAM, and partially the x4 slot it is in.
3:35 the poster on the wall is so nice
Holy crap, the legendary ryzen 4070 🔥 I knew it existed!!
I use a RTX 3080 for gaming and the APU of my 5600G to encode my Livestreams. It works flawlessly. No problems, no colored screens whatsoever.
Dammmm That AMD GPU is _gasping_ for air
I like your reasoning and your voice so I subscribed lol
Rx 6950XT* 1:02
6950xt came out 2 years after the 6900xt
@@lecctron are you sure?
Oh sorry, you said 2020
I literally can't believe LTT hasn't made a video like this before
@@Nebula_YT23 They did, 10 years ago.
Now that is an original idea
@@frenaedits Its been done before, and I thought about it years ago- just no money to execute the experiment.
Is it? There were official products trying to do this about 15 years ago.
ive been playing around with SLI recently on x99 and x299 motherboards, ive even managed to break a couple 3d mark world records that no one will probably even be trying to beat any time soon lol
@@brycemueller7330 yeah I was thinking the X99 would eliminate the bifurcation problem if both cards can comfortably run on PCIe 3.0. That would be the cheap solution, running a cheap board off AliExpress or second hand.
That steam survey is conducted on 100k-500k people, not 132 million.
Even then the averages should still be similar across the 132 million users
I'm pretty sure the 132m user are the overall laptop/pc users @@atriedonisme4174
Steam doesn't need you to fill out a survey to know what hardware you're using.
You did it well, also you might get a kick out of it by selecting a handheld with thunderbolt for the extra gpu...
That dude at 0:40 installing the second card gave me a brain aneurysm...
This reminds me of when PhysX came out on Nvidia,
I had an HD 5770 GPU and a GTX 460 in the same PC, and I used the 460 (I think it was, anyways) to compute the PhysX calculations and the AMD card for raster.
Worked really well, but not in every game, because it requires you to manually edit config files to tell it which GPU is for what. Lots of hassle but felt worth it at the time.
There's so much distance between computer components, I wish they'd all come together to be nice and speedy.
7:25 triggered ptsd
@@Marius04020 literally
nooooo
I literally stopped the video to check if it really was there or it was in my head
@@Marius04020 Bruh 😭
didn't know so many people knew abt ddlc lol
There's another option, but you'd likely need to do several workarounds and find some custom drivers to do it. Setting the 6900 XT as the main GPU, you could potentially find a means to set the 1080 Ti to act as an accelerator card. Unlike Crossfire, NVLink etc., where the GPUs operate together as one GPU, using the 1080 Ti as an accelerator card (like is done for AI) may improve game performance. In effect you could write a custom driver for the 1080 Ti to act as a second CPU that is dedicated to a specific application task and offload the tasks normally handled by the CPU in gaming. The 1080 Ti's optimization and faster computational scope (in spite of being nearly a decade old) would allow it to handle things like in-game AI movement, pathfinding, etc., which is normally done by the CPU.
However, this is only theoretically possible. It'd take a great deal of time, skill, and coding to create this custom driver, even then it's likely many games wouldn't work. Although some older titles might if they supported SLI, Crossfire or other multi-GPU communication protocols.
Bro made team yellow 😐
I built a computer 10 years ago as a daily driver using an R9 390 and a GTX 970. The monster is still chugging along happily today. If only Vulkan worked as intended >.
Bro, this is not how dual GPUs work. You cannot make an AMD card work with an Nvidia card. Look at your task manager while in game and you'll see only one card is doing anything at all. You don't just plug in dual GPUs and magically get a bigger resource pool for gaming 🤦
The host is also claiming in 2024 that an RX 6900 XT is the best AMD card, which is BS; a 7900 XTX would smoke both cards, and on top of this the 4090 is actually the fastest consumer-grade GPU on the market. This video is full of so many holes I can't even use this cheese on a grilled cheese sandwich.
@@lockinhinddanger934 you foaming out the mouth I can’t even lie
@@lockinhinddanger934 You know this video was done for fun right? You really need to go outside and find something real to do.
Right, but there is one trick I think could work to make more FPS out of 2 GPUs. Use Lossless Scaling and make your main GPU render the game and the second GPU generate frames in the LLS app. I tested it and it works, but unfortunately my second GPU was too weak and caused frame drops, because the whole system had to wait for it to generate the fake frames. Still, it is worth trying if someone has something better than a 1050 paired with an RTX 3070.
@LeitoAE if you're referring to GPU passthrough, I guarantee your 1050 is not doing anything to benefit your frames. I don't know what your exact setup is, but the fact that you're saying you're getting fewer frames from whatever you're doing means it does not, in fact, make more frames.
You should look into the Threadripper CPU/motherboard combos, they have multiple full bandwidth x16 slots with plenty of lanes to spare.
Lets help him to get 1800$ for the Motherboard 🔥🔥
nvidia 50 series AI fps + amd Raw power duo bout to go wild
I have a dual GPU setup for exactly the reason you mentioned!
Radeon RX 6800 in the PCIe 4.0 x16 slot for gaming.
RTX 4060 ti 16gb in the PCIe 3.0 x4 slot for CUDA, tensor stuff, etc.
hey if u don’t mind could u explain how this benefits the performance and what u use it for? i have a 6800xt and this video made me think about a secondary 1080ti
@@sra771 I use the AMD GPU for gaming, and the Nvidia GPU for productivity / transcoding and AI tasks. Generally AMD GPUs do not perform well for AI tasks in Windows.
Because of this video I subscribed to your channel. Smart and interesting content about hardware, but in a funny way. Keep it going :)
8:32 cries on single gpu
You could try to run the 1080 Ti with a riser cable from an M.2 slot, as there are 4 lanes directly connected to the CPU in the topmost slot. That way both GPUs would be connected to the CPU at the same time
Godzilla had a stroke trying to understand why you've done this and fragging died.
If only all motherboards were SLI/Crossfire Capable...
2:56 Cool how you also put a Furby Party Rocker in your Gaming PC too. Also, thats the same one I have too
So did you just plug both GPUs in and they worked automatically? Or did you have to install new drivers? Or do you plug your monitor into the GPU you want to 'host' everything? What's the process? You made it seem like you just added a second GPU into your motherboard and they both worked together flawlessly.
had to install drivers, other than that it worked pretty much flawlessly across the 2 monitors
@@lecctron I see. What drivers?
@@agoogleuser2507 Also, the app can select what GPU it will be using, and if it's different from the one the monitor cable is plugged into, then that card works in pass-through mode when it's rendering something to be displayed on screen, aka games (which can SEVERELY limit the output if you use something like a CMP 90HX in pass-through mode through your iGPU).
@@alexturnbackthearmy1907 What app?
@@agoogleuser2507 Games, rendering software, pretty much anything. And if it doesn't support it natively (in the settings menu), you can just temporarily disable one GPU, launch the thing you need on the working one, and then enable it back from Device Manager.
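For context on the Device Manager trick: an application that wants to pick a GPU explicitly can ask DXGI for adapters ordered by preference instead of taking the default. A minimal sketch, illustrative only - most games never expose this choice, which is why people fall back to disabling a card or the Windows per-app graphics preference setting:

```cpp
// Minimal sketch: enumerate GPUs in performance order and let the app pick one explicitly.
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Index 0 is the "big" GPU, higher indices are the slower card / iGPU.
    ComPtr<IDXGIAdapter4> adapter;
    for (UINT i = 0;
         SUCCEEDED(factory->EnumAdapterByGpuPreference(
             i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, IID_PPV_ARGS(&adapter)));
         ++i) {
        DXGI_ADAPTER_DESC3 desc;
        adapter->GetDesc3(&desc);
        wprintf(L"%u: %s (%zu MB VRAM)\n", i, desc.Description,
                desc.DedicatedVideoMemory / (1024 * 1024));
        // The chosen adapter would then be handed to D3D12CreateDevice / D3D11CreateDevice.
    }
    return 0;
}
```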
bro just casually trash talking my 4070 super in the first 3 minutes
''With the power of two!'' ahh pc 😭🙏
To this day, none of the 1600 series could beat the legendary 1080 ti! My goodness!
You can't get this to work for a multitude of reasons, aside from the fact that no software exists that can take advantage of both GPU vendors at the same time for compute.
First, a GPU is designed to split certain tasks between different threads. This action itself already requires a lot of semaphore tinkering and synchronization. This is quite literally impossible to achieve between 2 different GPUs, let alone 2 different vendors, unless it's extremely basic operations.
Second, neither Vulkan nor Direct3D has a "device pooling" feature; it has to be provided by the vendor itself through extensions, and applications specifically have to take advantage of said extensions. Both vendors have their own method of doing this and the two are completely different from each other.
On a side note, you can use 2 GPUs from different vendors inside a setup, e.g. you can offload the rendering of a game to your NVIDIA GPU while using your AMD GPU for display output or for AMD's driver-specific "enhancements", and vice versa. You just can't render a game/do a compute task on both GPUs at the same time.
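One footnote on the device-pooling point: Vulkan 1.1 does expose device groups via vkEnumeratePhysicalDeviceGroups, but drivers only group GPUs they link themselves, so on a mixed AMD + Nvidia machine each card reports as its own group of one and there is still nothing to pool across vendors. A minimal sketch of the query, error handling trimmed:

```cpp
// Minimal sketch: list Vulkan device groups; expect one single-GPU group per card on AMD+Nvidia.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;   // device groups are core in Vulkan 1.1

    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        count, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (uint32_t i = 0; i < count; ++i) {
        // On a mixed-vendor box each group typically has physicalDeviceCount == 1.
        std::printf("group %u: %u physical device(s)\n", i, groups[i].physicalDeviceCount);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```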
@@light-gray The closest you can get to it is getting a second GPU that's dedicated to frame generation (using Lossless Scaling) while the first one does the rendering. Depending on the model you can turn 60 fps into 120, 180 or even 240 fps. I love it, but it's not real "GPU performance combined", rather two GPUs doing different things that benefit the guy playing the game
super insightful ! thank u
@@light-gray Well, it actually isn't true at all. DirectX 12 has a multi-GPU feature that has to be incorporated by the game developer. And that thing... *allows you to combine any GPU with any GPU, at the same time, on the same task.* Though it was really implemented in like, only one game. Very much like SLI, but even more obscure despite the potential.
There is another way to make dual GPU useful as far as I know: AMD's 6000 and 7000 series cards have Fluid Motion Frames, which works like DLSS frame gen and is compatible with any card. So you can now have one card dedicated to running frame generation, decreasing the load on the card doing the rendering.
pov me when i see lecctron new vid
*looks at own 4070 Super
Sir, how could you do this to me?
Next video - I combined AMD & Nvidia CPU in one PC
Son, we have been waiting for 3 months now, have you bought the motherboard yet !!! ?
So... it's not worth trying.
There are older AMD GPUs that actually had this as a feature without Crossfire, but without one of those your best mixed setup would be to get a cheap Intel Arc A380 and run your display off it.
Absolute waste of time.
How about this: there was someone in China that combined a 4090 (rendering the game with frame gen) and fed the output into an RX 6600, which applied AFMF and then output to the display for massive fps.
You should replace the 1080 Ti with a 4070 Super and do the same thing
ill see if i can get myself a newer nvidia card to try this out next bc that sounds very interesting
@@lecctron You can find more info if you search for websites called "Up to 3x FPS boost: NVIDIA and AMD Frame Generation"
@@lecctron And it's kinda wasteful, cause it's essentially the Lossless Scaling thing... but worse.
I have a GTX 1050 Ti with an 8th gen i5 @ 2.3 GHz. The 10 series is so good that I can play Witcher 3 next gen at 60 fps (using FreeSync on a TV, so no matter the gfx, frame rate is capped to my GPU frame rate). After a few tweaks and registry edits to my laptop, I can play it smoothly at high settings, keeping the CPU-intensive effects at low (due to the laptop being 6 years old, and I use a small fan next to my laptop for cooling). It runs so well and I haven't noticed much wear on the GPU. I can run FIFA 23 and AC Odyssey at 1440p at low settings too, which is not bad for an old laptop GPU
YOOO RYZEN 4070 IS REAL!!!!!!!
Outside of software, the main reason dual GPUs aren't performant is the lack of communication between the two GPUs that is fast in a practical manner.
The passthrough system of 2D GPUs and 3D accelerators in the 90s would be wonderful here. Imagine being able to plug one GPU into another, with that other one going to the monitor. For fullscreen applications, whichever one is better does the rendering; in windowed mode, the UI is done by one and everything within the window's borders is done by the other.
I hoped something like this would come to systems featuring both an iGPU and a dGPU.
This is why I bought an open form factor case. Convective cooling with no case fans needed
i love materials like this, where some1 has no clue what he's talking about and kids are gonna hawk tuah on those informations
I did this back in around 2015 using ATI HD 7600 and a GTX 980. Playing Witcher 3 on a Radeon GPU with Nvidia Hairworks was fun.
In my experience, several GPUs from different manufacturers mainly give you the ability to simply connect several additional monitors; it's worth considering that rendering on a monitor takes place on the GPU it is connected to (thanks, captain), but the most interesting thing is the manufacturers' technologies. I found a case on the web of using a GT 1030 to handle PhysX in Borderlands 2 or 3, I can't remember which, while all the main rendering took place on an RX 580.
Another option: at the moment I use an eGPU with a laptop. The laptop has an iGPU (AMD 780M) and via the eGPU I connect an RTX 3060. In Baldur's Gate 3 on a desktop with an RTX 3050 I did not have the option to enable FSR, only DLSS, while in my eGPU build I have both FSR and DLSS. So it seems to me that the performance increase from GPUs of different manufacturers will only come if the user himself parallelizes tasks, for example the main monitor with the game on the high-performance GPU and a funny cat video on the second monitor rendered by the weaker GPU.
SLI and Crossfire themselves are very lame solutions. For example, with Nvidia both GPUs will run at the same frequencies, with the most powerful adjusting down to the weaker one, and in most tasks one card checks the work of the other, so there were very often problems with SLI configurations. AMD Crossfire has no frequency limitation, but I very much doubt that legacy AMD video cards with old drivers in Crossfire can show stability in games or work tasks.
Guys. Lets all donate money to see the finished computer. We can do it.
I actually have run something similar to this for streaming. AMD 6800XT for playing games and Nvidia 1050 Ti for streaming and video encoding. Only reason I don't do that anymore is because I threw the 1050 Ti in a Xeon workstation I use for a media server.
I did this with an RTX 3060 Ti and an RX 560 with no issues. I was surprised it was running flawlessly. I have a video of it running both FF7 Rebirth and Baldur's Gate 3 at ultra settings in both games
For anyone asking, the setup is a Ryzen 3700X, with 16GB DDR4 3200, 1TB SSD, Asus X570 mb, Nvidia RTX 3060 Ti 8GB, and an RX 560 4GB, running in one system.
Just in case anyone asks
this guy: "i combined two rival gpus to create a hollow gpu. imaginary gpu: amdvidia
Mixing and matching used to be a thing too. Back in the days of Arkham Asylum...many people used to use ATi as their main GPU, and a cheap nVidia card for PhysX. Because trying to run PhysX on the CPU, like a Phenom II just sucked.
You could use CPU-based PhysX on games like American McGee's Alice Madness.
I used to run SLI/Crossfire cards. I kind of wish they would bring that back, fully supported. It was sick to buy 2 cards at a $300 price point that had nearly exactly the same performance as the $1000 card
My broke ass watching this vid with barely a dime in my pocket:
I was not expecting the video to end after the price reveal lmao. SLI was dropped so fast that any board with it is an absurd price. I was interested in it for a time too.. sad to see it disappear
Well, if you get an M.2 to PCIe adapter, you can use the x4 lanes from the top M.2 slot for the GPU.
90% of new ATX motherboards only wire PCIe x1 to the physical x16 slot for each of the lower slots.
mATX motherboards have x4 and x8 slots from the chipset because of the size limit.
Yes, you are correct. This dude put the 1080 Ti into a Gen 3 PCIe x16 (x1) slot. The "(x1)" means it only has a single lane of bandwidth lmao. Using the M.2 to x16 adapter would have been better. Even so, he didn't play a single game that could utilize his setup. The only games that can utilize an AMD and Nvidia GPU together are the rare titles that have DirectX 12 explicit multi-GPU support.
Edit: I was incorrect. Rise of the Tomb Raider has explicit multi-gpu support, but I doubt he had anything configured correctly.
For the mobo, I think you can also use the asus proart b650-creator. It has two pcie slots connected to the cpu and costs around 400 dollars
I searched for this because i got curious
I like this guy I subscribed after watching like 4 vids
But I do, yes I do do do. Operating AMD and Nvidia in parallel eliminates the bottlenecks: when I do AI or video editing stuff on the Nvidia card, I can still e.g. game along and do stuff on several monitors without any problems. Get a good power supply. Upgrading the power supply and swapping in a second-hand RTX was the best tuning hack in years.
If your motherboard supports bifurcation, you can split the top PCIe 4.0x16 into two 4.0x8 lanes, and use an adapter to split it into two x16 slots. Each gpu will be limited to PCIe 4.0x8 speeds but it’s better than using the bottom slots which run at x1 speeds
I did this a few years back. I had an AMD Fury and added a 980 I came across for cheap and was using for mining purposes. Since I had them both I went ahead and tried them out. Lo and behold, they could work together in certain instances. I tried mGPU in a couple things but most of the time just left the 980 mining while I played on the Fury. The fun trick was after NVIDIA updated their drivers and stopped blocking access to PhysX when you had an AMD card installed. A few games instantly started running better by enabling PhysX on the 980 instead of having to do it on the CPU, while still rendering on the Fury.
look ya'll, it's the DSP of computer hardware.
Should've done the full trifecta and added the intel arc gpu as well