Learned this the hard way, as I work in broadcast and was using dual GPUs for 6 monitors, plus a capture card, plus M.2 drives. Fixed the issue by getting a Ryzen 7000 series chip, since it has integrated graphics, so I didn't need one of the GPUs anymore, and instead of using multiple low-capacity drives I bought a single 8TB M.2 drive.
Good refresher and overview. As you probably know - but most viewers might not - having lots of cards (and slots) used to be the norm before so much functionality was moved directly into the MB chipsets. In addition to the graphics card, you also would have a network card (or going back further a fax/modem card), an IDE controller for storage, Parallel/Serial I/O cards, and a sound card. So the idea of so many cards isn't really a point of concern for us old fogies. Although the limitations of the faster lanes certainly are something new. Thank god for progress!
This is just my PC, you can actually go even wilder since m.2 slots have 4 lanes, something I am familiar with having my ethernet card on one. (Also double GPU is more functional than you might expect since you can divide up programs in windows, so games on your primary and all background stuff on another)
It's especially hard when you try to run a server on consumer hardware 🤦; most mobos have one x16 PCIe slot and the rest are x4 or x1, which is not enough if you plan to run a GPU, an HBA card, and a 10G NIC at full bandwidth. 😔
5:20 that's why the Samsung 990 Evo with its 4 lanes of PCIe 4.0 or 2 lanes of 5.0 confuses me, since even if it's using 2 of the 5.0 lanes, it still won't free up the other 2 lanes. Maybe at some point in the future with new designs, but not currently.
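For anyone curious about the arithmetic behind that x4-Gen4-vs-x2-Gen5 spec, here is a rough sketch (line rate times 128b/130b encoding, ignoring packet overhead). The two configurations carry the same raw bandwidth, but since boards still wire M.2 slots in x4 chunks, nothing is freed up today, as the comment says:

```python
# Approximate usable bandwidth per PCIe lane, by generation (GB/s).
GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}   # transfer rate per lane, GT/s
ENCODING = 128 / 130                    # 128b/130b line code used by Gen3 and newer

def link_gbps(gen: int, lanes: int) -> float:
    return GT_PER_S[gen] * ENCODING / 8 * lanes   # /8 converts bits to bytes

print(f"Gen4 x4: {link_gbps(4, 4):.2f} GB/s")    # ~7.88 GB/s
print(f"Gen5 x2: {link_gbps(5, 2):.2f} GB/s")    # ~7.88 GB/s, the same raw bandwidth
```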
Back around 2000, I used all 5-6 slots in my ATX motherboard: AGP video, PCI sound card, network, SCSI, dial-up modem, and some other ISA card that I can't remember the purpose of. That last one broke the camel's back and destroyed my entire motherboard.
This would be an interesting topic to follow up on, and see what would actually happen if a power user wanted to max out a mid-tier board compared to high end board, vs a purpose built threadripper or xeon board. And whether ANY consumer board could be meaningfully maxed out at all.
I know losing backwards compatibility for a generation would suck, but I really feel a fully new slot design to replace PCIe would be great now. Tech has improved so much since it was made. We could cut it down to x1 size and still match x16, and just like current PCIe, make it bigger again over time while maintaining backwards and forwards compatibility. If designed right we could make it better in other ways too, for example fixing the issues caused by big heavy cards that the board struggles to support. Having each slot really be two slots very close to each other, so that one card snaps into both at once, would give more support. You could also add mounting points that allow a shaft to be installed through the card to hold the weight up. At the same time these changes could help with power delivery and hopefully avoid any more issues like the recent melting connectors caused by how much power the big cards need. Thunderbolt 5 is at PCIe x4 speeds and delivers high power too, which shows how much we can shrink things, and that's in a cord that has to be user friendly, so a slot could be even smaller. This would also open the door for more interesting case designs if done right.
This video perfectly explained my situation. I was running out of space on my boot SSD, so I made the jump to another NVMe SSD. I got one, but there was nowhere to install it. So I bought a PCIe-to-NVMe adapter and bang... my system was super throttled. This video explained why.
The 5,1 Mac Pro (2010-12) has four PCIe slots; Slots 1 and 2 are x16, and slots 3 and 4 are x4. I have an RX 580 in slot 1, a Jeyi RGB NVMe card with a 2 TB 970 EVO Plus SSD in slot 2, a Sonnet four-port USB-C card in slot 3, and a FireWire 400 card in slot 4. The fourth card might sound like a strange choice, but FireWire 400 is very useful for me as a vintage Mac collector and tinkerer, and the later cheesegrater Mac Pros only have FireWire 800 built-in.
Motherboards nowadays come with barely any PCIe slots. I can't believe that ATX motherboards with 2 slots (the second at most x4) are the norm. Part of that is due to the increased number of M.2 slots, which also take up PCIe lanes. So there's a tradeoff between M.2 and PCIe, with some boards further disabling PCIe lanes when M.2 slots are filled. But when it's possible to have motherboards with 4 PCIe slots and 3 M.2 slots, with no disabling when filled... a lot of the motherboard manufacturers have no excuse. They could even add more PCIe slots at an older PCIe gen, which costs less bandwidth.
Good timing on this video. I was just saying to someone last night that my next rig is probably going to be a Threadripper so I can get access to more PCIe lanes. Current desktops are nice and all, but kind of lacking in that regard. Personally, I think it's high time they drop some of the SATA ports, for instance, to allow for more lanes left over in the chipset to dole out for, let's say, a bigger PCIe slot than the common x4. x8 would be nice for the boards that claim server/workstation capability with certain CPUs and RAM, but kind of leave out some of the nicer bells and whistles due to desktop constraints. My guess is the PCIe lanes being so... limiting. Anyways, with wider lanes available, more is available to the end user for potential builds. To get around this problem on my end, while still technically spending less than the cost of a Threadripper system of comparable usage, I built 3 desktops. Each one handles different tasks with different components for the tasks intended. And on the power draw side of things, while it can get pretty power chuggy when everything is at full tilt, it should still come out roughly on par with or less than said Threadripper system based on my estimates. And some of that power chug is because of multiple monitors, not just the rigs. People tend to forget they pull a good few watts themselves depending on size and specs. But all of those are considered in my estimates for both 'setups' as it were, the actual and the comparative. Considering getting a KVM switch for it, because multiple keyboards and mice can get a bit hectic.
PS. For those wondering if they can pull this off themselves: I did this on a roughly 20k CAD income over the course of 3 years, give or take a few months. All the parts together at cost basically put it on par with roughly a 12-15k Threadripper system, depending on exact costs at time of purchase due to sales, etc. I got it all for about 5-6k total, one of the monitors being grandfathered in though, bought a long time ago. Including it, 8k easily at original cost.
The cost of gaming desktops has become ludicrous for how gimped they truly are in comparison to sever/ workstation platforms. What we need is at least 32 PCI-e gen 5 lanes direct from the CPU (48 would be ideal) and a quad channel memory controller which would allow for high speed RAM to operate on all four slots at full bandwidth. With that little of an upgrade it would still come nowhere near the capability of the I/O of the server/ workstation systems which are upwards of 128 PCI-e gen 5 lanes and 12 channel DDR5 RAM. Also worth noting is that PCI-e lane bifurcation should be standard on desktop for better compatibility with add-on cards.
@@thephantomchannel5368 Over all, agreed. 1. CPU lanes specifically for GPU/M.2 use, yeah I would like to see those double up or something like that. There are more than a few reasons for it, but the comment would get super long if I even dared mention them all. So, main one is this: Multiple GPU support for 2 person 1 pc gaming setups. Virtualization with a really powerful PC can also do this, but... cost is a factor for many. Meanwhile, a good cpu and decent build with 2 lesser gpu's going and serving up separate monitors without any extra virtualization to deal with, would be a nice thing to have for families who want their kids to be able to use the single computer they have without having to buy another. 1 ssd each on the CPU, 1 gpu each on the cpu, and only virtualization at that point is splitting the cpu resources, and maybe the networking to some degree. The ram too of course. And this is possible with multi-gpu methods other than sli/crossfire. I just don't know the details well enough to go further than this in mentioning it at all. 2. Bifurcation. Yeah, I've had to replace a board that had a capacitor go psst on the board with some of its juice. Conical spray pattern points right at it directly. That board, did not have bifurcation available. The one I replaced it with, I am pretty sure does have it. (If not, oh well, because that setup technically doesn't need it.) But my other setups, they do have it, and I really want to make use of it properly as it were... but there just isn't enough lanes to justify even trying to use it. These are desktop/workstaion hybrid style motherboards. So they come with a few extra bells and whistles. Which is nice. But I find them lacking still, just barely. Mostly because they have x4 ports where I need an x8, because the other slot that could maybe hold it... is covered by the GPU... and I don't want to run my gpu on x8 on the second slot connected to the chipset. It would techincally free up the x16 slot near the cpu, but... I'd rather not go that route. It works, but I don't like it. The top slot is meant for the GPU, so... Anyways. That's enough out of me. I agree with you, over all. Heck, push the numbers higher. We're not saying big enough numbers yet. Demand too little, and they won't give much. Demand a lot, and maybe we'll at least get a little.
Back in the day I had every slot in my Apple II Plus filled; keep in mind that it only did one thing at a time. In no particular order: a 16K RAM card, an eighty-column card so it would show lowercase and twice as many characters across on the monitor, an Epson printer controller (no printer drivers in those days), a Z80 plus 64K of RAM for trying out CP/M programs, a modem card, a Mockingboard sound synthesizer, and 2 dual floppy disk controllers, because trying to run Apple's Pascal language system on anything less than 4 100K floppy drives sucked.
GPUs for non-realtime 3D rendering are a fascinating example of why link width is not always critical. For something where a frame is going to take many seconds or even minutes to render, it doesn't make a lot of difference how long it takes for the CPU to serve up the render instructions to the GPUs, so the performance hit of running those extra cards in x4 or even x1 slots adds only a tiny percentage to the total time. You can even toss them behind an additional PCH layer if you want to have, like, a whole frame of eBay P4s crunching away on your next backrooms exploration video.
One example - crypto mining rigs with a heap of GPUs connected to one PCIe lane each. Any potential performance hit was dwarfed by having the additional shaders and VRAM for hashing!
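To put rough numbers on the offline-rendering point above, a sketch with purely illustrative assumptions (4 GB of scene data uploaded per frame, a two-minute render per frame, approximate PCIe 3.0 per-lane rates):

```python
# How much does a narrow link cost an offline render? (Illustrative numbers only.)
PCIE3_LANE_GBPS = 0.985   # approx. usable GB/s per PCIe 3.0 lane
scene_gb = 4.0            # assumed data pushed to the card per frame
render_s = 120.0          # assumed GPU render time per frame

for lanes in (1, 4, 16):
    upload_s = scene_gb / (lanes * PCIE3_LANE_GBPS)
    share = upload_s / (render_s + upload_s)
    print(f"x{lanes:<2}: upload {upload_s:5.2f} s, {share:.1%} of total frame time")
```

Even at x1 the upload is a few percent of the frame time, which is why render and mining boxes can hang GPUs off x1 links without caring.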
I was wondering about this. Looking at modern motherboards, multiple x4 slots for PCIe storage seems like a concern. Wouldn't it bottleneck? I'd like you all to do a follow-up video about this. It would be interesting to see at what point we actually saturate the PCIe lanes.
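One way to put rough numbers on the question, assuming a chipset uplink of PCIe 4.0 x4 (typical of recent consumer boards) and fast Gen4 drives doing about 7 GB/s sequential; real workloads rarely hammer every drive at once, which is why boards mostly get away with it:

```python
# When do chipset-attached NVMe drives outrun the chipset's uplink to the CPU?
# Assumed figures: PCIe 4.0 x4 uplink, ~7 GB/s sequential per fast Gen4 drive.
uplink_gbps = 4 * 1.97    # PCIe 4.0 x4 uplink, ~7.9 GB/s
drive_gbps = 7.0          # one fast Gen4 SSD at full sequential tilt

for n in range(1, 5):
    demand = n * drive_gbps
    verdict = "fits" if demand <= uplink_gbps else "bottlenecked by the uplink"
    print(f"{n} drive(s): {demand:4.1f} GB/s demanded vs {uplink_gbps:.1f} GB/s uplink -> {verdict}")
```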
What do you use dedicated sound cards for (apart from being a musician of course, or using toslink if that's what you need (I mean my mobo has toslink, but probably not the case for everyone))?
@@Martititi Avoiding the noise on the integrated audio. Separate cards are usually better insulated, while on the integrated audio it is common to hear weak CPU electrical noise on the audio output if you pass it through to a high-end stereo (hard to hear on headphones).
I have had 6 GPUs attached to a single MSI x470 MB and had all of them rendering blender projects. Worked fine but the 128GB of system memory had trouble keeping up.
We got a workstation at work that has 5 PCIe slots, all used with different devices. the IT guy told me that in order to get everything working he had to disable the WiFi, Sound Card and SATA ports.
MOBO manufacturers and AMD/Intel need a good kick up the ass over PCIe lane limits. Even on the most expensive boards, if you put anything in the second PCIe slot, you're gimped to x8/x8. I just want to have 2 GPUs (one for gaming, one for extra monitors/streaming/encoding/media server), use the SSD slots, and have a sound card or capture card at the same time. Even the "workstation" Asus X870E Pro Art is limited to 3 x16 slots; there isn't even an x4 or x1 slot from the chipset (the 3rd slot is x16 wired at x4). I'd have to get a Threadripper or an extremely expensive workstation-class setup for that.
Last year, I had the fun of buying a new NVMe. I didn't realise it would mean that I would have to give up two SATA ports. I would have to get rid of one of my drives and likely my optical Blu-Ray drive too. So I bought a PCIe extension card. A nice, fancy one too. Unplugged my old network card and replaced it with the storage extension. I didn't realise that it meant my PCI slot for my GPU would become bifurcated. Not great - especially because it meant I couldn't use SLI with my two GTX1080s. Here I am, months later, accepting that I'll just need to hold out til I upgrade next year.
Bruh how many peripherals can someone use? I only have headphones & KB + M, I can see most people having speakers, maybe a webcam, some external storage like hard drives and flash drives, but that'll use like 6 or 7 which is easily what most Mobo+case will have, how many USBs do we need?!?
The real challenge is finding a good motherboard that has enough PCIe slots. Basically everything has one x16 slot and then like 3 or 4 x1 slots, which are pretty much useless because basically nothing useful will fit in them.
USB expansion cards are a must-have in all of my non-ITX builds. And the people who say a good sound card has no advantage over onboard audio are a little crazy imo. And those are just the cards that go in all of my builds, well, and a GPU of course. Yeah, I'm one of those guys who fills every slot. If I have one sitting there empty, I get out the mobo manual, look at the PCIe table, and start thinking about what I can do with the slot to get some use out of it. But I'm also someone who still has to have front bays in my cases, so I'm a bit crazy too I guess. No perfectly good PCIe lanes should go to waste.
What you're saying is nonsense. It's like saying "I will fill my house with cabinets and furniture just because there is empty space in the middle of the living room". Sometimes you need some room to spread your legs, you know? I'm not saying you're not allowed to, but you're also likely not using those additional cards and you just have them there for show. Be mindful of what you're doing - though, your PC, your rules...
@@cristinelcostachescu9585 I would hardly equate a crowded home with a PC, and I'm also not saying I expect others to do what I do. I'm definitely not putting expansion cards in my rigs just for fun or show though. Who's going to see them? I have many perfectly usable modern PCs sitting around not getting any use, but I don't have any PCs in use with unused I/O. I regularly run up against bandwidth limitations. I would argue the nonsense is you telling me how I should use my equipment. But it's not just you. I've been encountering numerous people lately who can't seem to comprehend, or don't approve of, people using PCs differently than they do or than what is typical. The best part of the PC platform is its flexibility. Not getting the most out of it, and discouraging others from enjoying the flexibility it offers, is the only nonsense I'm seeing.
Get more PCIe lanes by buying a Xeon workstation. Even then you'll have to configure how those lanes map to the slots; there may be more of them than on a consumer-grade CPU, but still not enough to give maximum lanes to every card.
Restaurant is a bad example - during Yum Cha (11:00am - 3:00pm at Strathfirld this specific day) my friends & I were seated at a 12 seater table beside the kitchens 'out' door .. however, we had just come home from a 7am fishing trip where we caught almost no fish: long story short, the food carts were delivered *_from_* the 'kitchen entry' door *_to_* the 'kitchen exit' door (the 12 of us were taking soooo much food from the carts that they delivered the carts in the opposite direction!!!) [EDIT: (3:24) ironically, i am wrong; the 'kitchen/CPU' decided to re-route the 'data stream/BUS' to 'share' the data in the most efficient way possible - but the restaurant in my story had 2 doorways, which is even more apt to this video!]
I’ve had 3 way SLI plus PhysX before. Turns out a dedicated PhysX card was completely pointless when you’re already running 3 high end cards. But on that particular board, it was an x58 Classified, I believe it reduced PCIE lanes to x8/x8/x8 and x4 to the PhysX card. I’ve also had a couple boards with a PLX chip, in which case you probably won’t ever use up all your lanes. The point there was to allow for x16/x16 in 2 way SLI, or in the case of my nForce 790i Striker II Extreme, x16/x8/x16 in 3 way. Now, all those PCIE lanes are needed for m.2 storage. I’m running 4 on my z790 aorus master, so x16 to my 4090, and 4 by x4. But only because it’s PCIE 5.0 and that allows for x4 operation for that 4th SSD where on a pci 4 board, it’s something different, not sure. It can get complicated.
It's always a pain in the xss when I need to explain to my cousins or friends why PCIe slots have different lengths/speeds, different speeds at the same length, different bandwidth at the same length, etc.
Most current GPUs take up 2-3 slots and often cover up the PCIe x1, x4 or x16 slots below them, making those useless for any expansion. Going with PCIe riser adapter cables can help in some cases, but your PC case needs to handle all of those slots, or the additional slots you will need from owning an ATX or E-ATX board. I personally found out the hard way that AMD B-series boards run out of PCIe lanes for me and I'm stuck only using X-series boards. RTX 2080 Super (16 lanes), 2x M.2 NVMe on-board (4 lanes each), an Asus Hyper M.2 card with 4x 2TB NVMe (needs 16 lanes), and I want to add either Thunderbolt or USB ports or a DeckLink SDI card to increase my functionality. Without any additional devices I'm using 40 PCIe lanes! I need more! This doesn't include all of the USB devices or SATA drives I use for video storage while editing.
Just don't forget to set your jumpers on the card to the right IRQ and base address. You may need to change them in the BIOS for the onboard devices.. :P
I recently ran into an issue myself with a 3700X on an X570M board. I was using the board for a Proxmox server that virtualized TrueNAS. So I passed all my SATA controllers to TN, bifurcated my 4.0 x16 slot to x8/x4/x4, put 2 M.2 NVMe's and a dual-port 10G NIC in that slot, a dual 2.5G NIC in my 4.0 x1 slot, and an Intel ARC A380 into my 4.0 x4 (physical x16) slot. Needless to say it was not happy at times lol
In my streaming PC I have a 3060, a Cam Link Pro and a WiFi card, which fills all the slots in my mATX board. Unfortunately the WiFi card wouldn't fit in the case due to the orientation, so I had to use a PCIe extender to route it dangerously through the case 😆😅
I build media PCs that run 3 or 4 GPUs... but they are never a problem as they are basic outputs, usually at most running a 1080p video per output, and 2 or 3 displays per GPU. In fact the only time I have a problem is the hidden PCIe x16 slot that is connected at x1.
Looking forward to seeing pcie backplane expansion boards that plug into the back of the motherboard. So the case size doubles for the serious power users😂
My first build was an X58 system using that same motherboard. Having the extra PCI-e slots allowed me to bypass a lot of the limitations of the first gen i7 platform. I used a RAID adapter in an x8 slot for a RAID 10 array of 8 SATA3 drives because the native SATA ports were limited to SATA2. I also used a USB3 add-on card because the few USB3 ports on the mobo were non-native and used slower 3rd party controllers. I was also able to get two used GTX 580 cards just a couple years after they released for $250 each and ran those in SLI. The system itself was limited fairly quickly once PCI-e gen 3 came out, but having the extra lanes definitely extended its lifespan and capabilities for well over a decade. Unfortunately after the first couple of gens of HEDT the prices got insane relative to desktop, and now that Intel doesn't make HEDT, their desktop parts cost just as much as an older HEDT system and are far more limited specifically because of how gimped the PCI-e lanes are. Desktops today are no more than glorified consoles, or should be considered extreme mobile platforms at best.
Mine still has a Samsung 950 Pro 512GB NVMe as a boot drive. Used all 10 SATA ports as a "game" drive. Lol. Using that many, it didn't matter that it was SATA 2; it was fast. W3690 overclockable Xeon too, at 4.6GHz for the last 8-ish years I believe.
The main reason I dislike full ATX motherboards and full tower PC cases is the huge gap left under the GPU and the unnecessary extra space the case takes up on my desk. However, I wish mATX boards weren't so cheaply built and ITX boards weren't so darn expensive.
When I was plotting Chia on an X570 board with a 5900X I had 4 NVMe drives - one in the x16 GPU slot, one in the main NVMe slot, and the two others connected to the chipset, with an x1 GPU on the chipset as well. I noticed that the 2 NVMe drives on the chipset were significantly slower, until I figured out that the tiny fan on the chipset was not enough. So I added an external fan blowing fresh air directly on the chipset, and then there were no performance losses anymore.
4:31 yeah, had that. On a B660 I could barely use my mouse when transferring from a 20 or even 10Gbps NVMe external SSD. Did go for Z890 this time on my new build, so I hope that's less bad with the Z series, idk 🤷♂️
When you realize he's doing the SciShow style of presenting - waving hands in 3-4 directions, taking pauses at exact moments and pretending to be in awe... After you realize this, you can't help but pay attention to those motions and ignore the content. Go back to your natural state of awkward presentation.
Well, my home server (an old Intel 8th gen. beast) has all of its PCI-E slots filled up, and even its lone M.2 slot has a PCI-E x4 card in it. Dell (LSI) 8-port HBA card, Sun F80 800GB enterprise SSD, Realtek dual-port NIC, GTX 1650 Super GPU, 6-port SATA controller and a 10gbit network card. And all of these are actually utilized, not just dummy cards. Well, there's one free PCI-E x1 under the GPU, if only I could get to it somehow...
REALLY need consumer/prosumer level chips to start coming with a few more lanes. And perhaps a better mix of available slots: 2x x16 (connected to the processor), 1x x8, 1 (or 2) x1, and 1x x4.
Honestly didn’t realize how much of a difference the amount of lanes can make for a gpu. I was tinkering yesterday and I moved my gpu to a 16x physical slot that runs at x4 and the game was getting 3/4 of the frame rate with hitches and stutters down to like 8 fps compared to the butter smooth 65-75 in the full fat slot
You run out of PCI-e slots
Brilliant
This
GENIUS
Can someone explain the joke to me? I don't quite get it.
@@janveskrna It's not a joke it's a fact
I actually ran into a situation where a SATA DVD drive didn't work, because as it turns out, some motherboards share certain SATA ports with M.2 slots, and I just so happened to plug it into a port that shared its connection with the M.2 slot holding the SATA M.2 drive I also had plugged in. So, that was real fun to troubleshoot lol
I had that but worse, it was my boot drive that got disconnected. I just popped my first m.2 in and suddenly I couldn't find my drive. I was very confused!
Many motherboards do this.
Knowing is half the battle. 🤜🤛🏾
I never really thought about it in all the years I've built machines, until recently when I found out that using 3 of the M.2 slots (the gen5 one and 2x gen4) halves the GPU's lanes, and then if you use the last M.2 at gen3 the board disables 2 SATA ports 😂
I have an old Gigabyte motherboard configured as a TrueNAS server that is set up like that.
Absolutely share in that frustration. I had plans for what my dumb@ss wanted to do, and didn't check out the motherboard's layout before purchase. There were parts I didn't get to use until I sold a kidney for a Threadripper two years later. I seriously don't know why there weren't more HEDT systems on the market back then. Well, I guess there were, but they certainly weren't well enough known to me then.
It is also important to check your mobo before doing this kind of stuff, as it can have behaviour you didn't realise or expect. For instance, I had a motherboard that would disable the second PCIe slot (x8) if you plugged in a second SSD.
This is very common on mid-range chipsets. It even happens on most X570 boards that have a third x16 PCI-E slot, where its 4 lanes are shared with the second M.2 and 2 of the SATA ports. You can have 2 out of 3, but not all of them at once, as the chipset has to do some multiplexing magic to support the bandwidth.
Why not set the bandwidth of each PCIe slot?
@@fishbellyman in theory, you should be able to just split the lanes so that the m.2 gets 2 lanes, and the slot gets 2 lanes, but that adds some complexity and cost, and cheaper motherboards have to save every penny in manufacturing to stay competitive at lower price points while still being profitable
@@fishbellyman Because you can't, because that'd require complicated $$$ hardware support that doesn't exist on even high-end chipsets.
Discovered this via a glance at the manual.
"Unnatural attraction to PCs with lots of USB ports", me looking at my PC with a USB expansion card 💧😶
It’s ok, sometimes we need an extra usb port. Always better to have and not need than need and not have.
My Pentium 4 knows that feeling lol
Don't worry, I have two expansion cards. Although both only provide 2 ports.
I use a USB card, but mainly to power my VR headset, since it doesn't play well with the mobo USBs at all.
I am considering one since I'm running out of USB ports (I have 9 of them).
I think this is the very first Techquickie video where I learned something relevant for my personal setup. Namely that I should rather not plug my sound card into the 2nd x16 PCIe slot.
Depends how important the sound card is to you - if it's for livestreaming/recording multichannel audio, then I'd argue the consistent performance of dedicated CPU PCIe lanes is worth the slightly worse GPU throughput.
The speed drop will likely not matter even with a 4090 if the GPU, CPU and motherboard all support PCIe 4.0 or more. You ought to find out what PCIe speed your GPU is running at and you can find some online benchmarks at different speeds to judge whether halving it will affect you (techpowerup has such articles, for example).
@@leonro Well, I have a PCI 4.0 CPU, but a B450 board.
@@TazzSmk I installed the sound card for better sound and the software features.
@@Chuck_vs._The_Comment_Section got ya, then you can have it in "chipset" bus PCIe slot :))
After 30 years of owning PCs I've only recently realized that it's a waste not to fill the free slots with useful tools like an extra USB-C slot with power delivery... While using a non-transparent case and a non-gargantuan GPU, there's like no reason not to. I'm only ashamed I was never curious enough to properly explore these options in the past.
Startech is your friend for all these weird add in cards 😂 they even have serial controllers
A benefit of excessive USB is that you can put sketchy devices on their own USB header (there are two ports to a header) instead of sharing one header between two devices. I've had problems with my sketchy Red Dragon Chinese-brand keyboard. Also, with Realtek-based USB WiFi adapters that have this weird background utility which turns them on and off automatically, like some kind of hacky solution to a design flaw.
@@TheOCDDoc I use their 4-pin 3.5mm headphone jack to 2x 3-pin 3.5mm adapter to use my phone earbuds with a functional mic on a PC; the comfort is divine ^^
@@Mr.Morden ooooh, I remember having a keyboard like that, it was this fake mechanical one, from MSI maybe, with rubber domes and springs imitating the feel of mechanical switches. Apart from that cursed extra USB header for the RGB it was also loud as hell, my family made me sell it off even though I kind of liked it. xD
@@alchemik666 yup exactly.
It's always important to read the manual in order to make sure you're getting the best performance out of your PCIE lanes.
No kidding, and it doesn't necessarily get less confusing if you have the workstation chips he mentioned either. My LGA 3647 board is full of IT8898 MUXs and has a PEX 8747 PCIe switch.
Can you find them online? I bought a used system and don't have one.
@@WeatherMan2005 yes
@@WeatherMan2005 They should be available on the manufacturer's website.
Ok
In simple terms:
- PCIe lanes are provided by the CPU. The motherboard is in charge of routing slots to the CPU.
- The closest x16 slot is almost always routed directly to the CPU at full width and full speed.
- Motherboards these days usually have a couple of M.2 slots sharing 4-8 directly routed lanes.
- Consumer CPUs leave the motherboard the remaining 4 or so lanes for everything else. Meaning everything on the motherboard that's not in the aforementioned slots shares 4 lanes via the chipset. How exactly they share it depends on the motherboard.
For example, Ryzen 9000 has 28 lanes of PCIe 5.0. Some high-end consumer motherboards do: x16 to the primary slot, x8 to M.2, x4 to the chipset. The secondary x16 slot will at best work like 4.0 x8, because the chipset uplink is at most 5.0 x4.
This is essentially why workstation/server CPUs and motherboards are so much more expensive than consumer ones. Extra cores at lower clocks aren't that appealing, but the I/O capacity is night and day.
Well, the PCB most likely costs about the same to produce. But yes: more dedicated parts, lower sales numbers, and commercial users who can afford to pay more.
Thanks for information
A little bit wrong.
On the Intel platform, for a normal consumer desktop, the CPU usually has 20-24 PCIe lanes, and there are hidden lanes used to connect to the PCH.
That PCH also has its own PCIe lanes, some shared and some dedicated. The best method is to view the motherboard block diagram, if provided.
@@AlfaPro1337 yeah, but it's more or less the same. The PCH was previously known as the south bridge; it's still a chipset and it's handling I/O. AMD integrated part of it into the CPU as the I/O die, then offers PCIe lanes to the motherboard chipset in return, so on paper AMD has 4 more. But it's the same idea.
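A minimal sketch of the lane budget described in this thread, assuming the Ryzen 9000 split quoted above (x16 GPU slot + x8 CPU-attached M.2 + x4 chipset uplink); the per-lane rates are the standard PCIe line rates with 128b/130b encoding, so this is raw bandwidth only:

```python
# Lane budget sketch for a hypothetical board layout matching the comment above.
cpu_lanes = {"x16 GPU slot": 16, "CPU-attached M.2 (2 x x4)": 8, "chipset uplink": 4}
assert sum(cpu_lanes.values()) == 28  # the Ryzen 9000 figure quoted above

def link_gbps(gen: int, lanes: int) -> float:
    """Approximate usable GB/s: transfer rate x 128b/130b encoding, 8 bits per byte."""
    gt_per_s = {3: 8, 4: 16, 5: 32}[gen]
    return gt_per_s * (128 / 130) / 8 * lanes

# Everything hung off the chipset (extra slots, SATA, USB, NICs) funnels through
# that single Gen5 x4 uplink, which carries about as much as a Gen4 x8 link:
print(f"chipset uplink (5.0 x4): {link_gbps(5, 4):.1f} GB/s")
print(f"Gen4 x8 for comparison : {link_gbps(4, 8):.1f} GB/s")
```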
I just discovered this channel. I used to watch LTT a long time ago. I like this format and this topic.
You have to download more PCI Express lanes.
With RGB.
To some extent this is a meme. On the other hand, you can at least download a new BIOS that may or may not allow you to use a newer version of PCI-e: a good number of AM4 boards and some Intel Z390 boards were updated from 3.0 to 4.0. But some low-end AM4 boards will overheat, so you will need to buy some heatsinks for your chipset.
An RTX 40 series card doesn't exactly leave any physical space to plug anything into your other slots. Unless you squeeze one of those riser extension cables in there.
It does if you put it on a water block; that cuts the size down dramatically.
@@andydbedford Yes, that is if you're brave enough to do so; I'm pretty sure the average Joe isn't going to be plumbing a water block.
@@jamv2122 Anybody who's into watercooling would. So it's not that crazy. I'd do it for sure.
@@larsondavis8155 Yep, if you're brave enough, go for it, but a lot of people aren't, hence why I say the average Joe isn't likely to, especially if they look at it as "use a riser cable and potentially damage the very expensive GPU I may have saved up to get".
@@andydbedford I didn’t think of that.
Linus didn't break it down when he said let's break it down. what has this world come to? normalcy?
Bring it - Oh - it is broughten!
Nah... idiocracy... "Welcome to YouTube, I love you"...
That Spirit Breaker at 0:49 is definitely getting reported.
Brown boots and quelling blade, skilling e first and eating while playing?
@@sarthak-ti. I zoomed in on his screen and could just make out that spirit breaker is the skill/spell he’s about to use - I think. It’s at the left, bigger than anything else on his actions bar.
*TLDR* :
You have 24 PCIe lanes available from your CPU:
- The GPU (usually the first PCIe slot) takes 16 lanes for itself (« _CPU lanes_ ») ;
- The 8 remaining lanes are given to the chipset (« _chipset lanes_ »), and the chipset has to deal with them to share/spread those 8 lanes between anything you put inside all the remaining PCIe slots.
So if you put just a USB hub or a sound card in the other PCIe slots, it's not a big deal, as they don't always try to bother the CPU all at the same time. But if you put heavy equipment in those chipset lanes, you'll have a traffic jam and the poor thing will have a hard time trying to transfer everything at once to your CPU.
Server/workstation CPUs have lots of PCIe lanes, so it's not a problem with those (it's also why they have *tons* of pins on their sockets).
I think the questions "where" and "how many" are especially important for M.2 SSDs, because even motherboard manuals often don't include information about what might impact the GPU slot and what won't.
An interesting alternative is a 4x M.2 NVMe card by Sonnet which has its own internal switch, so it can run in x8 mode, dynamically load-balancing four x4 SSDs.
@@TazzSmk I would already be a fan if mobo manufacturers included some info for common configurations (especially for the main M.2 slot and the main GPU slot, because on some boards they impact each other's performance).
Do you know / remember from which brand? Just to maybe avoid in the future.
It's going to be interesting now, seeing as Arrow Lake platforms might be offering 48 lanes total. 24x Gen5 (285K Ark Page) from the CPU and 24 more Gen4 from the Z890 chipset (Ark page as well). I could almost see a full 7-slot ATX board coming back. 16x Gen5 from the CPU for the top slot and dual Gen5 x4 slots for SSDs. That leaves 24 to be split between 6 slots, so 6 Gen4 x4 below the main one. Of course, you're still limited by the 8 DMI lanes from the CPU to that chipset, but the idea that you could in theory populate a whole very reasonable ATX board with every slot on a consumer platform is very enticing.
This is the whole reason my last build was X99: I have a FireWire card, a TV tuner card, and obviously a GPU and M.2 SSD. I just wish there were a few more lanes on modern consumer platforms; current HEDT/workstation platforms have WAAAAY more than I could ever use, and have gotten MUCH more expensive since I built my X99 system. I just hope as we move to Gen5 and 6 that we'll start seeing devices using fewer lanes for the same bandwidth.
I did the same thing when I built my first system which was a first gen i7 on the x58 platform. For not too much more money I went with HEDT specifically for the extra PCI-e 2.0 lanes. That system lasted me for well over a decade because I could utilize most of the PCI-e slots to get around the limitations of the chipset which only offered sata-2 ports and usb2 ports natively. Having a dedicated raid card for 8 sata3 drives extended the lifespan of that system as well as the bandwidth in raid-10 was exceptional for the time. The only thing holding back the system was the PCI-e lanes were 2.0 so eventually video cards were being throttled as that was the only device on my system that could saturate the bandwidth.
Unfortunately, when I wanted to upgrade, HEDT systems had gotten much more expensive, and now Intel doesn't even offer one and Threadrippers are not even a consideration due to the price. I eventually settled on a cheap used 3rd gen i7 3770K about 4 years ago for $120 (mobo, CPU, RAM) and was actually surprised at how much better it was at gaming despite being gimped for PCI-e lanes. My X58 930 could be pushed to 3.8GHz, but once I got the 3770K and overclocked it to 4.3GHz it was a drastic difference. I still have the X58 as a backup server and am gimping along with my ancient Z77 system coupled with an AMD 6600. The one thing that has saved my system so far and given it legs is an updated BIOS which allowed me to install my OS on an NVME drive through an x8 PCI-e 3 slot, since the graphics card was also locked at x8. Unfortunately I lose half the bandwidth of the 2nd x8 slot with just one NVME drive because there is no support for lane bifurcation, which is frustrating because it should be standard for all mobos to split lanes into 4x4x4x4; for whatever reason most desktop platforms don't allow that to work, and most documentation doesn't even state whether it's supported.
I would be curious to know what the theoretical bandwidth maximum for a 4090 is, as in does it come close to saturating a full 16 lanes of PCI-e gen 4, and if it had been spec'd with gen 5 would it work just the same limited to x8. It would be nice if new cards were designed for x8 by default so that the other PCI-e slot could work at x8 as well. Four gen 4 NVME drives could easily run on a gen 5 slot at x8, assuming bifurcation was possible. I definitely agree with you that with higher bandwidth lanes we shouldn't still be stuck with one dedicated x16 slot if using an x16 video card, even if that card comes nowhere near saturating the bandwidth, especially once we get to gen 6 and 7. It is crazy to think that one PCI-e gen 4 x4 slot has the same bandwidth as my X58's PCI-e gen 2 x16.
@@thephantomchannel5368 the funny thing is, my previous PC was ALSO X58, I just had an i7 950 instead 😅
For real, I "upgraded" from x99 to x570 and it felt like a downgrade aside from the CPU speed.
Severely cut down PCI-E lanes, slots, SATA ports, and RAM. Had to retire my NIC and was forced to get an HBA to keep using the SATA SSDs I was using natively. Have always had a storage bottleneck I didn't realize was from limited PCI-E lanes for years... but at least I can still use it.
Now looking at X870E? lol. 1 functioning port. The next 16x is only wired 4x (at last gen) and my HBA is 8x (and even if used it'll drop the GPU slot to 8x). SATA ports also dropped even more, now there's often 2-4, sometimes 6, when the platform still supports 8, just like x570. And even the 4 DIMM slots are a lie, as there's major issues using more than 2 DIMMs.
Instead, boards now have 4-6 NVME slots taking up the space... when we could've just bought an M.2 NVME card and installed drives on it if we needed it. Who the hell is going to replace half a dozen of SATA SSDs with NVME simply to switch motherboards? When using all those ports probably isn't even going to work anyway?
If we want a system equivalent to X99, we have to pay 5-10x the price (ie; Threadripper), which sucks for gaming (especially without X3D). X99 cost *half* as much as a modern X870E platform.
It's ridiculous and I wish big content creators like Linus or Steve addressed the severity of this issue, as they're the only ones that can actually get companies to listen.
---
Found out why YouTube filtered the comment, I think. 3rd time's the charm?
@@AceStrife honestly, I don’t mind losing some SATA, I only have 2 SATA SSD’s in my PC, and plan to have even less eventually moving them to M.2, but that’s another reason I want lots of PCIe 😛
@@AceStrife What is strange is that SATA 3 has been the standard for more than a decade, while SATA Express morphed into NVMe, which has to be installed directly onto the mobo with dedicated sockets and cooling. It would be great if we could get higher capacity NVMe drives in the 2.5" form factor with dedicated headers, using just one cable for both I/O and power per drive. It was nice back in the day when I could run a RAID 10 array with 8x 4TB drives (16TB usable) at relatively high bandwidth for how slow the individual drives were at the time, usually topping out around 200MB/s per drive.
I definitely agree that we as consumers should be demanding more from the companies that sell us gimped products at inflated prices. The gap between consumer/desktop and server/workstation keeps getting wider, and so far the pro parts are the only ones advancing, while desktop seems stagnant apart from CPU speed, which has presumably hit a wall judging by the last couple of gens from both AMD and Intel.
LMG is a mind reader. I was literally just thinking of this question yesterday.
I love tech quickie. 1/4 of the video is a sponsor spot...
What people need to remember is that cutting your GPU bandwidth to x8 mode, or running it one PCIe gen lower, will not result in a significant performance loss like everyone tried to make out when Ryzen first moved to PCIe 4. Just don't do both, especially on low-end GPUs that are already bandwidth starved.
This was a great video! I've been into PC building for more than a decade now and I've never fully understood this topic. The restaurant analogy was awesome for a visual learner like me. 😊
I'm not sure if this is still a thing with intel these days, but on my old i7-7700K PC you also had to be mindful of certain things that were sharing pci-e lanes. Want to use all of your SATA ports? Well, one of your pci-e slots is going to be disabled. Same with the M.2 ports and the U.2 ports as well. The PC has an EVGA Z270 FTW K mobo in it and that was an annoying issue I had to think about when building that computer years ago. There were just not enough lanes to go around.
You look so much younger on techquickie set lol. I almost thought this was an old video for a second.
You forgot to mention that some PCIe slots (usually the x1 ones) become unavailable when the PCIE2 or PCIE3 slot is populated.
Linus, your videos are way ahead of their time. I usually find your videos useful, and when I see the upload date, it's usually 7-8 yrs old 😮😢😢😢😮
You forgot the SoundBlaster with golden contacts, platinum plated resistors, and diamond encrusted tubes for the audiophiles amongst us!
The references to how a kitchen works are perfect for relating to a desktop motherboard... very nice.
Besides the aforementioned solution of going with a threadripper/epyc/xeon, there are mining motherboards that have more (chipset) lanes. Since mining is over, these can be repurposed for other compute, like some locally-hosted ChatGPT alternative (e.g., Ollama+OpenWebUI) from used GPUs.
I miss the days when less was integrated into the motherboard and there were more options to customize. Like, don't want SATA? Don't add a SATA controller. Never use onboard audio? Then there's no onboard audio to pay for and never use.
This also made computers much cheaper. Each individual part was competed over by multiple companies. Now you get just one.
The problem is that back then you had so many devices that were incompatible with anything but software specifically made for that individual unit. Imagine buying a sound card, booting up your favorite game, and the sound doesn't even work because that game doesn't support your model of card.
@@KaitouKaiju That's a world without drivers.... not what we were talking about.
If you like having a lot of NVMe storage, that’ll take a good number of lanes. However you probably won’t use more than 2 at any given time.
Iirc the AMD desktop CPU design allows more lanes than Intel.
It usually says in the Motherboard's manual what the compromises are for your extra stuff. My B550-F board's second PCIe x16 and all 3 PCIe x1 slots share connectors and the second M.2 and SATA ports 5 and 6 share connectors. For the former, I can only use the x16 as a x1 to use the three x1 slots. If the x16 is used for anything more, all three x1 slots are disabled. Even something as minor as a x4 device, like a capture card, disables all three x1 slots. For the latter, SATA ports 5 and 6 are disabled when the M.2 slot is in use. For both examples, the board is prioritizing which slots would have the most bandwidth use and disabling the ones that are minor.
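The general shape of those manual tables can be sketched out like this (totally hypothetical slot names and rules, just to show the idea, not the actual B550-F table):

```python
# Hypothetical sharing rules in the shape a motherboard manual's PCIe table uses:
# if the condition on the left applies, the resources on the right get disabled.
SHARING_RULES = {
    "PCIEX16_2 running above x1": ["PCIEX1_1", "PCIEX1_2", "PCIEX1_3"],
    "M.2_2 populated":            ["SATA5", "SATA6"],
}

def disabled_by(build_conditions: set[str]) -> set[str]:
    """Return everything the board would shut off for a given configuration."""
    lost = set()
    for condition, casualties in SHARING_RULES.items():
        if condition in build_conditions:
            lost.update(casualties)
    return lost

# e.g. a capture card in the second x16 slot plus a second M.2 drive:
print(disabled_by({"PCIEX16_2 running above x1", "M.2_2 populated"}))
# all three x1 slots plus SATA 5/6 (set print order may vary)
```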
Learned this the hard way, as I work in broadcast where I was using dual GPUs for 6 monitors, plus a capture card, plus M.2 drives. Fixed the issue by getting a Ryzen 7000 series chip, since it has integrated graphics so I didn't need one of the GPUs anymore, and instead of using multiple low-capacity drives I bought a single 8TB M.2 drive.
Good refresher and overview. As you probably know (but most viewers might not), having lots of cards (and slots) used to be the norm before so much functionality was moved directly into the MB chipsets. In addition to the graphics card, you would also have a network card (or, going back further, a fax/modem card), an IDE controller for storage, parallel/serial I/O cards, and a sound card. So the idea of so many cards isn't really a point of concern for us old fogies, although the limitations of the faster lanes certainly are something new. Thank god for progress!
This is just my PC, you can actually go even wilder since m.2 slots have 4 lanes, something I am familiar with having my ethernet card on one. (Also double GPU is more functional than you might expect since you can divide up programs in windows, so games on your primary and all background stuff on another)
It's especially hard when you try to run a server with consumer hardware 🤦. Most mobos have one x16 PCIe slot and the rest are x4 or x1, which is not enough if you plan to run a GPU, an HBA card, and a 10G NIC at full bandwidth. 😔
this
Who are you? And why do you have the exact same problem as me?^^
This is literally my question in my head a few days ago as a new joiner in pc building lol
Simple solution...Download more PCI-e slots!
Linus after asking every other hypothetical question in these videos:
It dEpEnDs
5:20 that's why the Samsung 990 EVO with its 4 lanes of PCIe 4.0 or 2 lanes of 5.0 confuses me, since even if it's using 2 gen 5 lanes, it still won't free up the other 2 lanes; maybe at some point in the future with new designs, but not currently.
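For what it's worth, the two modes land at basically the same ceiling; quick sketch with approximate per-lane figures:

```python
# Approximate one-direction bandwidth per lane in GB/s (after encoding overhead).
PER_LANE_GBPS = {4: 1.969, 5: 3.938}

gen4_x4 = PER_LANE_GBPS[4] * 4  # ~7.9 GB/s
gen5_x2 = PER_LANE_GBPS[5] * 2  # ~7.9 GB/s
print(gen4_x4, gen5_x2)  # same ceiling either way; the catch is that current
                         # boards still wire all 4 lanes to the slot regardless
```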
Back around 2000, I used all 5-6 slots in my ATX motherboard: AGP video, PCI sound card, network, SCSI, dial-up modem, and some other ISA card that I can't remember the purpose of. That last one broke the camel's back and destroyed my entire motherboard.
The restaurant / chefs-to-customers analogy was great. I'll use that in the future.
This would be an interesting topic to follow up on, and see what would actually happen if a power user wanted to max out a mid-tier board compared to high end board, vs a purpose built threadripper or xeon board. And whether ANY consumer board could be meaningfully maxed out at all.
I know losing backwards compatibility for a gen would suck, but I really feel a fully new slot design to replace PCIe would be great now. Tech has improved so much since it was made; we could cut the slot down to x1 size and still be as good as x16, and then, just like current PCIe, make it bigger again over time while maintaining backwards and forwards compatibility. If designed right we could make it better in other ways too. For example, fix the issues caused by big heavy cards that the board struggles to support: having each slot really be two slots very close to each other, so that one card snaps into both at once, would give more support. You could also add mounting points that allow a shaft to be installed through the card to hold the weight up. At the same time these changes could help with power delivery and hopefully avoid any more issues like we've had recently with melting connectors, given how much power the big cards need.
Thunderbolt 5 runs at PCIe x4 speeds and carries high power too, which shows how much we can shrink things, and that's in a cable that has to be user friendly, so a slot could be even smaller. This would also open the door for more interesting case designs if done right.
This video perfectly explained my situation. I was running out of space on my boot SSD, so I made the jump to another NVMe SSD. I got one, but had nowhere to install it. I bought a PCIe-to-NVMe adapter and bang... but my system was super throttled. This video explained why that was the case.
The 5,1 Mac Pro (2010-12) has four PCIe slots; Slots 1 and 2 are x16, and slots 3 and 4 are x4. I have an RX 580 in slot 1, a Jeyi RGB NVMe card with a 2 TB 970 EVO Plus SSD in slot 2, a Sonnet four-port USB-C card in slot 3, and a FireWire 400 card in slot 4. The fourth card might sound like a strange choice, but FireWire 400 is very useful for me as a vintage Mac collector and tinkerer, and the later cheesegrater Mac Pros only have FireWire 800 built-in.
Motherboards nowadays come with barely any PCIe slots. I can't believe that ATX motherboards with 2 slots (2nd max 4x) are the norm. Part of that is due to the increased number of M.2 slots, which also take up PCIe lanes. So there's a tradeoff between M.2 and PCIe, with some boards further disabling PCIe lanes when M.2 slots are filled. But when it's possible to have motherboards with 4 PCIe slots and 3 M.2 slots, with no disabling when filled... a lot of the motherboard manufacturers have no excuse. It's less bandwidth to add more PCIe slots by using older gen PCIe too.
2:48 skip ad
Thank you
Thanks Linus. Well explained. I'll put this in the SAVED folder.
You can fit anything in there, but do not forget about the sound cards, okay? They're still relevant for gaming PCs.
5:42 I'm disappointed there's no Fruit Ninja visual effect
"Expansion cards are very unlikely to all be operating at peak capacity"
Ah... the supressed memories of first-gen VR hardware :)
Good timing on this video. I was just saying to someone last night that my next rig is probably going to be a Threadripper so I can get access to more PCIe lanes. Current desktops are nice and all, but kind of lacking in that regard. Personally, I think it's high time they dropped some of the SATA ports, for instance, to leave more lanes in the chipset to dole out, say for a bigger PCIe slot than x4. x8 would be nice for the boards that claim server/workstation capability with certain CPUs and RAM, but kind of leave out some of the nicer bells and whistles due to desktop constraints. My guess is the PCIe lanes are what's so... limiting. Anyways, with wider slots available, more is available to the end user for potential builds.
To get around this problem on my end, while still technically spending less than the cost of a Threadripper system of comparable usage, I built 3 desktops. Each one handles different tasks with different components for what it's intended to do. And on the power draw side of things, while it can get pretty power-hungry when everything is running full tilt, it should still come out roughly on par with or less than said Threadripper system based on my estimates. Some of that power draw is because of multiple monitors, not just the rigs; people tend to forget those pull a good few watts themselves depending on size and specs. But all of that is considered in my estimates for both 'setups' as it were, the actual and the comparative.
Considering getting a KVM switch for it, because multiple keyboards and mice can get a bit hectic.
PS. For those wondering if they can pull this off themselves: I did this on roughly a 20k CAD income over the course of 3 years, give or take a few months. All the parts together at cost basically put it on par with roughly a 12-15k Threadripper system, depending on exact costs at time of purchase due to sales, etc. I got it all for about 5-6k total, with one of the monitors grandfathered in since it was bought a long time ago. Including it, easily 8k at original cost.
The cost of gaming desktops has become ludicrous for how gimped they truly are in comparison to server/workstation platforms. What we need is at least 32 PCI-e gen 5 lanes direct from the CPU (48 would be ideal) and a quad channel memory controller, which would allow high speed RAM to operate in all four slots at full bandwidth. Even with that little of an upgrade it would still come nowhere near the I/O capability of the server/workstation systems, which are upwards of 128 PCI-e gen 5 lanes and 12 channel DDR5 RAM. Also worth noting is that PCI-e lane bifurcation should be standard on desktop for better compatibility with add-on cards.
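Rough comparison of those two tiers, just as a sketch; it assumes DDR5-5600 and approximate per-lane gen 5 bandwidth, so treat the exact figures loosely:

```python
# Assumptions: ~3.94 GB/s per PCIe 5.0 lane (one direction), DDR5-5600,
# 8 bytes transferred per channel per transfer.
PCIE5_LANE_GBPS = 3.938
DDR5_5600_CHANNEL_GBPS = 5600e6 * 8 / 1e9  # ~44.8 GB/s per channel

def platform_io(pcie5_lanes: int, mem_channels: int) -> tuple[float, float]:
    """Return (total PCIe 5.0 bandwidth, total memory bandwidth) in GB/s."""
    return pcie5_lanes * PCIE5_LANE_GBPS, mem_channels * DDR5_5600_CHANNEL_GBPS

print(platform_io(32, 4))    # the "little upgrade" above: ~126 GB/s PCIe, ~179 GB/s RAM
print(platform_io(128, 12))  # workstation class: ~504 GB/s PCIe, ~538 GB/s RAM
```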
@@thephantomchannel5368 Over all, agreed.
1. CPU lanes specifically for GPU/M.2 use: yeah, I would like to see those double up or something like that. There are more than a few reasons for it, but the comment would get super long if I even dared mention them all. So the main one is this: multiple-GPU support for 2-person, 1-PC gaming setups. Virtualization with a really powerful PC can also do this, but cost is a factor for many. Meanwhile, a good CPU and a decent build with 2 lesser GPUs serving up separate monitors, without any extra virtualization to deal with, would be a nice thing to have for families who want their kids to be able to use the single computer they have without buying another. One SSD each on the CPU, one GPU each on the CPU, and at that point the only virtualization is splitting the CPU resources, and maybe the networking to some degree. The RAM too, of course.
And this is possible with multi-gpu methods other than sli/crossfire. I just don't know the details well enough to go further than this in mentioning it at all.
2. Bifurcation. Yeah, I've had to replace a board that had a capacitor go pssst and vent some of its juice onto the board; the conical spray pattern pointed right at it.
That board did not have bifurcation available. The one I replaced it with, I'm pretty sure, does have it. (If not, oh well, because that setup technically doesn't need it.)
But my other setups, they do have it, and I really want to make use of it properly as it were... but there just isn't enough lanes to justify even trying to use it.
These are desktop/workstation hybrid style motherboards, so they come with a few extra bells and whistles, which is nice.
But I still find them lacking, just barely. Mostly because they have x4 ports where I need an x8, since the other slot that could maybe hold it... is covered by the GPU, and I don't want to run my GPU at x8 in the second slot connected to the chipset. It would technically free up the x16 slot near the CPU, but... I'd rather not go that route. It works, but I don't like it. The top slot is meant for the GPU, so...
Anyways. That's enough out of me. I agree with you, over all. Heck, push the numbers higher. We're not saying big enough numbers yet. Demand too little, and they won't give much. Demand a lot, and maybe we'll at least get a little.
Back in the day I had every slot in my Apple II Plus filled; keep in mind it only did one thing at a time. In no particular order: a 16K RAM card, an eighty-column card so it would show lowercase and twice as many characters across the monitor, an Epson printer controller (no printer drivers in those days), a Z80 plus 64K of RAM for trying out CP/M programs, a modem card, a Mockingboard sound synthesizer, and 2 dual floppy disk controllers, because trying to run Apple's Pascal language system on anything less than 4 100K floppy drives sucked.
GPUs for non-realtime 3D rendering are a fascinating example of why link width is not always critical. For something where a frame is going to take many seconds or even minutes to render, it doesn't make a lot of difference how long it takes the CPU to serve up the render instructions to the GPUs, so the performance hit of running those extra cards in x4 or even x1 slots adds only a tiny percentage to the total time. You can even toss them behind an additional PCH layer if you want, say, a whole frame of eBay P4s crunching away on your next backrooms exploration video.
One example - crypto mining rigs with a heap of GPUs connected to one PCIe lane each. Any potential performance hit was dwarfed by having the additional shaders and VRAM for hashing!
I was wondering about this.
Looking at modern motherboards, the multiple x4 connections used for PCIe storage are a concern. Wouldn't it bottleneck? I'd like you all to do a follow-up video about this. It would be interesting to see at what point we actually saturate the PCIe lanes.
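Back-of-the-envelope version of where it usually bites, assuming the common layout where the chipset-attached M.2 slots all funnel through a single gen 4 x4 uplink (that's an assumption, check your board's block diagram):

```python
# Assumed: chipset-attached M.2 slots share one PCIe 4.0 x4 uplink, ~7.9 GB/s.
UPLINK_GBPS = 7.9

def uplink_saturated(active_transfer_gbps: list[float]) -> bool:
    """True if the drives transferring *right now* exceed the shared uplink."""
    return sum(active_transfer_gbps) > UPLINK_GBPS

print(uplink_saturated([7.0]))            # one fast gen 4 drive: fine
print(uplink_saturated([7.0, 7.0]))       # copying between two of them: bottlenecked
print(uplink_saturated([3.5, 2.0, 1.0]))  # mixed lighter loads: still fine
```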
DOTA ON THE SCREEEN FOR 2 SECOND LETSGOOOOOO
5 cards? Where am I supposed to install my dedicated sound card?
get 5 times the Dolby sound quality
Just use an audio interface
And then use USB headset
What do you use dedicated sound cards for (apart from being a musician of course, or using toslink if that's what you need (I mean my mobo has toslink, but probably not the case for everyone))?
@@Martititi Avoiding the noise on the integrated audio. Separate cards are usually better insulated, while on the integrated audio it is common to hear weak CPU electrical noise on the audio output if you pass it through to a high-end stereo (hard to hear on headphones).
Damn who else thought Sound Card before any of the others? So sad for Sound Blaster 😢
I have had 6 GPUs attached to a single MSI x470 MB and had all of them rendering blender projects. Worked fine but the 128GB of system memory had trouble keeping up.
We got a workstation at work that has 5 PCIe slots, all used with different devices. the IT guy told me that in order to get everything working he had to disable the WiFi, Sound Card and SATA ports.
Remember IRQ conflicts? Don’t miss those.
MOBO manufacturers and AMD/Intel need a good kick up the ass with PCIe lane limits.
Even on the most expensive boards, if you put anything in the second PCIe slot, you're gimped to x8/x8.
I just want to have 2 GPUs (gaming and add monitor/streaming/encoding/media server), use the SSD slots and have a sound card or cap card at the same time.
Even the "workstation" Asus X870E ProArt is limited to 3 x16-length slots, and there isn't even an x4 or x1 slot from the chipset. (The 3rd slot is x16 physical running at x4.)
I'd have to get a Threadripper or extremely expensive workstation class set up for that.
With how fast PCIe is now, you can probably run both GPUs in x8 with little to no performance hit.
Last year, I had the fun of buying a new NVMe drive. I didn't realise it would mean giving up two SATA ports: I would have to get rid of one of my drives and likely my optical Blu-ray drive too. So I bought a PCIe expansion card, a nice fancy one too. I unplugged my old network card and replaced it with the storage card. I didn't realise that meant the PCIe slot for my GPU would become bifurcated. Not great, especially because it meant I couldn't use SLI with my two GTX 1080s.
Here I am, months later, accepting that I'll just need to hold out til I upgrade next year.
There's nothing unnatural about wanting more USB ports. I've yet to find a PC with enough of them.
Bruh how many peripherals can someone use? I only have headphones & KB + M, I can see most people having speakers, maybe a webcam, some external storage like hard drives and flash drives, but that'll use like 6 or 7 which is easily what most Mobo+case will have, how many USBs do we need?!?
@@oscartomlinson11Sim racing stuff, midi keyboard, audio interface, controller, vr headset, webcams.
I thought this was a family friendly channel? Here I am listening to Linus talk about having all the slots filled.
This was legit super interesting! I learned a lot
The real challenge is finding a good motherboard that has enough PCIE slots. Basically everything has one x16 slot and then like 3 or 4 X1 slots which is pretty much useless because basically nothing useful will fit in those slots.
There's definitely a "having all your slots filled" joke in there somewhere
Linus has some very different restaurant experiences than the rest of us i think
USB expansion cards are a must-have in all of my non ITX builds. And the people who say a good soundcard has no advantage over onboard audio are a little crazy imo. And those are just the cards that go in all of my builds, well and a GPU of course. Yeah, I'm one of those guys who fills every slot. If I have one sitting there empty, I get out the mobo manual, look at the PCIe table, and start thinking what I can do with the slot to get some use out of it. But I'm also someone who still has to have front bays in my cases, so I'm a bit crazy too I guess. No perfectly good PCIe lanes should go to waste.
What you're saying is nonsense. It's like saying "I will fill my house with cabinets and furniture just because there is empty space in the middle of the living room". Sometimes you need some room to spread your legs, you know?
I'm not saying you're not allowed to, but you're also likely not using those additional cards and you just have them there for the show.
Be mindful of what you're doing - though, your PC, your rules...
@@cristinelcostachescu9585 I would hardly equate a crowded home with a PC, and I'm also not saying I expect others to do what I do. I'm definitely not putting expansion cards in my rigs just for fun or show though. Who's going to see them? I have many perfectly usable modern PCs sitting around not getting any use, but I don't have any PCs in use with unused I/O. I regularly run up against bandwidth limitations. I would argue the nonsense is you telling me how I should use my equipment. But it's not just you. I've been encountering numerous people lately who can't seem to comprehend, or don't approve of, people using PCs differently than they do or than what is typical. The best part of the PC platform is its flexibility. Not getting the most out of it, and discouraging others from enjoying the flexibility it offers, is the only nonsense I'm seeing.
Get more PCIe lanes by buying a Xeon workstation. Even then, you'll have to work out how those lanes map to the slots; there may be more than on a consumer grade CPU, but still not enough to give maximum lanes to every card.
Restaurant is a bad example: during yum cha (11:00am - 3:00pm at Strathfield on this particular day) my friends & I were seated at a 12-seater table beside the kitchen's 'out' door. However, we had just come home from a 7am fishing trip where we caught almost no fish. Long story short, the food carts were delivered *_from_* the 'kitchen entry' door *_to_* the 'kitchen exit' door (the 12 of us were taking soooo much food from the carts that they delivered the carts in the opposite direction!!!)
[EDIT: (3:24) ironically, I am wrong; the 'kitchen/CPU' decided to re-route the 'data stream/bus' to 'share' the data in the most efficient way possible, but the restaurant in my story had 2 doorways, which is even more apt for this video!]
I've had 3-way SLI plus a PhysX card before. Turns out a dedicated PhysX card was completely pointless when you're already running 3 high-end cards. On that particular board (an X58 Classified, I believe) it reduced the PCIe lanes to x8/x8/x8, plus x4 to the PhysX card. I've also had a couple of boards with a PLX chip, in which case you probably won't ever use up all your lanes.
The point there was to allow x16/x16 in 2-way SLI, or in the case of my nForce 790i Striker II Extreme, x16/x8/x16 in 3-way. Now all those PCIe lanes are needed for M.2 storage. I'm running 4 drives on my Z790 Aorus Master, so it's x16 to my 4090 and x4 to each SSD. But that only works out because it's PCIe 5.0, which allows x4 operation for that 4th SSD; on a PCIe 4 board it's something different, not sure. It can get complicated.
I can remember the days of IRQ conflicts and assigning devices yourself... luckily those are almost completely gone.
Good thing they give us all the PCIe slots.
It's always a pain in the xss when I need to explain to my cousins or friends why PCIe slots have different lengths/speeds, different speeds at the same length, different bandwidth at the same length, etc.
Most current GPUs take up 2-3 slots and often cover up PCIe x1, x4, or x16 slots, making them useless for any expansion. Going with PCIe riser adapter cables can help in some cases, but your PC case needs to have openings for all of those slots, plus any additional ones you'll need from owning an ATX or E-ATX board.
I personally found out the hard way that AMD B-series boards run out of PCIe lanes for me, so I'm stuck only using X-series boards.
RTX 2080 Super (16 lanes) , 2x M.2 NVMe on-board (4 lanes each), Asus Hyper M.2 Card with 4x 2TB NVMe (needs 16 lanes) and I want to add either Thunderbolt or USB ports or a DeckLink SDI card to increase my functionality. Without any additional devices I'm using 40 PCIe Lanes! I need more!
This doesn't include all of the USB Devices or SATA drives I use for Video storage while editing.
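If anyone wants to tally that up, here's a quick sketch; the per-device lane counts are as listed above, and the CPU lane budget is an assumption that varies by platform:

```python
# Assumed lane needs, as listed above, plus a rough mainstream CPU lane budget.
devices = {
    "RTX 2080 Super":               16,
    "On-board M.2 NVMe #1":          4,
    "On-board M.2 NVMe #2":          4,
    "Hyper M.2 card (4x 2TB NVMe)": 16,
}
cpu_usable_lanes = 20  # roughly typical for a mainstream desktop CPU: x16 + x4 for one M.2

wanted = sum(devices.values())
print(wanted)                     # 40 lanes wanted before Thunderbolt/USB/DeckLink
print(wanted - cpu_usable_lanes)  # ~20 lanes' worth that has to come from the
                                  # (shared) chipset or get dropped to x8/x4
```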
You'll need to switch to ISA slots for the next card you add
Just don't forget to set the jumpers on the card to the right IRQ and base address. You may need to change them in the BIOS for the onboard devices... :P
Depending on the chipset, you can also halve the PCIe lanes for the main PCIe slot, which is most probably the one used by your GPU.
You have no PCI slots left. Feels like those math questions in middle school 😂
'Just from having all of your slots filled" THATS WHAT SHE SAID!
Linus was reading my mind this last week😂
I recently ran into an issue myself with a 3700X on an X570M board. I was using the board for a Proxmox server that virtualized TrueNAS, so I passed all my SATA controllers through to TN, bifurcated my 4.0 x16 slot to x8/x4/x4, put 2 M.2 NVMe drives and a dual-port 10G NIC in that slot, a dual 2.5G NIC in my 4.0 x1 slot, and an Intel Arc A380 into my 4.0 x4 (physical x16) slot. Needless to say it was not happy at times lol
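Just sanity-checking that split as a sketch (device names copied from above; I'm assuming the NIC takes the x8 portion, which isn't stated):

```python
# The x8/x4/x4 bifurcation described above, checked against the slot's 16 lanes.
bifurcation = {"dual-port 10G NIC": 8, "M.2 NVMe #1": 4, "M.2 NVMe #2": 4}
assert sum(bifurcation.values()) == 16  # the whole gen 4 x16 slot, carved up

for device, lanes in bifurcation.items():
    print(f"{device}: x{lanes}")
```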
In my streaming PC I have a 3060, a Cam Link Pro, and a WiFi card, which fills all the slots on my mATX board. Unfortunately the WiFi card wouldn't fit in the case due to the orientation, so I had to use a PCIe extender to route it dangerously through the case 😆😅
I build media PCs that run 3 or 4 GPUs... but they are never a problem, as they're just basic outputs, usually running at most one 1080p video per output, and 2 or 3 displays per GPU.
In fact, the only time I have a problem is with the hidden gotcha of a PCIe x16 slot that's only connected at x1.
Make sure you add some thermal paste into those pci slots, it's sure to speed up those transfers xD
What about a sound card? I love my 7.1 X-Fi card that outputs directly to my surround receiver.
Looking forward to seeing pcie backplane expansion boards that plug into the back of the motherboard. So the case size doubles for the serious power users😂
I have a gigabyte UDR3 V2 X58 system. It has something in every single pcie slot. Aaaaand!! You would never know. It all just works.
My first build was an X58 system using that same motherboard. Having the extra PCI-e slots allowed me to bypass a lot of the limitations of the first-gen i7 platform. I used a RAID adapter in an x8 slot for a RAID 10 array of 8 SATA 3 drives, because the native SATA ports were limited to SATA 2. I also used a USB3 add-on card because the few USB3 ports on the mobo were non-native and used slower 3rd-party controllers. I was also able to get two used GTX 580 cards just a couple years after they released for $250 each and ran those in SLI. The system itself was limited fairly quickly once PCI-e gen 3 came out, but having the extra lanes definitely extended its lifespan and capabilities for well over a decade. Unfortunately, after the first couple gens of HEDT the prices got insane relative to desktop, and now that Intel doesn't make HEDT, their desktop parts cost as much as an older HEDT system while being far more limited, specifically because of how gimped the PCI-e lanes are. Desktops today are no more than glorified consoles, or should be considered extreme mobile platforms at best.
Mine still has a Samsung 950 Pro 512GB NVMe as a boot drive. Used all 10 SATA ports as a "game" drive, lol. Using that many, it didn't matter that it was SATA 2, it was fast. W3690 overclockable Xeon too, at 4.6GHz for the last 8-ish years I believe.
The main reason I dislike full ATX motherboards and full tower PC cases is the huge gap left under the GPU and the unnecessary extra space the case takes up on my desk. However, I wish mATX boards weren't so cheaply built and ITX boards weren't so darn expensive.
Just like how you go completely deaf when you don't leave half the RAM slots open for your earlobes to perch upon.
When I was plotting Chia on an X570 board with a 5900X, I had 4 NVMe drives: one in the x16 GPU slot, one in the main NVMe slot, and the two others connected to the chipset, with an x1 GPU on the chipset as well. I noticed that the 2 NVMe drives on the chipset were significantly slower, until I figured out that the tiny fan on the chipset was not enough. I added an external fan blowing fresh air directly on the chipset, and then there were no performance losses anymore.
4:31 yeah, had that. On a B660 I could barely use my mouse when transferring from a 20 or even 10Gbps NVMe external SSD. I did go for Z890 this time on my new build, so hopefully that's less bad with the Z series, idk 🤷♂️
When you realize he's doing the scishow episode of how to present yourself by waving hands in 3-4 directions, taking pauses at exact moments of presenting and pretending you're in awe...
After you realize this, you can't but pay attention to those motions and ignore the fact.
Go back to your natural state of awkward presentation.
Well, my home server (an old Intel 8th gen based beast) has all of its PCI-E slots filled up. Even its lone M.2 slot has a PCI-E x4 card in it.
Dell (LSI) 8-port HBA card, Sun F80 800GB enterprise SSD, dual-port Realtek NIC, GTX 1650 Super GPU, 6-port SATA controller, and a 10Gbit network card. And all of these are actually utilized, not just dummy cards.
Well, there's one free PCI-E x1 under the GPU, if only I could get to it somehow...
REALLY need consumer/prosumer level chips to start coming with a few more lanes, and perhaps a better mix of available slots: 2x x16 (connected to the processor), 1x x8, 1 (or 2) x1, and 1x x4.
Damn it, 3:34 reminded me that I haven't had lunch yet! 🤣
Honestly didn't realize how much of a difference the number of lanes can make for a GPU. I was tinkering yesterday and moved my GPU to an x16 physical slot that runs at x4, and the game was getting 3/4 of the frame rate, with hitches and stutters down to like 8 fps, compared to the butter smooth 65-75 in the full-fat slot.
Bold of you to assume that we even have PCI Express slots to fill with how barebones motherboards are these days.