Yes, It’s Real: PCI Express x32
- Published May 6, 2024
- Check out the MSI MPG GUNGNIR 300R AIRFLOW at lmg.gg/zCGkN
You've heard of PCI Express x16, but did you know there's such a thing as x32?
Leave a reply with your requests for future episodes.
► GET MERCH: lttstore.com
► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
FOLLOW US ELSEWHERE
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
Scooby Doo and the gang unmasking this ghost as SLI/Crossfire
Ikr? idk why it had to be a whole vid
@tech-wondo4273 "Money! Ak yakyakyakyak"
Aren't those limited to x8 x8?
@@prawny12009 Not necessarily. It depends on your motherboard
I award you seven ahyuks and a guffaw.
x16 is all you’ll ever need - bill gates, probably
turns out bill is lame
hahaha
Based on a quote he never said - possibly, probably.
PCI didn't even exist yet, let alone PCIe.
Sure, like 16 x x16 it's like a 16 core CPU LOOOL
PCI-E does support x32 single link devices, even if it does not use a single socket. It is specified in the PCI Express capability.
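For the curious: the x32 width this comment refers to is advertised in the Maximum Link Width field (bits 9:4) of the Link Capabilities register in the PCIe capability structure. A minimal sketch of decoding that field from a raw config-space dword; the register value below is made up purely for illustration:

```python
def max_link_width(link_caps: int) -> int:
    """Extract Maximum Link Width (bits 9:4) from the PCIe
    Link Capabilities register value."""
    return (link_caps >> 4) & 0x3F

# Hypothetical register value with only the width field set to 0b100000 (x32)
caps = 0x00000200
print(f"x{max_link_width(caps)}")  # → x32
```

On a real device you'd read this dword from config space (e.g. via sysfs on Linux) rather than hard-coding it.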
There is also x12
Also x24 in some high end stuff.
No one ever used it, hence the removal from the latest revision
@@shanent5793 It was used in Cisco UCS for some VICs as well as other things. Also I believe it was used by IBM for specific cryptography accelerators.
@@steelwolf411 there is no x24 in the spec. Some PHI MXM cards claimed x24 but it was running in either 2x12 / 3x8 / 6x4 mode.
I'm just waiting on x64
Thank you Mr. Handsome Mustache man
Grinder called…Jk😂
His wife would agree
He really is so cute 🥰
i was thinking of the other guy when you said mustache man
Lmao
This just sounds like SLI/Crossfire with extra steps
Well, SLI is used for synchronizing GPUs, while this is just a fancy name/way to aggregate high-speed network cards
Don't forget nvlink haha
Riley Yoda needs to be a regular thing.
A pretty good slot to put in your Virtua fighter cartridge
ha! good one
Back in the day X32 meant something different to us entry level audio production folks 😂
And Sega fans.
I still use a Behringer X32 ♥️
x32 and beyond are very common in ultra-high-end modular servers. If you look at the server manufacturer Trenton Systems, they have massive PCIe array capability. Of course it's still PCIe, a migration from PCI, and that has its bottlenecks, but when you want parallelism they do it very well. (Not affiliated; just impressed)
You are mistaken, there has never been an implementation of x32, which is why it was deleted from PCIe 6.0
@@shanent5793 Weird, then why do I have a x32 NIC on my desk? It was only not used in consumer boards, it very much so exists in the commercial space. You often find them as riser cards, x48 is the highest i've personally dealt with.
I've also got an x32 FPGA dev kit sitting at my bench.
@@sakaraist If they were referring to the total number of lanes, then this wouldn't be noteworthy because RYZEN Threadripper consumer boards have had more than 32 lanes for several years already, but they're never referred to as PCIe x32 devices. Riser cards are just glue, not end devices and are out of scope.
In the case of NICs, they may have two x16 ports that can be connected to different sockets in a system to save inter-socket bandwidth, but PCIe will still treat them as two separate devices.
FPGAs could of course be programmed to implement PCIe x32, but if you want to use the hardened PCIe IP it will still be x16.
If your devices have actually negotiated a PCIe x32 link at the hardware level, I would love to know the part numbers because even PCI-SIG doesn't know about them and they're definitely not off-the-shelf
@@shanent5793 This needs more upvotes, honestly. Just because the slot can carry 32 lanes doesn't mean there must be any true 32-lane devices.
Makes perfect sense that you might make a single board that is a carrier for more than one device and use a single slot. Especially in an industrial context, where one larger slot might be better than a bunch of extra slots and little cards everywhere.
Kind of a throwback to the days of large card edge connectors for parallel buses, only using each signal line as a separate communications lane.
@@shanent5793 Wow did you watch the same video I did? @ 3:30 they show Nvidia cards using X32. (off the shelf BTW). They call it Infiniband because NVIDIA. And yes I know Infiniband is the communication standard that uses the PCIE x32 specs.. Just like NVME is the communication standard that uses PCIE x4.
1:30 the binary says: "Robert was herrr" 🤓
nice
Hey robert @@robertm1112
🤓
How do you have that much free time?
@@carabooseOG i dont lol, i just put it in a binary to text translator lol
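A binary-to-text translator like the one mentioned is only a couple of lines; a minimal sketch in Python (the sample bits are a demo string, not the actual on-screen binary):

```python
def bits_to_text(bits: str) -> str:
    """Decode space-separated 8-bit binary groups into ASCII text."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

# Demo only: 01001000 = 72 = 'H', 01101001 = 105 = 'i'
print(bits_to_text("01001000 01101001"))  # → Hi
```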
That just sounds like SLI with extra steps.
Without the proprietary connector
Why do you people keep comparing it to SLI? It has nothing to do with SLI. It is more like link aggregation.
@@eliadbu SLI needs both the PCIE lanes and an extra SLI bridge to enable faster data between the cards.
But this was from the time when PCIE wasn't fast enough for NVidia's standards.
now with PCIE 4 and 5 being as fast as it is, we mostly don't need the SLI Bridge anymore. keyword: mostly.
but in simpler terms, X32 lanes is more like using RAID 0 on storage.
@@TheHammerGuy94 In SLI, the PCIe bus is used to communicate with both devices at the same time as they work in unison to render interleaved frames; this is more like having a second card whose whole purpose is to pass communication to the main card. So it is not like RAID 0, as with RAID 0 both devices are part of an array and do the same job.
We don't need SLI bridge anymore because SLI is pretty much a dead technology.
0:18 You guys worked really hard on this shot; you probably should’ve stayed on it longer. 😂
So... This is just SLI?
Had the same thought hah!
No, it is not
Sli for non gpus
Not really, just a fancy name of link aggregation for, mostly, network card
Kind of, but not really. It is using similar methods, but it's not exactly the same. This is probably closer to what is done on AMD's workstation cards with being able to attach a display synch module between multiple workstation gpu's to output as a single monitor signal with one of these: AMD FirePro™ S400 Sync Module for instance with AMD's workstation cards. (Nvidia has their own version, but I don't know the details.)
If you look at the card shown by Riley in the video, you'll see that cable connecting them. I'm not sure of its exact connector specs, but it will be somewhat similar in nature to the JU6001 connector that can be found on the AMD WX series cards. Sometimes it's populated with an actual socket/port, sometimes not.
Essentially, if I understand correctly, instead of the cards sharing the intended workload between them, they are each doing their own work, or perhaps shared work in some cases, and outputting it all to the same monitor. It's a subtle but important difference, because SLI/Crossfire is typically used for splitting workloads between GPUs to get a better end result, whereas display sync (as I will call it for now) is more about combining separate or even shared workloads into a single tangible visual result.
That sync card is effectively doing what Riley explained about the x32 setup, and the asynchronous data streams typical of PCI compared to when they are... well... synced.
Maybe not the world's best explanation, but I hope it helps.
... You telling me I don't need it? I'm an American. I don't need multiple 64-thread EPYC servers. But I got 'em, and they got 128 PCIe lanes each!
SLI and crossfire failed back then, but with modern high speed interconnect tech, I think we can bring it back.
When it comes to gaming, it wasn't about the interconnect. It was about the sync between the two which had frame lag.
SLI or Crossfire will never make sense. It didn't back then, as it's difficult to get working at all, much less smoothly. The best case is to have all chips and memory as close to one another as physically possible. Considering we regularly see a 30-70% uplift in GPUs just 1.5 years later, you're better off throwing out your old flagship and getting the new one than trying to mate 2 together. It will use more than 2x power and deliver much less than 2x performance. I get that this was probably mostly a joke, but I am just here to bring the real world to the discussion.
Marketing just needs a way to spin it and we'll be buying 2-4 cards again for no reason again.
Yes, I'm just not happy buying one $1000+ GPU. I want to have to buy multiple $1000+ GPU's!
Nvidia would much rather you buy a 1200 dollar 4080 than two 300 dollar 4060s
Simplifying complex tech stuff like PCI Express x32 - just brilliant. Keep up the informative and clear tech explanations.
MSI tech support are the worst in the industry. You know what they told me? This is verbatim: “We don’t troubleshoot incompatibility”
😂
Really?? In the past, MSI has always had the best warranty and repair service. I had a video card that was displaying weird corrupt garbage after like 6 months, and they replaced it at no cost. I had an MSI laptop that I smashed the screen by shutting the lid on a pencil, and MSI replaced the screen under their one-time replacement warranty. But that was years ago, so I'm guessing things have changed?
I don't know; my 2017 AM4 motherboard is still getting BIOS updates as of January 2024, which was necessary for me to swap the original 1080Ti with a new 4070 I got last month.
@@simongreen9862Can we take a moment and ask the question why the fuck isn’t UEFI/BIOS firmware open source? Really should be.
@@5urg3x I agree with you there!
The Mellanox NICs also allow them to be connected to PCIe lanes from both CPUs. It levels out the network latency by not requiring ½ of the traffic to jump an interprocessor link to get to the NIC.
I glanced at the thumbnail and thought it was about a new longer x series barrel for p320 for some reason.
Now let's wait for x64
And then maybe x86?
GPUs are getting so wide these days, they might as well support PCIe x32
LTT is like the MCU where this video is just setting up the next home server episode.
Doesn't the most recent Mac Pro have a double PCIx16 link to their custom AMD GPU?
"I'm sure some of you are already thinking of ways you can justify your purchase." Wow, calling me out just like that?
In the early 90s even simple sound cards needed the ISA slot... and were long and beefy.
A 32 lane PCI bus, awesome! GPU card makers can use it for their premium cards...and only use four lanes. Awesome....
Each time someone buys any current Intel LGA 1700 board and adds an SSD, the GPU slot gets bumped down to ×8 anyway, leaving 4 of the very few lanes you have useless.
AMD has the same thing; in practice, expect a card to always run at ×8.
Then in any bench you see comparing ×8 to ×16, there is minimal to no difference unless you go down a generation.
Just make the GPU link on desktop ×8 by default and make room for 2 more NVMe slots.
@@jamegumb7298 AMD does not have the same thing.
Unless it's a dogshit motherboard, you can have 2 NVMe slots at full speed at a time and an x16 slot.
It's when you go past 2 that you run into issues, as you're either adding multiple drives to the chipset or you start stealing lanes.
Without any performance loss from conflicts, you can have these configurations on AM5:
2x NVMe + 1 x16
4x NVMe + 1 x16
@@jamegumb7298 eh, on AM4 I have 2 NVMes, one on gen 4 and the other on gen 3. My GPU is still on 16 gen 4 lanes, and iirc AM5 only increased the lane count
This video reminded me of SLI. The physical setup looks identical, you got two devices occupying two PCI Express x16 slots and have an extra cable/connection between the devices.
x64 just needs 5 more to work properly...😏
In FPGAs it's fairly common to see x32. Microsoft had a board that allowed you to control two FPGAs with these lanes; the trick was that even though it was an x32, it was actually emulating the connection between two x16 links by readdressing the lanes.
lmao 2 4090s on one card would be absolutely insane
Nvivia Titan Z 2024 Edition
There was a time when stuff like that did get made. I bought a Dell that was meant to have a 7950GX-2, but it arrived with an Ati card.
The cooling would be problematic; it would need a 360 mm radiator, maybe a 420 mm. Though I guess if you can afford one, the cooling and power costs won't matter! The big problem with SLI is that memory becomes a bottleneck. The two cards' VRAM doesn't add together, so 2 × 24 is still just 24. It would need like 2 × 48, which would be insanely expensive.
Cisco iirc has a PCI-E x24 for their MLOM + NIC (they may call it a VIC) on some of their stuff.
last intel mac pro had two 16x slots combined for dual gpu amd cards. I guess technically that’s 32x
It's not. Many servers have a long slot for holding riser boards (e.g. 3 cards in a 2U rack-mount server), but those are NOT single-device slots. Same as a dual x16 for dual GPUs is not a single PCIe device.
Reminds me of those old school gargantuan 16bit ISA slots used to overcome speed limits.
I am glad that we got SLI PCIE before GTA6.
3:58 I like that the connector is labeled as "black cable" even though it's not black.
that was the best quickie I've had in years.
"Not fast enough? Just add more lane!"
In the early 90s, I had a custom Orchid super board with an Orchid Fahrenheit 1280. It's a 32-bit VESA Local Bus card. All my friends were jealous of its gaming performance. But it didn't get accepted mainstream.
Just a thought 💭🤔: if you want a super small SFF build, the motherboard usually has only 1 PCIe slot plus some NVMe slots. So if you had 1 x32 PCIe slot, you could have 1 card that carries the GPU, SSDs, a dedicated NPU, a 10Gb NIC, etc., all in 1 expansion card, especially with one side of the PCB for the GPU and the other side for the NPU, SSDs, NIC, and all the other hardware you want. It would make for very capable SFF builds, or very, very tidy full-size builds with only the motherboard, CPU, cooler, RAM, and 1 expansion card that's a mix of all kinds of different hardware. So as much as we don't need x32 PCIe lanes for general hardware, the idea and the x32 slot could definitely be put to use.
0:26 that’s what he said.
I KNEW IT, i was sure I've seen an oversize PCIe somewhere!
I've seen server motherboards with x24 physical slots that just connect to existing PCIe switches.
so x32 is just two x16 in a trench coat?
X32 is often used to connect 2 server nodes together
0:00 woah MSI Z68A-GD80 that was my first ever gaming motherboard that baby Linus is showing.
The end pointing the reference to Linus got me dead 🤣🤣
There are also OCP ports
My god that segue reminded me of STEFON in SNL... The MSI MPG Gungnir 4000 Battleflow Monster Extreme has EVERYTHING!
Wait a minute. That binary looks suspicious, all starting with 01 or 011. It's ASCII! Quickly, someone translate it!
Edit: I've noticed some binary as 01000000, which isn't a letter, but it is 1 away from capital A. But a huge majority of the stuff looks like readable letters.
Thanks for the video!
Of course I knew, the server in my basement has two of them... although it just uses them for risers with different slot setups.
I feel the bandwidth could be used by an SFF with some sort of breakout expansion slots.
it still cracks me up whenever someone says dada instead of data
I would love it if using one PCIe slot didn't disable another. I don't think we're ready for the jump to x32 until this bandwidth limitation for lanes is addressed.
That limitation doesn't exist in the products that use x32. Desktop CPUs may only have 8-24 lanes, but server chips have hundreds.
@@rightwingsafetysquad9872 True. Old server processors have WAY more PCIe lanes than even top-of-the-line modern desktop processors (PCIe 3.0 though), and if that isn't enough, just get yourself a dual-CPU system.
I was just wondering this yesterday
1:47 The way you explain that sounds a lot like SLI graphics cards.
Every time you say 'Dad a Center', a piece of my soul dies.
Riley sounds like the announcer from the price is right when he does his sponsor bit
Now we just need Desktop Chips to actually provide a reasonable amount of lanes so we can have 4 or more X16 slots
After you said "beyond 16 lanes..." my pc froze for a moment. LOL!
Technically, a PCIe gen 5 x16 slot is like a PCIe gen 1 x256 slot
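That back-of-the-envelope comparison holds up surprisingly well once encoding overhead is included; a quick sanity check in Python (raw signaling rates and encodings are from the PCIe 1.0/5.0 specs; the "x256" equivalence is the commenter's rounding of ~x252):

```python
# Effective per-lane throughput in Gb/s after encoding overhead
gen1 = 2.5 * 8 / 10       # PCIe 1.0: 2.5 GT/s with 8b/10b encoding -> 2.0 Gb/s
gen5 = 32.0 * 128 / 130   # PCIe 5.0: 32 GT/s with 128b/130b encoding -> ~31.5 Gb/s

ratio = gen5 / gen1               # ~15.75x more throughput per lane
lanes_equiv = round(16 * ratio)   # a gen 5 x16 slot ~ a gen 1 x252 slot
print(round(ratio, 2), lanes_equiv)  # → 15.75 252
```

On raw transfer rates alone the ratio is only 12.8x (32 / 2.5), since gen 3 moved to 8 GT/s rather than 10; the 8b/10b encoding tax on gen 1 is what pushes the effective ratio up to roughly 16x.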
It needs the bits on all the lanes to arrive at the same time. Routing 32 lanes × 2 (each lane is a differential pair), 64 traces all to the same chip, is very hard; all traces must have exactly the same length, or there will be delay penalties.
so... every SLI rig was running x32 all along?
The last time I saw a product with an x and a 32 next to it was in 1994.
That didn't go well!
Here is hoping this is not a gimmicky in-between product and is an actual leap into the future.
#SEGA #32x
Miss my A8N32-SLI; the FSB would post above 340, nuts. The board handled anything I tossed at it back then. Will be missed (oh, it's still in a box... memories)
need all that sweet x32 for the next great A.I. film, music, art and book creation app / bit miner.
x32? Meh.
x32 RGB? Oh, HELL yeah!
wonder how many years it'll be before PCIE X16 is phased out..... remember how long AGP slots lasted for...... only time will tell..... and who knows what it'll be replaced by....
I mean physically the connection maybe phased out but i think it's very unlikely that the pcie itself will be phased out too
AGP only lasted for about 13 years, 1997 to 2010.
Pci-e is currently at 22 years. launched in 2002.
As far as when it might get phased out, when ever it stops being able to handle the data we need to transfer.
Maybe 10 to 15 years on the current trajectory.
OR it may wind up like USB and never die.
lol
@@chrisbaker8533 On desktops possibly, However PCIE is a core component of a metric shitload of embedded systems and fpga dev boards.
So, is this the future of SLI? 16 lanes talking between the GPU’s on the board and 16 lanes talking to the cpu? From each gpu?
Would a dual gpu card benefit from the x32 possibly allowing for more gpus in a smaller space in a sever?
*Looks over at the EDSFF 4C+ slot, a PCIe x32 slot in wide use in server PCIe cards.* I guess we won't tell him about you.
Sounds like the old SLI at full speed
Video idea- Usb-c Explained: Everything about the Usb-c type and all its types!
0:05 was the only B-roll available of a motherboard with PCI and PCIe slots?
I'm still surprised optical connections aren't used yet (again? (S/PDIF)).
I'm expecting USB (or whatever apple calls it next) to have a fibre down the middle in that tiny blank part of the C connector at some point.
Bend insensitive SMOF is cheap enough now that is plausible at scale. SFPs are getting there too.
So it's Crossfire/SLI for server network cards basically.
Sooo, if on x32 the PCIe devices talk with each other, can't you do SLI with it? Wasn't the problem that NVLink was too slow and they couldn't really communicate?
this will be useful for the upcoming intel cps and nvidia gpus
With 3 kg+ graphics cards, a longer slot would be a good idea, as long as it can still accept x16. Extra power too; maybe a 4050 could work without cables, and no card droop.
Im casually waiting for PCIe x64
GPUs with two pcie slots coming soon 🗿
Imagine they bring back SLI/Crossfire via PCIE-6.0 x32. Imagine 2-4 5090s or 8950 XTXs in one rig pushing 8k 4+ rays and 4+ bounces path tracing at 120fps.
I've always wanted to run my 10g nic's in SLI!
Pour one out for the man-hours spent on the 1-second star wars clip at 0:18. Worth it.
Sounds like SLI but with extra steps
That ending was great 😂
1:25 PCIe is almost a network
So using this system it would be theoretically possible to have two linked x16 slots with an RTX 4090 in each; this would give a backdoor form of SLI...
aha the last time i did driver binding was to bond 2 56k dialup modems together into 1....in 1999
Why wasn't the number of available CPU lanes mentioned? Mine has 24 lanes; there's no x32 possible, 2× x16 or not, am I wrong? One will be x16 and the other x8.
Kind of ironic since Intel's current LGA 1700 platform is pretty bad regarding PCIe flexibility, for example not being able to do PCIe bifurcation.
Yeah x4 and x8 used to be a lot too. In less than 10 years we'll be seeing more x32 things.
At the end I somehow thought Riley was going to say QuickTechy.. ah, maybe next time.
The X is referred to as "by" though. So a PCIE x4 is called PCIE by 4 and so on
Thank you. It's like when people refer to camera zoom, e.g. 4x, as "4 ex". Infuriating
@shall_we_kindly It’s a multiplication. Is 5 x 10 “five ex ten”?
And justify I must!!! 4:43
0:17 _But there is another_
Shouldn't Yoda have said _But, anotherone there is_ ?
my mum walked past and asked if I was watching something with Steve Carell and I'm never going to unhear that.
You could just make the video card with a ribbon cable to another PCIe slot. GPUs are already double-wide.
So sli with a new name and broader use
Well, if you have 128 lanes per package with your EPYC CPUs...