I want a girl to look at me the same way Dr Cutress looks at the PCIe board 14:09
Excellent video!
You should be looking at NoC instead of a girl, Mr. artemis. 😂
Buy her some Chips and Cheese 😅
My issue is, my GPU takes up many pcie slots….
Same. PCIe Switches and optical transceivers to easily move stuff away from the slots would be great. Electrical cables are inviting issues when looking at PCIe Gen4 or faster.
Now that few people have hard drives and DVD drives in the front of their computers anymore, it would be interesting to mount a GPU in the front, or maybe the top, as if it were a radiator.
Yup, super frustrating when setting up homelab servers. Like a game of Tetris trying to squeeze GPUs, high-speed networking, and storage controllers onto some motherboards.
Would love to see OCP come downmarket. Make the rear-panel modular so I can swap out 75% of the USB ports that I don't need for an OCP NIC.
watercool it.
Sounds like you need a mining motherboard. Some have 7-8 slots spaced 3-4 apart.
7?! I'm still on pcie3, which doesn't even saturate my rtx3090 (at least not for the 4k games I play).
It is truly wild just how much further the server and AI markets have gone and totally left the desktop sector behind!
What's wild is that PCIe 7 will be able to do in 1 lane what your 16X PCIe 3 is doing today. Roughly speaking.
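For anyone wanting to sanity-check that claim, here's a rough back-of-the-envelope sketch from the published per-lane transfer rates. It ignores FLIT/protocol overhead on Gen6/7, so real usable figures land a few percent lower:

```python
# Approximate unidirectional bandwidth per PCIe lane, by generation.
# Rates in GT/s come from the published specs; Gen3-5 use 128b/130b
# encoding, Gen6/7 move to PAM4 with FLIT framing (overhead ignored here).
RATES_GT = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128}

def lane_gbs(gen: int) -> float:
    """Approximate GB/s per lane, per direction."""
    encoding = 128 / 130 if gen <= 5 else 1.0
    return RATES_GT[gen] * encoding / 8  # bits -> bytes

print(f"PCIe 3.0 x16: {16 * lane_gbs(3):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 7.0 x1:  {1 * lane_gbs(7):.1f} GB/s")   # ~16.0 GB/s
```

So a single Gen7 lane really does land within a couple percent of a full Gen3 x16 slot.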
It may not saturate the bandwidth, but your 1% lows and 0.1% lows are way better on PCIe 4, and even more so on PCIe 5 (your mileage may vary depending on the game)
bruh im on pcie 3.0 with a 4080S/10900k 😅
@@thor.halsli I certainly wouldn't say that as a blanket statement but yes it's far more likely to be the case for gen3 than going from gen4 to gen5 (on existing hardware).
This isn't for you. Leading edge bus design is not being driven by desktop PC applications anymore.
I saw that sneaky transition at 1:40… you can’t hide from me.
This is a fascinating interview, and your guest is extremely knowledgeable, professional, and well spoken. We thank him for his contributions. Great job! All aboard the PCI Express!
No sarcasm intended, but I do love this channel. Sooo much better than any Linus Tech Tips hot-takes on any of the covered technologies.
Thank you Ian for this very original and clean interview.
Ian why are you so pale? Are the camera settings ok?
He's from the North of England where the sun doesn't shine. We are all like Gollum up here.
I would feel nauseous too thinking about all that compute
Heh, I filmed this after just landing at the airport from an 11 hour economy flight. Was quite tired, but dedicated!
Ian couldn’t star in a contemporary Tolkien adaptation for “modern audiences”.
Ian is the white-balance :)
Just skip PCIE 6 at this point
PCIe 7 on desktop would mean fewer lanes needed for each device, so more NVMe and PCIe slots could be added.
Assuming lane counts don't decrease as less of them are deemed necessary.
there's no skipping Gen6, Gen7 isn't even a completed spec yet, and Gen6 is only JUST in data centres, this stuff takes time to trickle down to consumer devices.
PCIe 7 isn't even finalized as a spec yet. No way they will be skipping 6. It's impossible with the timeline.
My guess is that the server and desktop will bifurcate. Desktops will remain electronic because there is only one card generally so converting signals twice is a waste of time. Servers will need the flexibility.
@@myne00 maybe eventually, but also in-silicon optics are developing really well. We may see fiber optic links within consumer desktops in our lifetime
If I understood correctly (not sure of the terminology), they managed to keep 6.0's signal integrity requirements only a little stricter than 5.0's. Are they succeeding in doing that with 7.0 as well?
I am no engineer, but I have been using this technology for over 40 years. The next paradigm shift is photonic circuitry. Electronics are too slow, too power hungry, and too expensive. There is a physical limit to both node design and interconnectivity. The faster you go, the more power is required.
I have been looking forward to development in that area, but that's like 15+ years away, and Intel, which was heavily involved, just had huge setbacks, so I don't see those things coming anytime soon.
When I was a student in electrical engineering in the early to mid 1990s, we had many discussions that were basically what you said. We thought PCs would not be able to get much over 100MHz because of these limitations and that we would have to move to optical circuitry to overcome them. Now, 30+ years later, look where we are...
I still remember Intel cucking us with photonics, before Thunderbolt came out it was said to be using optical cables (hence the codename “Light Peak”).
Silicon photonics is in active development right now, and you can see the activity and maturity level by looking at the wafer probing companies such as FormFactor and its offerings in this space. The customers are there. Photonics was technically viable but more expensive than copper back in the Light Peak era, but copper solutions are now becoming more expensive as signaling rates increase. With AI and other high data-rate applications bringing in the money, photonics has a real market to serve where copper cannot compete.
@@drescherjm Yes, but TBF many also didn't totally believe that, and there were clear advancements even just in manufacturing that were obviously holding a lot back. Then in the early 2000s Intel said 10GHz chips could be on the horizon, and look how that went. There's still certainly a way to go (mostly up), and we'll certainly see efficiency gains, but there are some clear indications of where constantly pushing for more and more raw performance is going to become less and less practical on multiple levels.
If my machine is learning it better be bringing good grades home
Feed it top quality organic electricity then, you don't want it throttled, do you?
Setting a standard is a long way from working production implementations. I will be surprised if v6 can maintain the generational speed doubling without a large number of strings attached.
v5 hit substantial signal integrity issues in practical commodity production. It's why you see so many boards that offer a combination of v4-only slots with only one v5 slot, despite the hassle and BOM cost of adding extra control chips, and why the v5 slot is physically close to the CPU.
consumer platforms have budget restrictions that commercial datacentres do not.
@@_yuri data centers also have higher operational requirements, they don't spend money for fun.
PCIe v3 and [with some limits] v4 were able to link between rack enclosures with cabling but the increased frequency of v5 has reduced maximum length and latency requirements such that it cannot extend beyond the enclosure even with switches and signal boosters. So they can't use it for internode communication.
16:15 thats so exciting, i cant wait to see that.
Bandwidth is clearly always important and improving, but how would using optical communication for PCIe affect latency / ping time? Wouldn't converting from electrical to optical and then back to electrical result in longer ping times? Or can this be done without an impact, or possibly even faster?
There was a mention of building optical PCIe into chips - if it can be faster with lower latency, I wonder if optical connections would ever replace some (or even most) pins in future generations of processors.
"All mips and no I/O" really has come a stretch since old mate Seymour was at Control Data.
and ordered transistors only by millions from Fairchild 😀
Hi Ian,
I have a question on SerDes: what parameters do hyperscalers or buyers use when choosing between vendors for the same configuration?
E.g. at 112G PAM4, whose SerDes gets chosen from snps/cdn/awave?
In short, GTA VII will load in 0.05 seconds
Finally optical! Hopefully this will trickle down to DP as well; 4K 240Hz or higher is hard to support over long distances
Come on, just gimme transparent optical transceivers for any kind of PCIe connectors or slots and power-efficient PCIe Switches.
Is that too much to ask without the stuff costing an arm and a leg?
Currently, yes. It's too hard to cheaply make the optical-electrical interfaces. High-speed and high-bandwidth lasers, modulators, receivers, and so on are still expensive. Some of that, from what I hear from customers, is the rapidly changing methods of packaging. For example, do you edge-couple or surface-couple the optics? What process are you using for the optical chiplet vs the logic? Who is making the best optical chiplets right now and what is their roadmap for die-shrinks and costs? It's not just the process node that matters, but the advanced packaging offerings. Mixing and matching GlobalFoundries silicon with advanced packaging from TSMC or Intel is possible but adds costs/complexities.
Just look at the price of 100Gb Ethernet SFPs.
That's effectively what you're asking for. The layer-2 protocol can be whatever you want, including PCIe.
Is there a reason why the cheaper kind of transceivers like 100 Gbit QSFP28 models couldn’t be used for this purpose?
@@abavariannormiepleb9470 that's really what the optical PCIe implementations used. It's very possible but much more expensive than a simple slot connector. Not enough demand for a hundred dollar cable? I'm totally guessing on price, though. It's hard for me to estimate what it would be if mass market adoption happened.
good info! thanks.
I'm looking forward to consumer devices using fewer lanes; I want more flexibility in expansion cards in my PC. I know I'm weird, but I want to keep a FireWire card and a TV tuner card running without impeding the bandwidth my gfx card needs, plus in the era of M.2 I want room for multiple drives too. For Gen 4, that's roughly 32 lanes, and Gen5 SSDs can be nice, so mainstream Gen6, if well implemented, should be where things get interesting, just giving everything half as many lanes. Gen7 is exciting though, just a long way off still.
"Gen 5 SSD's can be nice" What are you doing that actually requires 32 GBytes/sec read-write? Thats just shy of 2 TB/min.
Watch every single TV channel there is at the same time?
@@MrHaggyy Most SSDs already only use 4 lanes. So in this case a Gen5 M.2 would be 4 lanes. But then a Gen6 version would only need 2 lanes for the same speed.
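That halving is easy to verify from the per-generation line rates; a minimal sketch, assuming a Gen5 x4 M.2 drive (~15.75 GB/s) as the target and ignoring FLIT overhead on Gen6/7:

```python
import math

# GT/s per lane by generation; Gen3-5 use 128b/130b encoding,
# Gen6/7 use PAM4 + FLIT framing (overhead ignored in this sketch).
RATES_GT = {4: 16, 5: 32, 6: 64, 7: 128}

def lane_gbs(gen: int) -> float:
    return RATES_GT[gen] * (128 / 130 if gen <= 5 else 1.0) / 8

target = 4 * lane_gbs(5)  # what a Gen5 x4 M.2 drive can move, ~15.75 GB/s

for gen in (4, 5, 6, 7):
    lanes = math.ceil(target / lane_gbs(gen))
    print(f"Gen{gen}: x{lanes}")
# Gen4: x8, Gen5: x4, Gen6: x2, Gen7: x1
```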
One of the challenges I see is the amount of hardware we can connect to a desktop CPU due to the limit of lanes.
I wonder if we can keep the same or even greater number of lanes with future versions of PCIe, allowing devices to require fewer lanes to achieve the required performance, making it possible to set up more devices in a consumer-grade system.
Is this an unrealistic expectation, or could we see this happening? Since I assume we do not require some of these data rates for consumer-grade hardware.
Nvidia Blackwell already comes with a PCIe 6.0 interface. But yeah, technically not a released part.
I just want my devices to connect to the CPU via fibre optics. Replace the x16 slot with a row of optical connectors that can each (or more if more data is needed) connect a single device.
Mini-ITX becomes normalized and isn't priced at a premium, PCIe slot layout is completely removed from the buying equation, devices aren't physically connected to the board and can be moved about the system based on case choice, no need for high-quality PCIe copper traces/added board cost, etc, etc.
I'm more excited about the improvement in latency it will have
loved the talk... very interesting to see the importance of $SNPS. kudos to you, maybe over time they'll change it to Cuda s ...
I would like to see a paradigm shift in motherboard design: a hybrid electrical/optical build. Continue with cards or modules, but only provide power; the PCIe signaling would be carried over optical fiber, eliminating signaling over copper traces on the motherboard except within the cards or modules themselves. Theoretically, you could swap out an Intel CPU module for an AMD one, or even an ARM. PCIe over fiber is not a new technology; it is available today, just not in this form at the consumer level. There are many pros to this approach, such as reducing manufacturing costs (less copper) and e-waste. Hardware repair is as simple as a card or module swap: true plug and play.
I wondered if the switch to glass as a substrate would alleviate signal loss
i really want better I/O
I think that for a long time I/O has been teetering on the edge of "just barely enough", but over time, more and more things started fighting for those few lanes. NVMe made PCIe Gen 3 useless: it can't even max out a single drive, let alone when you need to expand and they end up eating more lanes and ruining your GPU's performance. I mean, it's not like our CPUs have infinite PCIe lanes, so the few that are there had better be fast enough to hold the most powerful GPU with only 4 lanes, so we have some to spare for other things. Like more GPUs. Or NVMe drives, or WiFi cards.
Connecting a GPU using fiber optics would be great, and noise would be very low over a very long distance
Green screen chat edit ftw! Still, thanks, it's always great to have some insight to what might be coming up. Cheers!
(This is better than BG3! ❤)
can't wait for this to hit mainstream in 2037
if you can afford it 🙂
@@yxyk-fr that won't be an issue
Can't wait for the 1Tbps PCIe 10 (it will change nothing for us)
great interview thanks
When does fibre start to be used instead of copper….
never, because everything would need a transceiver to convert it back to an electrical signal for the transistors
Awesome job! I would like to see an interview with someone responsible for CAMM2 memory development. I think it's the future for desktop.
Most people really don't understand that this is mainly designed for data centers and not consumer devices.
15:25 it's all in the details
Up to 256 GB/s via an x16 configuration... That might actually be more bandwidth than Nvidia's upcoming 5060 Ti's memory interface will provide.
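The 256 GB/s figure falls straight out of the line rate; a quick check (raw and unidirectional; FLIT framing for FEC/CRC trims a few percent off in practice):

```python
# Raw unidirectional bandwidth of a PCIe 7.0 x16 link.
gt_per_lane = 128                   # 128 GT/s per lane (PAM4)
lanes = 16
raw_gb_s = gt_per_lane * lanes / 8  # bits per second -> bytes per second
print(raw_gb_s)                     # 256.0 GB/s, before FLIT/FEC overhead
```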
I could imagine a card-edge connector solution in the form factor of the previous PCIe x1 that connects power and optical data. These would then connect to on-package photonic transceivers. Should make for a very neat inside-of-server solution. Interested to see what physical solution the different companies come up with.
I used to work with telco network equipment in which the card backplane connectors are exactly like that. The Tellabs Titan 5500 digital cross-connect is one example.
Did you just sneakily namedrop Cadence at 7:45? ;-)
my reaction too ! 😀
More of THIS!
That's a great question
That's a great question
That's a great question
Like talking to bloody Alexa
Synopsys has birthed her son 😛
yeah, I had the same feeling, like he's talking to a politician ... all words, no contents 😕
SamTec - Bulls Eye
Bulls Eye Rugged Solid FEP Dielectric, 25 AWG Microwave Cable Assembly ?
I wish that you would take a few seconds to explain some of the jargon and acronyms you are using. I can guess what RTL stands for, but I'm probably wrong, so maybe at least spell out the acronyms.
The content has to be tailored for an audience, you can’t make it for everyone or Ian would still be there explaining what a transistor is.
high-performance compute
thats cool. pcie 7 is already around the corner. bus speed and clocks are a huge bottleneck to ai
ah yes, now motherboards and pcie switches can continue to skyrocket in price.
Came for a video about PCIe, got a video about AI :-/
I get that you're excited about the technical and performance aspects of AI: its hardware, its possibilities, its training, and its inference. But I think we're quickly running into a moral dilemma with how quickly it's spreading.
Not only do we have these models scraping human ingenuity, writing, artistic, and musical talents (often WITHOUT the artists' knowledge, let alone consent), but it's doing it with the full intent of its end purpose being to *_REPLACE_* these artists for the soulless corporate suits looking to save a buck and no longer employ said talent.
I just find it baffling how many people mindlessly fantasize about what AI can do _for_ them without taking into account what it can do to _hurt_ them once greed sets in. And when has greed ever _NOT_ set in?
I have missed 6.0 :)
if we're not at 6 yet and almost done with 7, then 7 becomes six! you start again and begin research on revision 8, which is now 7!
and 7 ate 9...
...
... OK 😛
I'm still waiting for GPUs where you can upgrade the memory, which would make GPUs cheaper and more future-proof. Will a future bus like PCIe X help solve this?
4 24pins? weird stuff
I don't give a f what data centers and AI mumbo jumbo need to interconnect 6k GPUs together faster; if the technology doesn't bring benefits to normal hardware for normal consumers, it's only propaganda to make money on AI investments for server companies, not for normal people.
We have Nvidia GPUs that can't saturate even PCIe 3.
If we have GPUs costing 2k with a destroyed normal consumer market, it's because of those AI M.F.
None of them will see a single $ from me.
I am just going to put this here for those still wondering: PCIe 3 is absolutely fine for every use case in storage, and even the best GPUs barely lose 5% if they are installed in a PCIe 3 slot...
PCIe 4 might make sense, but it's mostly for vanity.
PCIe 5 is overkill for the next 6 years for sure.
Absolutely! Maybe it is different with the 4090 and next-gen cards, but PCIe 3 is more than enough to not be the bottleneck for my 3090. I mean... what kind of RAM and SSD RAID array would I need to make PCIe 3 the bottleneck in any workload on my computer lol.
All of this is wild to me!
From the enterprise side this is clearly wrong as a single 100G connection requires x16 PCIe3. Even on the consumer side this is becoming progressively less true, especially as we are starting to see things like x4 or x8 GPUs and the latest NVMe drives can definitely saturate PCIe3. Maybe that won't result in huge improvements to game load times, but I do expect it'll matter as things like Direct Storage take off. Something like PCIe5 opens the door to letting a consumer ~20 lane CPU run 2 GPUs (e.g. a GPU and AI card) at x8 and performing like a "normal" x16 PCIe4.
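The 100G claim checks out with simple arithmetic; a small sketch, counting line-rate traffic only and no PCIe protocol overhead:

```python
# Does a 100GbE NIC fit behind PCIe 3.0? Line rate is 100 Gb/s = 12.5 GB/s
# per direction; a Gen3 lane moves ~0.985 GB/s after 128b/130b encoding.
nic_gb_s = 100 / 8
gen3_lane_gb_s = 8 * (128 / 130) / 8

for width in (4, 8, 16):
    bw = width * gen3_lane_gb_s
    verdict = "fits" if bw >= nic_gb_s else "too slow"
    print(f"x{width}: {bw:.1f} GB/s -> {verdict}")
# x4: 3.9 too slow, x8: 7.9 too slow, x16: 15.8 fits
```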
PCIe 7.0 is definitely not yet needed for home or small business customers. If you'll note, Ian and Priyank both focused on business use cases like the big-AI training farms. These are billion-dollar clusters where you need high bandwidth (terabytes/second) and low latency (tens or hundreds of nanoseconds).
Applying 'consumer' logic to 'enterprise' (or the real bleeding-edge scientific world) is not really suitable.
1GbE is fine for consumers, 2.5GbE for enthusiasts, but in the SME I work in, 100GbE is regularly used, and for large tech enterprise and datacentre stuff, 400GbE is not uncommon. I'm using network interface speeds as an example, as NICs hang off PCIe just as much as anything else like RAM and accelerators.
PCIe isn't a consumer standard; it just so happens consumer hardware uses it.
Gaming technology and server technology used to be fairly closely related. New leading edge server technology is no longer applicable to end users on home PCs. This is not being driven by PCs and isn't designed for them.
too bad we can't skip 5 and 6...
Maybe, just maybe, they should focus on the DRAM situation. We do have fast enough interconnect (PCIe), but we seem to struggle with DRAM latency.
DRAM is something that is "stuck" due to physics with the current approach. Latency and cell size appear to be at their physical limits unless we can change the fundamental design. Thus, we get more and better caching, better coherence, and more bandwidth as a poor replacement.
@@Brians256 well then that's a billion-dollar business area. From what I have read, it looks like MOSFETs are in use. I wonder why we can't just make the RAM of type SRAM (flip-flops)?
is there even research in this area? Wikipedia doesn't seem to say much.
@@peppybocan Are you asking about memory research? There's loads of research and engineering. Engineers refine the current approach at each process node, and you can safely assume that serious money is invested into making each node as productive as possible. Research science is done on many different types of memory (e.g., magnetoresistive, phase change) as well as shifting some compute inside the memory modules.
@@peppybocan there is a ton of R&D, but not for Wikipedia publication.
cambrian period for ai 😎
PCIe is a big mistake. The CPU, memory, and GPU should always be fused together. That's faster by orders of magnitude, as with Apple's fused CPUs.
You can't build a data center on a chip.
@@karl0ssus1 You'll see in a few years.
@@jamesjonnes I doubt it. Even if SoCs did become dominant in the consumer space, at the data center level they would still be trying to link these theoretical monster SoCs to achieve even greater compute power.
Color is a bit off :) Or a vacation is needed.
Remember to take vitamin D.
There's a lot of hype around ML but let's be honest, it's not actually doing anything particularly useful for most people today. We're probably going to spend at least the next 5 years trying to turn the crap off, like Recall on Windows, or AI-generated BS being given in search results instead of the actual results.
Cough cough... Hifi audio cable
The model with big datacenters won't work! No company in their right mind lets their employees put company data on the servers of such providers.
What is the timestamp?
This is what monopoly leads to: Wi-Fi speeds have increased 1000x, while PCIe speeds went up 8x. I connected 6 hard drives in RAID 0 to 6 SATA ports!!! and got a cached read speed twice as high as the transfer speed to my Nvidia 3090 PCIe card. That is the main achievement of the two companies Intel and AMD: global stagnation in everything. That's how any monopoly or market collusion works. For 30 years the industry has been treading water, talking, and wasting money on all kinds of nonsense.
"Light is faster than electrons" is a correct statement to make, but totally irrelevant
Electrons propagate energy at about 270,000 km/s; light travels at about 300,000 km/s. That is somehow "close", so comparable at those scales.
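Taking those two numbers at face value, the difference is negligible at board scale; a quick sketch of propagation delay over a hypothetical 30 cm trace or fibre:

```python
# Propagation delay over 30 cm at the two speeds quoted above. The
# 270,000 km/s copper figure is the commenter's; real PCB traces are
# often slower (roughly half to two-thirds of c, depending on dielectric).
LIGHT_M_S = 3.0e8
COPPER_M_S = 2.7e8
distance_m = 0.30

for name, v in (("light", LIGHT_M_S), ("copper", COPPER_M_S)):
    print(f"{name}: {distance_m / v * 1e9:.2f} ns")
# light: 1.00 ns, copper: 1.11 ns -- the medium barely matters at this
# scale; serialization and electro-optical conversion dominate latency.
```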
They're going too fast... It's annoying. PCIe 3 lasted probably too long, PCIe 4 was too short. I would suggest most of us are thinking of GPU saturation of the bus rather than other uses like SSDs etc. My PC is using PCIe 4, the previous one was PCIe 3. I don't want GPUs to be utilising PCIe 5 or 6 in a couple of years, as it means that upgrading the GPU would mean throwing away the whole system. Just make the standard and let it sit for a time, jeez; if we jump up to PCIe 7, just leave it there.
This isn't for PCs.
You would throw an entire system away simply because a tech bro said 6 and you have a 5? Thank god 3DMark has that PCIe benchmark to show people that it hasn't and doesn't matter if you slash a GPU's bandwidth in half by either halving transmission speed or width. Hell, you can cut a 7900 XTX down to PCIe 3, then cut 3/4 of the lanes off it, and it barely gives a shit: under 5%. It's not like a GPU is sending massive amounts of data down the slot, not unless you're overflowing into system RAM; then you would want PCIe bandwidth as fast as DDR.
This is for 10+ GPUs to exchange AI data. We're not going to buy this at Micro Center
You buy every gen upgrade? The tech companies love you. BTW, this is not for current consumer desktops. Listen closely to the application space they are targeting.
This is for Nvidia A100 clusters (several dozen 4090s), or 128-core chips from Ampere, ARM, RISC, or something like the IBM Power family with TBs of RAM per chip.
For a single GPU, even an RTX 5090, PCIe 3, or 4 if you want fewer lanes, is more than enough.
So, let's just skip version 6; at least the upgrade will be awesome with the Nvidia RTX 8090 AI
Anything beyond PCIe 4.0 (3.0) is totally unnecessary for gamers. 7.0 gives us larger AI models, and faster NVMe drives for data centers.
Faster Fiber Channel networking. Faster Ethernet networking. Faster GPUs. Faster AI accelerators. Faster CXL.
How many home users care about this?
Desktop CPUs offer 20 PCIe lanes, and motherboards offer maybe 2 x16 slots and an x4 slot, with users using only the CPU's x16 slot for a graphics adapter. Only gamers care whether that slot is the latest & greatest PCIe.
Accelerators are used in servers (mostly in the cloud), and rarely seen at home
i want a A.I. girl robot.....the AI evolution is too slow.....
n r zee btw
Zed
There are dozens of us clients!