Trying to push ray tracing as though raster is "solved" and everyone has GPUs running games at 4K ultra 540FPS cool and quiet is just the most BS, consumer-hostile marketing that's happened in the GPU space.
Could you just split ATX into different cards that are connected together? Perhaps the CPU on one card and the memory slots on another side of that single card? Connectors on one card, the sound card on its own, a dedicated GPU and a storage card on their own. The CPU and GPU cards could be horizontal and properly supported for heavier cooling solutions.
One big improvement would be to have the entire IO panel be a custom slot, making the IO independent of the motherboard. It would look like a 20-100 pin interface that you would slide a rectangular panel into, carrying all of your display, USB, mouse/keyboard... the entire IO of the computer. I see this as a higher priority than having USB or audio on the front of the case...
As someone currently using an inverted-motherboard case, the Silverstone RL08, any future motherboard spec should 100% put the GPU on top of the motherboard. You could even mount it horizontally and include case features that jut out to hold the bottom of the GPU board, rather than hanging the GPU vertically, which limits the kinds of coolers you can put on it. Then you can mount fans on top of the case to blow directly into the GPU, or even mount a tower cooler on the GPU and mount fans front and back, just like how tower CPU coolers work.
Gordon Mah Ung is the GOAT of computer stuff. Steve Burke is the only person to ever approach that level. Seeing them working together is like one of those superhero stories where they finally join forces and kick everybody's ass. Steve just needs to adopt Gordon's rants. Nobody can string together profanity like Gordon. It's like music. With F words.
What an insane idea, you are just prescribing change for change's sake. ATX works. Who's your sponsor? PCIe is needed for DirectStorage, and if you get your way the EU and FCC will hammer the home-built PC market on certification for energy reasons.
Amen. "It's thirty years old!" is such a lazy and arrogant argument. What's next, revamp everyone's indoor plumbing because the tech hasn't been revamped in this century?
ATX: I don't see placement of components on the board as an issue. There is freedom to move them. The real ATX problem is card slots originally designed for low-power cards now running GPUs that need 3 or 4 slots and destroy themselves, sagging under their own weight. Airflow is all wrong for GPU cards. The CPU has the right airflow design, where air flows through a standard tower from front to back. You need something like that for the GPU.
I was thinking of replacing these cards with something that stretches between the front and back of the case, getting more support for the weight and creating a long, straight path for ventilation. External connections would be at the opposite side to the PCIe connection, which would require a different case design. At least for cards that need to be big. For smaller ones (including SSDs) you might have a few EDSFF E3 slots in a row.
Steve Burke's question at 4:30, "what does Nvidia have" - I came into the broadcast 12 minutes or so later... Nvidia offers a whole product in various forms, up for debate on utility value. mb
Steve pointing out "people just look for the green and pink ones" regarding all the 3.5mm jacks on a motherboard reminds me of something that really irks me: motherboard makers who decide to make their board "look cool" by making all the audio jacks black. Like gee, thanks a lot. There are already way more ports than anyone needs, and now they aren't color-coded anymore so I can't find the ones I need. Now I need to refer to the manual. Ridiculous.
58:20 THANK YOU. The ultra-cramped 1990s designs simply do not make sense today. These HUGE graphics cards throw a monkey wrench into every motherboard/case today. Everything is *cramped* - it doesn't make sense.
Buying used is the option to get more for your money. It still makes sense to be able to get something good on a low budget. I would think if AMD and/or Intel released powerful enough APUs (like 16 CUs) they could fill the really-low-budget market, since you aren't getting much improved GPU performance from a dedicated card at that point.
For Steve's edification: the wheel was invented ~3200 B.C.E. in Eastern Europe - roughly 800 years after the straw, which ancient Sumerians used to avoid the bitter dregs of their beer, thus proving that partying has always been more important to people than work. That predates Rome by several thousand years, not decades. The wheel, to the Romans, would be like paper is to us: ancient technology. The wheel was actually a rather difficult innovation to achieve, owing to the physics involved. Prior to copper tools, it wasn't possible to make an axle that was thick enough to bear a significant load but also reduced axle friction enough to be efficient. Stone just didn't provide for the level of woodworking necessary. That's why we can narrow down the invention of the wheel to a fairly small region: it was an innovation that was made once and spread, instead of being invented in multiple places concurrently. So now you all have just a little more useless knowledge you can use at parties to demonstrate that you have no social skills and spend your days with your face buried in a screen learning things no sane person needs to know!
Gordon's argument for ATX needing to "pass on" (let's say) is going to hit a huge impediment of its own - _money._ Not from manufacturers, he's addressed that much. But being a known quantity, lots of budget PC enthusiasts will roll their eyes at the decommissioning of ATX as a standard, because then, _oops,_ in fifteen years there goes all of their cheap hardware. Everybody wants to save money, so nobody's moved on from ATX. The last major transition in the PC space was from AT to ATX. People with vintage systems have a hard enough time _as it is_ finding replacement hardware for older standards like S-100; the time difference is really its only saving grace, because nobody really remembers the oldest of old ATX stuff, but everybody in the vintage enthusiast space remembers the greatest of the great S-100 stuff. During the transition away from ATX, many people are going to be burned, and _burned out_ of the PC space *because* of needing to find just the right hardware for the right form factor. What we really need to do is just standardize the PC case, and then new standards should be able to flourish from that with a standardized mounting system for the mainboard tray. If the case needs to get fatter or thinner, engineers can create inter-compatible variations with the capability to use any mainboard wall and rear panels they want, because then the case doesn't hold back use of hardware nearly as much if those components can be replaced wholesale.
ATX replacement layout idea: the CPU socket lays flat; looking into a PC case it would sit where a GPU does now, but with the chip facing up. PCIe slot underneath, vertical GPU. RAM under the CPU for the shortest traces - could use something like CAMM. 12VO power supply. With the RAM on the back of the board, there would be a straight line from the power connector to the VRM to the CPU. U.2 or something similar for SSDs; adapters for M.2s would smooth the transition. Overall the board would be ~ITX width but longer front to back. By moving M.2 off the board you save real estate there. Toss motherboard audio and just bundle a decent USB DAC instead. Keeping the CPU orientation the way it is and lengthening case standoffs would limit cooler height for slimmer cases.
I disagree with removing onboard audio, as well as the seeming lack of secondary or tertiary PCIe slots. Having one or two onboard M.2 slots as well would be a good idea for more compact PCs, and possibly cheaper considering the cost of U.2 cables.
Guys!! The wheel didn't really work that well until bearings were invented, so the axle is held within rollers rather than just rubbing against a greased surface.
I bought a liquid-cooled 4090 at full MSRP and thought it was outrageous, but that same card costs $150 more now, so I *guess* I got a good deal?! Cheers 🍻
Regarding graphics card placement: placing the graphics card near the outside of the case allows it to pull fresh air for itself, and feed the case a little as well. 1:26:00
I wish Gordon a swift recovery. Gordon, get well!!
I love Gordon Get Off My Lawn Mah Ung so much and hope he continues to get better. We all have a much better time when he is around.
Yaaah! Nice to see Gordon back!
He looks fucked up
What happened?
Cancer is the reason Gordon is sick for those who have asked.
So terrific to see Gordon! Not surprised at all he's a Science Officer
Gordon going off on console things coherently is the welfare check I needed to see. He's ok.
lol
I'm afraid that if ATX goes away, all of the motherboard makers will just solder everything down and we'll end up with giant laptops
Or they will sell it separately as an accessory. Want another NVMe slot? $40! Want another PCIe slot? $100!
Companies are frothing at the mouth for that opportunity, you just know it
Case specific boards? I could see it.
ATX won't go away if they do that; we'll see what happens with ARM PCs, but even laptops have needed component options.
It's Apple soldering on SSDs and disempowering end users.
@RobBCactive Apple is absolutely not the only one doing it. You used to be able to upgrade the CPU in a laptop. Nowadays every laptop CPU is soldered to the board, and many laptops have soldered RAM.
Apple is certainly the worst offender, but once they get away with it other companies start doing the same shit.
So great to see the OG back in the saddle! Get well soon Gordon!
Gordon has been giving us the PC news my whole life; stay well rest easy brother the GOAT has more work to do!
Great to see Gordon back!
On the discussion on ATX, there are some use-cases (industrial, file servers etc) that need multiple PCIe slots (interface cards for lab equipment, RS-232 controllers, storage controllers etc). Storage devices will also need a reasonable wattage on the +5V / +3.3V for file servers. Intel needs to consider all of these use-cases when designing a standard. M.2 is fine when you only have
@peterwstacey
Where did you buy your board?
Many boards support RS-232, or pin headers.
What storage do you need - 6 NVMe drives? U.2 drives?
Which board did you buy? What is the need? Do you need a file cloud app?
Some industrial uses need e.g. 16 UARTs for a scientific lab monitoring setup. Add in a few NI PCIe cards for RS-232, and you need a full stack of slots. We're not talking about linking to one UPS; it's more akin to physicists needing to control multiple lasers in a laboratory from the same base clock. Railway junction controllers can also use a lot of UARTs.
The one I found with the most PCIe slots that also had good VRMs was the ASUS Prime Z790-A WiFi. The MSI Z790-A was also a good candidate.
Thank you for mentioning this. I understand creating some motherboard standard to accommodate one large GPU and more M.2 devices, but for file servers and HEDT systems, having multiple graphics cards for VFIO, Ethernet and USB controllers, WiFi, and so on is all very important for some people. Perhaps M.2 can replace some less complex or bandwidth-heavy cards, but for storage controllers, M.2 riser cards, or especially GPUs, having full slots would be preferable. This would also need to be accompanied by a greater number of lanes on consumer processors.
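To put the multi-UART scenario above in concrete terms, here is a minimal polling-loop sketch, assuming the pyserial package; the baud rate and the SCPI-style "*IDN?" query are placeholders for whatever the instruments actually speak:

```python
# Minimal sketch of a lab polling loop: every RS-232 instrument,
# whether on motherboard headers or PCIe UART cards, shows up as
# just another serial port to the OS. Assumes pyserial; baud rate
# and the query string are placeholder values.
import serial
from serial.tools import list_ports

def open_instruments(baudrate=9600, timeout=0.5):
    """Open every serial port the OS can see and return the handles
    keyed by device name (e.g. /dev/ttyS4 or COM7)."""
    handles = {}
    for info in list_ports.comports():
        try:
            handles[info.device] = serial.Serial(
                info.device, baudrate=baudrate, timeout=timeout)
        except serial.SerialException:
            pass  # port busy or faulty; skip it
    return handles

def poll_once(handles, query=b"*IDN?\n"):
    """Send one identify query to each instrument, read one reply line."""
    readings = {}
    for name, port in handles.items():
        port.write(query)
        readings[name] = port.readline().decode(errors="replace").strip()
    return readings

if __name__ == "__main__":
    instruments = open_instruments()
    print(f"{len(instruments)} UART(s) found")  # 16+ on a loaded lab box
    for dev, reply in poll_once(instruments).items():
        print(dev, "->", reply or "<no response>")
```

The point being: each of those 16 UARTs is a cheap, dumb device, but every one has to hang off the board somewhere, which is what the slot-count argument in this thread is about.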
Great to see Gordon again 🥰, get well soon, you got this😊💪
I don't know if ATX needs to go; I think horizontal-mount cases need to come back. Working with gravity instead of against it solves many problems.
So glad to see Gordon looking strong. Get well man.
THIS MADE MY DAY... SEEING GORDON BACK IS THE BEST NEWS FOR ME... EVER... ❤❤❤
One thing worth taking another look at is: does the GPU have to be an add-on card? By which I mean, should we go to the old co-processor model? Have a dedicated space for a GPU socket and DIMM slot-type arrangement.
While a standardised socket is probably too much to ask, maybe the basic concept behind AGP wasn't wrong. The GPU is so fundamental and has so many peculiarities compared to your average PCIe card, not least of which are the power requirements and the sheer weight of the air cooling solution. A standardised location could allow things like GPU tower-cooling, or maybe combined AiOs that serve both CPU and GPU.
On a slightly unrelated note, one major pet peeve of mine is that you often can't upgrade a case's front panel I/O. It's a bit ridiculous to have to change your entire case just to upgrade your front USB ports, especially now that the old 5.25" and floppy drive bays are also dead in the consumer market.
Those are great points. Wouldn't a socket on the motherboard enable as much bandwidth as you wanted? All three GPU manufacturers are also making CPUs now, so... two chips (or more?) would seem smart.
Front panel I/O is insanely bad atm. You're right, it should be normal in a mid-range case to just slot in a replacement panel that has the USB4 or whatever you want it to have.
Gordon is a fucking baller and I wish him only the best
didn't know he is a Blood and deals drugs.... Baller is not a Basketballer
I would totally buy into the GUTS form factor if it ever became a thing! Great to see Gordon and Steve again!
Hell yeah, Gordon's back!! And I must say that no one does set presentation like Gordon. This made my year, get well soon Gordon and Happy Holidays!! May Santa bring you a speedy recovery and the finest gas station coffee there is ☕😁
Any redesign of ATX must focus around the GPU. This shit was NEVER meant to be this big, or hot.
Some sort of sandwich layout makes the most sense, and then you could make it flow-thru or side-intake, and both would outperform the solutions we have now.
I've felt that if GPUs are staying air-cooled, then I'd like them above the CPU, so air can be drawn out at the top rear, with cool air mixing in from the front.
Part of that is so CPU AIO failures are less likely to do bonus damage to the most expensive component. A secondary benefit is the GPU and NVMe slot not competing for the same space.
An issue is that Intel has CPUs using ridiculous power levels that need top- or front-mounted AIOs.
I don't always agree with Gordon but I love listening to him. Get well soon Gordon.
It’s great to hear and see Gordon. I hope you’re doing well sir
Wishing you well Gordon! We all care for you and so glad to see and hear you! Love your humor most of all! Enjoy your presence!
seeing gordon and him getting better is an amazing gift for thousands of people, here is hoping
Hello to all three bosses, especially to Gordon. All the best wishes, Gordon.
Happy to see Gordon! Plus Adam and Steve!
When it comes to redesigning ATX layouts, I like the idea of a socketable GPU. Let us buy our own GDDR6 so we can get the capacity we want and avoid the headache of running out of VRAM. Let us move our existing GPU coolers to a new socketed GPU. This could massively cut the cost of a GPU upgrade to just the die package and substrate itself, instead of leaving us with a closet full of obsolete GPUs with huge coolers and unused VRAM on them.
It's odd that we have so much freedom when it comes to CPU, RAM, PSU, cooling, storage and motherboard, but have dictators telling us what our options are when it comes to the GPU.
Well, socketable VRAM isn't going to happen; that stuff needs to be soldered down next to the chip to get the necessary signal integrity. There's also the problem of power delivery, since board makers would have to spec their boards to potentially have an OC'd 4090 in the socket when the end user is only going to install a 4060. That either makes for really expensive boards or a really confusing buyer's experience where some boards only fit certain power limits.
@cosmic_cupcake Perhaps for lower-end GPUs or less memory-bandwidth-intensive workloads, a new GDDR standard would allow for socketed memory.
As for power delivery, there could be different classes of boards for low, medium, and high power-consumption GPUs, similar to what we have for coolers and motherboards. Additionally, these GPU upgrades would allow for cross-generation or perhaps even cross-vendor GPU swaps. Then the upgrade options would go beyond even CPUs.
Of course Nvidia cares about gamers. I mean, do PCWorld and Gamers Nexus care about their viewers?
It's not the type of care like your family or friends care about you. Nvidia or AMD or Steve or Gordon aren't going to be there for you for emotional support. But they do care that you watch their content and like and subscribe, and in Nvidia's case that you buy their products.
It's just a transactional relationship, and that's all it needs to be. What can you do for me, and maybe we agree on the value. That's it. So I would say Nvidia and AMD and even Intel care enough about the gaming industry to invest in R&D to make cool products (tools) we can use to play video games, and to make workstation products for developers and technologists.
Touché. And 80% of gamers are happy to enter into that transaction.
@Wobbothe3rd Pretty much. Life's too short. Some people work hard and have some time to spare here and there; they want to play their games and do what they need to get immersed in some escapism.
Gordon's development points on ATX are strong (especially the RAM airflow), except that he glossed over the "it works" comment quickly. As an oldie having built close to 100 PCs, I remember the old saying at times like this: "If it ain't broke, don't fix it!"
So to fix ATX: make the EVGA-style rotated socket standard and swap the main M.2 for U.2 where the RAM used to be.
Maybe EDSFF E3 instead of U.2, so the same slots could later be used for both SSDs and smaller add-in cards. Would require a new case standard though.
Another thing I've thought about a lot is more modular GPU design. Standardize a couple of PCB form factors, standardize the cooler mounts, and give the users an option to buy cards and coolers separately. This way we could add extra cooling to a struggling GPU or reuse the cooler and upgrade just the GPU.
Since you're at it, why not change the PCI bracket height? Modern graphics cards are already taller than the PCI spec, so why not work toward a new design that allows for 120mm fans?
@unintelligiblue That would mess up compatibility with all other cards though. Sourcing half- vs full-height PCI brackets for specific cards is already enough of a mess as it is; adding a third variant wouldn't help.
Also, there's nothing stopping manufacturers from just doing it anyway with the brackets we already have - just put 120mm fans on a card, whatever.
Great to hear Gordon's voice 👍, and Steve chuckling at his comments
Sending some love to Gordon, get well soon! ❤🔥❤🔥
You sound great, Gordon. Voice is as strong as ever. So good to see you and hear your insights.
Gordon, praying for you brother , stay strong , never give up ! You are one my favorite tech tubers 😎
Steve being a pokemon otaku makes a lot of sense
So glad to see Gordon again 🎉😊
Great to see you 3! :)
Many of us grew up with Gordon and would purchase PC magazines because he was one of our favorite reviewers. I stopped getting PC mags shortly after Gordon went from senior reviewer to chief. His words carry a lot of weight if you grew up with him as a brand name in the PC world.
But be careful. ATX is like Assad. Everyone who said he must go ended up going first 💀
Assad Technology Extended or Assad Terminator Executioner?
It's always great to see Gordon! Hang in there buddy!
Gordon does bring up a good point about ATX.
The ATX standard has been very stagnant because "it's just cheaper to leave it as it is," even though things could be improved.
I think the biggest contributing factors are the PSU and GPU; they are the biggest parts, and they keep getting bigger.
This is like seeing the Beatles getting back together with Jimmy Hendrix as a guest. Wow!
Not sure why we need to get rid of ATX just because it has been around a long time. That means it was engineered right from the start and has been able to serve our needs up until now - and probably for many more years. If we are going to do this, we need to engineer for additional PCIe lanes and accommodate multiple fat cards - which will also require adjusting case designs. We need a spec that can handle the next 30 years. But we also need to really look at multiple variations for larger and smaller needs - like mATX, ITX and EATX. Granted, we already have all those, so a new spec would need to be very compelling to make it worth doing. A modular design does sound pretty cool, as long as the interconnects don't become the bottlenecks of innovation.
Great to see you back Gordon!
Thank you for the podcast version! With these hosts this is going to be a great episode!
Get well Gordon!
Nice to see you on Gordon....get well soon!
Get well soon Gordon ❤
Best wishes to you Gordon. Out here killing it as per, love to see it!
Gordon! Love you, get better.
What a fun podcast! One of the best ever. More with the tech trio.
Great to see you back Gordon! Wishing you the best!
I've only seen Gordon a few times, mostly in guest appearances of channels like Gamers Nexus but he seemed like a really cool dude. Very knowledgeable and passionate.
Hope you get well soon!
They should move the CPU socket lower and put a 3-4 slot space for the GPU on top. That way you can do exhaust from the top or bottom of the case with the GPU fans. No more janky airflow shite.
To be fully honest, I never noticed any difference between turning ray tracing and tessellation on and off, so to me they feel like "lower performance buttons". If I were rich, I might care about names on paper, but having to work for the money I throw at Nvidia, I strictly want higher framerates for the longest time possible before upgrading. Everything else is just marketing jazz, which I couldn't care less about.
Edit: In terms of hating changes and not wanting ATX to change, I have a middle-of-the-road opinion:
First of all, if it ain't broke, don't bloody fix it. I'm sick and tired of what we call in German a "worse-provement" (Verschlimmbesserung), where people want to make something "better" but it ends up a complete mess.
Secondly, if you change something, then make it a proper, well-thought-out change and fully standardise it, so that stuff is compatible in the future. Everything should be as compatible and interchangeable as possible, so that we are able to use perfectly functional hardware for as long as possible. To bring up a stupid example, which doesn't really make sense but explains my mindset on these things:
There was no way I would have upgraded to DDR4 in 2017 if it wasn't necessary. Yes, with DDR3 I may have lost, theoretically, 15% CPU performance for example, but as I play heavily GPU-intensive games, I simply don't care about maximum CPU performance. Instead I now have working DDR3 memory in a box, which I got in 2010 and would happily still be using today, but I don't get to make that decision.
That's the main problem: customers don't get to decide what they get and how they use the stuff in combination, which at its core is one of the main reasons most PC people don't want a console. They want a system where every major part is customer-upgradeable and you are not forced to buy an entire new mediocre box every X years.
If ATX is to go away, in my opinion that is fully fine, on the condition that it is replaced by a good standard, so that I can buy standardised cases, motherboards, etc. which are compatible, without having to search for niche specific products to get a working system.
Woohoo Gordon is back!!!! Hoping for a speedy recovery!
Have some love and good vibes from a stranger, Gordon and the guys ❤A good conversation with some interesting points and some light hearted humour.
Graphics cards would benefit from a better orientation and position.
Either parallel to the top of the case and near to it or oriented vertically with fans facing away from the rest of the motherboard.
We could even bring back ducting if the GPU can be in a better place.
@Sol System Make a video where you do them!
Gamers need them in front of the window now, or at least the RGB strips on them.
I saw a ducted case this week, I think on KitGuru. It looked a bit clunky, but if the motherboard/GPU/Gen 5 NVMe/RAM layout were redone to be cheaper, use less material, and be easier to cool, it would also be easy to take advantage of dual-, tri- or quad-compartment designs and ducting if that's better. Which would be easily demonstrable.
Bless you, Gordon, for advocating advancement beyond ATX. I used Macs for a time, and when I came back around to building a PC, I felt the heaviness of these stagnant standards as I endeavored to build what I thought would be an airflow-optimized system.
I ended up creating a vertically oriented flow-through design with a Cooler Master case with two 200mm fans in the bottom, putting the graphics card on a riser, also vertically mounted - bringing every fin in the case into vertical orientation, and the RAM in line with the airflow too.
As I imagined how I would engineer a better standardized system, I think in terms of large fans that each drive an air tunnel dedicated to a system component.
Probably 140mm for ease of convention. But 200mm or more would be even better.
Essentially, the I/O continues to be standardized as front and rear, while airflow travels upwards from the bottom; components become long modules that span the height of the case.
So a power supply, a GPU, and a motherboard all exist in modular tunnels of the same size. Perhaps the motherboard spans the backplate as the interface of these standardized modules. Perhaps the motherboard is accessed on both sides by 3 tunnels, for a total of 6 dedicated modules. Perhaps some modules are half-length and others are full-length, with 12 universal bus connections spread across both sides, like PCIe slots: 2 per full-length module and 1 per half-length module.
Perhaps one side of the central motherboard would have specialized connectors, while the other has universal PCIe-like connectors - the power supply having a specialized standard, and the RAM/hard drives having another, sharing the same module opposite the CPU. This backside of specialized ports would allow for the slow creep of new standards without moving the whole thing forward all-or-nothing. Like how a laptop with only USB-C is actually less desirable than one with mixed ports: having two sides allows one side to be old as the other is new, and I feel that gives the standard room to grow.
But maybe that's a bit timid; perhaps it would be best to make a PCIe interface that can accommodate RAM and power delivery and SSDs, and just have vertically mounted rows of them for easy module swapping.
I imagine certain sectors would also find this approach preferable for its plug-and-play simplicity - like if it were being installed in the wing of a plane.
I don't remember Steve ever doing a podcast, I'm very excited!
he was on 3 weeks ago?
@NulJern Didn't see it, thanks for telling me
Nice to see you up and about Gordon. I wish nothing but the best for you.
Good to see you Gordon Love you take care.
I have actually thought about this for a while. Here is how I would change ATX. I would basically turn everything into an add-in card. The only things the base motherboard would be responsible for are PCIe lanes, booting, and power delivery (ATX12VO). I would flip the CPU vertical, similar to how Intel tried with the Pentium IIIs, though the card would run parallel to the PCIe slots for better cooling. I would put it 3 PCIe slot widths from the top of the board; the CPU card would have 3 PCIe slots reserved for it, and then there would be 3 more PCIe slot widths below it. This would make the overall dimensions of the board similar to mini-ITX boards, as the rear length of the board would only be 18mm x 9 slots for a total of 162mm on the back. Once you add a little space for the screw holes, you are going to be very similar. A major benefit of this kind of design is that the CPU manufacturer could do their own cooling. They already do this occasionally, with some CPUs shipping with their own cooler, but in this standard it would be expected. It also shouldn't be foreign to AMD or Intel, as they already design coolers for their GPUs, and it would allow for direct-die cooling without an IHS getting in the way.
Why is the CPU in the middle in my design? It would allow for the shortest trace lengths to the CPU, hopefully helping keep signal integrity. Why would the CPU get 3 dedicated slots of its own? Cooling would be a main point, and it would ensure sufficient lanes for communication. We would get up to 48 lanes on consumer systems, which would be awesome. To this point, I would be supportive of a new PCIe slot that allowed for more power delivery and more PCIe lanes. The one on the Mac Pro from 2019 allowed up to 475W of power delivery, so I know it is possible.
Other changes would come with this. RAM: I feel that RAM will be on-package with the CPU before much longer. Apple is already doing this, and given the density of technologies like HBM, I feel it is a matter of time before AMD and Intel follow suit. I do think some customers will want expandable memory though, and for these users we would have CAMM as the standard; it would simply go directly on the back of the CPU card. This would literally put the memory as close to the CPU as possible - millimeters away. Performance should be great. You could have 2 channels in a left/right configuration or 4 channels in a left/right/up/down configuration.
UEFI would also change. It would need to be something more similar to libreboot: something simply in charge of initializing hardware. This has some pretty major implications, as ideally you would never need to change your motherboard when changing CPUs. The slot would be standard, so you could move from generation to generation no issue, and so long as you didn't want to update to a newer PCIe standard, you wouldn't need to. This would, however, mean that either the standard would need to be so agnostic that the process for initializing hardware wouldn't change from generation to generation, or you would need to be able to update the onboard firmware frequently to add support for new CPUs.
The next major change would be your basic IO. First, physically: as we are covering the entire back of the PC in slots, there isn't much space for IO on the board. Your IO from the motherboard would need to be kept lower than 5-6mm, as is everything else on the motherboard.
As everything coming off the PC would be PCIe lanes, you would need compatible standards as well, so things like USB-C ports with USB4 and Thunderbolt would kind of need to be standard. The majority of the IO, however, I would break out into its own add-in card(s). This would leave it up to the user how they wanted to spec out their computer for IO. I would think current motherboard manufacturers would sell their own IO cards with things like WiFi, Ethernet, and USB. Users could also purchase individual cards if they wanted higher-end IO, just like they already do. The important thing here is that it would be up to the user to decide what IO they wanted or needed for their setup. The last major change is storage. Since we are switching to an all-PCIe motherboard, the number of PCIe lanes actually becomes an issue. I would hope we could add more lanes coming from the CPU, but that only goes so far. To solve this, I would expect some add-in cards to be PCIe switch cards ("mux" is roughly the idea) that take the incoming up-to-16 PCIe lanes and either split them out to 16 different storage devices or, if the chip is sophisticated (and likely expensive) enough, share that PCIe bandwidth between however many storage devices it wants.
This is my idea. I hope it has sparked some ideas of your own.
Edit: A benefit of this design is that it would be much easier to do maintenance on compared to what you were describing. All of the memory and storage would be kept on top of the motherboard. This would keep case design simpler. It would also still be deployable in something like a rack-mount case. Also, I described the board here in its most compact form factor, as that seemed to be one of the main desires. A motherboard manufacturer could make a bigger board relatively easily: simply add space between each PCIe slot to make them dual-slot in their spacing. A lot of add-in cards could probably use this space anyway, for things like cooling, or if they were something like the PCIe switch card with a lot of cables coming off the board. Doing this would increase the total width on the back to about 270mm. It would only be about an inch shorter than the current ATX standard, but it would have a lot of advantages in cooling, expandability, upgradability, and sustainability.
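As a sanity check on the numbers in the proposal above, here is a quick back-of-envelope sketch. It uses the comment's 18mm per-slot figure (the standard bracket pitch is actually 20.32mm, so treat the results as illustrative) and assumes the extra "dual slot" spacing applies to the six expansion slots, which reproduces the 270mm figure:

```python
# Back-of-envelope check of the rear-edge dimensions proposed above.
# SLOT_PITCH_MM follows the comment's 18 mm figure; the standard
# PCIe bracket pitch is 20.32 mm, so these are illustrative values.
SLOT_PITCH_MM = 18
CPU_SLOTS = 3          # slots reserved for the vertical CPU card
EXPANSION_SLOTS = 6    # the remaining general-purpose slots

# Compact form: all 9 slots at single-slot spacing.
compact = SLOT_PITCH_MM * (CPU_SLOTS + EXPANSION_SLOTS)
print(f"compact rear edge:     {compact} mm")  # 162 mm, near mini-ITX's 170 mm

# Roomier form: one extra pitch of clearance per expansion slot,
# i.e. the "dual slot spacing" variant described in the edit.
roomy = compact + SLOT_PITCH_MM * EXPANSION_SLOTS
print(f"dual-spaced rear edge: {roomy} mm")    # 270 mm vs ATX's 305 mm length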
@matthewsmith3817 Yes, lane bifurcation is part of my explanation, and honestly it is the cheaper way; I have one of those ASUS Hyper M.2 cards in my storage server. As far as M.2 cards go, it is a much better way of doing it, and there are cheaper options that forgo the massive aluminum heatsink if you don't need top of the line. It can go further, though. Apex Storage has an add-in card that supports up to 21 M.2 drives. They do this with a chip on the board that basically acts like a network switch, but for PCIe lanes, which allows up to 84 lanes through this approach. They are limited to 16 lanes of bandwidth for anything leaving the card, just like you would be limited by your uplink speed when using the internet, but there is a lot you can do without hitting that ceiling, and even if you do, the chances of hitting all of the drives simultaneously are rare enough that for most people it would be a non-issue.
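To put rough numbers on that switch approach, here is a minimal back-of-the-envelope sketch in Python. It assumes PCIe 4.0 at roughly 2 GB/s of usable throughput per lane and an x4 link per M.2 drive; the 21-drive and 84-lane figures come from the card described above, and everything else is illustrative, not vendor specs.

```python
# Back-of-the-envelope math for a PCIe switch card: many downstream
# lanes sharing a fixed x16 uplink. All numbers are approximations.

PCIE4_GBPS_PER_LANE = 2.0   # assumed usable throughput per PCIe 4.0 lane

uplink_lanes = 16           # x16 slot back to the CPU
drives = 21                 # drive count from the card described above
lanes_per_drive = 4         # each M.2 drive gets an x4 link

downstream_lanes = drives * lanes_per_drive      # 84 lanes behind the switch
uplink_bw = uplink_lanes * PCIE4_GBPS_PER_LANE   # ~32 GB/s shared uplink
worst_case_per_drive = uplink_bw / drives        # if every drive hits at once

print(f"downstream lanes: {downstream_lanes}")                # 84
print(f"uplink bandwidth: {uplink_bw:.0f} GB/s")              # 32 GB/s
print(f"worst-case share: {worst_case_per_drive:.1f} GB/s")   # ~1.5 GB/s/drive

# The point: any single drive can still burst at its full x4 rate
# (~8 GB/s on PCIe 4.0) whenever the others are idle, which is the
# common case, so the shared uplink is rarely the real bottleneck.
```

That is the uplink analogy in numbers: the switch oversubscribes the link on purpose, betting that simultaneous full-speed access to all drives is rare.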
Pretty similar to my idea.
Maybe the PCIe connections can be overbuilt for future generations, then tested against them by the manufacturers later when their specs are finalized.
I have to say, going from the ease of SATA drives to how permanent M.2 drives feel was jarring, and I can see cooling NVMe becoming an increasing issue, with graphics cards and storage all generating more and more heat.
Great to see Gordon back on the show.
Hell yeah Gordon! Glad you are back and hope your treatments go well! Hang in there, we miss you!
I have been wanting ATX to die for a while now, and lately it's gotten absurd how badly the standard is breaking down. Modern GPUs weigh more than the motherboard, with huge cards sagging until they look like they will physically break under their own weight; cases are sold with 8 or 10 expansion slots, but boards are lucky to have more than 2 PCIe slots and are still somehow getting bigger because they need surface area for M.2 storage; some brands are developing custom solutions like proprietary daughter boards to hold more high-speed storage, etc. You don't have to go as far as soldering everything down; that's way too far. I would envision an interconnect board or system that hosts a power module, a CPU module, a GPU module, and a high-speed storage module that can hold a bunch of M.2 drives. Eliminate wires with a new connection standard, like Apple did with the old Mac Pro. The only way to get buy-in would be to base it on aesthetics, because that is what sells.
It's so hard to care about RT considering the state of the games coming out anyway. If I can barely hit 60 FPS in raster at 1440p or 4K, why would I care about RT? I don't want to play at 30 FPS on my 1500€ PC. And to then be forced to use frame gen or upscaling just to have slightly better reflections or GI doesn't feel worth it.
Then the actually good PC games are from small indie studios, and those don't even have RT anyway. Honestly, just get a 7900 XTX and a Steam Deck instead of a 4080.
NGreedia: "We care so much about gamers, that's why we make them as inaccessible as possible by overpricing the f out of them, because that makes total sense, right, you guys?"
Please more Gordon and Steve duo podcast. They are like the anger translators for the state of the pc/console gaming space.
Gordon doesn't hate Apple... he just has a problem with Apple 🤣
This is the longest I've ever heard of Steve sitting down.
So, my thought has been: boards the same size as ATX, with NVMe slots placed on the back, extra-height standoffs, and a duct that grabs some of the airflow to direct over the back.
Move the RAM to be in line with the PCIe slots, with channel A above the CPU and channel B below the CPU (where the top NVMe slot used to go).
You could use the space where the RAM used to go for more NVMe, but we'd need more PCIe lanes for that to matter.
For the RAM to make a major difference, you'd need two memory controllers on opposite sides of the socket.
Good to see you, Gordon. I miss ya welcome back 🙆🏾♂️
Thanks!
Thank you!!
Steve is a GOAT!!!! a technological scientist who sticks to the facts!
Great stuff, guys. Very best wishes to Gordon.
Gordon 😍
Trying to push ray tracing as though raster is "solved" and everyone has GPUs running games at 4K ultra 540 FPS, cool and quiet, is just the most BS-marketing, consumer-hostile thing that's happened in the GPU space.
Could you just cut the ATX board into different cards that are connected together? Perhaps the CPU on one card and the memory slots on another side of that single card? Connectors on one card, the sound card on its own, and dedicated GPU and storage cards on their own. The CPU and GPU cards could be horizontal and properly supported for heavier cooling solutions.
Best wishes to Gordon!
Green cone of shame! I was thinking about the same thing when people started talking about ducts.
One big improvement would be to make the entire IO panel a custom slot, having the IO independent of the motherboard. It would look like a 20-100 pin interface into which you would slide a rectangular panel that has all of your display, USB, mouse/keyboard... the entire IO of the computer.
I see this as being a higher priority than having usb or audio on the front of the case...
But why though
@@noname-gp6hk Think about it.
As someone currently using an inverted motherboard case, the Silverstone RL08, any future motherboard spec should 100% put the GPU on top of the motherboard. You could even mount it horizontally and include case features that jut out of the case to hold the bottom of the GPU board, rather than hanging the GPU vertically, which limits the kinds of coolers you can put on it. Then you can mount fans on top of the case to blow directly into the GPU, or even mount a tower cooler on the GPU and put fans front and back, just like tower CPU coolers work.
Gordon Mah Ung is the GOAT of computer stuff. Steve Burke is the only person to ever approach that level. Seeing them working together is like one of those superhero stories where they finally join forces and kick everybody's ass. Steve just needs to adopt Gordon's rants. Nobody can string together profanity like Gordon. It's like music. With F words.
Ray tracing still feels like a bolt-on... until games are all built without rasterization, we won't get the full benefits, and dev time is wasted.
Gordon and Steve?! 💚💚💚
Just to confirm, a premiere isn't live?
1080 - $600
2080 - $800
3080 - $800
4080 - $1200
That's the issue right there for the high end.
What an insane idea; you are just prescribing change for change's sake. ATX works. Who's your sponsor? PCIe is needed for DirectStorage, and if you get your way, the EU and FCC will hammer the home-built PC market on certification for energy reasons.
Amen. "It's thirty years old!" is such a lazy and arrogant argument. What's next, revamp everyone's indoor plumbing because the tech hasn't been revamped in this century?
They tried to change with BTX, and even with Dell opting to use it for years, it still didn't catch on.
ATX: I don't see the placement of components on the board as an issue. There is freedom to move them.
The real ATX problem is card slots originally designed for low-power cards now running GPUs that need 3 or 4 slots and destroy themselves sagging under their own weight. Airflow is all wrong for GPU cards. The CPU has the right airflow design, flowing through a standard tower from front to back. You need something like that for the GPU.
I was thinking of replacing these cards with something that stretches between the front and back of the case, getting more support for the weight and a controlled, long, straight path for ventilation. External connections would be on the opposite side to the PCIe connection, which would require a different case design.
At least, for cards that need to be big. For smaller ones (including SSDs) you might have a few EDSFF E3 slots in a row.
Steve Burke's question at 4:30, "what does Nvidia have": I came into the broadcast 12 minutes or so late... Nvidia offers a whole product in various forms, up for debate on utility value. mb
Steve pointing out "people just look for the green and pink ones" regarding all the 3.5mm jacks on a motherboard reminds me of something that really irks me: motherboard makers who decide to make their board "look cool" by making all the audio jacks black. Like gee, thanks a lot. There are already way more ports than anyone needs, and now they aren't color-coded anymore, so I can't find the ones I need. Now I need to refer to the manual. Ridiculous.
USB ports too!
58:20
THANK YOU. The ultra-cramped 1990s designs simply do not make sense today. These HUGE graphics cards have thrown a monkey wrench into every motherboard and case today. Everything is *cramped*; it doesn't make sense.
Buying used is the option to get more for your money, but it still makes sense to be able to get something good new on a low budget. I would think that if AMD and/or Intel released powerful enough APUs (like 16 CUs), they could fill the really low-budget market, since you aren't getting much extra GPU performance from a cheap dedicated card at that point.
For Steve's edification: the wheel was invented ~3200 B.C.E. in Eastern Europe, roughly 800 years after the straw, which ancient Sumerians used to avoid the bitter dregs of their beer, thus proving that partying has always been more important to people than work. That predates Rome by several thousand years, not decades; the wheel, to the Romans, would be like paper is to us: ancient technology. The wheel was actually a rather difficult innovation to achieve, owing to the physics involved. Prior to copper tools, it wasn't possible to make an axle that was thick enough to bear a significant load but also reduced axle friction enough to be efficient. Stone just didn't provide for the level of woodworking necessary. That's why we can narrow the invention of the wheel down to a fairly small region: it was an innovation made once that spread, instead of being invented in multiple places concurrently.
So now you all have just a little more useless knowledge you can use at parties to demonstrate that you have no social skills and spend your days with your face buried in a screen learning things no sane person needs to know!
Gordon's argument that ATX needs to "pass on" (let's say) is going to hit a huge impediment of its own: _money._ Not from manufacturers, he's addressed that much. But ATX being a known quantity, lots of budget PC enthusiasts will roll their eyes at decommissioning it as a standard, because that means _oops,_ in fifteen years there goes all of their cheap hardware. Everybody wants to save money, so nobody's moved on from ATX.
The last major transition in the PC space was from AT (and, further back, S-100) to ATX. People with vintage systems have a hard enough time _as it is_ finding replacement S-100 hardware; the time difference is really its only saving grace, because nobody really remembers the oldest of old ATX stuff, but everybody in the vintage enthusiast space remembers the greatest of the great S-100 stuff. During a transition away from ATX, many people are going to be burned, and _burned out_ of the PC space, *because* of needing to find just the right hardware for the right form factor.
What we really need to do is standardize the PC case, and then new standards should be able to flourish from that with a standardized mounting system for the mainboard tray. If the case needs to get fatter or thinner, engineers can create inter-compatible variations able to use whatever mainboard wall and rear panels they want; then the case doesn't hold back hardware choices nearly as much, since those components can be replaced wholesale.
Gordon, best wishes! Hope you're doing well; love to see you back.
ATX replacement layout idea.
CPU socket lies flat; looking into a PC case, it would be where a GPU sits, but with the chip facing up.
PCIe slot underneath, vertical GPU.
RAM under the CPU, shortest traces. Could use something like CAMM.
12VO power supply. With the RAM on the back of the board, there would be a straight line from the power connector to the VRM to the CPU.
U.2 or something similar for SSDs; adapters for M.2 drives would smooth the transition.
Overall the board would be ~ITX width but longer front to back. By moving M.2 off the board you save real estate there. Toss motherboard audio and just bundle a decent USB DAC instead.
Keeping CPU orientation the way it is and lengthening case standoffs would limit cooler height for slimmer cases.
Commenting to agree with this; it's more than a "like" level of agreement.
I disagree with removing onboard audio, as well as the seeming lack of secondary or tertiary PCIe slots.
Having one or two onboard M.2 slots as well would be a good idea for more compact PCs, and possibly cheaper considering the cost of U.2 cables.
Guys!! The wheel didn't really work that well until bearings were invented, so the axle is held within rollers rather than just rubbing against a greased surface.
I bought a liquid-cooled 4090 at full MSRP and thought it was outrageous, but that same card costs $150 more now, so I "guess" I got a good deal?!
Cheers 🍻
Regarding graphics card placement: placing the graphics card near the outside of the case allows it to pull fresh air for itself and feed the case a little as well. 1:26:00