ATX replacement layout idea: the CPU socket lays flat, so looking into a PC case it would sit where a GPU sits now, but with the chip facing up; PCIe slot underneath for a vertical GPU. RAM under the CPU for the shortest traces, could use something like CAMM. 12VO power supply. With the RAM on the back of the board, there would be a straight line from the power connector to the VRM to the CPU. U.2 or something similar for SSDs; adapters for M.2 drives would smooth the transition. Overall the board would be roughly ITX width but longer front to back. By moving M.2 off the board you save real estate there. Toss motherboard audio and just bundle a decent USB DAC instead. Keeping the CPU orientation the way it is and lengthening case standoffs would limit cooler height for slimmer cases.
I disagree with removing onboard audio, and with the apparent lack of secondary or tertiary PCIe slots. Having one or two onboard M.2 slots would also be a good idea for more compact PCs, and possibly cheaper considering the cost of U.2 cables.
58:20 THANK YOU. The ultra-cramped 1990s designs simply do not make sense today. These HUGE graphics cards throw a monkey wrench into every motherboard/case design. Everything is *cramped* - it doesn't make sense.
Steve, how's that Puget Systems ProArt Z790 board holding up? Are you managing to keep it cool enough? Water cooled? And is the Thunderbolt 4 working as it should?
1:03:05 IMHO, pho the ATX soup! I want some motherboards that have decent IO without having to pay for server parts or an elite-level 4000 USD CPU and 1000 USD motherboard. Remember we pay close to 20% import tax on everything in Europe. Is it so much to ask to connect 4-6 NVMe drives, 6-10 SATA drives and 2-3 PCIe (x16, x8, x8) cards? And of course about 10 USB 3 ports... I remember using a BTX board from Intel that had decent slots for network and RAID cards in 2007 or so. Yeah... ATX is a dinosaur at this point. Keep the RAM and PSU and PCIe cards somewhat interchangeable, and throw everything else out.
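A rough lane-budget sketch of that wish list (Python; the x4-per-NVMe assumption and the "roughly 20-28 usable CPU lanes on mainstream desktop platforms" figure are my assumptions, not quotes from the comment) shows why it currently pushes you toward HEDT or server parts:

```python
# Rough PCIe lane budget for the wish list above (assumptions, not platform specs):
# - each NVMe drive wants an x4 link
# - SATA and USB are assumed to hang off the chipset, so they're ignored here
# - slot sizes follow the comment: x16 + x8 + x8

nvme_drives = 6
nvme_lanes = nvme_drives * 4          # 24 lanes
slot_lanes = 16 + 8 + 8               # 32 lanes
total_wanted = nvme_lanes + slot_lanes

# Assumed figure: mainstream desktop CPUs expose on the order of
# 20-28 PCIe lanes directly; the rest is multiplexed through the chipset.
consumer_cpu_lanes = 28

print(f"Lanes wanted from slots + NVMe:   {total_wanted}")              # 56
print(f"Typical consumer CPU lanes:       {consumer_cpu_lanes}")
print(f"Shortfall before chipset sharing: {total_wanted - consumer_cpu_lanes}")
```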
The 3000 series was scooped up by scalpers day one. The 3090 launched first, so maybe a couple of days before the SHTF for that card, but I was refreshing like mad the second the 3080 launched. I wanted an FE, and this was before anyone was talking about anything unusual. To be fair, I did have a less desirable (to me) 3080 in my cart that I didn't buy (it was Best Buy, where getting an item in your cart didn't guarantee a purchase during the worst of it), hoping to see the FE show as available. Anyway, every 3080 released in the US market sold out within an hour, and every subsequent release wasn't widely available until the 4000 series was about to come out.
I somehow recall Steve and Gordon having an ATX argument like this before. IIRC it was something about it being a catch-22 in manufacturing: the case won't change unless the mobo changes, and the mobo won't change unless the case changes. There just isn't any organization that pushes for collaboration on the topic, like JEDEC does for memory.
Steve pointing out "people just look for the green and pink ones" regarding all the 3.5mm jacks on a motherboard reminds me of something that really irks me: motherboard makers who decide to make their board "look cool" by making all the audio jacks black. Like gee, thanks a lot. There are already way more ports than anyone needs, and now they aren't color coded anymore so I can't find the ones I need. Now I need to refer to the manual. Ridiculous.
Just add standoffs to the ATX mounting system and offer 15 cm of space behind the board. That would ease the pain on case manufacturers, as they would only need to change the middle of the case design, and it would allow you to add at least another four PCIe slots to every board. You would then need cards to make use of those: dedicated AI support, a dedicated security card to monitor network traffic, etc. But those cards need to be designed first, since demand for extra add-in cards has to exist before it makes sense to maximise the PCIe slot count.
On the mobo config, as someone who has built about 6 or 7 systems, the only, or at least biggest, annoyance I have with building on ATX is that cards at the top are annoying to deal with, especially on systems with tower coolers. The last build was my own system a few months back, and the mobo I picked has the Gen 5 SSD slot at the top, which is right under where my card sits because of its size (7900 XTX), so I have both the SSD and the CMOS battery behind the video card. That's a pain when I need to swap the SSD, or when I had to pull the CMOS battery while setting up EXPO speeds... It also doesn't help that, in order to release the video card from the slot, I had almost no room to move the plastic retention clip because of the tower cooler. I know it's my fault for combining a tower cooler, the 7900 XTX and that mobo, which made building harder, but it still seems like a terrible oversight in mobo design in hindsight.
Gordon Mah Ung is the GOAT of computer stuff. Steve Burke is the only person to ever approach that level. Seeing them working together is like one of those superhero stories where they finally join forces and kick everybody's ass. Steve just needs to adopt Gordon's rants. Nobody can string together profanity like Gordon. It's like music. With F words.
I wish Gordon a swift recovery. Get well, Gordon!!
I love Gordon Get Off My Lawn Mah Ung so much and hope he continues to get better. We all have a much better time when he is around.
Yaaah! Nice to see Gordon back!
He looks fucked up
What happened?
Cancer is the reason Gordon is sick for those who have asked.
So terrific to see Gordon! Not surprised at all he's a Science Officer
Gordon going off on console things coherently is the welfare check I needed to see. He's ok.
lol
Hell yeah, Gordon's back!! And I must say that no one does set presentation like Gordon. This made my year, get well soon Gordon and Happy Holidays!! May Santa bring you a speedy recovery and the finest gas station coffee there is ☕😁
Gordon has been giving us the PC news my whole life; stay well and rest easy, brother, the GOAT has more work to do!
I'm afraid that if ATX goes away, all of the motherboard makers will just solder everything down and we'll end up with giant laptops.
Or they will sell it separately as an accessory. Want another NVMe slot? $40! Want another PCIe slot? $100!
Companies are frothing at the mouth for that opportunity, you just know it
Case specific boards? I could see it.
ATX won't go away if they do that; we'll see what happens with ARM PCs, but even laptops have needed options on components.
It's Apple that solders on SSDs and disempowers end users.
@@RobBCactive Apple is absolutely not the only one doing it. You used to be able to upgrade the CPU in a laptop. Nowadays every laptop CPU is soldered to the board, and many laptops have soldered RAM.
Apple is certainly the worst offender, but once they get away with it other companies start doing the same shit.
Great to see Gordon back!
On the discussion on ATX, there are some use-cases (industrial, file servers etc) that need multiple PCIe slots (interface cards for lab equipment, RS-232 controllers, storage controllers etc). Storage devices will also need a reasonable wattage on the +5V / +3.3V for file servers. Intel needs to consider all of these use-cases when designing a standard. M.2 is fine when you only have
peterwstacey
Where did you buy your board?
Many boards support RS-232, or have pin headers for it.
What storage do you need - 6 NVMe drives? U.2 drives?
What board did you buy? What is the need? Do you need a file cloud app?
Some industrial uses need, e.g., 16 UARTs for a scientific lab monitoring setup. Add in a few NI PCIe cards for RS, and you need a full stack of slots. We're not talking about linking to one UPS; it's more akin to physicists needing to control multiple lasers in a laboratory from the same base clock. Railway junction controllers can also use a lot of UARTs.
The one I found with the most PCIe slots that also had good VRMs was the ASUS Prime Z790-A WiFi. The MSI Z790-A was also a good candidate.
Thank you for mentioning this. I understand creating a motherboard standard to accommodate one large GPU and more M.2 devices, but for file servers and HEDT systems, having multiple graphics cards for VFIO, Ethernet and USB controllers, WiFi, and so on is very important for some people. Perhaps M.2 can replace some less complex or bandwidth-heavy cards, but for storage controllers, M.2 riser cards, or especially GPUs, having full slots would be preferable. This would also need to be accompanied by a greater number of lanes on consumer processors.
So great to see the OG back in the saddle! Get well soon Gordon!
I don't know if ATX needs to go; I think horizontal-mount cases need to come back. Working with gravity instead of against it solves many problems.
One thing worth taking another look at is: does the GPU have to be an add-on card? By which I mean, should we go to the old co-processor model? Have a dedicated space for a GPU socket and DIMM slot-type arrangement.
While a standardised socket is probably too much to ask, maybe the basic concept behind AGP wasn't wrong. The GPU is so fundamental and has so many peculiarities compared to your average PCIe card, not least of which are the power requirements and the sheer weight of the air-cooling solution. A standardised location could allow things like GPU tower-cooling, or maybe combined AIOs that serve both CPU and GPU.
On a slightly unrelated note, one major pet peeve of mine is that you often can't upgrade a case's front-panel I/O. It's a bit ridiculous to have to change your entire case just to upgrade your front USB ports, especially now that the old 5.25" and floppy drive bays are also dead in the consumer market.
Those are great points. Wouldn't a socket on the motherboard enable as much bandwidth as you wanted? All three GPU manufacturers are also making CPUs now, so... two chips (or more?) would seem smart.
Front panel I/O is insanely bad atm. You're right, it should be normal in a mid-range case to just slot in a replacement panel that has the USB4 or whatever you want it to have.
I would totally buy into the GUTS form factor if it ever became a thing! Great to see Gordon and Steve again!
Great to see Gordon again 🥰, get well soon, you got this😊💪
Wishing you well Gordon! We all care for you and so glad to see and hear you! Love your humor most of all! Enjoy your presence!
THIS MADE MY DAY... SEEING GORDON BACK IS THE BEST NEWS FOR ME... EVER... ❤❤❤
Gordon is a fucking baller and I wish him only the best
didn't know he is a Blood and deals drugs.... Baller is not a Basketballer
Any redesign of ATX must focus around the GPU. This shit was NEVER meant to be this big, or hot.
Some sort of sandwich layout makes the most sense, and then you could make it flow-thru or side-intake, and both would outperform the solutions we have now.
I've felt that if GPUs are staying air-cooled, I'd like them above the CPU, so air can be drawn out at the top rear, with cool air mixing in from the front.
Part of that is so CPU AIO failures are less likely to do bonus damage to the most expensive component. A secondary benefit is that the GPU and NVMe slots stop fighting over the same space.
One issue is that Intel has CPUs using ridiculous power levels that need top- or front-mounted AIOs.
It’s great to hear and see Gordon. I hope you’re doing well sir
So glad to see Gordon looking strong. Get well man.
I don't always agree with Gordon but I love listening to him. Get well soon Gordon.
Regarding graphics card placement, putting the graphics card near the outside of the case allows it to pull fresh air for itself, and feed the case a little as well. 1:26:00
Seeing Gordon, and seeing him getting better, is an amazing gift for thousands of people. Here's hoping.
Happy to see Gordon! Plus Adam and Steve!
Another thing I've thought about a lot is more modular GPU design. Standardize a couple of PCB form factors, standardize the cooler mounts, and give the users an option to buy cards and coolers separately. This way we could add extra cooling to a struggling GPU or reuse the cooler and upgrade just the GPU.
Since you're at it, why not change the PCI bracket height? Modern graphics cards are already taller than the PCI spec, so why not work toward a new design that allows for 120mm fans?
@@unintelligiblue That would mess up compatibility with all other cards though. Sourcing half- vs full-height PCI brackets for specific cards is already enough of a mess as it is; adding a third variant wouldn't help.
Also, there's nothing standing in the way of manufacturers just doing it anyway with the brackets we already have - just put 120mm fans on a card, whatever.
You sound great, Gordon. Voice is as strong as ever. So good to see you and hear your insights.
Gordon's points on developing ATX are strong (especially the RAM airflow), except that he glossed over the "it works" comment quickly. As an oldie who has built close to 100 PCs, I remember the old saying at times like this: "If it ain't broke, don't fix it!"
So to fix ATX, make the EVGA-style rotated socket the standard and swap the main M.2 for U.2 where the RAM used to be.
Maybe EDSFF E3 instead of U.2, so the same slots could later be used for both SSDs and smaller add-in cards. Would require a new case standard though.
Great to hear Gordon's voice 👍, and Steve chuckling at his comments
Hello to all three bosses, especially to Gordon. All the best wishes, Gordon.
Gordon, praying for you brother , stay strong , never give up ! You are one my favorite tech tubers 😎
They should move the CPU socket lower and put a 3-4 slot space for the GPU on top. That way you can exhaust from the top or bottom of the case with the GPU fans. No more janky airflow shite.
Best wishes to you Gordon. Out here killing it as per, love to see it!
1:25:55 All but the top slot are x8 slots or less, so you can't put it in any other slot than the top slot (assuming a gaming use case). If you go to something like a dual-socket server board, then you might have some choices...
Thank you for the podcast version! With these hosts this is going to be a great episode!
It's always great to see Gordon! Hang in there buddy!
Sending some love to Gordon, get well soon! ❤🔥❤🔥
Please more Gordon and Steve duo podcast. They are like the anger translators for the state of the pc/console gaming space.
When it comes to redesigning ATX layouts, I like the idea of a socketable GPU. Let us buy our own GDDR6 so we can get the capacity we want and avoid the headache of running out of vram. Let us move our existing GPU coolers to a new socketed GPU. This could massively cut the costs of a GPU upgrade to just the die package and substrate itself instead of having a closet full of obsolete GPUs with huge coolers and unused vram on them.
It's odd that we have so much freedom when it comes to CPU, RAM, PSU, cooling, storage and motherboard, but have dictators telling us what our options are when it comes to the GPU.
Well, socketable VRAM isn't going to happen; that stuff needs to be soldered down next to the chip to get the necessary signal integrity. There's also the problem of power delivery, since board makers would have to spec their boards to potentially have an OC'd 4090 in the socket, when the end user is only going to install a 4060. That's either going to make for really expensive boards or a really confusing buyer's experience when some boards only support certain power limits.
@@cosmic_cupcake Perhaps for lower end GPUs or less memory bandwidth intensive workloads, a new GDDR standard would allow for socketed memory.
As for power delivery, there could be different classes of boards for low, medium, and high power consumption GPUs, similar to what we have for coolers and motherboards. Additionally, these GPU upgrades would also allow for cross generation or perhaps even cross-vendor GPU swaps. Then the upgrade options would go beyond even CPUs
Gordon does bring up a good point about ATX.
The ATX standard has been very stagnant because "it's just cheaper to leave it as it is, even though things can be improved."
I think the biggest contributing factors are the PSU and GPU; they are the biggest parts and they keep on getting bigger.
Of course Nvidia cares about gamers; I mean, do PCWorld and Gamers Nexus care about their viewers?
It's not the type of care like your family or friends care about you. Nvidia or AMD or Steve or Gordon aren't going to be there for you for emotional support. But they do care that you watch their content and like and subscribe, and in Nvidia's case that you buy their products.
It's just a transactional relationship, and that's all it needs to be. What can you do for me, and maybe we agree on the value. That's it. So I would say Nvidia and AMD and even Intel care enough about the gaming industry to invest in R&D, to make cool products (tools) we can use to play video games, and to make workstation products for developers and technologies.
Touché. And 80% of gamers are happy to enter into that transaction.
@@Wobbothe3rd Pretty much. Life's too short. Some people work hard and have some time to spare here and there; they want to play their games, and they do what they need to get immersed in some escapism.
I've only seen Gordon a few times, mostly in guest appearances of channels like Gamers Nexus but he seemed like a really cool dude. Very knowledgeable and passionate.
Hope you get well soon!
Graphics cards would benefit from a better orientation and position.
Either parallel to the top of the case and near to it or oriented vertically with fans facing away from the rest of the motherboard.
We could even bring back ducting if the GPU can be in a better place.
Sol System
Make a video where you do them !
Gamers need them in front of the window now, or the RGB strips on them.
I saw a ducted case this week, I think on KitGuru. It looked a bit clunky, but if the motherboard/GPU/Gen 5 NVMe/RAM layout were redone to be cheaper, use less material, and be easier to cool, it would also be easy to take advantage of a dual-, tri- or quad-compartment design and ducting if that's better. Which would be easily demonstrable.
Bless you Gordon for advocating advancement beyond ATX. I used Macs for a time, and when I came back around to building a PC, I felt the heaviness of these stagnant standards as I endeavored to build what I thought would be airflow-optimized.
I ended up creating a vertically oriented flow-through design, with a Cooler Master case with two 200mm fans in the bottom, and the graphics card on a riser, also vertically mounted - bringing every fin in the case into vertical orientation, and the RAM in line with the airflow too.
As I imagined how I would engineer a better standardized system, I think in terms of large fans that each drive an air tunnel dedicated to a system component.
Probably 140mm for ease of convention. But 200mm or more would be even better.
Essentially, the I/O continues to be standardized as front and rear, while airflow travels upwards from the bottom, and components become long modules that span the height of the case.
So a power supply, a GPU, and a motherboard all exist in modular tunnels of the same size. Perhaps the motherboard spans the backplate as the interface for these standardized modules. Perhaps the motherboard is accessed on both sides by 3 tunnels, for a total of 6 dedicated modules. Perhaps some modules are half length and others are full length, with 12 universal bus connections spread across both sides, like PCIe slots: 2 per full-length module and 1 per half-length module.
Perhaps one side of the central motherboard would have specialized connectors, while the other has universal PCIe-like connectors - the power supply having a specialized standard, and the RAM/hard drives having another, sharing the same module on the side opposite the CPU. This backside of specialized ports would allow for the slow creep of new standards without moving the whole thing forward all or nothing. A laptop with only USB-C is actually less desirable than one with mixed ports; having two sides allows one side to be old while the other is new, and I feel that gives the standard room to grow.
But maybe that's a bit timid; perhaps it would be best to make a PCIe-like interface that can accommodate RAM, power delivery and SSDs, and just have vertically mounted rows of them for easy module swapping.
I imagine certain sectors would also find this approach preferable for its plug-and-play simplicity - like if it were being installed in the wing of a plane.
Not sure why we need to get rid of ATX just because it has been around a long time. That means it was engineered right from the start and has been able to serve our needs up until now - and probably for many more years. If we are going to do this, we need to engineer for additional PCIe lanes and accommodate multiple fat cards - which will also require adjusting case designs. We need a spec that can handle the next 30 years. But we also really need to look at multiple variations for larger and smaller needs - like mATX, ITX and EATX. Granted, we already have all those, so a new spec would need to be very compelling to make it worth doing. A modular design does sound pretty cool as long as the interconnects don't become the bottlenecks of innovation.
Nice to see you up and about Gordon. I wish nothing but the best for you.
Great to see you back Gordon!
Many of us grew up with Gordon and would purchase pc mag because he was some of our favorite reviewer. I stopped getting pc mags shortly after Gordon went from senior reviewer to chief. His words carry a lot of weight if you grew up with him as a brand name in the pc world.
Great to see you 3! :)
What a fun podcast! One of the best ever. More with the tech trio.
Why the need for a new ATX form factor? You could use an existing larger form factor to replace ATX, right? It would just make things even more expensive for manufacturing and so on. For video cards you could get risers and just increase the width of the case while the length stays the same.
Great to see you back Gordon! Wishing you the best!
So glad to see Gordon again 🎉😊
This is like seeing the Beatles getting back together with Jimi Hendrix as a guest. Wow!
Have some love and good vibes from a stranger, Gordon and the guys ❤ A good conversation with some interesting points and some light-hearted humour.
Woohoo Gordon is back!!!! Hoping for a speedy recovery!
Could you just cut the ATX board into different cards that are connected together? Perhaps the CPU on one card and the memory slots on another side of that single card? Connectors on one card, the sound card on its own, and a dedicated GPU and storage card on their own. The CPU and GPU cards could be horizontal and properly supported for heavier cooling solutions.
Hell yeah Gordon! Glad you are back and hope your treatments go well! Hang in there, we miss you!
I have to say, going from the ease of SATA-connected drives to how permanent M.2 drives feel was jarring, and I can see how cooling NVMe is going to become an increasing issue with graphics cards and storage all generating more and more heat.
I have actually thought about this for a while. Here is how I would change ATX. I would basically turn everything into an add-in card. The only things the base motherboard would be responsible for are PCIe lanes, booting, and power delivery (ATX12VO). I would flip the CPU vertical, similar to how Intel tried with the Pentium IIIs, though it would run parallel to the PCIe slots for better cooling. I would put it three PCIe slot widths from the top of the board; the CPU card would have 3 PCIe slots reserved for it, and then there would be 3 more PCIe slot widths below it. This would make the overall dimensions of the board similar to mini-ITX boards, as the rear length of the board would only be 18mm x 9 slots for a total of 162mm on the back. Once you add a little space for the screw holes, you end up very similar. A major benefit of this kind of design is that the CPU manufacturer could do its own cooling. They already do this occasionally, with some CPUs shipping with their own cooler, but in this standard it would be expected. It also shouldn't be foreign to AMD or Intel, as they already design coolers for their GPUs, and it would allow for direct-die cooling without an IHS getting in the way.
Why is the CPU in the middle in my design? It allows for the shortest trace lengths to the CPU, hopefully helping keep signal integrity. Why would the CPU get 3 dedicated slots of its own? Cooling is the main reason, and it would ensure sufficient lanes for communication. We would get up to 48 lanes on consumer systems, which would be awesome. To this point, I would be supportive of a new PCIe slot that allows for more power delivery and more PCIe lanes. The one on the Mac Pro from 2019 allowed up to 475W of power delivery, so I know it is possible.
Other changes would come with this. RAM: I feel that RAM will be packaged with the CPU before much longer. Apple is already doing this, and given the density of technologies like HBM I feel it is a matter of time before AMD and Intel follow suit. I do think some customers will want expandable memory though, and for these users CAMM would be the standard, mounted directly on the back of the CPU card. This would literally put the memory as close to the CPU as possible - millimeters away. Performance should be great. You could have 2 channels in a left/right configuration or 4 channels in a left/right/up/down configuration.
UEFI would also change. It would need to be something more similar to libreboot, something simply in charge of initializing hardware. This has some pretty major implications: ideally you would never need to change your motherboard when changing CPUs. The slot would be standard, so you could move from generation to generation with no issue, and as long as you didn't want to update to a newer PCIe standard you wouldn't need to. This would, however, mean that either the standard would need to be so agnostic that the process for initializing hardware never changes from generation to generation, or you would need to be able to update the onboard libreboot frequently to add support for new CPUs.
The next major change would be your basic IO. First, physically: as we are covering the entire back of the PC in slots, there isn't much space for IO on the board. The IO from the motherboard would need to be kept lower than 5-6 mm, as is everything else on the motherboard. As everything coming off the PC would be PCIe lanes, you would need compatible standards as well, so things like USB-C ports with USB4 and Thunderbolt would kind of need to be standard. The majority of the IO, however, I would break out into its own add-in card(s). This would leave it up to the user how they wanted to spec out their computer for IO. I would expect current motherboard manufacturers to sell their own IO cards with things like WiFi, Ethernet, and USB, and users could also purchase individual cards if they wanted higher-end IO, just like they already do. The important thing is that it would be up to the user to decide what IO they wanted or needed for their setup.
The last major change here is storage. Since we are switching to an all-PCIe motherboard, the number of PCIe lanes actually becomes an issue. I would hope we could add more lanes coming from the CPU, but that only goes so far. To solve this I would expect some add-in cards to be PCIe mux (hopefully that is the right term) cards that would take the incoming up-to-16 PCIe lanes and either split them out to 16 different storage devices or, if the chip was sophisticated (and likely expensive) enough, share that PCIe bandwidth between however many storage devices it wanted.
This is my idea. I hope it has sparked some ideas of your own.
Edit: A benefit of this design is that it would be much easier to do maintenance compared to what you were describing. All of the memory and storage would be kept on top of the motherboard. This would keep case design simpler, and it would still be deployable in something like a rack-mount case. Also, I described the board here in its most compact form factor, as that seemed to be one of the main desires. A motherboard manufacturer could make a bigger board relatively easily: simply add space between each PCIe slot to make them dual-slot in their spacing. A lot of add-in cards could probably use this space anyway for things like cooling, or if they were something like the PCIe mux card with a lot of cables coming off the board. Doing this would increase the total width on the back to about 270mm. It would only be about an inch shorter than the current ATX standard, but it would have a lot of advantages in cooling, expandability, upgradability, and sustainability.
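A quick sanity check of those dimensions (Python sketch; the 18mm slot pitch and the 9-slot total come from the comment, while the assumption that only the 6 non-CPU slots move to dual spacing is my reading of how the ~270mm figure is reached):

```python
# Reconstructing the proposed rear widths from the comment's own numbers.
slot_pitch = 18          # mm per slot position, as stated above (an assumed round figure)
total_slots = 9          # 3 reserved for the CPU card + 6 expansion slots

# Compact layout: every slot at single spacing.
compact_width = total_slots * slot_pitch
print(f"Compact rear width: {compact_width} mm")       # 162 mm

# Roomier layout: assumption - the 6 expansion slots move to dual-slot
# spacing while the CPU card keeps its 3 single-spaced positions.
cpu_slots, expansion_slots = 3, 6
roomy_width = cpu_slots * slot_pitch + expansion_slots * (2 * slot_pitch)
print(f"Dual-spaced rear width: {roomy_width} mm")      # 270 mm, matching the ~270 mm above
```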
@matthewsmith3817 Yes, lane bifurcation is part of my explanation and is honestly the cheaper way. I have one of those ASUS Hyper M.2 cards in my storage server; as far as M.2 cards go it is a much better way of doing it. There are cheaper options that forgo the massive aluminum heatsink if you don't need the top of the line. It can go further, though. Apex Storage has an add-in card that supports up to 21 M.2 drives. They do this by having a chip on the board that basically acts like a network switch, but for PCIe lanes, and allows up to 84 PCIe lanes through this approach. They are limited to 16 lanes of bandwidth for anything leaving the card, just like you would be limited by your uplink speed when using the internet, but there is a lot you can do without hitting that limit, and even if you do, the chance of hitting all of the drives simultaneously is rare enough that for most people it would be a non-issue.
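For a sense of how far a switch like that oversubscribes its uplink, here's a small sketch (Python; the 21-drive, 84-lane, x16-uplink figures come from the comment above, while the ~2 GB/s of usable throughput per lane is my rough PCIe 4.0 assumption):

```python
# How oversubscribed is a 21-drive M.2 card behind a single x16 uplink?
drives = 21
lanes_per_drive = 4
uplink_lanes = 16
gbps_per_lane = 2.0   # rough usable GB/s per PCIe 4.0 lane (assumption)

downstream_lanes = drives * lanes_per_drive          # 84 lanes on the card
downstream_bw = downstream_lanes * gbps_per_lane     # if every drive ran flat out
uplink_bw = uplink_lanes * gbps_per_lane             # what can actually leave the card

print(f"Downstream lanes: {downstream_lanes}")
print(f"Peak demand if all drives saturate: {downstream_bw:.0f} GB/s")
print(f"Uplink ceiling: {uplink_bw:.0f} GB/s")
print(f"Oversubscription ratio: {downstream_bw / uplink_bw:.2f}x")
# Most drives sit idle most of the time, which is why the comment argues
# the 5.25x oversubscription is a non-issue for typical storage-server use.
```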
Pretty similar to my idea.
Maybe the PCIe connections can be overbuilt for future generations, then tested against them by the manufacturers later when their specs are finalized.
Steve Burke's question at 4:30, "what does Nvidia have" - I came into the broadcast 12 minutes or so late... Nvidia offers a whole product in various forms, with its utility value up for debate. mb
I don't remember Steve ever doing a podcast, I'm very excited!
he was on 3 weeks ago?
@@NulJern Didn't see it, thanks for telling me
Nice to see you on Gordon....get well soon!
I have a PowerMac G5 from 2003. It has separate chambers for GPU/drives and CPU, fan ducts and dedicated PCI fingers for additional GPU power. 20 years old but still ahead of modern PCs.
Buying used is the way to get more for your money, but it still makes sense to be able to get something good on a low budget. I would think that if AMD and/or Intel released powerful enough APUs (like 16 CUs), they could fill the really low-budget market, since at that point you aren't getting much of a GPU performance improvement from a dedicated card.
Steve being a pokemon otaku makes a lot of sense
Good to see you, Gordon. Love you, take care.
As someone currently using an inverted-motherboard case, the Silverstone RL08, any future motherboard spec should 100% put the GPU on top of the motherboard. You could even mount it horizontally and include case features that jut out to hold the bottom of the GPU board, rather than hang the GPU vertically, which limits the kinds of coolers you can put on it. Then you can mount fans on top of the case to blow directly into the GPU, or even mount a tower cooler on the GPU with fans front and back, just like how tower CPU coolers work.
Good to see you, Gordon. I miss ya welcome back 🙆🏾♂️
They tried to change with BTX, and even with Dell opting to use it for years, it still didn't change.
Great stuff, guys. Very best wishes to Gordon.
Great to see Gordon back on the show.
I'd like to see changes to motherboard connections; why are there so many? The fan connector is different from the USB header, then we have power connectors for SATA, then we have front panel connectors, which are little tiny things that don't have the same connection layout because motherboard manufacturers want to do things their own way.
I'd love to see changes to ATX but maybe form factors need to be open source to get a consensus on future improvements
Gordon, best wishes! Hope you're doing well, love to see you back.
Green cone of shame! I was thinking about the same thing when people started talking about ducts.
reposition the RAM and introduce atx12vo for sure!
One big improvement would be to have the entire IO panel be a custom slot, making the IO independent of the motherboard. It would look like a 20-100 pin interface that you would slide a rectangular panel into that has all of your display, USB, mouse/keyboard... the entire IO of the computer.
I see this as being a higher priority than having usb or audio on the front of the case...
But why though
@@noname-gp6hk Think about it.
To be fully honest, I never noticed any difference between turning ray tracing and tessellation on and off, so to me they feel like "lower performance buttons". If I were rich, I might care about names on paper, but having to work for the money I throw at Nvidia, I strictly want higher framerates for the longest time possible before upgrading. Everything else is just marketing jazz, which I couldn't care less about.
Edit: In terms of hating changes and not wanting ATX to change, I have a middle-of-the-road opinion:
First of all, if it ain't broke, don't bloody fix it. I'm sick and tired of what we call in German a "worse-provement", where people want to make something "better" but it ends up a complete mess.
Secondly, if you change something, then make it a proper, well-thought-out change and fully standardise it, so that stuff is compatible in the future. Everything should be as compatible and interchangeable as possible, so that we are able to use perfectly functional hardware for as long as possible. To give a silly example, which doesn't entirely make sense but explains my mindset on these things:
There was no way I would have upgraded to DDR4 in 2017 if it hadn't been necessary. Yes, with DDR3 I might have theoretically lost 15% of CPU performance, for example, but as I play heavily GPU-intensive games, I simply don't care about maximum CPU performance. Instead, I now have working DDR3 memory sitting in a box, which I got in 2010 and would happily still be using today, but I don't get to make that decision.
That's the main problem. Customers don't get to decide what they get and how they use the parts in combination, which at its core is one of the main reasons most PC people don't want a console. They want a system where every major part is customer-upgradeable and you are not forced to buy an entire new mediocre box every x years.
If ATX is to go away, that is completely fine in my opinion, on the condition that it is replaced by a good standard, so that I can buy standardised cases, motherboards, etc. which are compatible, without having to hunt for niche-specific products to get a working system.
So, my thought has been: boards the same size as ATX, NVMe slots placed on the back, extra-height standoffs, with a duct that grabs some of the airflow to direct over the back.
Move the RAM to be in line with the PCIe slots, with channel A above the CPU and channel B below the CPU (where the top NVMe slot used to go).
You could use the space where the RAM used to go for more NVMe, but we'd need more PCIe lanes for that to matter.
For the RAM placement to make a major difference, you'd need two memory controllers on opposite sides of the socket.
ATX replacement layout idea.
CPU socket lies flat; looking into a PC case it would sit where a GPU does now, but with the chip facing up.
PCIe slot underneath, vertical GPU.
RAM under the CPU, shortest traces. Could use something like CAMM.
12VO power supply. With the RAM on the back of the board, there would be a straight line from the power connector to the VRM to the CPU.
U.2 or something similar for SSDs; adapters for M.2 drives would smooth the transition.
Overall the board would be ~ITX width but longer front to back. By moving M.2 off the board you save real estate there. Toss motherboard audio and just bundle a decent USB DAC instead.
Keeping CPU orientation the way it is and lengthening case standoffs would limit cooler height for slimmer cases.
Commenting to agree with this one; it deserves more than just a like.
I disagree with removing onboard audio, as well as the seeming lack of secondary or tertiary PCIe slots.
Having one or two onboard M.2 slots as well would be a good idea for more compact PCs, and possibly cheaper considering the cost of U.2 cables.
Get well Gordon!
58:20
THANK YOU. The ultra-cramped 1990s design simply does not make sense today. These HUGE graphics cards throw a monkey wrench into every motherboard and case today. Everything is *cramped*, and it doesn't make sense.
But be careful. ATX is like Assad. Everyone who said he must go ended up going first 💀
Assad Technology Extended or Assad Terminator Executioner?
Gordon! Love you, get better.
Steve, how's that Puget Systems ProArt Z790 board holding up? Are you managing to keep it cool enough? Water cooled? And is the Thunderbolt 4 working as it should?
1:03:05 IMHO, pho the ATX soup! I want some motherboards that have decent IO without having to pay for server parts or elite-level 4000 USD CPUs and 1000 USD motherboards. Remember, we pay close to 20% import tax on everything in Europe. Is it so much to ask to connect 4-6 NVMe drives, 6-10 SATA drives and 2-3 PCIe cards (x16, x8, x8)? And of course about 10 USB 3 ports... I remember using a BTX board from Intel that had decent slots for network and RAID cards in 2007 or so. Yeah... ATX is a dinosaur at this point. Keep the RAM, PSU and PCIe cards somewhat interchangeable, and throw everything else out.
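Rough lane math on that wish list (a hedged sketch: the mainstream and HEDT CPU lane counts are my ballpark assumptions, and SATA/USB are assumed to hang off the chipset):

```python
# Rough PCIe lane budget for the setup described above.
nvme_drives = 6
lanes_per_nvme = 4            # 6 NVMe drives at x4 each
slot_lanes = [16, 8, 8]       # 2-3 full-size slots at x16/x8/x8
# SATA and USB usually come from the chipset, so they're left out of the CPU-lane math.

lanes_needed = nvme_drives * lanes_per_nvme + sum(slot_lanes)

consumer_cpu_lanes = 24   # rough usable CPU lane count on current mainstream desktop sockets
hedt_cpu_lanes = 48       # ballpark for HEDT/workstation parts

print(f"Lanes wanted from the CPU: {lanes_needed}")           # 56
print(f"Mainstream desktop offers: ~{consumer_cpu_lanes}")
print(f"HEDT/workstation offers:  ~{hedt_cpu_lanes}+")
# 56 is well past what mainstream sockets provide, which is why this lands in HEDT/server territory today.
```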
The 3000 series was scooped up by scalpers day one. The 3090 launched first, so maybe a couple of days before the SHTF for that card, but I was refreshing like mad from the second the 3080 launched. I wanted an FE, and this was before anyone was talking about anything unusual. To be fair, I did have a less desirable (to me) 3080 in my cart that I didn't buy, hoping to see the FE show as available (it was Best Buy, where getting an item in your cart didn't guarantee a purchase during the worst of it). Anyway, every 3080 released in the US market sold out within an hour, and every subsequent release wasn't widely available until the 4000 series was about to come out.
Just to confirm, a premiere isn't live?
I somehow recall Steve and Gordon having an ATX argument like this before. IIRC, something about it being a catch-22 in manufacturing: cases won't change unless motherboards change, and motherboards won't change unless cases change. There just isn't any organization that pushes for collaboration on the topic, like JEDEC does for memory.
Steve is a GOAT!!!! a technological scientist who sticks to the facts!
Get well soon Gordon ❤
Long term, things are probably heading more in the direction of Apple's SoCs.
RAM is reaching speeds where it will need to be closer to the CPU to work.
Steve pointing out "people just look for the green and pink ones" regarding all the 3.5mm jacks on a motherboard reminds me of something that really irks me: motherboard makers who decide to make their board "look cool" by making all the audio jacks black. Like gee, thanks a lot. There are already way more ports than anyone needs, and now they aren't color coded anymore, so I can't find the ones I need. Now I have to refer to the manual. Ridiculous.
USB ports too!
Gordon and Steve?! 💚💚💚
The power connector used on Intel's S1600JP4 is a good example: PSU makers would literally just save on wire costs, and motherboards would gain some space savings.
Best wishes to Gordon!
What a great Full Nerd, I think it's one of the best I've seen.
Great to see Gordon again. 💪
Just add standoffs to the ATX mounting system and offer 15 cm of space behind the board. That would ease the pain on case manufacturers, as they'd only need to change the middle of the case design, and it would let you add at least another 4 PCIe slots to every board. You then need boards to make use of those: dedicated AI support, a dedicated security card to monitor network traffic, etc. But you need the cards designed first, as demand for adding in more cards has to exist before it's worth maximising the PCIe slot count.
On the mobo config, as someone who has built about 6 or 7 systems, the only, or at least biggest, annoyance I have with building on ATX is that cards at the top are annoying to deal with, especially on systems with tower coolers.
The last build was my own system a few months back, and the mobo I picked has the Gen 5 SSD slot at the top, which ends up under my card because of its size (7900 XTX), so I have both the SSD and the CMOS battery behind the video card. That's a pain when I need to swap the SSD, or when I had to take out the CMOS battery while setting up EXPO speeds...
It also doesn't help that, to release the video card from the slot, I had almost no room to move the plastic retention latch because of the tower cooler. I know it's my fault for picking a tower cooler, the 7900 XTX, and this mobo, which made building harder, but it still seems like a terrible oversight in mobo design in hindsight.
Gordon Mah Ung is the GOAT of computer stuff. Steve Burke is the only person to ever approach that level. Seeing them working together is like one of those superhero stories where they finally join forces and kick everybody's ass. Steve just needs to adopt Gordon's rants. Nobody can string together profanity like Gordon. It's like music. With F words.
This is the longest I've ever heard of Steve sitting down.