Yes, It’s Real: PCI Express x32

  • Published: 6 May 2024
  • Check out the MSI MPG GUNGNIR 300R AIRFLOW at lmg.gg/zCGkN
    You've heard of PCI Express x16, but did you know there's such a thing as x32?
    Leave a reply with your requests for future episodes.
    ► GET MERCH: lttstore.com
    ► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
    ► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
    FOLLOW US ELSEWHERE
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech
  • Science

Comments • 496

  • @marcosousa336
    @marcosousa336 2 месяца назад +882

    Scooby Doo and the gang unmasking this ghost as SLI/Crossfire

    • @tech-wondo4273
      @tech-wondo4273 2 месяца назад +24

      Ikr? idk why it had to be a whole vid

    • @BlueEyedVibeChecker
      @BlueEyedVibeChecker 2 месяца назад +8

      @tech-wondo4273 "Money! Ak yakyakyakyak"

    • @prawny12009
      @prawny12009 2 месяца назад +4

      Aren't those limited to x8 x8?

    • @SterkeYerke5555
      @SterkeYerke5555 2 месяца назад +9

      @@prawny12009 Not necessarily. It depends on your motherboard

    • @brovid-19
      @brovid-19 2 месяца назад +2

      I award you seven ahyuks and a guffaw.

  • @anowl6370
    @anowl6370 2 месяца назад +143

    PCI-E does support x32 single-link devices, even if no single physical socket carries all the lanes. It is specified in the PCI Express capability structure (see the register sketch at the end of this thread).
    There is also x12

    • @steelwolf411
      @steelwolf411 2 месяца назад +18

      Also x24 in some high end stuff.

    • @shanent5793
      @shanent5793 2 месяца назад +7

      No one ever used it, hence the removal from the latest revision

    • @steelwolf411
      @steelwolf411 2 месяца назад +9

      @@shanent5793 It was used in Cisco UCS for some VICs as well as other things. Also I believe it was used by IBM for specific cryptography accelerators.

    • @cameramaker
      @cameramaker 2 месяца назад +10

      @@steelwolf411 there is no x24 in the spec. Some PHI MXM cards claimed x24 but it was running in either 2x12 / 3x8 / 6x4 mode.

    • @bootchoo96
      @bootchoo96 2 месяца назад +2

      I'm just waiting on x64
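
    The link widths mentioned in this thread (x12, x16, x32) are encoded in the Maximum Link Width field, bits 9:4 of the PCIe Link Capabilities register; per the replies above, the x12 and x32 encodings were dropped in the 6.0 revision. A minimal decoding sketch in Python, with a made-up register value just for illustration:

      # Maximum Link Width encodings from the PCIe Link Capabilities register
      # (bits 9:4). The x12 and x32 values exist only in pre-6.0 revisions.
      LINK_WIDTHS = {
          0x01: "x1", 0x02: "x2", 0x04: "x4", 0x08: "x8",
          0x0C: "x12", 0x10: "x16", 0x20: "x32",
      }

      def max_link_width(link_capabilities: int) -> str:
          """Extract and decode bits 9:4 of the 32-bit register value."""
          encoded = (link_capabilities >> 4) & 0x3F
          return LINK_WIDTHS.get(encoded, f"reserved (0x{encoded:02X})")

      print(max_link_width(0x00477103))  # bits 9:4 = 0x10, prints "x16"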

  • @Vade420
    @Vade420 2 месяца назад +417

    Thank you Mr. Handsome Mustache man

    • @drummerdoingstuff5020
      @drummerdoingstuff5020 2 месяца назад +4

      Grinder called…Jk😂

    • @realfoggy
      @realfoggy 2 месяца назад +7

      His wife would agree

    • @ImMadHD
      @ImMadHD 2 месяца назад +4

      He really is so cute 🥰

    • @Blox117
      @Blox117 2 месяца назад +4

      i was thinking of the other guy when you said mustache man

    • @Fracomusica
      @Fracomusica 2 месяца назад

      Lmao

  • @JohnneyleeRollins
    @JohnneyleeRollins 2 месяца назад +1017

    x16 is all you’ll ever need - bill gates, probably

    • @buff9267
      @buff9267 2 месяца назад +24

      turns out bill is lame

    • @darrengreen7906
      @darrengreen7906 2 месяца назад +3

      hahaha

    • @Nightykk
      @Nightykk 2 месяца назад +36

      Based on a quote he never said - possibly, probably.

    • @lexecomplexe4083
      @lexecomplexe4083 2 месяца назад +9

      PCI didn't even exist yet, let alone PCIe.

    • @chovekb
      @chovekb 2 месяца назад +1

      Sure, like 16 x x16 it's like a 16 core CPU LOOOL

  • @marcosousa336
    @marcosousa336 2 месяца назад +124

    This just sounds like SLI/Crossfire with extra steps

    • @Arctic_silverstreak
      @Arctic_silverstreak 2 месяца назад +14

      Well, SLI is used for synchronizing GPUs, while this is just a fancy name/way to aggregate high-speed network cards

    • @StrokeMahEgo
      @StrokeMahEgo 2 месяца назад +7

      Don't forget nvlink haha

  • @dennisfahey2379
    @dennisfahey2379 2 месяца назад +35

    x32 and beyond are very common in ultra-high-end modular servers. If you look at the server manufacturer Trenton Systems, they have massive PCI-E array capability. Of course it's still PCI-E, a migration from PCI, and that has its bottlenecks, but when you want parallelism they do it very well. (Not affiliated; just impressed)

    • @shanent5793
      @shanent5793 2 месяца назад +2

      You are mistaken, there has never been an implementation of x32, which is why it was deleted from PCIe 6.0

    • @sakaraist
      @sakaraist 2 месяца назад +2

      @@shanent5793 Weird, then why do I have an x32 NIC on my desk? It just wasn't used on consumer boards; it very much exists in the commercial space. You often find them as riser cards; x48 is the highest I've personally dealt with.
      I've also got an x32 FPGA dev kit sitting at my bench.

    • @shanent5793
      @shanent5793 2 месяца назад +3

      @@sakaraist If they were referring to the total number of lanes, then this wouldn't be noteworthy because RYZEN Threadripper consumer boards have had more than 32 lanes for several years already, but they're never referred to as PCIe x32 devices. Riser cards are just glue, not end devices and are out of scope.
      In the case of NICs, they may have two x16 ports that can be connected to different sockets in a system to save inter-socket bandwidth, but PCIe will still treat them as two separate devices.
      FPGAs could of course be programmed to implement PCIe x32, but if you want to use the hardened PCIe IP it will still be x16.
      If your devices have actually negotiated a PCIe x32 link at the hardware level, I would love to know the part numbers because even PCI-SIG doesn't know about them and they're definitely not off-the-shelf

    • @jnharton
      @jnharton Месяц назад

      @@shanent5793 This needs more upvotes, honestly. Just because the slot can carry 32 lanes doesn't mean there must be any true 32-lane devices.
      It makes perfect sense that you might make a single board that is a carrier for more than one device and use a single slot, especially in an industrial context where one larger slot might be better than a bunch of extra slots and little cards everywhere.
      Kind of a throwback to the days of large card-edge connectors for parallel buses, only using each signal line as a separate communications lane.

    • @robertmitchell5019
      @robertmitchell5019 Месяц назад +1

      @@shanent5793 Wow did you watch the same video I did? @ 3:30 they show Nvidia cards using X32. (off the shelf BTW). They call it Infiniband because NVIDIA. And yes I know Infiniband is the communication standard that uses the PCIE x32 specs.. Just like NVME is the communication standard that uses PCIE x4.
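
    Whether any given card actually negotiated a single x32 link, as debated above, is something you can check from the OS side: the Linux PCI core exposes each function's maximum and currently negotiated link width in sysfs. A small sketch; the device address is a placeholder, substitute one from lspci -D.

      from pathlib import Path

      dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # placeholder BDF

      for attr in ("max_link_width", "current_link_width",
                   "max_link_speed", "current_link_speed"):
          node = dev / attr
          if node.exists():
              print(f"{attr}: {node.read_text().strip()}")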

  • @PrairieDad
    @PrairieDad 2 месяца назад +100

    Riley Yoda needs to be a regular thing.

  • @dan_loup
    @dan_loup 2 месяца назад +67

    A pretty good slot to put in your Virtua fighter cartridge

  • @stevenneaves8079
    @stevenneaves8079 2 месяца назад +21

    Back in the day X32 meant something different to us entry level audio production folks 😂

    • @ToastyMozart
      @ToastyMozart 2 месяца назад +4

      And Sega fans.

    • @SuperS05
      @SuperS05 Месяц назад

      I still use an behringer x32♥️

  • @CoopersHyper
    @CoopersHyper 2 месяца назад +179

    1:30 the binary says: "Robert was herrr" 🤓

    • @robertm1112
      @robertm1112 2 месяца назад +3

      nice

    • @DodgerX
      @DodgerX 2 месяца назад

      Hey robert ​@@robertm1112

    • @Trident_Euclid
      @Trident_Euclid 2 месяца назад +3

      🤓

    • @carabooseOG
      @carabooseOG 2 месяца назад +1

      How do you have that much free time?

    • @CoopersHyper
      @CoopersHyper 2 месяца назад

      @@carabooseOG i dont lol, i just put it in a binary to text translator lol
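
    The "binary to text translator" trick is just splitting the bit string into 8-bit chunks and decoding each as ASCII. A minimal sketch; the bit string below is a stand-in, not the one from the video:

      bits = "0100100001101001"  # hypothetical example, decodes to "Hi"
      chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
      print("".join(chars))      # -> Hi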

  • @2muchjpop
    @2muchjpop 2 месяца назад +119

    SLI and crossfire failed back then, but with modern high speed interconnect tech, I think we can bring it back.

    • @zozodj2r
      @zozodj2r 2 месяца назад +48

      When it comes to gaming, it wasn't about the interconnect. It was about the sync between the two which had frame lag.

    • @christophermullins7163
      @christophermullins7163 2 месяца назад +17

      SLI or Crossfire will never make sense... it didn't back then, as it's difficult to get working, much less working smoothly. Best case is to have all chips and memory as close to one another as physically possible. Considering we regularly see a +30-70% uplift in GPUs just 1.5 years later, you're better off throwing out your old flagship and getting the new one than trying to mate 2 together. It will use more than 2x the power and deliver much less than 2x the performance. I get that this was probably mostly a joke, but I am just here to bring the real world to the discussion.

    • @illustriouschin
      @illustriouschin 2 месяца назад +10

      Marketing just needs a way to spin it and we'll be buying 2-4 cards again for no reason.

    • @WilliamNeacy
      @WilliamNeacy 2 месяца назад +5

      Yes, I'm just not happy buying one $1000+ GPU. I want to have to buy multiple $1000+ GPU's!

    • @shanthoshravi5073
      @shanthoshravi5073 2 месяца назад +6

      Nvidia would much rather you buy a 1200 dollar 4080 than two 300 dollar 4060s

  • @JBATahoe
    @JBATahoe 2 месяца назад +23

    0:18 You guys worked really hard on this shot; you probably should’ve stayed on it longer. 😂

  • @sshuggi
    @sshuggi 2 месяца назад +25

    That just sounds like SLI with extra steps.

    • @TheHammerGuy94
      @TheHammerGuy94 2 месяца назад

      Without the proprietary connector

    • @eliadbu
      @eliadbu 2 месяца назад

      Why do people keep comparing it to SLI? It has nothing to do with SLI. It is more like link aggregation.

    • @TheHammerGuy94
      @TheHammerGuy94 Месяц назад

      @@eliadbu SLI needs both the PCIe lanes and an extra SLI bridge to enable faster data transfer between the cards.
      But that was from the time when PCIe wasn't fast enough for Nvidia's standards.
      Now, with PCIe 4 and 5 being as fast as they are, we mostly don't need the SLI bridge anymore. Keyword: mostly.
      But in simpler terms, x32 lanes are more like using RAID 0 on storage.

    • @eliadbu
      @eliadbu Месяц назад

      @@TheHammerGuy94 In SLI, PCIe is used to communicate with both devices at the same time, as they both work in unison to render interleaved frames. This is more like having a second card whose whole purpose is to pass communication through to the main card, so it is not like RAID 0: with RAID 0, both devices are part of an array and are the same.
      We don't need the SLI bridge anymore because SLI is pretty much a dead technology.
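
    The link-aggregation / RAID 0 analogy from this thread, as a toy sketch: stripe a buffer across two independent links and reassemble it on the far side. Real multi-host NICs do this in hardware and firmware; this is purely conceptual.

      from itertools import zip_longest

      def stripe(data: bytes, chunk: int = 4):
          """Deal fixed-size chunks of data across two 'links' round-robin."""
          a, b = [], []
          for i, start in enumerate(range(0, len(data), chunk)):
              (a if i % 2 == 0 else b).append(data[start:start + chunk])
          return a, b

      def reassemble(link_a, link_b):
          """Interleave the two link buffers back into one stream."""
          return b"".join(c for pair in zip_longest(link_a, link_b, fillvalue=b"")
                          for c in pair)

      payload = b"pretend PCIe traffic split across two x16 halves"
      a, b = stripe(payload)
      assert reassemble(a, b) == payload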

  • @user-fl4pi2ut9c
    @user-fl4pi2ut9c 2 месяца назад +13

    ... You telling me I don't need it. I'm an American. I don't need multiple 64 thread Ryzen Epic servers. But I got em, and they got 128 PCIE lanes each!

  • @4RILDIGITAL
    @4RILDIGITAL 2 месяца назад +2

    Simplifying complex tech stuff like PCI Express x32 - just brilliant. Keep up the informative and clear tech explanations.

  • @5urg3x
    @5urg3x 2 месяца назад +14

    MSI tech support are the worst in the industry. You know what they told me? This is verbatim: “We don’t troubleshoot incompatibility”

    • @vickeythegamer7527
      @vickeythegamer7527 Месяц назад

      😂

    • @maxstr
      @maxstr Месяц назад

      Really?? In the past, MSI has always had the best warranty and repair service. I had a video card that was displaying weird corrupt garbage after like 6 months, and they replaced it at no cost. I had an MSI laptop that I smashed the screen by shutting the lid on a pencil, and MSI replaced the screen under their one-time replacement warranty. But that was years ago, so I'm guessing things have changed?

    • @simongreen9862
      @simongreen9862 Месяц назад

      I don't know; my 2017 AM4 motherboard is still getting BIOS updates as of January 2024, which was necessary for me to swap the original 1080Ti with a new 4070 I got last month.

    • @5urg3x
      @5urg3x Месяц назад

      @@simongreen9862Can we take a moment and ask the question why the fuck isn’t UEFI/BIOS firmware open source? Really should be.

    • @simongreen9862
      @simongreen9862 Месяц назад

      @@5urg3x I agree with you there!

  • @Xaqaria
    @Xaqaria 2 месяца назад +4

    The Mellanox NICs can also be connected to PCIe lanes from both CPUs. It levels out the network latency by not requiring half of the traffic to jump across an interprocessor link to get to the NIC.
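
    On Linux you can see which CPU socket a given PCI function hangs off of via its numa_node attribute in sysfs, which is how you would keep traffic on the local socket in the setup described above. A small sketch that lists network controllers (PCI class 0x02); paths are standard sysfs:

      from pathlib import Path

      for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
          pci_class = (dev / "class").read_text().strip()
          if pci_class.startswith("0x02"):  # network controllers
              node = (dev / "numa_node").read_text().strip()
              print(f"{dev.name}: NUMA node {node}")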

  • @theloudestscreamiveeverscrem
    @theloudestscreamiveeverscrem 2 месяца назад +118

    So... This is just SLI?

    • @Dr2Chainz
      @Dr2Chainz 2 месяца назад +3

      Had the same thought hah!

    • @sussteve226
      @sussteve226 2 месяца назад +8

      No, it is not

    • @raycert07
      @raycert07 2 месяца назад +5

      Sli for non gpus

    • @Arctic_silverstreak
      @Arctic_silverstreak 2 месяца назад +14

      Not really; it's just a fancy name for link aggregation, mostly for network cards

    • @ManuFortis
      @ManuFortis 2 месяца назад +12

      Kind of, but not really. It uses similar methods, but it's not exactly the same. This is probably closer to what is done on AMD's workstation cards, where you can attach a display sync module between multiple workstation GPUs to output a single monitor signal, for instance with the AMD FirePro S400 Sync Module. (Nvidia has their own version, but I don't know the details.)
      If you look at the card shown by Riley in the video, you'll see the cable connecting them. I'm not sure of its exact connector specs, but it will be somewhat similar in nature to the JU6001 connector found on the AMD WX series cards. Sometimes it's populated with an actual socket/port, sometimes not.
      Essentially, if I understand correctly, instead of the cards sharing the intended workload between them, they are all doing their own work, or perhaps shared work in some cases, and outputting it all to the same monitor. It's a subtle but important difference, because SLI/Crossfire is typically used for splitting workloads between GPUs to get a better end result, whereas display sync (as I will call it for now) is more about combining separate or even shared workloads into a single tangible visual result.
      That sync card is effectively doing what Riley explained about the x32 setup and the asynchronous data streams typical of PCI, compared to when they are... well... synced.
      Maybe not the world's best explanation, but I hope it helps.

  • @ddevin
    @ddevin 2 месяца назад +4

    GPUs are getting so wide these days, they might as well support PCIe x32

  • @crabwalktechnic
    @crabwalktechnic 2 месяца назад +2

    LTT is like the MCU where this video is just setting up the next home server episode.

  • @lazibayer
    @lazibayer Месяц назад +1

    I glanced at the thumbnail and thought it was about a new longer x series barrel for p320 for some reason.

  • @Roukos_Rks
    @Roukos_Rks 2 месяца назад +7

    Now let's wait for x64

    • @fujinshu
      @fujinshu Месяц назад +1

      And then maybe x86?

  • @StingyGeek
    @StingyGeek 2 месяца назад +16

    A 32 lane PCI bus, awesome! GPU card makers can use it for their premium cards...and only use four lanes. Awesome....

    • @jamegumb7298
      @jamegumb7298 2 месяца назад +2

      Any time someone buys a current Intel LGA 1700 board and adds an SSD, the GPU gets bumped down to x8 anyway, leaving 4 of the very few lanes you have useless.
      AMD has the same thing; in practice, expect a card to always run at x8.
      Then in any bench comparing x8 to x16 there is minimal to no difference unless you go down a generation.
      Just make the GPU link x8 by default on desktop and make room for 2 more NVMe slots.

    • @commanderoof4578
      @commanderoof4578 2 месяца назад

      ⁠@@jamegumb7298AMD does not have the same thing
      Unless it's a dogshit motherboard, you can have 2x NVMe slots at full speed at the same time and an x16 slot.
      It's when you go past 2 that you run into issues, as you're either adding multiple drives to the chipset or you start stealing lanes.
      Without any performance loss from conflicts you can have these configurations on AM5:
      2x NVMe + 1 x16
      4x NVMe + 1 x16

    • @DeerJerky
      @DeerJerky 2 месяца назад +1

      @@jamegumb7298 eh, on AM4 I have 2 NVMes, one on gen 4 and the other on gen 3. My GPU is still on 16 gen 4 lanes, and iirc AM5 only increased the lane count

  • @gcs8
    @gcs8 2 месяца назад +1

    Cisco iirc has a PCI-E x24 for their MLOM + NIC (they may call it a VIC) on some of their stuff.

  • @cianxan
    @cianxan 2 месяца назад

    This video reminded me of SLI. The physical setup looks identical, you got two devices occupying two PCI Express x16 slots and have an extra cable/connection between the devices.

  • @EriksRemess
    @EriksRemess 2 месяца назад +5

    last intel mac pro had two 16x slots combined for dual gpu amd cards. I guess technically that’s 32x

    • @cameramaker
      @cameramaker 2 месяца назад

      It's not. Many servers have a long slot for holding riser boards (e.g. 3 cards in a 2U rack-mount server), but those are NOT single-device slots. Same as a dual x16 for a dual GPU is not a single PCIe device.

  • @matthiasredler5760
    @matthiasredler5760 2 месяца назад +2

    In the early 90s even simple sound cards needed an ISA slot... and were long and beefy.

  • @chrism6880
    @chrism6880 2 месяца назад +5

    Doesn't the most recent Intel Mac Pro have a double PCIe x16 link to their custom AMD GPU?

  • @nono-oz4gv
    @nono-oz4gv 2 месяца назад +6

    lmao 2 4090s on one card would be absolutely insane

    • @PixyEm
      @PixyEm 2 месяца назад +3

      Nvidia Titan Z 2024 Edition

    • @benwu7980
      @benwu7980 2 месяца назад

      There was a time when stuff like that did get made. I bought a Dell that was meant to have a 7950GX-2, but it arrived with an Ati card.

    • @jondonnelly4831
      @jondonnelly4831 Месяц назад

      The cooling would be problematic; it would need a 360 mm radiator, maybe a 420 mm. Though I guess if you can afford one, the cooling and power costs won't matter! The big problem with SLI is that memory becomes a bottleneck. The two cards' VRAM doesn't add together, so 2 x 24 GB is still just 24 GB. It would need something like 2 x 48 GB, and that would be insanely expensive.

  • @Mr.Morden
    @Mr.Morden 2 месяца назад +1

    Reminds me of those old school gargantuan 16bit ISA slots used to overcome speed limits.

  • @Th3M0nk
    @Th3M0nk 2 месяца назад +1

    In FPGAs it's fairly common to see x32. Microsoft had a board that let you control two FPGAs with these lanes; the trick was that even though it was an x32 slot, it was actually emulating the connection as two x16 links by re-addressing the lanes.

  • @eldibs
    @eldibs 2 месяца назад +4

    "I'm sure some of you are already thinking of ways you can justify your purchase." Wow, calling me out just like that?

  • @TheRealDrae
    @TheRealDrae 2 месяца назад +1

    I KNEW IT, i was sure I've seen an oversize PCIe somewhere!

  • @michaellegg9381
    @michaellegg9381 2 месяца назад +1

    Just a thought 💭🤔: if you want a super small SFF build, the motherboard usually has only one PCI Express slot plus some NVMe slots. So with one x32 PCI Express slot you could have one card that carries the GPU, SSDs, a dedicated NPU, a 10 Gb NIC, etc., all in one expansion card, especially if one side of the PCB holds the GPU and the other side holds the NPU, SSDs, NIC and whatever other hardware you want. It would make for very capable SFF builds, or very, very tidy full-size builds with only the motherboard, CPU, cooler, RAM and one expansion card that mixes all kinds of hardware. So as much as we don't need x32 PCI Express lanes for general hardware, the idea and the x32 slot could definitely be put to use.

  • @davidschaub7965
    @davidschaub7965 Месяц назад

    I've seen server motherboards with x24 physical slots that just connect to existing PCIe switches.

  • @Aragorn7884
    @Aragorn7884 2 месяца назад +6

    x64 just needs 5 more to work properly...😏

  • @MrMman30
    @MrMman30 2 месяца назад +2

    The last time I saw a product with an x and a 32 next to it was in 1994.
    That didn't go well!
    Here is hoping this is not a gimmicky in-between product and is an actual leap into the future.
    #SEGA #32x

    • @kousakasan7882
      @kousakasan7882 Месяц назад

      I had a custom Orchid super board with an Orchid Fahrenheit 1280. It's a 32-bit VESA Local Bus card. All my friends were jealous of its gaming performance, but it never got accepted into the mainstream.

  • @ivofernandes88
    @ivofernandes88 2 месяца назад

    The ending pointing out the reference to Linus got me dead 🤣🤣

  • @Daniel15au
    @Daniel15au Месяц назад

    3:58 I like that the connector is labeled as "black cable" even though it's not black.

  • @lgfs
    @lgfs 2 месяца назад +1

    My god that segue reminded me of STEFON in SNL... The MSI MPG Gungnir 4000 Battleflow Monster Extreme has EVERYTHING!

  • @ramonbmovies
    @ramonbmovies 2 месяца назад

    that was the best quickie I've had in years.

  • @thestickmahn2446
    @thestickmahn2446 2 месяца назад +1

    "Not fast enough? Just add more lane!"

  • @flyguille
    @flyguille Месяц назад

    It needs the bits on all the lanes to arrive at the same time. Routing x32 means transmit and receive differential pairs for every lane, well over a hundred tracks all going to the same chip, which is very hard; the traces have to be closely length-matched, or there will be penalties, delay penalties.

  • @kousakasan7882
    @kousakasan7882 Месяц назад

    In the early 90s, I had a custom Orchid super board with an Orchid Fahrenheit 1280. It's a 32-bit VESA Local Bus card. All my friends were jealous of its gaming performance, but it never got accepted into the mainstream.

  • @DiamondTear
    @DiamondTear 2 месяца назад

    0:05 was the only B-roll available of a motherboard with PCI and PCIe slots?

  • @cem_kaya
    @cem_kaya 2 месяца назад +1

    There are also OCP ports

  • @tripleohno
    @tripleohno 2 месяца назад

    Miss my A8N32-SLI; the FSB would post above 340, nuts. The board handled anything I tossed at it back then. Will be missed (oh, it's still in a box... memories)

  • @chaosfenix
    @chaosfenix 2 месяца назад +1

    I would like to see an update to the actual PCIe slot standard.
    It doesn't have to be exactly like this, in that I don't care about specifics like the connector type, but I think I would like this architecture: something like an MCIO connector that only carries 4 lanes by default. That is it; no more than that would be allowed in the connector. Each individual connector would be specced to provide somewhere between 50-100 W of power. I don't care about the specific range, just that it should be able to provide up to a specified amount. You would still support PCIe bifurcation, which means you could turn a 4-lane port into a 2x2, a 2x1x1, or a 1x1x1x1. This could be amazing for add-in cards: if you wanted to add a bunch of PCIe devices, you would simply assign a single PCIe lane to each of them. Honestly, not too different from the current spec so far.
    Here is where it would get spicy, though. Part of the spec would be the spacing between each individual MCIO connector, because you would allow not only bifurcation of a slot but also combination of slots. Maybe devices going wider than 4 lanes would simply use driver binding like you said; I imagine it would be relatively easy to bind 2-4 four-lane PCIe connections. Single mode would be the default, but you could choose to combine up to 4 of the slots in the BIOS. This would mean a device could still connect to up to 16 PCIe lanes if you wanted, but if you didn't, you would simply have 4 individual MCIO connectors to attach to directly instead. It would be hugely more versatile. You would also get greater power delivery, in that a more power-hungry device using all 4 connectors would be supplied with up to 200-400 W directly. Sure, you are going to have devices, especially GPUs, that still need additional power, but that should be rare if they could work with up to 400 W.
    I think you could even allow some backward compatibility if you made an adapter available to go from the 4 MCIO connectors to PCIe. Then you would just need to provide cheap standoffs for the screws at the back. This wouldn't be a problem for most cards, but if you had a chonker like a 4090 you could have z-height issues in the case. For most regular cards it wouldn't be an issue, though, and the issue would go away eventually as people switched to the new standard.

  • @acarrillo8277
    @acarrillo8277 2 месяца назад +1

    Looks over at the EDSFF 4C+ slot, a PCIe x32 slot in wide use on server PCIe cards. I guess we won't tell him about you.

  • @Unknown_skittle
    @Unknown_skittle Месяц назад

    X32 is often used to connect 2 server nodes together

  • @zeekjones1
    @zeekjones1 2 месяца назад

    I feel the bandwidth could be used by an SFF with some sort of breakout expansion slots.

  • @hummel6364
    @hummel6364 Месяц назад

    Of course I knew, the server in my basement has two of them... although it just uses them for risers with different slot setups.

  • @ssjbardock79
    @ssjbardock79 2 месяца назад +1

    Riley sounds like the announcer from the price is right when he does his sponsor bit

  • @Edward135i
    @Edward135i 2 месяца назад

    0:00 woah MSI Z68A-GD80 that was my first ever gaming motherboard that baby Linus is showing.

  • @myne00
    @myne00 2 месяца назад

    I'm still surprised optical connections aren't used yet (again? (S/PDIF)).
    I'm expecting USB (or whatever Apple calls it next) to have a fibre down the middle in that tiny blank part of the C connector at some point.
    Bend-insensitive single-mode fibre is cheap enough now that it's plausible at scale. SFPs are getting there too.

  • @NicoleMay316
    @NicoleMay316 2 месяца назад +2

    I would love it if using one PCIe slot didn't disable another. I don't think we're ready for the jump to x32 until this bandwidth limitation for lanes is addressed.

    • @rightwingsafetysquad9872
      @rightwingsafetysquad9872 2 месяца назад +2

      That limitation doesn't exist in the products that use x32. Desktop CPUs may only have 8-24 lanes, but server chips have hundreds.

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 2 месяца назад +1

      @@rightwingsafetysquad9872 True. Old server processors have WAY more PCIe lanes than even top-of-the-line modern desktop processors (PCIe 3.0 though), and if that isn't enough, just get yourself a dual-CPU system.

  • @alphaomega154
    @alphaomega154 2 месяца назад +1

    Yup. I also just hinted at an idea in a recent Digital Foundry video's comment section: independent GDDR memory modules/sticks in an M.2 form factor (could come in any format, 2230, 2242 or 2280) for multiple use cases, from adding more VRAM to both iGPUs and discrete GPUs, to adding a fast remote cache for the CPU (extra huge L4 cache, anyone?). I see a market for that; I hope somebody picks the idea up.
    Imagine you have an iGPU and simply plug an M.2 16 GB GDDR6X memory stick into one of your M.2 slots; if the iGPU driver recognizes it and has instructions to use it, your iGPU now has 16 GB of GDDR6X VRAM. And your CPU could steal some of its pages as a theoretical extra L4 cache. Your OS would simply need instructions to use it if it's detected.

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 2 месяца назад

      Good idea... but it already exists; it's called RAM sticks. They are also much faster than an M.2 device will ever be. You can even use them as a very fast SSD (a volatile one, so don't store anything important in RAM).

  • @KavorkaDesigns
    @KavorkaDesigns 2 месяца назад

    Why weren't the number of available CPU lanes mentioned? Mine has 24 lanes; there's no x32 possible, 2x x16 or not, am I wrong? One will be x16 and the other x8.

  • @kenzieduckmoo
    @kenzieduckmoo 2 месяца назад

    it still cracks me up whenever someone says dada instead of data

  • @alexandermcclure6185
    @alexandermcclure6185 2 месяца назад

    After you said "beyond 16 lanes..." my pc froze for a moment. LOL!

  • @grove9373
    @grove9373 2 месяца назад

    can you explain what driver overhead is?

  • @brondsterb9702
    @brondsterb9702 2 месяца назад +2

    wonder how many years it'll be before PCIE X16 is phased out..... remember how long AGP slots lasted for...... only time will tell..... and who knows what it'll be replaced by....

    • @Arctic_silverstreak
      @Arctic_silverstreak 2 месяца назад +2

      I mean, physically the connector may be phased out, but I think it's very unlikely that PCIe itself will be phased out too.

    • @chrisbaker8533
      @chrisbaker8533 2 месяца назад

      AGP only lasted about 13 years, 1997 to 2010.
      PCIe is currently at 22 years, launched in 2002.
      As far as when it might get phased out: whenever it stops being able to handle the data we need to transfer.
      Maybe 10 to 15 years on the current trajectory.
      Or it may wind up like USB and never die.
      lol

    • @sakaraist
      @sakaraist 2 месяца назад

      @@chrisbaker8533 On desktops, possibly. However, PCIe is a core component of a metric shitload of embedded systems and FPGA dev boards.

  • @XenXenOfficial
    @XenXenOfficial 2 месяца назад +2

    Wait a minute. That binary looks suspicious, all of it starting with 01 or 011. It's ASCII! Quickly, someone translate it!
    Edit: I've noticed some bytes like 01000000, which isn't a letter, but it is 1 away from capital A. Still, the huge majority of it looks like readable letters.

  • @krisb853
    @krisb853 2 месяца назад +1

    I am glad that we got SLI PCIE before GTA6.

  • @jondonnelly4831
    @jondonnelly4831 Месяц назад

    With 3 kg+ graphics cards, a longer slot would be a good idea as long as it can still accept x16. Extra power too; maybe a 4050 could work without cables, and no card droop.

  • @Walker_96365
    @Walker_96365 2 месяца назад +1

    Technically, a PCIe Gen 5 x16 slot is like a PCIe Gen 1 x256 slot
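
    Checking that math: per-lane throughput is transfer rate times encoding efficiency, and it doubles each generation. With the nominal spec rates it comes out to roughly x252 rather than a clean x256, because Gen 1 uses the less efficient 8b/10b encoding. A quick sketch:

      # Nominal per-lane rates: (transfer rate in T/s, encoding efficiency)
      GENS = {1: (2.5e9, 8 / 10), 2: (5e9, 8 / 10), 3: (8e9, 128 / 130),
              4: (16e9, 128 / 130), 5: (32e9, 128 / 130)}

      def lane_bytes_per_s(gen: int) -> float:
          rate, efficiency = GENS[gen]
          return rate * efficiency / 8  # bits to bytes

      gen5_x16 = 16 * lane_bytes_per_s(5)
      print(f"Gen 5 x16 ~ {gen5_x16 / 1e9:.1f} GB/s")
      print(f"equivalent Gen 1 lanes: {gen5_x16 / lane_bytes_per_s(1):.0f}")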

  • @truthdoesnotexist
    @truthdoesnotexist 2 месяца назад

    I was just wondering this yesterday

  • @muixgohanx3579
    @muixgohanx3579 2 месяца назад

    Sooo, if on x32 the PCI devices talk with each other, can't you do SLI with it? Wasn't the problem that NVLink was too slow and they couldn't really communicate?

  • @oli_onion
    @oli_onion Месяц назад

    Would a dual-GPU card benefit from x32, possibly allowing for more GPUs in a smaller space in a server?

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 2 месяца назад

    Wasn’t the intel cheese grater Mac Pro have a dual x16 for a dual-fire pro thing? I remember seeing their fancy slot that was supposed to be for high bandwidth gpu stuff

  • @Thomas-VA
    @Thomas-VA 2 месяца назад

    need all that sweet x32 for the next great A.I. film, music, art and book creation app / bit miner.

  • @VideoManDan
    @VideoManDan 2 месяца назад

    1:47 Isn't this exactly what old GPU's with an SLI bridge did? Or am I not understanding correctly?

  • @chrisbirch2002
    @chrisbirch2002 2 месяца назад

    aha the last time i did driver binding was to bond 2 56k dialup modems together into 1....in 1999

  • @lordelliott42
    @lordelliott42 2 месяца назад

    1:47 The way you explain that sounds a lot like SLI graphics cards.

  • @PhilipKerry
    @PhilipKerry Месяц назад

    So using this system it would be theoretically possible to have two linked x16 slots with an RTX 4090 in each; this would give a backdoor form of SLI...

  • @katanasteel
    @katanasteel Месяц назад

    Well if you have 128 lanes per package with your epyc cpus

  • @andonel
    @andonel 2 месяца назад +2

    so x32 is just two x16 in a trench coat?

  • @TheKdcool
    @TheKdcool 2 месяца назад

    That ending was great 😂

  • @TheHammerGuy94
    @TheHammerGuy94 2 месяца назад

    So... SLI without the connector?

  • @Squishyobsidian
    @Squishyobsidian 2 месяца назад

    I think a double length connection would help with sag no?
    Just don’t wanna have to plug that shit in

  • @tittledieselperformancellc
    @tittledieselperformancellc 2 месяца назад

    I've always wanted to run my 10g nic's in SLI!

  • @Shuttterbugg
    @Shuttterbugg 2 месяца назад

    What's crazy is PCIe 6 and 7 are out and we aren't even using 5 yet...

  • @vladislavkaras491
    @vladislavkaras491 2 месяца назад

    Thanks for the video!

  • @joshzwies3601
    @joshzwies3601 Месяц назад

    So, SLI for networks then?

  • @adrenaliner91
    @adrenaliner91 2 месяца назад

    400 Gigabit huh? And I am here with my mobile-DSL hybrid connection that maxes out at a 150mbit connection and that will never ever be more than that in this skyscraper.

  • @DrewUniverse
    @DrewUniverse 2 месяца назад

    At the end I somehow thought Riley was going to say QuickTechy.. ah, maybe next time.

  • @BenjaminWheeler0510
    @BenjaminWheeler0510 2 месяца назад +1

    Pour one out for the man-hours spent on the 1-second star wars clip at 0:18. Worth it.

  • @wolfeadventures
    @wolfeadventures 2 месяца назад +1

    0:26 that’s what he said.

  • @MaesterTasl
    @MaesterTasl Месяц назад

    So it's Crossfire/SLI for server network cards basically.

  • @zlibz4582
    @zlibz4582 2 месяца назад

    This will be useful for the upcoming Intel CPUs and Nvidia GPUs

  • @JeremyWorcester1
    @JeremyWorcester1 2 месяца назад

    Very beginning of video, points to pci slot that is right next to PCIe slot, says this is PCIe slot lol.

  • @cjames4739
    @cjames4739 2 месяца назад +1

    The X is referred to as "by" though. So a PCIE x4 is called PCIE by 4 and so on

    • @Vermicious
      @Vermicious 2 месяца назад

      Thank you. It's like when people refer to camera zoom, e.g. 4x, as "4 ex". Infuriating

    • @Vermicious
      @Vermicious Месяц назад

      @shall_we_kindly It’s a multiplication. Is 5 x 10 “five ex ten”?

  • @casper75559
    @casper75559 Месяц назад

    Sheesh… I was thinking I can finally get the full use of my gpu but it’s fine, I’ll stay on 8x8 for a while longer.

  • @tsmspace
    @tsmspace 2 месяца назад +1

    yeah but how do we get GAMES that can multithread batchcalls, shadows, physics etc. that are all crushing my gameplay since they are all singlethreaded. ?? I'm sure there are good reasons for the single-threadedness but I need to have greater render distance of the objects which means having hundreds more objects which means thousands more batch-calls which means 10fps because it's all on one thread because of well it needs to be otherwise the game gets ahead of itself but there's really no way to use all of this "80+ cores" ??

    • @tsmspace
      @tsmspace 2 месяца назад

      you can x32 my ____ if it's not going to make the game work better. No one cares how it looks once they want to really play, they want it to be capable.

  • @johnlife9926
    @johnlife9926 2 месяца назад

    What if we just connected the pcie devices with some type of bridge to handle the communication...

  • @Drunken_Hamster
    @Drunken_Hamster 2 месяца назад

    Imagine they bring back SLI/Crossfire via PCIE-6.0 x32. Imagine 2-4 5090s or 8950 XTXs in one rig pushing 8k 4+ rays and 4+ bounces path tracing at 120fps.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 2 месяца назад +1

    Kind of ironic, since Intel's current LGA 1700 platform is pretty bad regarding PCIe flexibility, for example not being able to do PCIe bifurcation.

  • @slothnium
    @slothnium 2 месяца назад +1

    I'm curious, why isn't there a PCIe x3, x6, or x12? (1+2, 2+4, 4+8)

    • @shanent5793
      @shanent5793 2 месяца назад +1

      Because that's not how it works; the video is incorrect, and PCIe link widths are set in hardware and have nothing to do with software. The fundamental reason for preferring x2 and x4 has to do with efficient clock division. Links larger than x4 are built up from multiple x4 groups (still at the hardware level), so x12 was actually in the spec but was recently removed in Gen 6 because it was never used. The x3 and x6 connections would be asymmetric and were not included because they complicate the lane reversal feature, which allows designers to flip the order of the lanes for design convenience (see the negotiation sketch after this thread).

    • @alexturnbackthearmy1907
      @alexturnbackthearmy1907 2 месяца назад

      @@shanent5793 Well, unless you replace the 1+2 connection with 1+1+1... at which point you may start asking yourself whether it wouldn't be easier and more compact to just use an x4 slot to begin with.

    • @shanent5793
      @shanent5793 2 месяца назад +1

      @@alexturnbackthearmy1907 they would still be three separate links from a hardware perspective, so the OS would have to manage three devices. In principle there could be multiple devices on one card, as long as the host supports it and they are aligned in a binary sequence eg. 1+1+1 or 2+1, but not 1+2
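
    The width question in this thread comes down to link training: both ends advertise the widths they support and settle on the largest width they have in common, falling back to x1 if nothing else matches. Since the supported sets on real silicon are powers of two (plus x12 in older spec revisions), odd widths like x3 or x6 never come up. A conceptual sketch, not how any particular PHY implements it:

      SPEC_WIDTHS = {1, 2, 4, 8, 12, 16, 32}  # pre-6.0 link widths

      def negotiated_width(upstream: set, downstream: set) -> int:
          """Pick the widest link width both ends (and the spec) support."""
          common = upstream & downstream & SPEC_WIDTHS
          return max(common) if common else 1  # x1 is the universal fallback

      print(negotiated_width({1, 2, 4, 8, 16}, {1, 4, 8}))  # prints 8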

  • @KRuslan1000
    @KRuslan1000 2 месяца назад

    Sounds like the old SLI at full speed

  • @BaieDesBaies
    @BaieDesBaies Месяц назад

    How come we can have more than 2 or 3 M.2 SSDs if each of them uses 4 PCIe lanes and the graphics card uses 16, when the CPU only supports 20 lanes?
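
    A rough lane budget shows how that works on a typical mainstream desktop: the CPU's lanes cover the GPU, one M.2 slot, and an uplink to the chipset, and every additional M.2 slot hangs off the chipset, sharing that single uplink's bandwidth. The numbers below are illustrative, not any specific platform:

      cpu_lanes = {"x16 GPU slot": 16, "CPU-attached M.2": 4, "chipset uplink": 4}
      chipset_lanes = {"M.2 #2": 4, "M.2 #3": 4, "2.5GbE NIC": 1, "USB/SATA": 3}

      print("CPU lanes used:", sum(cpu_lanes.values()))
      print("Lanes fanned out behind the chipset:", sum(chipset_lanes.values()),
            "(all sharing the", cpu_lanes["chipset uplink"], "uplink lanes)")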