[38] We're not even at PCIe 6.0 Yet!

  • Published: 16 Nov 2024

Comments • 214

  • @artemis1825
    @artemis1825 1 month ago +99

    I want a girl to look at me the same way Dr Cutress looks at the PCIe board 14:09
    Excellent video!

    • @striker44
      @striker44 1 month ago +3

      You should be looking at NoC instead of a girl, Mr. artemis. 😂

    • @j340_official
      @j340_official 1 month ago +1

      Buy her some Chips and Cheese 😅

  • @CaedenV
    @CaedenV 1 month ago +72

    7?! I'm still on PCIe 3, which my RTX 3090 doesn't even saturate (at least not in the 4K games I play).
    It is truly wild just how much further the server and AI markets have gone, leaving the desktop sector totally behind!

    • @StefanHolmes
      @StefanHolmes 1 month ago +33

      What's wild is that PCIe 7 will be able to do in one lane what your x16 PCIe 3 is doing today. Roughly speaking.
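The back-of-the-envelope math behind this claim can be sketched like so (per-lane figures rounded to ~1 GB/s for Gen 3; encoding and flit overhead ignored, so these are approximations, not spec numbers):

```python
# Per-lane PCIe throughput roughly doubles every generation.
GEN3_PER_LANE_GB_S = 1.0  # ~0.985 GB/s effective per lane, rounded

def per_lane_gb_s(gen: int) -> float:
    """Approximate one-direction bandwidth of a single lane, in GB/s."""
    return GEN3_PER_LANE_GB_S * 2 ** (gen - 3)

gen3_x16 = per_lane_gb_s(3) * 16  # a full Gen 3 x16 slot: ~16 GB/s
gen7_x1 = per_lane_gb_s(7)        # a single Gen 7 lane:   ~16 GB/s
assert gen3_x16 == gen7_x1
```

Four doublings (Gen 3 to Gen 7) is a factor of 16, which is exactly the lane-count ratio, hence "roughly speaking".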

    • @thor.halsli
      @thor.halsli 1 month ago +8

      It may not saturate the bandwidth, but your 1% and 0.1% lows are way better on PCIe 4, and even more so on PCIe 5 (your mileage may vary depending on the game).

    • @jake20479
      @jake20479 1 month ago +3

      Bruh, I'm on PCIe 3.0 with a 4080S/10900K 😅

    • @JJFX-
      @JJFX- 1 month ago +1

      @thor.halsli I certainly wouldn't say that as a blanket statement, but yes, it's far more likely to be the case for Gen 3 than going from Gen 4 to Gen 5 (on existing hardware).

    • @noname-gp6hk
      @noname-gp6hk 1 month ago +8

      This isn't for you. Leading edge bus design is not being driven by desktop PC applications anymore.

  • @Ultrajamz
    @Ultrajamz 1 month ago +108

    My issue is, my GPU takes up many PCIe slots…

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 1 month ago +11

      Same. PCIe switches and optical transceivers to easily move stuff away from the slots would be great. Electrical cables are inviting issues at PCIe Gen 4 speeds or faster.

    • @Darkknight512
      @Darkknight512 1 month ago +9

      Now that few people have hard drives or DVD drives in the front of their computer anymore, it would be interesting to mount a GPU in the front, or maybe the top, as if it were a radiator.

    • @gabrielpi314
      @gabrielpi314 1 month ago +15

      Yup, super frustrating when setting up homelab servers. It's like a game of Tetris trying to squeeze GPUs, high-speed networking, and storage controllers onto some motherboards.
      Would love to see OCP come downmarket. Make the rear panel modular so I can swap out the 75% of the USB ports that I don't need for an OCP NIC.

    • @minus3dbintheteens60
      @minus3dbintheteens60 1 month ago +1

      Watercool it.

    • @myne00
      @myne00 1 month ago +1

      @gabrielpi314
      Sounds like you need a mining motherboard. Some have 7-8 slots spaced 3-4 apart.

  • @1armbiker
    @1armbiker 1 month ago +21

    I saw that sneaky transition at 1:40… you can’t hide from me.

  • @Lossmars
    @Lossmars 1 month ago +12

    Thank you Ian for this very original and clean interview.

  • @j340_official
    @j340_official 1 month ago +4

    This is a fascinating interview and your guest is extremely knowledgeable and professional and well spoken. And we thank him for his contributions. Great job! All aboard the PCI Express!

  • @carpetbomberz
    @carpetbomberz 1 month ago +6

    No sarcasm intended, but I do love this channel. Sooo much better than any Linus Tech Tips hot-takes on any of the covered technologies.

  • @bradleyp3655
    @bradleyp3655 1 month ago +29

    I am no engineer, but I have been using this technology for over 40 years. The next paradigm shift is photonic circuitry. Electronics are too slow, too power hungry, and too expensive. There is a physical limit to both node design and interconnectivity. The faster you go, the more power is required.

    • @PtYt24
      @PtYt24 1 month ago +12

      I have been looking forward to development in that area, but that's like 15+ years away, and Intel, which was heavily involved, just had huge setbacks, so I don't see those things coming anytime soon.

    • @drescherjm
      @drescherjm 1 month ago +5

      When I was an electrical engineering student in the early-to-mid 1990s we had many discussions along the lines of what you said. We thought PCs would not be able to get much over 100 MHz because of these limitations, and that we would have to move to optical circuitry to overcome them. Now, 30+ years later, look where we are…

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 1 month ago +4

      I still remember Intel teasing us with photonics; before Thunderbolt came out it was said to be using optical cables (hence the codename "Light Peak").

    • @Brians256
      @Brians256 1 month ago +6

      Silicon photonics is in active development right now, and you can see the activity and maturity level by looking at the wafer probing companies such as FormFactor and its offerings in this space. The customers are there. Photonics was technically viable but more expensive than copper back in the Light Peak era, but copper solutions are now becoming more expensive as signaling rates increase. With AI and other high data-rate applications bringing in the money, photonics has a real market to serve where copper cannot compete.

    • @JJFX-
      @JJFX- 1 month ago +3

      @drescherjm Yes, but TBF many also didn't totally believe that, and there were clear advancements, even just in manufacturing, that were obviously holding a lot back. Then in the early 2000s Intel said 10 GHz chips could be on the horizon, and look how that went. There's certainly still a ways to go (mostly up) and we'll certainly see efficiency gains, but there are clear indications that constantly pushing for more and more raw performance is going to become less and less practical on multiple levels.

  • @svuvich
    @svuvich 1 month ago +7

    If my machine is learning it better be bringing good grades home

    • @xazzzi
      @xazzzi 1 month ago +2

      Feed it top quality organic electricity then, you don't want it throttled, do you?

  • @joraforever9899
    @joraforever9899 1 month ago +29

    Ian why are you so pale? Are the camera settings ok?

    • @Punisher9419
      @Punisher9419 1 month ago +58

      He's from the North of England where the sun doesn't shine. We are all like Gollum up here.

    • @incription
      @incription 1 month ago +1

      I would feel nauseous too thinking about all that compute

    • @TechTechPotato
      @TechTechPotato  1 month ago +59

      Heh, I filmed this after just landing at the airport from an 11 hour economy flight. Was quite tired, but dedicated!

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 1 month ago +5

      Ian couldn’t star in a contemporary Tolkien adaptation for “modern audiences”.

    • @CaedenV
      @CaedenV 1 month ago +13

      Ian is the white-balance :)

  • @scarletspidernz
    @scarletspidernz 1 month ago +31

    Just skip PCIe 6 at this point.
    With PCIe 7 on desktop, fewer lanes are needed for each device, and more NVMe and PCIe slots can be added.

    • @JJFX-
      @JJFX- 1 month ago +10

      Assuming lane counts don't decrease as fewer of them are deemed necessary.

    • @PAPO1990
      @PAPO1990 1 month ago +19

      There's no skipping Gen 6; Gen 7 isn't even a completed spec yet, and Gen 6 is only JUST in data centres. This stuff takes time to trickle down to consumer devices.

    • @eat.a.dick.google
      @eat.a.dick.google 1 month ago +8

      PCIe 7 isn't even finalized as a spec yet. No way they will be skipping 6. It's impossible with the timeline.

    • @myne00
      @myne00 1 month ago +1

      My guess is that server and desktop will bifurcate. Desktops will remain electrical because there is generally only one card, so converting signals twice is a waste of time. Servers will need the flexibility.

    • @PAPO1990
      @PAPO1990 1 month ago +1

      @myne00 Maybe eventually, but in-silicon optics are also developing really well. We may see fiber-optic links within consumer desktops in our lifetime.

  • @Eric-ue5mm
    @Eric-ue5mm 4 days ago

    16:15 That's so exciting, I can't wait to see that.

  • @olo398
    @olo398 1 month ago +2

    good info! thanks.

  • @whyjay9959
    @whyjay9959 1 month ago +6

    If I understood correctly (not sure of the terminology), they managed to keep 6.0's signal integrity requirements only a little stricter than 5.0's. Are they succeeding in doing that with 7.0 as well?

  • @John_thetrader
    @John_thetrader 12 days ago

    Loved the talk... very interesting to see the importance of $SNPS. Kudos to you; maybe over time they'll change it to Cuda s ...

  • @johnknightiii1351
    @johnknightiii1351 1 month ago +1

    I'm more excited about the latency improvements it will bring.

  • @mytech6779
    @mytech6779 1 month ago +4

    Setting a standard is a long way from working production implementations. I will be surprised if v6 can maintain the generational speed doubling without a large number of strings attached.
    v5 hit substantial signal integrity issues in practical commodity production. It's why you see so many boards that offer a combination of v4-only slots with only one v5 slot, despite the hassle and BOM cost of adding extra control chips, and the v5 slot is physically close to the CPU.

    • @_yuri
      @_yuri 1 month ago +2

      Consumer platforms have budget restrictions that commercial datacentres do not.

    • @mytech6779
      @mytech6779 1 month ago

      @_yuri Data centers also have higher operational requirements; they don't spend money for fun.
      PCIe v3 and [with some limits] v4 were able to link between rack enclosures with cabling, but the increased frequency of v5 has reduced maximum length and tightened latency requirements such that it cannot extend beyond the enclosure even with switches and signal boosters. So they can't use it for internode communication.

  • @solidreactor
    @solidreactor 1 month ago +1

    Finally optical! Hopefully this will trickle down to DP as well; 4K 240 Hz or higher is hard to support over long distances.

  • @capability-snob
    @capability-snob 1 month ago +2

    "All MIPS and no I/O" really has come a long way since old mate Seymour was at Control Data.

    • @yxyk-fr
      @yxyk-fr 1 month ago

      and ordered transistors only by millions from Fairchild 😀

  • @MeneghetteMmcs
    @MeneghetteMmcs 1 month ago +2

    Can't wait for the 1 Tb/s PCIe 10 (it will change nothing for us)

  • @ruspj
    @ruspj 1 month ago +2

    Bandwidth is clearly always important and improving, but how would using optical communication for PCIe affect latency / ping time? Wouldn't converting from electrical to optical and back to electrical result in longer ping times? Or can this be done without an impact, or possibly even faster?
    There was a mention of building optical PCIe into chips. If it can be faster with lower latency, I wonder if optical connections would ever replace some (or even most) pins in future generations of processors.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 1 month ago +13

    Come on, just gimme transparent optical transceivers for any kind of PCIe connectors or slots and power-efficient PCIe Switches.
    Is that too much to ask without the stuff costing an arm and a leg?

    • @Brians256
      @Brians256 1 month ago +5

      Currently, yes. It's too hard to cheaply make the optical-electrical interfaces. High-speed and high-bandwidth lasers, modulators, receivers, and so on are still expensive. Some of that, from what I hear from customers, is the rapidly changing methods of packaging. For example, do you edge-couple or surface-couple the optics? What process are you using for the optical chiplet vs the logic? Who is making the best optical chiplets right now, and what is their road map for die shrinks and costs? It's not just the process node that matters, but the advanced packaging offerings. Mixing and matching GlobalFoundries silicon with advanced packaging from TSMC or Intel is possible but adds cost and complexity.

    • @myne00
      @myne00 1 month ago +4

      Just look at the price of 100Gb Ethernet SFPs.
      That's effectively what you're asking for. The layer 2 protocol can be whatever you want, including PCIe.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 1 month ago

      Is there a reason why the cheaper kind of transceivers like 100 Gbit QSFP28 models couldn’t be used for this purpose?

    • @Brians256
      @Brians256 1 month ago

      @abavariannormiepleb9470 That's really what the optical PCIe implementations used. It's very possible, but much more expensive than a simple slot connector. Not enough demand for a hundred-dollar cable? I'm totally guessing on price, though. It's hard for me to estimate what it would be if mass-market adoption happened.

  • @EyesOfByes
    @EyesOfByes 1 month ago +2

    In short, GTA VII will load in 0.05 seconds

  • @t3amrr237
    @t3amrr237 1 month ago

    Awesome job! I would like to see an interview with someone who is responsible for CAMM2 memory development. I think it's the future for desktop.

  • @FriedrichWinkler
    @FriedrichWinkler 1 month ago +6

    I could imagine a card-edge connector solution in the form factor of the previous PCIe x1 that connects power and optical data. These would then connect to photonics-on-package transceivers. Should make for a very neat inside-of-server solution. Interested to see what physical solution the different companies come up with.

    • @boballmendinger3799
      @boballmendinger3799 1 month ago

      I used to work with telco network equipment in which the card backplane connectors are exactly like that. The Tellabs Titan 5500 digital cross-connect is one example.

  • @Squilliam-Fancyson
    @Squilliam-Fancyson 1 month ago +2

    Nvidia Blackwell already comes with a PCIe 6.0 interface. But yeah, technically not a released part.

  • @accursedshrek
    @accursedshrek 1 month ago

    great interview thanks

  • @yourma2000
    @yourma2000 1 month ago

    I just want my devices to connect to the CPU via fibre optics. Replace the x16 slot with a row of optical connectors that can each connect a single device (or more if more data is needed).
    Mini-ITX becomes normalized and isn't priced at a premium, PCIe slot layout is completely removed from the buying equation, devices aren't physically connected to the board and can be moved about the system based on case choice, no need for high-quality PCIe copper traces/added board cost, etc, etc.

  • @pavanpatel6990
    @pavanpatel6990 1 month ago +1

    Hi Ian,
    I have a question on SerDes: what parameters do hyperscalers or buyers use when choosing between SerDes of the same configuration?
    For 112G PAM4, whose SerDes gets chosen among SNPS/CDN/AWAVE?

  • @PAPO1990
    @PAPO1990 1 month ago +1

    I'm looking forward to consumer devices using fewer lanes; I want more flexibility in expansion cards in my PC. I know I'm weird, but I want to keep a FireWire card and a TV tuner card running without impeding the bandwidth my gfx card needs, plus in the era of M.2 I want room for multiple drives too. For Gen 4, that's roughly 32 lanes, and Gen 5 SSDs can be nice, so mainstream Gen 6, if well implemented, should be where things get interesting, just giving everything half as many lanes. Gen 7 is exciting though, just a long way off still.

    • @MrHaggyy
      @MrHaggyy 1 month ago

      "Gen 5 SSDs can be nice" What are you doing that actually requires 32 GB/s read-write? That's just shy of 2 TB/min.
      Watch every single TV channel there is at the same time?

    • @PAPO1990
      @PAPO1990 1 month ago +1

      @MrHaggyy Most SSDs already only use 4 lanes. So in this case a Gen 5 M.2 would be 4 lanes. But then a Gen 6 version would only need 2 lanes for the same speed.

  • @davidswanson9269
    @davidswanson9269 1 month ago

    I would like to see a paradigm shift in motherboard design: a hybrid electrical/optical build that continues with cards or modules but has the board provide only power. PCIe signaling would be carried over optical fiber, eliminating signaling over copper traces on the motherboard and keeping copper only within the cards or modules themselves. Theoretically, you could swap out an Intel CPU module for an AMD one, or even ARM. PCIe over fiber is not a new technology; it is available today, but not in this form at the consumer level. There are many pros to this approach, such as reduced manufacturing costs (less copper use) and less e-waste. Hardware repair is as simple as a card or module swap: true plug and play.

  • @GraveUypo
    @GraveUypo 1 month ago +1

    I really want better I/O.
    I think that for a long time I/O has been teetering on the edge of "just barely enough", but over time more and more things started fighting for those few lanes. NVMe made PCIe Gen 3 inadequate; it can't even max out a single drive, let alone when you need to expand and they end up eating more lanes and ruining your GPU's performance. I mean, it's not like our CPUs have infinite PCIe lanes, so the few that are there had better be fast enough to hold the most powerful GPU with only 4 lanes, so we have some to spare for other things: more GPUs, NVMe drives, or Wi-Fi cards.

  • @jjdizz1l
    @jjdizz1l 1 month ago

    More of THIS!

  • @IRWBRW964
    @IRWBRW964 1 month ago

    Most people really don't understand that this is mainly designed for data centers and not consumer devices.

  • @sambojinbojin-sam6550
    @sambojinbojin-sam6550 1 month ago

    Green-screen chat edit FTW! Still, thanks, it's always great to have some insight into what might be coming up. Cheers!
    (This is better than BG3! ❤)

  • @Kiyuja
    @Kiyuja 1 month ago +1

    Can't wait for this to hit mainstream in 2037

    • @yxyk-fr
      @yxyk-fr 1 month ago

      if you can afford it 🙂

    • @Kiyuja
      @Kiyuja 1 month ago

      @yxyk-fr That won't be an issue

  • @shintsu01
    @shintsu01 1 month ago

    One of the challenges I see is the amount of hardware we can connect to a desktop CPU due to the limited number of lanes.
    I wonder if we can keep the same or an even greater number of lanes in future versions of PCIe, allowing devices to require fewer lanes to achieve the required performance, making it possible to fit more devices in a consumer-grade system.
    Is this an unrealistic expectation, or can we see this happening? Since I assume we do not require some of these data rates for consumer-grade hardware.

  • @matgaw123
    @matgaw123 1 month ago

    Connecting a GPU using fiber optics would be great, and noise would be very low over a very long distance.

  • @virtualmonk2072
    @virtualmonk2072 1 month ago

    I wondered if the switch to glass as a substrate would alleviate signal loss

  • @zodwraith5745
    @zodwraith5745 1 month ago +4

    I get that you're excited about the technical and performance aspects of AI: its hardware, its possibilities, its training, and its inference. But I think we're quickly running into a moral dilemma with how quickly it's spreading.
    Not only do we have these models scraping human ingenuity and writing, artistic, and musical talents (often WITHOUT the artists' knowledge, let alone consent), but they're doing it with the full intent of the end purpose being to *_REPLACE_* these artists for the soulless corporate suits looking to save a buck and no longer employ said talent.
    I just find it baffling how many people mindlessly fantasize about what AI can do _for_ them without taking into account what it can do to _hurt_ them once greed sets in. And when has greed ever _NOT_ set in?

  • @albyboy4278
    @albyboy4278 1 month ago +2

    I don't give a f what data centers and AI mumbo jumbo need to interconnect 6k GPUs together faster. If the technology doesn't bring benefits to normal hardware for normal consumers, it's only propaganda to make money on AI investments for server companies, not for normal people.
    We have Nvidia GPUs that can't saturate even PCIe 3.
    If we have GPUs costing 2k and a destroyed normal consumer market, it's because of those AI M.F.s.
    None of them will see a single $ from me.

  • @LethalBB
    @LethalBB 1 month ago +5

    That's a great question
    That's a great question
    That's a great question
    Like talking to bloody Alexa

    • @yxyk-fr
      @yxyk-fr 1 month ago

      Synopsys has birthed her son 😛
      Yeah, I had the same feeling, like he's talking to a politician ... all words, no content 😕

  • @EhNothing
    @EhNothing 1 month ago +4

    Came for a video about PCIe, got a video about AI :-/

  • @pishbot
    @pishbot 1 month ago

    high-performance compute

  • @filipvanham6052
    @filipvanham6052 1 month ago

    15:25 it's all in the details

  • @Squilliam-Fancyson
    @Squilliam-Fancyson 1 month ago

    Up to 256 GB/s via an x16 configuration... That might actually be more bandwidth than Nvidia's upcoming 5060 Ti's memory interface will provide.
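As a back-of-the-envelope check of that 256 GB/s figure, a sketch from the raw per-lane signaling rate (this is the raw rate only; flit and FEC overhead would make real goodput somewhat lower):

```python
# Raw unidirectional bandwidth of a PCIe 7.0 x16 link.
signaling_gt_s = 128  # PCIe 7.0 per-lane raw signaling rate, in GT/s
lanes = 16

raw_gb_s = signaling_gt_s * lanes / 8  # GT/s -> GB/s (8 bits per byte)
print(raw_gb_s)  # 256.0
```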

  • @maxmustermann5932
    @maxmustermann5932 1 month ago +1

    Did you just sneakily namedrop Cadence at 7:45? ;-)

    • @yxyk-fr
      @yxyk-fr 1 month ago

      My reaction too! 😀

  • @hburke7799
    @hburke7799 1 month ago

    Ah yes, now motherboards and PCIe switches can continue to skyrocket in price.

  • @mrobinson9297
    @mrobinson9297 1 month ago

    That's cool, PCIe 7 is already around the corner. Bus speed and clocks are a huge bottleneck for AI.

  • @todorkolev7565
    @todorkolev7565 1 month ago +7

    I am just going to put this here for those still wondering: PCIe 3 is absolutely fine for every use case in storage, and even the best GPUs barely lose 5% if they are installed in a PCIe 3 slot...
    PCIe 4 might make sense, but it's mostly for vanity.
    PCIe 5 is overkill for the next 6 years for sure.

    • @CaedenV
      @CaedenV 1 month ago +1

      Absolutely! Maybe it is different with the 4090 and next-gen cards, but PCIe 3 is more than enough to not be the bottleneck for my 3090. I mean... what kind of RAM and SSD RAID array would I need to make PCIe 3 the bottleneck in any workload on my computer, lol.
      All of this is wild to me!

    • @eloquentemu
      @eloquentemu 1 month ago +4

      From the enterprise side this is clearly wrong, as a single 100G connection requires x16 PCIe 3. Even on the consumer side this is becoming progressively less true, especially as we are starting to see things like x4 or x8 GPUs, and the latest NVMe drives can definitely saturate PCIe 3. Maybe that won't result in huge improvements to game load times, but I do expect it'll matter as things like DirectStorage take off. Something like PCIe 5 opens the door to letting a consumer ~20-lane CPU run 2 GPUs (e.g. a GPU and an AI card) at x8 each, performing like a "normal" x16 PCIe 4.
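A quick sanity check on the 100G point, as a sketch (assuming PCIe 3.0's 8 GT/s with 128b/130b encoding and ignoring protocol overhead on both the Ethernet and PCIe sides):

```python
# Can a 100 GbE NIC be fed by a PCIe 3.0 link, and at what width?
nic_gb_s = 100 / 8                     # 100 Gb/s of traffic = 12.5 GB/s
pcie3_lane_gb_s = 8 * (128 / 130) / 8  # 8 GT/s with 128b/130b encoding, ~0.985 GB/s
pcie3_x16_gb_s = 16 * pcie3_lane_gb_s  # ~15.75 GB/s in one direction

# x16 Gen 3 has headroom for one 100G port; an x8 slot (~7.9 GB/s) does not.
assert pcie3_x16_gb_s > nic_gb_s
assert 8 * pcie3_lane_gb_s < nic_gb_s
```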

    • @Brians256
      @Brians256 1 month ago +3

      PCIe 7.0 is definitely not yet needed for home or small business customers. If you'll note, Ian and Priyank both focused on business use cases like the big-AI training farms. These are billion-dollar clusters where you need high bandwidth (terabytes/second) and low latency (tens or hundreds of nanoseconds).

    • @Thats_Mr_Random_Person_to_you
      @Thats_Mr_Random_Person_to_you 1 month ago +5

      Applying 'consumer' logic to 'enterprise' (or the real bleeding-edge scientific world) is not really suitable.
      1GbE is fine for consumers and 2.5GbE for enthusiasts, but in the SME I work in, 100GbE is regularly used, and for large tech enterprise and datacentre stuff 400GbE is not uncommon. I'm using network interface speeds as an example because NICs hang off PCIe just as much as anything else, like accelerators.
      PCIe isn't a consumer standard; it just so happens that consumer hardware uses it.

    • @noname-gp6hk
      @noname-gp6hk 1 month ago +3

      Gaming technology and server technology used to be fairly closely related. New leading edge server technology is no longer applicable to end users on home PCs. This is not being driven by PCs and isn't designed for them.

  • @murraywebster1228
    @murraywebster1228 1 month ago +1

    When does fibre start to be used instead of copper….

    • @arnabbiswasalsodeep
      @arnabbiswasalsodeep 1 month ago +3

      Never, because everything would need a transceiver to convert it back to an electrical signal for the transistors.

  • @davidaz4933
    @davidaz4933 1 month ago

    SamTec - Bulls Eye
    Bulls Eye Rugged Solid FEP Dielectric, 25 AWG Microwave Cable Assembly ?

  • @tadmarshall2739
    @tadmarshall2739 1 month ago

    I wish that you would take a few seconds to explain some of the jargon and acronyms you are using. I can guess what RTL stands for, but I'm probably wrong, so maybe at least spell out the acronyms.

    • @henrikoldcorn
      @henrikoldcorn 1 month ago +2

      The content has to be tailored for an audience, you can’t make it for everyone or Ian would still be there explaining what a transistor is.

  • @spacegaiden
    @spacegaiden 1 month ago

    If we're not at 6 yet and almost done with 7, then 7 becomes 6! You start again and begin research on revision 8, which is now 7!

    • @yxyk-fr
      @yxyk-fr 1 month ago

      and 7 ate 9...
      ...
      ... OK 😛

  • @FSK1138
    @FSK1138 28 days ago

    Cambrian period for AI 😎

  • @mentalplayground
    @mentalplayground 1 month ago

    I have missed 6.0 :)

  • @overtoke
    @overtoke 29 days ago

    too bad we can't skip 5 and 6...

  • @jamesjonnes
    @jamesjonnes 1 month ago

    PCIe is a big mistake. The CPU, memory, and GPU should always be fused together. That's faster by orders of magnitude, as with Apple's fused chips.

    • @karl0ssus1
      @karl0ssus1 1 month ago +1

      You can't build a data center on a chip.

    • @jamesjonnes
      @jamesjonnes 1 month ago

      @karl0ssus1 You'll see in a few years.

    • @karl0ssus1
      @karl0ssus1 1 month ago +1

      @jamesjonnes I doubt it. Even if SoCs did become dominant in the consumer space, at the data center level they would still be trying to link these theoretical monster SoCs to achieve even greater compute power.

  • @scudsturm1
    @scudsturm1 1 month ago

    4 24pins? weird stuff

  • @peppybocan
    @peppybocan 1 month ago +2

    Maybe, just maybe, they should focus on the DRAM situation. We do have fast enough interconnect (PCIe), but we seem to struggle with DRAM latency.

    • @Brians256
      @Brians256 1 month ago +3

      DRAM is something that is "stuck" due to physics with the current approach. Latency and cell size appear to be at their physical limits unless we can change the fundamental design. Thus, we get more and better caching, better coherence, and more bandwidth as a poor replacement.

    • @peppybocan
      @peppybocan 1 month ago

      @Brians256 Well, then that's a billion-dollar business area. From what I have read, it looks like MOSFETs are in use. I wonder why we can't just make the RAM SRAM (flip-flops)?

    • @peppybocan
      @peppybocan 1 month ago

      is there even research in this area? Wikipedia doesn't seem to say much.

    • @Brians256
      @Brians256 1 month ago +3

      @peppybocan Are you asking about memory research? There's loads of research and engineering. Engineers refine the current approach at each process node, and you can safely assume that serious money is invested into making each node as productive as possible. Research science is done on many different types of memory (e.g., magnetoresistive, phase change) as well as shifting some compute inside the memory modules.

    • @striker44
      @striker44 1 month ago

      @peppybocan There is a ton of R&D, but not for Wikipedia publication.

  • @PaulGrayUK
    @PaulGrayUK 1 month ago

    I'm still waiting for GPUs whose memory you can upgrade, which would make GPUs cheaper and more future-proof. Will a future bus like PCIe x help solve this?

  •  1 month ago +2

    Color is a bit off :) Or a vacation is needed.

  • @ViewBothSides
    @ViewBothSides 1 month ago +6

    There's a lot of hype around ML but let's be honest, it's not actually doing anything particularly useful for most people today. We're probably going to spend at least the next 5 years trying to turn the crap off, like Recall on Windows, or AI-generated BS being given in search results instead of the actual results.

  • @ultraveridical
    @ultraveridical 1 month ago

    Remember to take vitamin D.

  • @cannesahs
    @cannesahs 1 month ago

    Anyone count AI words?

  • @Nourrights_psalm118.8
    @Nourrights_psalm118.8 1 month ago

    Cough cough... Hifi audio cable

  • @langhans156
    @langhans156 1 month ago

    The model with big datacenters won't work! No company in its right mind lets its employees put company data on the servers of such providers.

    • @striker44
      @striker44 1 month ago +1

      What is the timestamp?

  • @tonupif
    @tonupif 1 month ago +1

    This is where monopoly leads: Wi-Fi speed has increased 1000x, PCIe speed only 8x. I connected 6 hard drives in RAID 0 to 6 SATA connectors!!! and got a cached read speed twice as high as the transfer speed to my PCIe card, an Nvidia 3090. That's the main achievement of the two companies Intel and AMD: global stagnation in everything. That's how any monopoly or market collusion works. For 30 years the industry has been treading water, talking, and throwing money at nonsense.

  • @DS-pk4eh
    @DS-pk4eh 1 month ago

    So, let's just skip version 6; at least the upgrade will be awesome with the Nvidia RTX 8090 AI

  • @momoanddudu
    @momoanddudu 1 month ago

    How many home users care about this?
    Desktop CPUs offer 20 PCIe lanes, and motherboards offer maybe 2 x16 slots and an x4 slot, with users using only the CPU's x16 slot for a graphics adapter. Only gamers care whether that slot is the latest & greatest PCIe.
    Accelerators are used in servers (mostly in the cloud) and rarely seen at home.

  • @timun4493
    @timun4493 1 month ago +1

    "Light is faster than electrons" is a correct statement to make, but totally irrelevant.

    • @LeicaM11
      @LeicaM11 1 month ago

      Electrical signals propagate energy at about 270,000 km/s; light travels at about 300,000 km/s. That is somewhat "close", so comparable at these scales.
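A quick sketch of why this gap barely matters over board-scale distances (using the two propagation speeds quoted in the comment above; electro-optical conversion latency at the transceivers is not modeled):

```python
# Propagation delay over a 1 m link at the quoted speeds.
def delay_ns(length_m: float, speed_km_s: float) -> float:
    """One-way propagation delay in nanoseconds."""
    return length_m / (speed_km_s * 1e3) * 1e9

electrical_ns = delay_ns(1.0, 270_000)  # ~3.7 ns
optical_ns = delay_ns(1.0, 300_000)     # ~3.3 ns

# The medium itself buys well under a nanosecond per metre.
assert electrical_ns - optical_ns < 1.0
```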

  • @conza1989
    @conza1989 1 month ago +2

    They're going too fast... It's annoying. PCIe 3 probably lasted too long, and PCIe 4 was too short. I would suggest most of us are thinking of GPU saturation of the bus rather than other uses like SSDs etc. My PC is using PCIe 4; the previous one was PCIe 3. I don't want GPUs utilising PCIe 5 or 6 in a couple of years, as it means upgrading would mean throwing away the whole system if I wanted to update. Just make the standard and let it sit for a time, jeez; if we jump up to PCIe 7, just leave it there.

    • @noname-gp6hk
      @noname-gp6hk 1 month ago +10

      This isn't for PCs.

    • @minus3dbintheteens60
      @minus3dbintheteens60 1 month ago +3

      You would throw an entire system away simply because a tech bro said 6 and you have a 5? Thank god 3DMark has that PCIe benchmark to show people that it hasn't mattered, and doesn't matter, if you slash a GPU's bandwidth in half by halving either the transmission speed or the width. Hell, you can cut a 7900 XTX down to PCIe 3 and then cut 3/4 of the lanes off it, and it barely cares: under 5%. It's not like a GPU is sending massive amounts of data down the slot, not unless you're overflowing into system RAM; then you would want PCIe bandwidth as fast as DDR.

    • @austin2994
      @austin2994 1 month ago +4

      This is for 10+ GPUs to exchange AI data. We're not going to buy this at Micro Center.

    • @striker44
      @striker44 1 month ago +1

      You buy every gen upgrade? The tech companies love you. BTW, this is not for current consumer desktops. Listen closely to the application space they are targeting.

    • @MrHaggyy
      @MrHaggyy 1 month ago

      This is for Nvidia A100 clusters (several dozen 4090-class GPUs), or 128-core chips from Ampere, ARM, RISC, or something like the IBM Power family with TBs of RAM per chip.
      For a single GPU, even an RTX 5090, PCIe 3 (or 4 if you want fewer lanes) is more than enough.

  • @zorrozalai
    @zorrozalai 1 month ago

    Anything beyond PCIe 4.0 (3.0) is totally unnecessary for gamers. 7.0 gives us larger AI models and faster NVMe drives for data centers.

    • @eat.a.dick.google
      @eat.a.dick.google 1 month ago +1

      Faster Fiber Channel networking. Faster Ethernet networking. Faster GPUs. Faster AI accelerators. Faster CXL.

  • @yuan.pingchen3056
    @yuan.pingchen3056 1 month ago

    i want a A.I. girl robot.....the AI evolution is too slow.....

  • @ZA26
    @ZA26 1 month ago

    n r zee btw


  • @ppwalk05
    @ppwalk05 1 month ago

    There are dozens of us clients!