Swap GPUs at the Press of a Button: Liqid Fabric PCIe Magic Trick!

  • Published: 13 Dec 2024

Comments • 278

  • @ProjectPhysX
    @ProjectPhysX 7 months ago +19

    That PCIe tech is just fantastic for software testing. I test my OpenCL codes on Intel, Nvidia, AMD, Arm, and Apple GPU drivers to make sure I don't step on any driver bugs. For benchmarks that need the full PCIe bandwidth, this system is perfect.

    • @cdoublejj
      @cdoublejj 3 months ago

      ....oh god... i'm about to go down another rabbit hole....lauhd halp me

  • @TeeEllohwhydee
    @TeeEllohwhydee 7 months ago +136

    The computer isn't real. The fabric isn't real. Nothing actually exists. We're all just PCI-express lanes virtualized in some super computer in the cloud. And I still can't get 60fps.

    • @AlumarsX
      @AlumarsX 7 months ago +1

      Goddamn Nvidia all that money and keeping us fps capped

    • @gorana.37
      @gorana.37 7 months ago +1

      🤣🤣

    • @jannegrey
      @jannegrey 7 months ago +2

      There is no spoon taken to the extreme.

    • @fhsp17
      @fhsp17 7 months ago

      The hivemind secret guardians saw that. They will get you.

    • @nicknorthcutt7680
      @nicknorthcutt7680 6 months ago +1

      😂😂😂

  • @wizpig64
    @wizpig64 7 months ago +94

    WOW! imagine having 6 different CPUs and 6 GPUs, rotating through all 36 combinations to hunt for regressions! Thank you for sharing this magic trick!

    • @joejane9977
      @joejane9977 7 months ago +7

      imagine if windows worked well

    • @onisama9589
      @onisama9589 7 months ago

      Most likely the Windows box would need to be shut down before you switch, or the OS will crash.

    • @jjaymick6265
      @jjaymick6265 6 months ago +4

      I do this daily in my lab: 16 different servers, 16 GPUs (4 groups of 4), running fully automated regressions for AI/ML models, GPU driver stacks, and CUDA version comparisons. Like I have said in other posts, once you stitch this together with Ansible / Digital Rebar, things get really interesting. Now that everything is automated, I just input a series of hardware and software combos to test and the system does all the work while I sleep. I wake up, review the results, and input the next series of tests. There is no more cost-effective way for one person to test the thousands of combinations.
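
      A minimal sketch of that kind of sweep, assuming a placeholder compose-gpu CLI for the fabric attach/release step and an Ansible playbook named bench.yml for the benchmark itself (both are stand-ins, not actual Liqid or Digital Rebar tooling):

```python
#!/usr/bin/env python3
"""Sweep every host/GPU-group combination and benchmark each one.

`compose-gpu` and `bench.yml` are placeholders for whatever fabric CLI
and test playbook a lab actually uses.
"""
import itertools
import subprocess

HOSTS = ["server-01", "server-02", "server-03"]   # example hostnames
GPU_GROUPS = ["a100-x4", "l40-x4", "mi210-x4"]    # example GPU pools


def compose(host: str, group: str) -> None:
    # Attach the GPU group to the host over the PCIe fabric (placeholder CLI).
    subprocess.run(["compose-gpu", "--host", host, "--group", group], check=True)


def release(host: str) -> None:
    # Return the host's GPUs to the free pool (placeholder CLI).
    subprocess.run(["compose-gpu", "--host", host, "--release"], check=True)


def run_benchmark(host: str, group: str) -> None:
    # Run the test suite on that host; one log file per combination.
    log = f"results/{host}_{group}.log"
    subprocess.run(
        ["ansible-playbook", "-l", host, "bench.yml", "-e", f"logfile={log}"],
        check=True,
    )


if __name__ == "__main__":
    for host, group in itertools.product(HOSTS, GPU_GROUPS):
        compose(host, group)
        try:
            run_benchmark(host, group)
        finally:
            release(host)
```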

    • @formes2388
      @formes2388 6 months ago

      @@joejane9977 It does. I mean, it works well enough that few people go through the hassle of consciously switching. It's more a default switch if people start using a tablet as a primary device, due to not needing a full-fat desktop for their day-to-day needs.
      For perspective on where I am coming from: I have a trio of Linux systems and a pair of Windows systems; one of the Windows systems is also dual-booted to 'nix. Used to have a macOS system but have no need of one, and better things to spend money on.
      For some stuff Linux is great; thing is, I have better things to do with my time than tinker with configs to get things running - so sometimes, a Windows system just works.

  • @Ultrajamz
    @Ultrajamz 7 months ago +123

    So I can literally hotswap my 4090’s as they melt like a belt fed gpu pc?

    • @HORNOMINATOR
      @HORNOMINATOR 7 months ago +15

      that's like a gatling gun for gpus. don't give manufacturers ideas.

    • @TigonIII
      @TigonIII 7 months ago +8

      Melt? Like turning them to liquid, pretty on brand. ;)

    • @nicknorthcutt7680
      @nicknorthcutt7680 6 months ago +2

      Lmao

    • @KD-_-
      @KD-_- 6 months ago +3

      The VHPWR connector might be durable enough to justify hand loading because the belt system could dislodge the connectors from the next one in line.
      Would need to do analysis.

    • @xlr555usa
      @xlr555usa 2 months ago

      That's why I'm trying to pick up more 3090s

  • @Maxjoker98
    @Maxjoker98 7 months ago +6

    I've been waiting for this video ever since Wendell first started talking about/with the Liquid people. Glad it's finally here!

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 7 months ago +112

    Please Liqid, introduce a tier for homelab users!

    • @popeter
      @popeter 7 months ago +10

      oh yea could do so much, proxmox systems on dual ITX all sharing GPU and network off one of these

    • @marcogenovesi8570
      @marcogenovesi8570 7 months ago +9

      I doubt this can be made affordable for common mortals

    • @AnirudhTammireddy
      @AnirudhTammireddy 7 months ago +7

      Please deposit your 2 kidneys and 1 eye before you make any such requests.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 7 months ago +2

      My humble dream setup would be a “barebones” kit consisting of the PCIe AIC adapters for the normal “client” motherboard and the “server” board that offers four x16 slots. You’d have to get your own cases and PSU solution for the “server” side.

    • @mritunjaymusale
      @mritunjaymusale 7 months ago +1

      @@marcogenovesi8570 you can tho; in terms of hardware it's just a PCIe switch. The hard part is the low-level code to match the right PCIe device to the right CPU, and on top of that, software that connects it to workflows that can understand this.

  • @d0hanzibi
    @d0hanzibi 7 months ago +20

    Hell yea, we need that consumerized!

  • @Andros.G
    @Andros.G 4 months ago +3

    Oof, I both love and hate the way he hinted at running multiple GPUs on the same PC in tandem via SLI. Reminds me of the old SLI and CrossFire days. Kind of wish we still utilized that tech, but it was such a headache to troubleshoot.

  • @seanunderscorepry
    @seanunderscorepry 7 months ago +8

    I was skeptical that I'd find anything useful or interesting in this video since the use-case doesn't suit me personally, but Wendell could explain paint drying on a wall and make it entertaining / informative.

  • @cs7899
    @cs7899 7 months ago +8

    Love Wendell's off label videos

  • @totallyuneekname
    @totallyuneekname 7 months ago +129

    Can't wait for the Linus Tech Tips lab team to announce their use of Liqid in two years

    • @mritunjaymusale
      @mritunjaymusale 7 months ago +18

      I mentioned this idea in his comments when Wendell was doing interviews with the liqid guys, but Linus being the dictator he is in his comments has banned me from commenting.

    • @krishal99
      @krishal99 7 months ago +28

      @@mritunjaymusale sure buddy

    • @janskala22
      @janskala22 7 months ago +13

      LTT does already use Liqid, just not this product. You can see in one of their videos that they have a 2U Liqid server in their main rack. It seemed like a rebranded Dell server, but still from Liqid.

    • @totallyuneekname
      @totallyuneekname 7 months ago

      Ah TIL, thanks for the info @janskala22

    • @tim3172
      @tim3172 7 months ago

      Can't wait for you to type "ltt liqid" into YouTube search and realize LTT has videos from the last 3 years showcasing Liqid products.

  • @chaosfenix
    @chaosfenix 7 months ago +15

    I would love this in the home setting. If it is hot pluggable it is also programmable which means that you could upgrade GPUs periodically but instead of just throwing it away you would push it down the list on your priority. Hubby and Wifey could get priority on the fastest GPU and if you have multiple kids they would be lower priority. If mom and dad aren't playing at the moment though they could just get the fastest GPU to use. You could centralize all of your hardware in a server in a closet and then have weaker terminal devices. They could have an amazing screen, keyboard, etc but they could cheap out on the CPU, RAM, GPU etc because those would just be composed when they booted up. Similar to how computers will switch between an integrated GPU and a dGPU now you could just use the cheap devices iGPU while doing the basics but then if you opened an application like a game it would dynamically mount the GPU from the rack. No more external GPUs for laptops and no more insanely expensive laptops with hardware that is obsolete for its intended task in 2 years.

    • @HORNOMINATOR
      @HORNOMINATOR 7 months ago +2

      moooom?! why is my fortnite dropping fps lmao

    • @SK83RJOSH
      @SK83RJOSH 7 months ago

      I would have concerns about cross talk and latency from like, signal amplifiers, in that scenario. I could not imagine trying to triage the issues this will introduce. 😂

    • @chaosfenix
      @chaosfenix 7 months ago +1

      @@SK83RJOSH I think latency would be the biggest one. I am not sure what you mean by cross talk though. If you mean signal interference, I don't think that would apply here any more than it would apply in any regular motherboard and network. If you mean cross talk in WiFi, then this really would not be how I would do it. I would use fiber for all of this. Even WiFi 7 is nowhere near fast enough for this kind of connectivity and would have way too much interference. Maybe if you had a 60 GHz connection, but that is about it.

  • @TheFlatronify
    @TheFlatronify 7 months ago +7

    This would come in so handy in my small three node Proxmox cluster, assigning GPUs to different servers / VMs when necessary. The image would be streamed using Sunshine / Moonlight (similar to Parsec). I wish there was a 2 PCIe Slot consumer tier available for a price that enthusiasts would be willing to spend!

    • @jjaymick6265
      @jjaymick6265 7 months ago +1

      I use this every day in my lab running Prox / XCP-NG / KVM. Linux hot-plug PCIe drivers work like a champ to move GPUs in and out of hypervisors. If only VirtIO had reasonable support for hot-plug PCIe into the VM, so I would not have to restart the VM every time I wanted to change GPUs to run a new test. Maybe someday.
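
      For reference, the Linux hot-plug path mentioned here is just the kernel's sysfs interface; a minimal sketch (the PCI address is an example, and this needs root):

```python
#!/usr/bin/env python3
"""Hot-remove a PCIe device and re-scan the bus via sysfs (Linux, run as root)."""
from pathlib import Path

BDF = "0000:41:00.0"  # example bus/device/function; use `lspci -D` to find yours


def hot_remove(bdf: str) -> None:
    # Ask the kernel to unbind the driver and drop the device from the tree.
    Path(f"/sys/bus/pci/devices/{bdf}/remove").write_text("1")


def rescan_bus() -> None:
    # Re-enumerate PCIe; a GPU newly composed onto this host shows up here.
    Path("/sys/bus/pci/rescan").write_text("1")


if __name__ == "__main__":
    hot_remove(BDF)
    rescan_bus()
```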

  • @nicknorthcutt7680
    @nicknorthcutt7680 6 months ago

    This is absolutely incredible! Wow, I didn't even realize how many possibilities this opens up. As always, another great video man.

  • @N....
    @N.... 6 months ago

    A workaround for lack of hotplug is to just keep all the GPUs connected at once and disable/enable via Device Manager. Changing primary display to a display connected to the GPU works for most stuff but some games like to pick a different GPU than the primary display, hence disabling in Device Manager to prevent that.
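
    A rough way to script that toggle, assuming a recent Windows 10/11 build where pnputil has /enable-device and /disable-device; the device instance ID below is made up, so list real ones first with pnputil /enum-devices /class Display (elevated prompt required):

```python
#!/usr/bin/env python3
"""Enable/disable a GPU from the command line instead of Device Manager."""
import subprocess

# Example instance ID only -- substitute the one pnputil reports for your GPU.
GPU_INSTANCE_ID = r"PCI\VEN_10DE&DEV_2684&SUBSYS_00000001&REV_A1\4&12345678&0&0008"


def set_gpu_enabled(instance_id: str, enabled: bool) -> None:
    switch = "/enable-device" if enabled else "/disable-device"
    subprocess.run(["pnputil", switch, instance_id], check=True)


if __name__ == "__main__":
    set_gpu_enabled(GPU_INSTANCE_ID, False)  # hide the GPU so games ignore it
    set_gpu_enabled(GPU_INSTANCE_ID, True)   # bring it back afterwards
```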

  • @ralmslb
    @ralmslb 7 months ago +9

    I would love to see performance tests comparing the impact of the cable length, etc.
    Essentially, the PCIe speed impact not only in terms of latency but also throughput: the native solution vs the Liqid Fabric products.
    I have a hard time believing that this solution has zero downsides, hence I wouldn't be surprised if the same GPU has worse performance over Liqid Fabric.

    • @MiG82au
      @MiG82au 7 months ago

      Cable length is a red herring. An 8 m electrical cable only takes ~38 ns to pass a signal and the redriver (not retimer) adds sub 1 ns, while normal PCIe whole link latency is on the order of hundreds of ns. However, the switching of the Liqid fabric will add latency as will redrivers.
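
      Back-of-envelope check of those numbers, assuming roughly 0.7c propagation in the cable (the exact velocity factor varies by cable):

```python
C = 299_792_458                  # speed of light, m/s
velocity = 0.7 * C               # assumed signal speed in the copper link
length_m = 8.0
one_way_ns = length_m / velocity * 1e9
print(f"{one_way_ns:.0f} ns one way over {length_m:.0f} m")  # ~38 ns
```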

    • @paulblair898
      @paulblair898 7 months ago +1

      There are most definitely downsides. Some PCIe device drivers will crash with the introduction of additional latency, because fundamental assumptions made when writing them don't handle the >100 ns of latency the Liqid switch adds. ~150 ns of additional latency is not trivial compared to the base latency of the device.

  • @andypetrow4228
    @andypetrow4228 7 months ago

    I came for the magic.. I stayed for the soothing painting above the techbench

  • @pyroslev
    @pyroslev 7 months ago +7

    This is wickedly cool. Practical or usable for me? Naw, not really. But seeing that messy workshop lived in is as satisfying as the tech.

  • @MatMarrash
    @MatMarrash 6 months ago

    If there's something you can cram into PCIe lanes, you bet Wendell's going to try it and then make an amazing video about it!

  • @scotthep
    @scotthep 7 months ago

    For some reason this is one of the coolest things I've seen in a while.

  • @sebmendez8248
    @sebmendez8248 7 months ago

    This could genuinely be useful for massive engineering firms. Most engineering firms nowadays use 3D modelling, so a server-side GPU setup could technically mean every single computer on site has access to a 4090 for model rendering and creation without buying and maintaining 100+ GPUs.

  • @shinythings7
    @shinythings7 7 months ago +1

    I was looking at the vfio stuff to have everything in a different part of the house. Now this seems like just as good of a solution. Having the larger heat generating components in a single box and having the mobo/cpu/os where you are would be a nice touch. Would be great for SFF mini pc's as well to REALLY lower your footprint on a desk or in an office/room.

  • @reptilianaliengamedev3612
    @reptilianaliengamedev3612 6 months ago +1

    Hey, if you have to record in that noisy environment again, you can leave about 15 or 30 seconds of silence at the beginning or end of the video to use as a noise profile. In Audacity, use the noise reduction effect: generate the noise profile, then run it on the whole audio track. It should sound about 10x better and get rid of nearly all the noise.

    • @MartinRudat
      @MartinRudat 6 months ago

      I'm surprised Wendell isn't using a pair of communication earmuffs; hearing protection coupled with a boom mic (or a bunch of mics and post-processing) possibly being fed directly to the camera.
      As far as I know a good, comfortable set of earmuffs, especially something like the Sensear brand (which allow you to have a casual conversation next to a diesel engine at full throttle) are more or less required equipment for someone that works in a data center all day.

  • @cem_kaya
    @cem_kaya 7 months ago +9

    this might be very useful with CXL if it lives up to expectations.

    • @jjaymick6265
      @jjaymick6265 7 months ago +2

      Liqid already has demos of CXL memory pooling with their fabric. I would not expect it to reach production before mid 2025.

    • @hugevibez
      @hugevibez 7 months ago +2

      CXL already goes far beyond this as it has cache coherency, so you can pool devices together much more easily. I see it as an evolution of this technology (and the NVSwitch stuff), which CXL 3.0 and beyond expands on even further with the extended fabric capabilities and PCIe Gen 6 speeds. I think that's where the holdup has been, since it's a relatively new technology and those extended capabilities are significant for hyperscaler adoption, which is what drives much of the industry, and especially the interconnects subsector, in the first place.

  • @solidreactor
    @solidreactor 7 months ago +2

    I have been thinking about this use case for a year now, for UE5 development, testing and validation. Recently also thought about using image recognition with ML or "standard" computer vision (or a mix) for automatic validation.
    I can see this being valuable for both developers and also for tech media benchmarking. I just need to allocate time to dive into this.... or get it served "for free" by Wendell

  • @misimik
    @misimik 7 months ago +1

    Guys, can you help me gather Wendell's most used phrases? Like
    - coloring outside the lines
    - this is not what you would normally do
    - this is MADNESS
    - ...

    • @tim3172
      @tim3172 7 months ago

      He uses "RoCkInG" 19 times every video like he's a tween that needs extra time to take tests.

  • @markjackson264
    @markjackson264 9 days ago

    Hey Wendell, just wanted to say thanks for the great videos and hello.

  • @mritunjaymusale
    @mritunjaymusale 7 months ago

    I really wanted to do something similar on my uni's server for deep learning. We had 2 GPU-based systems that each had multiple GPUs; using this, we could've pooled those GPUs together to make a 4-GPU system in one click.

  • @Ironic-Social-Phobia
    @Ironic-Social-Phobia 7 months ago +1

    Now we know what really happened to Ryan this week, Wendell was practicing his magic trick!

  • @Jdmorris143
    @Jdmorris143 7 months ago

    Magic Wendell? Now I cannot get that image out of my head.

  • @ianemptymindtank
    @ianemptymindtank 5 months ago

    Thinking about why my workplace needs this

  • @Harve6988
    @Harve6988 1 month ago

    So much RGB in that AMD red door room....
    This is amazing though. You should maybe try and see if this works with all those eGPU setups (Beelink etc). I can imagine having this box in your basement and a mini PC in each room, then just using that GPU resource for gaming in whichever room you want.
    Hyperconvergence in the home seems like an interesting future. I wonder whether we'll ever get something similar for USB devices (XHCI-IOV notwithstanding, as something that doesn't seem to be well supported anywhere). Can we not have a USB fabric for KVMs where each server has its own "USB fabric" card in it, and then you can just switch to using that card as if it were plugged in?
    Or what about whole-home HDMI/DP fabrics that work similarly - you pass an HDMI/DP link or two over a fabric, and it seems to the display like it is just "plugged in". I think if they've got PCIe sorted out (which is pernickety with the repeaters and latency) the other protocols could be worked out. OTOH - PCIe probably is the killer protocol, and either everything else can be done on top of it, or there is not sufficient desire/market size for other protocols to be "hyperconverged".

  • @AzNcRzY85
    @AzNcRzY85 7 months ago +2

    Wendell, does it fit in the Minisforum MS-01?
    It would be a massive plus if it fit and worked.
    The RTX A2000 12GB is already good, but this is a complete game changer for a lot of systems, mini or full desktop.

  • @chrismurphy2769
    @chrismurphy2769 7 months ago

    I've absolutely been wanting and dreaming of something like this

  • @DaxHamel
    @DaxHamel 7 months ago

    Thanks Wendell. I'd like to see a video about network booting and imaging.

  • @immortalityIMT
    @immortalityIMT 6 months ago

    How would you do a cluster for training an LLM: first, 4x 8 GB in one system, and second, 4x 8 GB over LAN?

  • @dangerwr
    @dangerwr 7 months ago

    I could see Steve and team at GamersNexus utilizing this for retesting older cards when new GPUs come out.

  • @jayprosser7349
    @jayprosser7349 7 months ago

    The Wizard at Techpowerup must be aware of this.

  • @ShankayLoveLadyL
    @ShankayLoveLadyL 6 months ago

    WoW.. this is truly amazing, impressive, I dunno... like, I usually expect smart stuff on this channel from my list of tech channels, but this time, what Wendell has done is in a completely different league.
    I bet Linus was thinking about something similar for his tech lab, but now there is someone he could hire for his automated mass-testing project.

  • @shodan6401
    @shodan6401 6 months ago

    Man, I'm not an IT guy. I know next to nothing. But I love this sht...

  • @_GntlStone_
    @_GntlStone_ 7 months ago +27

    Looking forward to a L1T + GN collaboration video on building this into a working gaming test setup (Pretty Please ☺️)

    • @Mervinion
      @Mervinion 7 months ago +6

      Throw Hardware Unboxed into the mix. I think both Steves would love it. Only if you could do the same with CPUs...

  • @fanshaw
    @fanshaw 6 months ago

    I just want this inside my workstation - a bank of x16 slots and I get to dynamically (or even statically, with dip switches) assign PCIE lanes to each slot or to the chipset.

  • @brandonhi3667
    @brandonhi3667 7 months ago +1

    fantastic video!

  • @GorditoCrunchTime
    @GorditoCrunchTime 7 months ago

    Wendell: “you may have noticed..”
    Me: “I noticed that Apple monitor!”

  • @annebokma4637
    @annebokma4637 7 months ago

    I don't want an expensive box in my basement. In my attic high and DRY. 😂

  • @cal2127
    @cal2127 7 months ago +2

    whats the price?

  • @Ben79k
    @Ben79k 7 months ago

    I had no idea something like this was possible. Very cool. It's not the subject of the video, but that iMac you were demoing on, is it rigged up to use as just a monitor? Or is it actually running? Looks funny with the glass removed.

  • @stamy
    @stamy 7 months ago

    Wow, very interesting!
    Can you control power on those PCIe devices? I mean, let's say only one GPU powered on at a time, the one that is currently being used remotely.
    Also, how do you send the video signal back to the monitor? Are you using an extra-long DisplayPort cable, or a fiber optic cable of some sort?
    Thank you.
    PS: What is the approximate price of such a piece of hardware?

  • @hugevibez
    @hugevibez 7 months ago

    The real question is, does this support Looking Glass so you can do baremetal-to-baremetal video buffer sharing between hosts? I know it should technically be possible since PCIe devices on the same fabric/chassis can talk to one another. Yes, my mind goes to some wild places, I've also had dreams of Looking Glass over RDMA. Glad you've finally got one of these in your lab. Anxiously awaiting the CXL revolution which I might be able to afford in like a decade.

  • @mohammedgoder
    @mohammedgoder 7 months ago +2

    Is there any PCIe rack-mount chassis that can allow this to be a rack-mounted solution?

    • @jjaymick6265
      @jjaymick6265 7 months ago

      Typical installation is rackmount. It is all standard 19 inch gear that gets deployed in datacenters around the world.

    • @mohammedgoder
      @mohammedgoder 7 months ago

      @@jjaymick6265 can you post a model number that you'd recommend?

    • @mohammedgoder
      @mohammedgoder 7 months ago

      @@jjaymick6265 Is there any particular model that you'd recommend.

    • @jjaymick6265
      @jjaymick6265 6 months ago

      @@mohammedgoder Somehow my previous comment got removed. If you are looking for supported systems / fabric device etc the best place to check is on Liqid's website. Under resources they publish a HCL of "Liqid Tested/Approved" devices.

    • @mohammedgoder
      @mohammedgoder 6 months ago

      I found it. Wendell mentioned it in the video.
      Liqid makes rackmount PCIe enclosures.

  • @Gooberpatrol66
    @Gooberpatrol66 7 months ago

    This would be great for KVM. Plug USB cards into PCIE, and send your peripherals to all your computers.

  • @chrisamon5762
    @chrisamon5762 7 months ago

    I might actually be able to use all my pc addiction parts now!!!!!

  • @XtianApi
    @XtianApi 4 months ago +1

    If you work in the enterprise, watch this. If you don't work in the enterprise, it's just going to make you mad because you can't afford it

  • @ryanw.9828
    @ryanw.9828 7 months ago +1

    Hardware unboxed! Steve!!!!

  • @Rkcuddles
    @Rkcuddles 3 months ago

    It can save a lot of money in a house of gamers. You can dynamically allocate the best gpu to whoever is playing a demanding video game!!! And you don’t need multiple high end graphics cards when most of the time the $300 option is just fine.
    Less buying stuff yes yes yes.

  • @NdxtremePro
    @NdxtremePro 7 months ago

    This seems tailor-made for all those single-slot consumer boards that get sold. It would make them much more useful.
    I can imagine it could in some future reduce the cost of a recording studio with all of their specialized audio cards, if they could spend 1/1th the cost on the motherboard and share the cards across multiple pieces of equipment.
    I could see cryptominers using the best cards depending on the current pricing.
    I could see switching GPUs depending on which gives the best gaming performance.
    How about retro machines using older PCIe cards with VMs?
    I imagine the bandwidth of older GPUs wouldn't saturate the bus, so you could connect them to the system and pass them through to individual VMs?
    Or some PCIe 1.0 cards in CrossFire and SLI with a one-slot motherboard.
    Way overkill, but seriously cool tech.
    Speaking of that, you could get some PCIe to PCI-X audio equipment, pass that through to some Windows XP VMs, and get that latency goodness and unrestricted access audio engineers loved, in a modern one-slot solution.
    Enterprise side, I could see creating a massive networking VM set with one of these cards in each of the main system's slots, attached to a separate PCIe box, each set up with those multifunction cards. A custom bespoke network switch.

    • @tim3172
      @tim3172 7 months ago

      1/1th the cost?

    • @NdxtremePro
      @NdxtremePro 7 months ago

      @@tim3172 1/10th, lol.

  • @SprocketHoles
    @SprocketHoles 5 months ago

    Imagine this built into a laptop with an external dock. Full phat GPU running at full speed on the desk.

  • @Edsdrafts
    @Edsdrafts 7 months ago

    How about power usage when you have all these GPUs running? Do the rest idle at reasonable wattage / temps when unused? It's also hard doing game testing due to thermals, as you are using a different enclosure from a standard PC, etc. There must be noticeable performance loss too.

    • @jjaymick6265
      @jjaymick6265 7 months ago

      I can't speak for client GPUs but enterprise GPUs have power saving features embedded into the cards. For instance an A100 at idle pulls around 50 watts. At full tilt it can pull close to 300'ish watts. The enclosure itself pulls about 80 watts empty (no GPUs). As far as performance loss. Based on my testing of AI/ML workloads on GPUs inside Liqid fabrics compared with published MLPerf results I would say performance loss is very minimal.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 7 months ago +2

    …could you hook up a second Liqid adapter in the same client system to a Gen5 x4 M.2 slot to not interfere with the 16 dGPU lanes?

    • @jjaymick6265
      @jjaymick6265 7 months ago +2

      Liqid does support having multiple HBAs in a single host. Each fabric device provisioned on the fabric is directly provisioned to a specific HBA, so your idea of isolating disk IO from GPU IO would work.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 7 months ago +1

      Thanks for that clarification.

  • @rootofskynet
    @rootofskynet 4 months ago

    you just blew my mind. i've got some gpus here, that urge for usage... :D

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 7 months ago

    This could greatly cut down on hardware for render farms in VFX. Neat

  • @AGEAnimations
    @AGEAnimations 7 months ago

    Could this use all the GPUs for 3D Rendering in Octane or Redshift 3D for a single PC or is it just one GPU at a time? I know Wendell mentions SLI briefly but to have a GPU render machine connected to a small desktop pc would be ideal for a home server setup.

  • @C-M-E
    @C-M-E 2 months ago

    I remembered this vid from a while back and acquired another workstation card, then this popped into my head and seemed like a great way to consolidate a few cards for potential GPU render farming. Oof, this is a slick system, but it's 50 GRAND for the 'cheapest' option !!!!! Why is plugging a bunch of PCIe cards into a network so expensive?!
    Man, seriously bummed. I thought this would be like $500 or so.

  • @stupiduser6646
    @stupiduser6646 23 days ago

    Ok. Now I know what I am asking for Christmas.
    Could you use the network swap feature to connect to different networks?

  • @theftking
    @theftking 7 months ago +1

    Obviously it's time to ditch USB and start using PCI-e for everything. Think of how sick the PCI-e mice and keyboards will be.

  • @SomnolentFudge
    @SomnolentFudge 7 months ago +5

    I think we should only call it "liquid fabric" on Linux, on Windows it's just "wet blanket".

  • @brianmccullough4578
    @brianmccullough4578 7 months ago

    Woooooo! PCI-E fabrics baby!!!

  • @shodan6401
    @shodan6401 6 months ago

    I know that GPU riser cables are common, but realistically, how much latency is introduced by having the GPU at such a physical distance compared to being directly in the PCIe slot on the board?

  • @_neon_light_
    @_neon_light_ 7 months ago

    From where can one buy this hardware? I can't find any info on Liqid's website. Google didn't help either.

  • @ryan.crosby
    @ryan.crosby 3 months ago

    > Windows 10/11 don't really do PCIe hotplug
    I wonder what kind of insane jank is sitting behind the Surface Book 2 GPU hotplugging. It's also weird that it supports hotplugging some PCIe devices over USB-C but actual PCIe hotplug doesn't work.

  • @stamy
    @stamy 7 months ago

    Let's say you have a WS motherboard with 4 PCIe x16 expansion slots.
    Can you dynamically activate/deactivate these PCIe slots by software so that the CPU can only see one at a time? Each of the slots is populated with a GPU, of course. This would then need to be combined with a KVM to switch the video output to the monitor.

  • @AzureFlash
    @AzureFlash 7 months ago

    Snake: "AaaAAH LIQIIIIID!!"

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 7 months ago

    Thought of another question: Can the box that houses all the PCIe AICs hard-power off/on the individual PCIe slots via the management software in case there is a crashed state? Or do you have to do something physically at the box?

    • @jjaymick6265
      @jjaymick6265 7 months ago +1

      There are no slot power control features... There is, however, a bus reset feature of the Liqid fabric to ensure that devices are reset and in a good state prior to being presented to a host. So if you have a device in a bad state you can simply remove it from the host and add it back in, and it will get bus reset in the process. Per-slot power control is a feature being looked at for future enclosures.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 7 months ago

      Again, thanks for that clarification. Would definitely appreciate the per-slot power on/off control; it would be helpful for diagnosing possibly defective PCIe cards and would of course also reduce power consumption, with unused cards not just idling around.

  • @MoraFermi
    @MoraFermi 6 months ago +1

    What I really want is a simple PCIe expansion enclosure that isn't garbage designed for bitcoin miners.

  • @NickByers-og9cx
    @NickByers-og9cx 7 months ago +1

    How do I buy one of these switches, I must have one

  • @kirksteinklauber260
    @kirksteinklauber260 7 months ago +2

    How much does it cost??

  • @LeminskiTankscor
    @LeminskiTankscor 7 months ago

    Oh my. This is something special.

  • @benny-fo7bd
    @benny-fo7bd 7 months ago

    Man, a system that would convert or retrofit PCIe fabric over and/or to InfiniBand would shake up the home lab community if it were somehow compatible with Windows and Windows Server and Proxmox and all that, or if it could somehow be achieved just on the software side using only InfiniBand as the transport layer.

  • @michaelsdailylife8563
    @michaelsdailylife8563 7 months ago

    This is really interesting and cool tech!

  • @yttt2220
    @yttt2220 5 months ago

    What smartwatch is Wendell wearing?

  • @Jimster481
    @Jimster481 6 months ago

    Wow this is so amazing, I bet the pricing is far out of the range of a small office like mine though

  • @gollenda7852
    @gollenda7852 7 months ago

    Where can I get a copy of that Wallpaper?

  • @HORNOMINATOR
    @HORNOMINATOR 7 months ago

    cool and all. is that the same thing notebooks do that have a switchable iGPU and dedicated GPU?

  • @rojovision
    @rojovision 7 months ago

    What are the performance implications in a gaming scenario? I assume there must be some amount of drop, but I'd like to know how significant it is.

    • @Mpdarkguy
      @Mpdarkguy 7 months ago

      A few ms of latency I reckon

  • @GameCyborgCh
    @GameCyborgCh 7 months ago

    your test bench has an optical drive?

  • @Haleskinn
    @Haleskinn 7 months ago

    @linustechtips some ideas for upcoming video? :P

  • @ThatKoukiZ31
    @ThatKoukiZ31 7 months ago

    Ah! He admits it, he is a wizard!

  • @arnox4554
    @arnox4554 7 months ago

    Maybe I'm misunderstanding this, but wouldn't the latency between the CPU and the GPU be really bad here? Especially with the setup Wendell has in the video?

  • @PsiQ
    @PsiQ 7 months ago

    i might have missed it.. But would/will/could there be an option to shove around gpus (or AI hardware) on a server running multiple VMs to the VMs that currently need it, and "unplug" it from idle ones ? .. Well, ok, you would need to run multiple uplinks at some point i guess.. Or have all gpus directly slotted in your vm server.

    • @jjaymick6265
      @jjaymick6265 7 months ago +1

      The ability of Liqid to expose a single or multiple PCIe devices to a single or multiple hypervisors is 100% a reality. As long as you are using a Linux-based hypervisor, hot-plug will just work. You can then expose those physical devices or vGPUs (if you can afford the license) to one or many virtual machines. The only gotcha is that to change GPU types in the VM you will have to power-cycle the VM, because I have not found any hypervisor (VMware / Prox / XCP-NG / KVM-qemu) that supports hot-plug PCIe into a VM.

    • @PsiQ
      @PsiQ 7 months ago

      @@jjaymick6265 thanks ;-) you seem to be going round answering questions here 🙂

    • @jjaymick6265
      @jjaymick6265 6 months ago

      @@PsiQ I have 16 servers (Dell MX blades and various other 2U servers) all attached to a Liqid fabric in my lab, with various GPUs/NICs/FPGAs/NVMe that I have been working with for the past 3 years. So I have a fair bit of experience with what it is capable of. Once you stitch it together with some CI/CD stuff via Digital Rebar or Ansible it becomes super powerful for testing and automation.

  • @leftcoastbeard
    @leftcoastbeard 7 months ago

    Reminds me of Compute Express Link (CXL)

  • @OsX86H3AvY
    @OsX86H3AvY 7 months ago

    I'd like to be able to hotplug GPUs in my running VMs as well... how nice would it be to have, say, two or three VM boxes for CPU and MEM, one SSD box, one GPU box and one NIC box, so you could just swap any NIC/GPU/disk to any VM in those boxes in any combination.... that'd be sweet.... I definitely don't need it but that just makes me want it more

    • @jjaymick6265
      @jjaymick6265 6 months ago

      Over the last couple days I have been working on this exact use case. In most environments this simply is not possible, however in KVM (libvirt) I have discovered the capability to hot-attach a PCIe device to a running VM like this... virsh attach-device VM-1 --file gpu1.xml --current . So with Liqid I can hot attach a GPU to the hypervisor and then hot attach said GPU all the way to the VM. The only thing I have not figured out how to get done is to get the BAR address space for GPU pre-allocated in the VM so the device is actually functional without a VM reboot. As of today the GPU will show up in the VM but drivers cannot bind to it because there is no bar space allocated for it so in lspci the device has a bunch of unregistered memory bars and drivers don't load. Once bar space can be pre-allocated in the VM I have confidence this will work. Baby steps.
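
      For anyone reproducing the virsh step quoted above, a minimal sketch of what a gpu1.xml hostdev snippet can contain and how it gets attached; the VM name and PCI address are examples, not values from the video:

```python
#!/usr/bin/env python3
"""Hot-attach a passed-through GPU to a running libvirt VM with virsh."""
import subprocess
import tempfile

VM_NAME = "VM-1"  # example domain name
GPU_PCI = dict(domain="0x0000", bus="0x41", slot="0x00", function="0x0")  # example BDF

HOSTDEV_XML = """<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='{domain}' bus='{bus}' slot='{slot}' function='{function}'/>
  </source>
</hostdev>
""".format(**GPU_PCI)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(HOSTDEV_XML)
        xml_path = f.name
    # --current applies to the running domain without touching the persistent config.
    subprocess.run(
        ["virsh", "attach-device", VM_NAME, "--file", xml_path, "--current"],
        check=True,
    )
```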

  • @4megii
    @4megii 7 months ago

    What sort of cable does this use? Could this be run over fibre instead?
    Also, can you have a single GPU box with a few GPUs and then use those GPUs interchangeably with different hosts?
    My thought process is: GPU box in the basement with multiple PCs connected over fibre cables, so I can just switch GPUs onto any device connected to the fibre network.

    • @jjaymick6265
      @jjaymick6265 7 months ago

      The cable is an SFF-8644 (mini-SAS) cable using copper as the medium. There are companies that use optical media, but they are fairly pricey.

  • @pristine3700
    @pristine3700 7 months ago

    This seems tailor-made for Steve from Hardware Unboxed. Shame it doesn't work that well with Windows for hot-plug, but I bet it would simplify benchmarking multiple GPUs on the same CPU platform.

  • @HumblyNeil
    @HumblyNeil 7 months ago

    The iMac bezel blew me away...

  • @mathew2214
    @mathew2214 2 months ago

    Where do I buy? This product could remove so many headaches!

  • @newstandardaccount
    @newstandardaccount 6 months ago

    The technology here is really amazing, and while I can easily see testing applications, it is hard for me to understand what the value is here. What sorts of use cases are relevant here? Why would I want something like this? EDIT: yes, for testing I can see the value - for example if I'm a games developer or QA tester looking to validate a number of GPU configurations, this could be insanely useful. Outside of testing configurations though, I don't see many applications. Maybe testing is enough.

  • @peterverhagen9560
    @peterverhagen9560 7 months ago

    @4:17 I can see @GamersNexus looking forward to that. Automate the whole test suite? That is what that sounds like.

  • @dgo4490
    @dgo4490 7 months ago

    How's the latency? Every PHY jump induces latency, so considering all the hardware involved, this should have at least 3 additional points of extra latency. So maybe like 4-5 times the round trip of native pcie...

    • @jjaymick6265
      @jjaymick6265 7 months ago

      100ns per hop. In this specific setup that would mean 3 hops between the CPU and the GPU device. 1 hop at the HBA, 1 hop at the switch, 1 hop at the enclosure. so 600 nanoseconds round trip.
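
      Spelled out as a quick check:

```python
hops, per_hop_ns = 3, 100        # HBA, switch, enclosure; ~100 ns each
print(hops * per_hop_ns * 2, "ns round trip")  # 600 ns
```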

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 7 months ago +3

    2:53 Tim Cook wouldn’t approve.

  • @felixspyrou
    @felixspyrou 7 months ago

    Here, take my money, this is amazing. With a lot of computers, I would be able to use my best GPU on all of them!