ZFS without a Server!?! It is DPU time!

  • Published: 11 Sep 2024

Comments • 171

  • @MiniArts159
    @MiniArts159 2 years ago +80

    it took many years, but we have finally done it.
    We have invented... hardware RAID (but this time with ZFS)!

    • @geekinasuit8333
      @geekinasuit8333 2 years ago +2

      LOL'd @ that, but we can install other file systems too, so we also invented user choice on HW RAID, which is much better.

    • @jonathan3917
      @jonathan3917 2 years ago +2

      Software-defined storage is still in its infancy; offloading the networking and storage to a merged controller when clustering would be hugely beneficial. Something's gotta give with bandwidth constraints, and using object storage is just one piece of the puzzle that does away with some limitations of past file systems. The way people are going about it is distributing workloads and cutting out the host-managed aspects, which really circumvents the limitations out there right now.

  • @TitelSinistrel
    @TitelSinistrel 2 years ago +25

    These are what the NUC compute card should have been from the start. Everyone wanted it to be able to run its own system and act as a PCIe device.

  • @tommihommi1
    @tommihommi1 2 years ago +102

    These are basically full-fledged servers that can cosplay as PCIe NICs

    • @salman.sheikh
      @salman.sheikh 2 years ago +10

      Same thoughts here. :) I feel as if we have come full circle with these DPUs.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +10

      Ha!

    • @cromefire_
      @cromefire_ 2 years ago +7

      It's basically a PI with PCIe and way better networking/performance

    • @tolpacourt
      @tolpacourt 2 years ago

      @@cromefire_ You mean a Raspberry Pi?

    • @cromefire_
      @cromefire_ 2 years ago

      @@tolpacourt Yeah

  • @ewenchan1239
    @ewenchan1239 2 years ago +10

    I LOVE how you put up the warning: "Doing this in production is likely to get you fired!"
    hahahaha.....
    You know your audience well.
    It's too bad that DPUs are still going to be too expensive for me to deploy at home; otherwise, it would be pretty cool (rather than just deploying "vanilla" Mellanox ConnectX-4 100 Gbps IB NICs).

  • @MuckingFedic
    @MuckingFedic 2 years ago +31

    Hands down the best enterprise hardware content out there. I work in the cloud so I never get to do this stuff anymore. You make me miss it so much! Thank you for all the amazing content.

  • @MatthewHill
    @MatthewHill 2 years ago +9

    I consider myself a pretty hard-core technology enthusiast... but I _wish_ I could get as excited about tech as Patrick does. I love this stuff but Patrick is definitely on a whole other level than I am.

  • @esra_erimez
    @esra_erimez 2 years ago +5

    God, I love your enthusiasm

  • @JeffGeerling
    @JeffGeerling 2 years ago +7

    20:29 - "I can get better performance on the Pi!"

  • @jeremybarber2837
    @jeremybarber2837 2 years ago +8

    While the setup is indeed silly, I absolutely love it. Keep on breaking my brain, it rebuilds better every time.

    • @shapelessed
      @shapelessed 2 years ago +3

      You know... Breaking your brain and understanding what broke it is kinda what learning is...

  •  2 years ago +7

    I'm really loving the concept, as ZFS is very resource hungry, especially if using NVMe drives as storage. With this setup, you can offload all the storage load to this card and make more of your server resources. Also, this DPU could be recycled in the future as an awesome homelab NAS, the same way you can reuse some SATA/SAS cards as JBODs; maybe even combine the two in a dumb chassis.

  • @laomivip
    @laomivip 2 years ago +3

    I remember you can configure the physical CX-6 port connection to only show up on the Arm side, and then you can do what you did here, but using the high-speed ports. The x86-side interface will then link to an additional interface on the Arm side (similar to a vethX interface showing up on the host system when giving VMs an Ethernet interface). Thus you can actually run DNS/DHCP/PXE/NFS servers directly on the BlueField cores and PXE boot the host this way (the only caveat is that on a cold boot you need to make sure the BlueField Arm system is booted before the host system boots)! This was fun when I achieved it, using the onboard eMMC to fit both the Arm system as well as an x86 system.
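
    For reference, here is a minimal, hedged sketch of the boot-services part of that idea, assuming Ubuntu on the BlueField Arm cores; the interface name, address range, and TFTP path are placeholders rather than the commenter's actual configuration, and the port-ownership change itself is done with NVIDIA/Mellanox tooling that is not shown here:

      # Hypothetical sketch: run DNS/DHCP/PXE (TFTP) directly on the BlueField Arm
      # cores so the x86 host can network-boot from the DPU.
      # Interface name and address range are placeholders.
      IFACE=p0   # placeholder for the Arm-side interface facing the host
      sudo apt install -y dnsmasq
      sudo dnsmasq \
          --interface="$IFACE" \
          --dhcp-range=192.168.100.50,192.168.100.100,12h \
          --enable-tftp \
          --tftp-root=/srv/tftp \
          --dhcp-boot=pxelinux.0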

    • @udirt
      @udirt 2 years ago

      nice job!

  • @knomad666
    @knomad666 2 years ago +1

    I love the blatant warning for people who do not understand anything more subtle. Very cool demonstration! I wonder if it would be possible to set up an all-in-one x86 server running VMware ESXi with a DPU in it, where the DPU would have direct access to the storage that's formatted in ZFS and then presented to the x86 host server as a block device for it to use as a datastore.

  • @wayland7150
    @wayland7150 2 years ago +11

    I get what Patrick is driving at. By having the storage drives as iSCSI and more than one DPU, you can have redundancy (RAIDZ) on the storage and redundancy on the server (DPU).
    Back in the olden days the CPU could be a plug-in card and the motherboard was just slots. S-100?
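
    As a rough, hedged sketch of the initiator side of that idea (the portal IP and the resulting device names below are placeholders, not the ones used in the video), the DPU logs in to the iSCSI targets and then builds the RAIDZ pool across the attached LUNs:

      # Hypothetical sketch of the iSCSI initiator + RAIDZ setup on the DPU's Arm cores.
      # Portal IP and /dev/sdX names are placeholders; they will differ on a real system.
      sudo apt install -y open-iscsi zfsutils-linux

      # Discover the targets behind the portal and log in to all of them
      sudo iscsiadm -m discovery -t sendtargets -p 192.168.100.10
      sudo iscsiadm -m node -p 192.168.100.10 --login

      # Build a RAIDZ pool across the attached iSCSI LUNs
      sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
      sudo zpool status tank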

    • @markbooth3066
      @markbooth3066 1 year ago

      Both blade servers and PICMG-based embedded PCs still separate compute from infrastructure.

  • @trumanhw
    @trumanhw 2 years ago +1

    This is EXACTLY what I was thinking it would do, and I was already researching how much it would cost.
    I knew this would work! :-)
    It's just too bad these Kioxia drives with NVMe-oF are super expensive (as are the interfaces between them).

  • @christopherjackson2157
    @christopherjackson2157 2 years ago +2

    It's really hard to get my head around the implications of this type of storage solution. It is helpful to see how a more complete solution would look. Good job, thanks

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +5

      The next video is the NVMeoF version. That should help, but we needed to build a base demonstrating what a DPU is and why it is different

    • @christopherjackson2157
      @christopherjackson2157 2 years ago

      @@ServeTheHomeVideo I look forward to it !

  • @chromerims
    @chromerims 1 year ago

    Next level demo 👍 "AIC JBOX is super cool."
    Irony with clients using x86 albeit atop virtualized infra, . . . meanwhile providers know what's up.
    (Notes 1-5)
    I noticed STH's latest article on AWS Nitro v5, too.
    Kindest regards, neighbours and friends.
    Note 1
    Verily, ARM all the way. (4:10) I cannot disagree.
    Note 2
    iSCSI and ZFS were very on-point for this initial demo. Thanks for discussing oob_net0 and tmfifo_net0, plus the details of commenting out gateway/ nameservers.
    Note 3
    Important components for some DPUs beyond the NIC (e.g., eMMC, FPGAs, CPU/iGPU, accelerators) are each soldered (6:41).
    Whereas AWS Nitro has a variety of orchestrated and task-specific cards, an approach which seems more robust than a 'galleon' DPU.
    So when a single component faults, does the manager-admin just remove and replace an entire DPU unit (i.e. beyond failover capacity of any putative cluster of DPUs)?
    As such, I become less interested in future second-hand Nvidia/Mellanox BlueField-2 units on eBay.
    (In a more mature phase for DPUs, a consolidated reseller might do some raw materials recycling or mayhaps unit refurbishment/CPO(?).)
    Note:useless rant 4
    When the universe flips to Arm from x86, I shall wish it had occurred yesterday.
    Rant 5
    When IPv6 . . .

  • @xpgaming6977
    @xpgaming6977 2 years ago +12

    Neat! As a thought experiment, I've thought about what it would take to get some of the ZFS goodies on non-*nix systems. This gives a nice alternative to just running ZFS on a server and exposing zvols via iSCSI, especially since any caching would be local. ARC could be in this card's RAM and L2ARC+ZIL could be on a local NVMe drive.
    I'm guessing there would be a way to stick one of these in a server and have it expose a zvol as a local NVMe device?
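
    For what it's worth, a hedged sketch of that layout (pool name, device paths, zvol size, and the IQN are made up for illustration): the ARC simply lives in the card's RAM, a local NVMe partition provides SLOG (ZIL) and L2ARC, and a zvol is exported over iSCSI with targetcli:

      # Hypothetical sketch; pool name, device paths, and IQN are placeholders.
      # ARC uses the DPU's RAM automatically; add one local NVMe partition as SLOG
      # and another as L2ARC.
      sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
      sudo zpool add tank log /dev/nvme0n1p1
      sudo zpool add tank cache /dev/nvme0n1p2

      # Carve out a sparse zvol and export it as an iSCSI LUN via targetcli (LIO).
      sudo zfs create -s -V 500G tank/vol0
      sudo targetcli /backstores/block create name=vol0 dev=/dev/zvol/tank/vol0
      sudo targetcli /iscsi create iqn.2022-01.com.example:dpu-vol0
      sudo targetcli /iscsi/iqn.2022-01.com.example:dpu-vol0/tpg1/luns create /backstores/block/vol0
      sudo targetcli saveconfig

    Exposing the zvol to the host as a local NVMe device (the second part of the question) is a different mechanism and is not covered by this sketch.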

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +5

      Yes. Next though, we are going to show how to use a DPU to expose an NVMeoF drive as a device to a server without having to go through ZFS. ZFS is really just being shown here because folks can relate to it.

    • @NavinF
      @NavinF 2 years ago +1

      Huh that’s a neat idea, but I wouldn’t be surprised if 10pc pricing for these DPUs is more than the cost of an entire server

  • @69MrUsername69
    @69MrUsername69 2 years ago +2

    Keep it up! love this series :)

  • @n0madfernan257
    @n0madfernan257 2 years ago +1

    thanks for showing these exotic hardware

  • @captainobvious9188
    @captainobvious9188 1 year ago

    The model in Infiniband with RDMA verbs has been around for 20+ years to offload all communication in-between any set of arbitrary processes across a fabric. DPUs are accomplishing the same thing with Ethernet and TCP/IP while adding a set of processes that are going to be common to most workload pipelines.
    Right now the DPU seems like a more expensive way to add processing power over adding more host cores to a fabric.

  • @MrRedTux
    @MrRedTux 2 years ago +4

    So we're going back to Mainframes, in that each major component is handled by a separate discrete processor.

    • @shapelessed
      @shapelessed 2 years ago +2

      Programmable processors are nice and all, but their main downside is that optimising them for specific workloads is really hard... A good example is how the M1 handles video encoding/decoding with dedicated hardware, super fast and on a few watts, compared to Intel's 250W to do the same job... And since it's dedicated, you know strictly what it's used for and can optimise it to a crazy extent just for that task.

    • @TAP7a
      @TAP7a 2 years ago +1

      Except also each component is capable of GP compute, not just the main function

    • @cromefire_
      @cromefire_ 2 years ago +2

      @@shapelessed While your point is generally valid, video encoding and decoding (in a very efficient way) is standard on all Intel and AMD processors with an iGPU. The only thing the M1 can do in that area that other processors can't is handle Apple's proprietary ProRes in hardware.

    • @jonathan3917
      @jonathan3917 2 years ago

      @@shapelessed Many components in a system have their own microcontrollers and firmware embedded within onboard "cache" storage. Software has been getting a ton of traction and attention, but I think it's about time hardware gets taken to another level. Once local compute is sorted out, the possibilities really are endless.

  • @alfblack2
    @alfblack2 2 years ago +1

    my goodness. crazy exciting stuff.

  • @magfal
    @magfal 2 years ago +2

    If this can serve its file system over PCIe as an NVMe device to a host running Windows, it's the first Windows server running a production-grade storage system locally.

  • @Bobbias
    @Bobbias 2 years ago +2

    Enterprise guys get some truly awesome tech to play with. It's too bad these things are still prohibitively expensive for the home user (to be fair, they're also extreme overkill for most (all?) home users so...) I wonder how long it will be before prices for used units become affordable for home users.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago

      I think BF2 prices will drop a lot when BF3 comes out. BF3 is a HUGE upgrade and this is still new tech.

  • @TAP7a
    @TAP7a 2 years ago +4

    What a time, when your NIC is running multiple VMs…

  • @44Bigs
    @44Bigs 2 years ago +4

    I’d really like to see how this would fit into Ceph clusters. I guess a DPU should be able to run OSDs for a jbox or SSDs for example?
    Also I’m struggling to understand how NVMEoF would fit into that scenario. Exciting technology nonetheless.

    • @InIMoeK
      @InIMoeK 2 years ago +1

      I think this would not be fast, since the Arm cores are not fast enough to host NVMe OSDs. These things shine when presenting NVMe-oF devices to other machines. Ceph is already distributed by itself; running an OSD on an NVMe device would be a different story.

    • @udirt
      @udirt 2 years ago

      A DPU should be perfect for MDS; cutting out latency is where you gain the most.

  • @BloodyIron
    @BloodyIron 2 years ago +2

    But there are interfacing issues using iSCSI LUNs for ZFS in that ZFS can't directly manage the underlying hardware/storage... D:

  • @ishmaelmusgrave
    @ishmaelmusgrave 2 years ago +2

    Would the DPU work in an external GPU enclosure, as a head for something like a disk shelf? Not efficient, just wondering.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +2

      Yes. 100% you can do this but you would probably want a SAS HBA/ RAID controller on the PCIe complex

  • @hipantcii
    @hipantcii 2 years ago +1

    I am wondering what the performance would be if you created the RAIDZ on the DPU, which has direct access to the storage over PCIe, and exposed the dataset over NFS or as an iSCSI target. I guess in this case the RAM would be the limiting factor.
    It was a fun video, but the setup I proposed might not get you fired 😊
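
    A hedged sketch of that variant (drive paths, dataset name, and the client subnet are placeholders): create the RAIDZ directly on the NVMe drives the DPU sees over PCIe, then share a dataset over NFS via the sharenfs property:

      # Hypothetical sketch; NVMe device paths and the allowed subnet are placeholders.
      sudo apt install -y zfsutils-linux nfs-kernel-server
      sudo zpool create tank raidz /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
      sudo zfs create tank/share

      # Let ZFS manage the NFS export itself
      sudo zfs set sharenfs="rw=@192.168.100.0/24" tank/share
      showmount -e localhost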

  • @tim-peerwiederkehr6067
    @tim-peerwiederkehr6067 1 year ago +1

    Is part 2 of this video posted somewhere?

  • @rhowe212
    @rhowe212 2 years ago

    I view this as edge compute for the infra fabric, which is quite cool. What are your options for presenting the storage and networking to the hypervisor on the main host? Can the DPU present as a plethora of PCIe endpoints to the host which can then be directly passed through to VMs?
    One concern is that we're probably about to see an explosion in proprietary fabric protocols which are passed off as "don't worry - just run our ancient unpatched custom Ubuntu image on your DPU and everything will be fine - keep your fabric network isolated"

  • @spacehitchhiker4264
    @spacehitchhiker4264 2 years ago +2

    Those seem like they'd be nice for running Ceph

  • @SalvatorePellitteri
    @SalvatorePellitteri 2 years ago +1

    Nice system, but how much does all of this cost? 10x a simple x86 box? This is like flying a 2-year-old from London to San Francisco on a Space Shuttle because you don't want to change the baby's diaper mid-flight on a regular airline. Very nice, but the economics do not make sense. Everyone who tries to replace the simple solution (cheap and widely available) with one much more complex and expensive is wrong; keep it simple. I can see the DPU as an add-on card that accelerates the simple solution, not one that replaces it. Distributed storage systems can make great use of these DPUs. The most I can see is using these DPUs at the edge of a distributed storage system, in the initiator server, to connect the server to the storage, with part of the control-plane storage logic running in the DPU and fast packet switching over a very low latency protocol on top of DPDK to reach the storage servers.

  • @janot928
    @janot928 2 years ago +3

    It would be really cool to see how you would do the 1 petabyte server that Linus does.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +3

      Disk or flash? We did a 100 drive Dell EMC PowerEdge XE7100 4U like a year ago

    • @JeffGeerling
      @JeffGeerling 2 years ago +3

      @@ServeTheHomeVideo I want to see you build a 1 PB flash storage server with NVMeoF

    • @janot928
      @janot928 2 years ago +1

      @@ServeTheHomeVideo flash, something that can also take full advantage of the drives' capabilities

    • @zachariah380
      @zachariah380 2 years ago

      @@JeffGeerling me tooooo

  • @rayliu8401
    @rayliu8401 2 years ago +1

    ZFS, DPU..... Nice!

  • @Pixel-tl6fo
    @Pixel-tl6fo 2 years ago +1

    I don't know if this is a silly question, but does the DPU card have a PCIe switch that has the ARM system and the DPU on separate lanes? Or does the host system access the NIC parts of the DPU through the ARM subsystem? Just trying to wrap my head around how two devices can operate on the same card.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      There is a diagram at 07:45 that may help make sense of it.

  • @KingLarbear
    @KingLarbear 2 years ago +2

    If I ever get a custom home, I'm going to have to have a sound deadening home server room to kill that noise

    • @xeromist
      @xeromist 2 years ago

      As long as you know the power and cooling needs of your specific implementations there is no reason you have to live with noise. You can change power supplies, sinks, fans, even transplant a 1u server into a larger chassis. You will pay for these changes in reduced density and cost but in a home lab noise is sometimes a much greater concern.

    • @KingLarbear
      @KingLarbear 2 years ago

      @@xeromist this would be for in-home

  • @user-gt4yo4dt2x
    @user-gt4yo4dt2x 9 months ago

    Thank you so much for the nicely explained video. I wonder about the packet-parsing ability of this DPU. Can I access the incoming packet payload and perform some simple processing, meaning up to L7?

  • @qazwsx000xswzaq
    @qazwsx000xswzaq 2 years ago +1

    I am a bit confused. It seems everything is done on the ARM subsystem side. What does the BlueField 2 DPU do in the mix here? Is the ZFS operation offloaded to the DPU?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Right now, this demo was just on the Arm portion of the DPU, not the ConnectX NIC side

    • @qazwsx000xswzaq
      @qazwsx000xswzaq 2 years ago

      @@ServeTheHomeVideo Ah I see. Thanks for your clarification.

  • @Jrorion8
    @Jrorion8 2 years ago +1

    Nice

  • @KingLarbear
    @KingLarbear 2 years ago +2

    These devices are getting complex but seem so simple from the point of view of a common man... it's crazy how they're removing unnecessary steps, but they have to add complex systems into this device, and I would have no idea

    • @josephmagly
      @josephmagly 2 years ago +2

      It's going to be key to realizing new speeds; a lot of the way computers are today was to cope with the realities of spinning disks, limited bandwidth, and limited compute. Today, much of the current capability of silicon can be used to completely change how we do computing. The purpose-built software that will mature around this hardware will blow many people away.

    • @KingLarbear
      @KingLarbear 2 years ago

      @@josephmagly it is truly jaw dropping

  • @solenskinerable
    @solenskinerable 2 years ago

    Does the DPU have enough RAM for a decent hit rate in the ARC, and would a local SLOG and metadata device help (and maybe L2ARC)?

  • @denvera1g1
    @denvera1g1 2 years ago

    This is pretty cool. How hard is it to do parity calculations with physically different processors/RAM pools? Or is it only going to work on a per-DPU basis?

    • @NavinF
      @NavinF 2 years ago +2

      The way he set it up, parity is only calculated in the client DPU. The machine that card is plugged into presumably sees a single block device that’s backed by a zvol. Anyway erasure codes are dirt cheap on

  • @Yoshirumo
    @Yoshirumo 2 years ago +1

    This might be a stupid question, but what is the software used during the ZFS installation (13:00 onwards)?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago

      MobaXterm or are you thinking just Ubuntu?

    • @Yoshirumo
      @Yoshirumo 2 years ago

      @@ServeTheHomeVideo MobaXterm, that's the one. Looks like an interesting tool and I might try it out some time. Thank you!

  • @camofelix
    @camofelix 2 years ago +2

    That’s pretty fascinating…. So it’s a software raid, hardware raid card?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +3

      With 100GbE and OOB management :-) Actually, many of the DPUs have things like erasure coding offload, so they are like software-defined, hardware-accelerated cards?

    • @marcogenovesi8570
      @marcogenovesi8570 2 years ago +3

      If it's done by the card's own firmware, it's hardware raid. Even normal RAID cards are running software on their own controller to do the RAID functionality

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago

      I guess this is different because you can run more than one stack on the device and also do networking

    • @camofelix
      @camofelix 2 years ago

      @@ServeTheHomeVideo As if the main platforms didn't have enough things going on already! I'm just picturing a rack of IBM z16s in combination with a rack's worth of these guys to offload the hard part of the calculations.

  • @Sawyer0823
    @Sawyer0823 1 year ago

    How does the DPU connect to NVMe storage? Is it direct-attached, or does it need another auxiliary card?

  • @frenchmarty7446
    @frenchmarty7446 1 year ago

    Would be cool to see some serious memory (>256gb DDR5) on a DPU and maybe separate FPGA logic for things like running a full database engine. A server with several of these could be a game changing edge server. Someone needs to make DPUs their priority the way NVIDIA is with GPUs.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago

      We looked at the Octeon 10 that has socketed DDR5 DRAM in both the CRB (that we have) and the card version www.servethehome.com/marvell-octeon-10-arm-neoverse-n2-dpu-in-the-wild-rivaling-2017-era-intel-xeon/

  • @jagdtigger
    @jagdtigger 2 years ago +1

    So it's basically an Arm-based PC on a PCIe card, what groundbreaking innovation.... /s

  • @ewookiis
    @ewookiis 2 years ago +1

    I love it. But I disagree with "cheaper" looking at the price list :p.
    Quick, someone send a DPU to Wendell :D.

    • @ishmaelmusgrave
      @ishmaelmusgrave 2 years ago +2

      I could be wrong, but this video gives the impression that for an ESXi server/hosting setup, where one pays $X licensing per CPU core, a DPU would not be charged extra. So offloading more tasks to a card means the cores one pays for can do the task you need them to (host VMs) and spend less on storage. In that case, a server can gain functionality with less expense than adding a whole CPU with its extra licensing?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      That is the idea, but VMware has not announced its license pricing for this yet (I may be wrong.) But that is what the cloud providers like AWS, Google, and so forth are doing, just without VMware costs

  • @Vikingza
    @Vikingza 2 years ago +1

    Could one install Proxmox onto each DPU? Could one cluster a few Proxmox DPUs?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Proxmox needs better Arm support to make that useful

  • @montecorbit8280
    @montecorbit8280 2 years ago +2

    At 1:37
    Hello Patrick!!
    There is a blue ribbon on the pipes behind your left shoulder. Is that a "gender reveal" for you or a member of your crew??

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Ha! We did add a photographer/ b-roll person this week so maybe that is why there is a camera there?

    • @montecorbit8280
      @montecorbit8280 2 years ago

      @@ServeTheHomeVideo
      Thought it looked like a blue ribbon, instead of a camera....either my screen is too small, or my eyes are too old.
      Since I'm watching on a cell phone, it could be both....

  • @SalvatorePellitteri
    @SalvatorePellitteri 2 years ago +2

    DPU = NICs + processing, so why not dedicate 2-4-6 cores of the main processor plus a 4-port NIC to do the same? If you have 24-32-64-128 cores on a machine, why not?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago

      That does not separate the management/infrastructure plane from the compute plane. Also, large organizations typically want as many big cores as possible available for VMs/containers/apps since they cost so much per core.

  • @Camhin1
    @Camhin1 2 years ago +1

    You could run a router straight off the DPU. Probably can't route at 100Gb though. Not yet anyway.

    • @Adrian-jj4xk
      @Adrian-jj4xk 2 years ago

      Look at the MikroTik CCR2004-1G-2XS-PCIe

  • @solenskinerable
    @solenskinerable 2 years ago

    I would have guessed the future would have e.g. U.2 SSDs in a case like the 90-bay LFF ones but denser since it's SFF, with the backplane being two NVMeoF network cards per drive, dual terabit switches, and quad PSUs or such. I would not have expected a PCIe backplane...

  • @Gjarllarhorn1
    @Gjarllarhorn1 2 years ago

    Wonder if you could run Ceph on the DPUs and have a super dynamic cluster

  • @MoraFermi
    @MoraFermi 2 years ago +3

    Yo dawg, I heard you liked your compute networked, so we put compute in your network.
    Frankly, at this point the "DPU" moniker is pointless; it's just a card server, just like old PC/104 industrial devices were.

    • @shapelessed
      @shapelessed 2 years ago

      Well, it is a processing unit for a strictly specific task. It does processing, it is a unit and is dedicated...

    • @im.thatoneguy
      @im.thatoneguy 2 years ago

      A PC/104 stack is a full standalone server and generally has a single CPU module with expansion cards attached. Like a rack mounted server vs a tower case server, they're the same concept in a different form factor. A DPU isn't optimized to function independently of a host computer. It's optimized to function as a highly programmable add-on card. I wouldn't say they're really the same concept at all. In fact I'm sure there are DPUs in the 104 form factor.

  • @jonathan3917
    @jonathan3917 2 years ago

    @ServeTheHome Patrick, do you think that this could be a game changer for caching and tiering? To think, not that many years ago the mainstream config was to have SAS SSDs as tier 0, then 10K/15K HDDs, then lastly regular nearline storage. I'm imagining a setup of, say, 102-bay JBOD enclosures daisy-chained with capacity storage, then onboard DPU networking/disk controllers with networked NVMe caching in each JBOD, all connected to a software-optimized host node. People might not need additional racks for more nodes and could just link up capacity and performance more directly than ever before. People still stack a bunch of storage servers, but it really seems like a huge waste!

    • @guy_autordie
      @guy_autordie 2 years ago +1

      Storage goes to SSDs everywhere. Except "cold" storage because density is better with HDDs.

  • @im.thatoneguy
    @im.thatoneguy 2 years ago +1

    Instead of "ARM CPU Complex" can we call it an "ARM SOC" which is probably an idea most viewers are familiar with? Or would that be inaccurate and misleading?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      In the industry, most do not call these Arm SoCs since the primary functions are accelerators and I/O. Somewhat like how Intel and AMD-Xilinx FPGAs have Arm cores, but we call them FPGAs not Arm SoCs

  • @SalvatorePellitteri
    @SalvatorePellitteri 2 years ago +1

    Can the DPU emulate an NVMe device to the host?

  • @cagataykilic7978
    @cagataykilic7978 2 years ago

    Can you use iSER instead of iSCSI?

  • @System0Error0Message
    @System0Error0Message 2 years ago

    But will it store? But will it ZFS2?

  • @nicekeyboardalan6972
    @nicekeyboardalan6972 2 years ago +1

    Hey, I have some Napatech NT40E3 4x10G SmartNICs; are they comparable to this card?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago

      Those are great examples of SmartNICs v. DPUs.

    • @nicekeyboardalan6972
      @nicekeyboardalan6972 2 years ago

      @@ServeTheHomeVideo Well, I mean functionally, as in could I do the same thing as you guys in the vid and run ZFS on it? Thanks for the answer tho

  • @javierthewish
    @javierthewish 2 years ago +1

    uuuu that is a pci bridge!!!

  • @SajalKayan
    @SajalKayan 2 years ago +1

    Kernel 5.4 is ancient. The problem with Nvidia devices is that they need non-mainline kernel forks that aren't maintained/updated. I experienced this with Jetson. Maybe this is fine for corporate suits who like the stability of ancient code.

    • @JeffGeerling
      @JeffGeerling 2 years ago +2

      Heh, you must not still be supporting a lot of old RHEL servers 😭

    • @SajalKayan
      @SajalKayan 2 years ago

      @@JeffGeerling that's going into pre-historic territory...

  • @user-th3jl8mz7y
    @user-th3jl8mz7y 2 years ago +2

    damn son, that's pcie root...

  • @leadiususa7394
    @leadiususa7394 2 years ago +1

    What is that above your head?!?! Can we say an Easter egg maybe?!?!

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      That is just an Aputure P300c (that is always overhead for lighting)

    • @leadiususa7394
      @leadiususa7394 2 years ago

      @@ServeTheHomeVideo Well I tried... lol

    • @danieliser
      @danieliser 2 years ago

      @@ServeTheHomeVideo Didn't see any comments; was it the 2(+) screens in the back with different RGB colors (yellow & blue), which also matched your shirt? Assuming support for Ukraine given the timing & colors.

    • @danieliser
      @danieliser 2 years ago

      Or the ServeTheHome t-shirt in general, who knows.

  • @Joshua286m
    @Joshua286m 2 years ago +1

    now I just need one that does SATA drives instead of NVMe/PCIe drives

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +2

      You can plug a HBA/ RAID controller into the JBOX and then attach drives via an external JBOD with SATA disks. But you are 100% on the right track. This is a big part of how these started

  • @AlexanderKalish
    @AlexanderKalish 2 years ago +2

    What camera is hanging there?

    • @zachradabaugh4925
      @zachradabaugh4925 2 years ago +1

      They use a Canon R5 a lot, so it's probably that one with a battery grip, or an R3

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      You got the Easter Egg. Canon R5

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Good call! R5 with grip and the 28-70 f/2. Servers do not move fast enough for us to use the R3 :-)

    • @zachradabaugh4925
      @zachradabaugh4925 2 years ago +1

      @@ServeTheHomeVideo helps that you mentioned the R5 last video where I thought the shot was film haha

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +2

      Oh yea :-) Actually I am taking the R5 C with the R5 in my backpack. I have like 3 trips in 12 days coming up starting Monday. I am excited not to have to bring a C70 w/ a Pelican for video (I tried the A7S III and FX3 but they overheat)

  • @arendneethling9584
    @arendneethling9584 2 years ago +2

    But does it run prox?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      Probably not the best idea with an Arm CPU and limited RAM

    • @Cynyr
      @Cynyr 2 years ago

      @@ServeTheHomeVideo but my best proxmox node has 8gb...

  • @SplittingField
    @SplittingField 2 years ago +1

    co-server

  • @TheSwiip1
    @TheSwiip1 2 years ago +2

    he said 10gbe is not fast :'(
    ( cries at 10gbe home network speed )

    • @KingLarbear
      @KingLarbear 2 years ago

      Internet/ethernet and intranet/ethernet are two different things; the latter will always be able to take advantage of higher speeds because it is almost always local

    • @KingLarbear
      @KingLarbear 2 years ago

      So 10GbE for a home network is extremely fast, since you only need 40Mbps for a 4K video call lol

    • @TheSwiip1
      @TheSwiip1 2 years ago

      @@KingLarbear xD I know, I can't get more than 10GbE internet (shared with 8) anyway ^^

  • @Japaneren
    @Japaneren 2 years ago

    Now do Ceph.

  • @johnblakely6568
    @johnblakely6568 2 years ago

    12VO?

  • @user-uw7st6vn1z
    @user-uw7st6vn1z 2 years ago +1

    200gb network card storage device has 89mb/s speed, well done, you are fired for sure for providing this wonderful solution....

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +2

      Or, another way to think of it is that you have redundant Docker storage out-of-band for DPUs to provide high value services over the 100GbE ports. That is actually how this idea came about. We needed more storage than the eMMC provided for DPU Arm containers

  • @BoraHorzaGobuchul
    @BoraHorzaGobuchul 6 months ago +1

    So, like people are now running Doom on everything, in a few years people are going to be having fun running ZFS on watches, freezers, impact wrenches, and pregnancy tests just for the lulz.

  • @stevelk1329
    @stevelk1329 2 years ago

    DPU=?

    • @MarkRose1337
      @MarkRose1337 2 years ago +1

      Data processing unit.

  • @JonMartinYXD
    @JonMartinYXD 2 years ago +1

    Man, is it weird hearing an American talk about ZFS, with the whole zee vs zed pronunciation.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 years ago +1

      I was at a bar in Austin last night and someone there was asking me about the DPU video and said "Zed F-S" and I had this exact thought

  • @movax20h
    @movax20h 2 years ago +1

    Boring. This is no different than doing it on a host, just a different form factor. I also expect performance to be really bad.

    • @jonathan3917
      @jonathan3917 2 years ago

      Umm, with edge computing becoming more distributed than ever, I'm pretty sure the use cases not only now but in the future would be great. You can effectively link storage and networking to have lower latency, all in a system that could be more hyper converged than ever.

    • @movax20h
      @movax20h 2 years ago

      @@jonathan3917 Umm, no.

  • @pavelpolyakov5763
    @pavelpolyakov5763 2 years ago

    How often does your wife tell you to shut up? You speak fast, but the amount of information you convey exponentially approaches ZERO.