AMD Pensando - Wire-speed 100GbE with no CPU overhead is here today

  • Published: 1 Jul 2023
  • Some crazy tech for you crazy techies! AMD brings their game when it comes to newer and faster server tech!
    *********************************
    Check us out online at the following places!
    bio.link/level1techs
    IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music: "Earth Bound" by Slynk
    Other Music: "Lively" & "FollowHer" by Zeeky Beats
    Edited by Autumn
  • Science

Comments • 247

  • @DxCBuG
    @DxCBuG 10 months ago +166

    So servers finally become L3 switches/routers in hardware itself, but also firewalls with IDS/IPS... basically anything? Crazy

    • @Level1Techs
      @Level1Techs 10 months ago +84

      Yeah, exactly. And you can migrate the job between the NIC and an Aruba switch (!) pretty seamlessly, though I haven't tried that; they are already doing it at scale

    • @rkan2
      @rkan2 10 months ago +4

      Are the latencies comparable though?

    • @ewookiis
      @ewookiis 10 months ago +5

      @@rkan2 By avoiding the handover to the CPU and keeping it on the ASIC, as the video shows, there are tons of performance boosts to be had. CPUs are slow at handling network traffic, even before you add additional packet analysis...

    • @jamegumb7298
      @jamegumb7298 10 months ago +7

      @@ewookiis Copying the data over 2-3 times before it gets received or sent out does not help; that is where a lot of the benefit is.
      Way back I ran a weird-ass userspace network stack for 1Gbit with near-zero CPU overhead, and I squeezed out a few more megabits consistently. Then I bought 2 cheap InfiniBand adapters from eBay: kinda slow, unimpressive. Then I put one in my NAS and one in my PC, paid attention, and set it up properly. WOW, so fast, and CPU use went way down. For 100Gbit? I can only imagine.
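      A minimal sketch of the copy overhead being described, assuming a plain Linux TCP socket (an editorial illustration, not from the video or this commenter): the naive loop bounces every block through user space, while sendfile(2) keeps the payload in the kernel. DPUs push the same idea further by keeping the data off the host CPU entirely.

      ```python
      import socket

      def send_naive(sock: socket.socket, path: str, bufsize: int = 65536) -> None:
          """Two extra copies per block: page cache -> user buffer -> socket buffer."""
          with open(path, "rb") as f:
              while True:
                  chunk = f.read(bufsize)   # copy 1: kernel page cache -> user space
                  if not chunk:
                      break
                  sock.sendall(chunk)       # copy 2: user space -> kernel socket buffer

      def send_zero_copy(sock: socket.socket, path: str) -> None:
          """socket.sendfile() uses sendfile(2); the payload never enters user space."""
          with open(path, "rb") as f:
              sock.sendfile(f)
      ```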

    • @ewookiis
      @ewookiis 10 months ago +2

      @@jamegumb7298 agreed. I’m really excited!

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 10 months ago +142

    Meanwhile at Intel: “Quick, let’s cut off ECC functionality and offer it as an optional additional licensed service!”

    • @raven4k998
      @raven4k998 10 months ago +7

      AMD goes faster and faster for a set price; Intel gouges your eyes out for every little bit of their performance. Why? Because they are Intel🤣🤣🤣

    • @TheGuruStud
      @TheGuruStud 10 months ago +9

      I have 2-bit ECC on my NAS thanks to AMD and ASRock (cheapo Pro APU).
      Intelololol

    • @RN1441
      @RN1441 10 months ago

      @@TheGuruStud Which board and APU model specifically?

    • @btudrus
      @btudrus 9 months ago +1

      @@RN1441 I suppose any of the X470/X570 ASRock Rack boards and any 5xxxG APU?

    • @TheGuruStud
      @TheGuruStud 9 months ago

      @@RN1441 X470 Taichi and 2200G Pro. Obviously, there are a lot of better options now.

  • @benjaminoechsli1941
    @benjaminoechsli1941 10 months ago +36

    18:10 You love to see AMD mixing their LEGOs together to create something wonderful that's not vendor-locked.

  • @JosiahBradley
    @JosiahBradley 10 months ago +22

    AMD has been working on this since 2006. "The future is fusion" literally meant they knew general compute was what was needed, and they spent years working toward this via hardware, software, and acquisitions. We are seeing that pay off now, and in spades. Tuning TCP over Ethernet to fight overhead was always fun, and I'm glad we can finally get wire speed instead of ~2 GB/s per core. The max I've gotten before on Epyc was ~90% of wire speed because of CPU overhead.
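    A quick back-of-the-envelope check of those figures (editorial sketch; the ~2 GB/s-per-core number is the commenter's, not measured here):

    ```python
    LINK_GBPS = 100                # 100GbE line rate
    PER_CORE_GBYTES_S = 2.0        # rough single-core TCP throughput cited above

    link_gbytes_s = LINK_GBPS / 8  # 100 Gbit/s = 12.5 GB/s
    cores_burned = link_gbytes_s / PER_CORE_GBYTES_S
    print(f"{link_gbytes_s:.1f} GB/s line rate ~= {cores_burned:.1f} cores "
          "spent purely on the network stack")   # ~6.2 cores
    ```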

  • @jamieknight326
    @jamieknight326 5 months ago +2

    Wow. Blast from the past. Varnish cache in front of PHP was the backend setup at the BBC for years. Many happy memories :)

  • @cldpt
    @cldpt 10 months ago +23

    I love that you brought up DMA. I have been making the same point in my online arguments about why things like DirectStorage, PCH bottlenecks, or even USB hub layouts seem redundant. Even after 30-40 years, there's still somebody doing something inefficiently where the benefits of DMA are completely diluted or flat-out eliminated... because somebody thought it would be fine to add this encryption, or that packet inspection, or another step in the data flow that just has to go to daddy CPU to ask for help

    • @winebartender6653
      @winebartender6653 10 months ago +3

      When the software devs and hardware devs don't like each other

  • @slowtrigger
    @slowtrigger 10 months ago +17

    Pensando really makes you think

  • @devemia
    @devemia 10 months ago +45

    Configuration as code is great, but one point people (myself included) tend to overlook is SecOps. I think a dedicated video on the Level1Techs web infrastructure would be a great one (from dev to DevOps, SecOps, and infrastructure).

    • @malborboss
      @malborboss 10 months ago +5

      I have a feeling that this is not a channel for the topics you've mentioned

    • @devemia
      @devemia 10 months ago

      @@malborboss There are far more specialized channels on these topics. That said, none of them covers such a wide range of topics (while focusing on infra) with hands-on devices like Level1 does, so I think they could make a great video on this.

  • @chbrules
    @chbrules 10 months ago +28

    I went through a lot of headache trying to saturate 2x 10GbE bonded connections in Linux directly between two hosts. 100GbE is next-level crazy. I've seen some references to 1.6Tbps Ethernet now too. It's getting wild out there!

    • @wayland7150
      @wayland7150 10 months ago +2

      I only get about 3Gbps file transfers on my 10Gb SFP+ connection. If the file is cached it might do 6Gbps for a bit.

    • @funkintonbeardo
      @funkintonbeardo 10 months ago +7

      I developed software for systems that handled 400 Gbps throughput for so many years that 10 Gbps networks sound like toys to me. Funny thing is I only have a 1G network at home and get 100/20 service 💀

    • @sfalpha
      @sfalpha 10 months ago +1

      @@wayland7150 It may be limited by the file transfer itself still being a single process (if not by the disk/storage).
      But with 10 transfer threads, and offload and multi-queue set up properly, it should be able to sustain the full 10Gbps.
      Going beyond that (25GbE) requires better offload than normal TCP/UDP offload and RSS, such as RDMA, which needs hardware support on both server and client.
      100GbE+ probably needs memory-to-memory transfers, because at least 4 x4 NVMe SSDs are needed to keep up with the write side of a full 100Gbps stream. 🤣
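      A toy sketch of the multi-stream point above (editorial addition; the host, port, and sizes are placeholders): each thread gets its own TCP connection, so the NIC's receive-side scaling (RSS) can hash the flows onto different hardware queues and CPU cores, much like iperf's parallel-stream mode.

      ```python
      import socket
      import threading

      HOST, PORT = "192.0.2.10", 5001          # placeholder target (TEST-NET)
      STREAMS = 10                              # one flow per thread
      CHUNK = b"\x00" * (1 << 20)               # 1 MiB per send
      SENDS_PER_STREAM = 1024                   # ~1 GiB per stream

      def one_stream() -> None:
          with socket.create_connection((HOST, PORT)) as s:
              for _ in range(SENDS_PER_STREAM):
                  s.sendall(CHUNK)              # GIL is released during the syscall

      threads = [threading.Thread(target=one_stream) for _ in range(STREAMS)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      ```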

    • @LtdJorge
      @LtdJorge 10 months ago

      @@wayland7150 That sounds like an IOPS problem.

  • @sfalpha
    @sfalpha 10 months ago +22

    Great platform. Now we can have multi-gigabit hardware-based forwarding (+ LOW LATENCY) routing inside the NIC on the server itself, without needing separate appliances.

  • @boblister6174
    @boblister6174 10 months ago +2

    Keep your enthusiasm at max, Wendell; it keeps me coming back to keep learning even though I stopped working on networks years ago. Always nice to see what is around now.

  • @williamervin3272
    @williamervin3272 10 months ago +2

    I caught that subtle TNG Q reference! lol. Keep up the good work, Wendell!

  • @Decenium
    @Decenium 10 months ago +9

    Hearing Wendell talk is just impressive, such knowledge

    • @henrikoldcorn
      @henrikoldcorn 10 months ago +6

      I have no idea what he's talking about but I'm here anyway. I like Wendel.

  • @tehsnipes123
    @tehsnipes123 10 months ago

    Great video Wendell. Love the IaC demo, thanks!

  • @daltonchaney1504
    @daltonchaney1504 10 months ago +7

    Am I misinterpreting this, or is this a revolutionary step in the way we do processing at large scale? To me this seems like a no-brainer for enterprise and data-center-scale applications. The uptick in throughput for what seems to be a marginal increase in price, all while maintaining flexibility in what the hardware can be used for... seems huge. Thanks for sharing, Wendell, I appreciate the overview!

    • @LtdJorge
      @LtdJorge 10 months ago +2

      Of course, it is the next step in computing. And the one with the most cohesive platform will win. Right now, it kinda sounds like that's AMD.

  • @TheTaipan
    @TheTaipan 10 months ago

    Looking fit and trim, mate. Whatever you're doing is working. Well done.

  • @sanriosonderweg
    @sanriosonderweg 10 months ago +3

    Meanwhile, they still sell USB 3 sticks that write at 10MB/s

  • @williamdouglas8040
    @williamdouglas8040 10 months ago +3

    Thank you for being one of the only YouTube presenters who can pronounce Xilinx.

    • @declanmcardle
      @declanmcardle 10 months ago

      Zi (that's a short z, not zed-z or zee-z) (sounds like "sigh") links. Why? What are other people pronouncing it as? Exxey-linnex?

  • @partlyawesome
    @partlyawesome 10 months ago +1

    i love the slightly out of focus wendell and the *really* in focus wall

  • @wskinnyodden
    @wskinnyodden 10 months ago +11

    Yes, above 5Gbps the MTU matters again on most CPUs; not so much for 1Gbps (although it used to matter 10 years ago, hence the 9000-byte+ MTU)
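    The arithmetic behind that (editorial sketch): per-packet costs (interrupts, descriptor handling, cache misses) are roughly fixed, so what hurts the CPU is packet rate, and jumbo frames cut it sixfold.

    ```python
    def packets_per_second(line_rate_gbps: float, mtu_bytes: int) -> float:
        # Ignores Ethernet preamble/framing overhead for simplicity.
        return line_rate_gbps * 1e9 / 8 / mtu_bytes

    for rate in (1, 10, 100):
        std = packets_per_second(rate, 1500)
        jumbo = packets_per_second(rate, 9000)
        print(f"{rate:>3} Gbps: {std/1e6:6.2f} Mpps @1500B vs {jumbo/1e6:5.2f} Mpps @9000B")
    # 100 Gbps works out to ~8.33 Mpps at 1500B vs ~1.39 Mpps at 9000B.
    ```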

  • @ewookiis
    @ewookiis 10 months ago +1

    There's so much want here, I'm bubbling... I've been dreaming of making my own setup for "packet interrogation"/routing/etc. for ages... I've been eyeing GPU handoff for packet inspection for a long time (promising papers), but for a more normal approach, just like when NICs really started to do their own magic with most of the packet processing... this is just amazeballs.

  • @jengstrm2
    @jengstrm2 10 months ago

    I really like running Docker inside Proxmox LXC containers. Works great and is so easy to manage.

  • @GizmoFromPizmo
    @GizmoFromPizmo 10 months ago +4

    I love these videos, man. Since I retired, this is the nerdiest crap I do anymore.

  • @mrlithium69
    @mrlithium69 10 months ago +1

    whoa cool, I want one. Wendell you rock, you level up my YouTube feed

  • @0LoneTech
    @0LoneTech 10 months ago +3

    So Microsoft is marketing Linux blade servers as add-in cards to get around poor Windows/PC performance? I'd like to see some architecture descriptions and performance specifications.

  • @vincei4252
    @vincei4252 10 months ago +9

    Weird, even though this is 4K the video looks soft and our Wendell looks slightly out of focus. :(

    • @snaky115
      @snaky115 10 months ago +12

      oh he is visibly out of focus, oof. Look at that sharp brick wall, though!

    • @vincei4252
      @vincei4252 10 months ago +3

      @@snaky115 Oops. You're absolutely right. Brick walls for the win.

    • @marcogenovesi8570
      @marcogenovesi8570 10 months ago +1

      he knows we all want to look longingly at the bricks in his wall

  • @aliensarefromspace
    @aliensarefromspace 10 months ago +1

    I was so hyped the entire video, I couldn't focus on Wendell :D

  • @trissylegs
    @trissylegs 9 months ago

    As someone trying to figure out how to add a load balancer to Kubernetes on my own infra, it'd be nice to have an option that's: just run the load balancer on my fancy network card.
    Another thing: when QUIC started gaining traction there were some performance issues, because TCP offload meant you'd get worse performance since QUIC runs over UDP. But as the name says, it's "protocol-independent": you could use it to offload QUIC instead of TCP. (And QUIC also includes TLS 1.3.)

  • @williamcleek4922
    @williamcleek4922 1 month ago

    I was thinking about Ethernet microsegmentation hitting the resources of access gear hard in large implementations, so much so that it dissuades deployment. Pushing those security ACLs/policies further down to the DPU makes sense.

  • @eastwoodpeake
    @eastwoodpeake 10 months ago

    Really cool stuff here, Wendell!

  • @richardheumann1887
    @richardheumann1887 10 months ago

    I understood very little of what Wendell talked about, but it is excellent.

  • @FLOODOFSINS
    @FLOODOFSINS 10 months ago +30

    Let's get that AMD and Amazon deal done!

    • @raven4k998
      @raven4k998 10 months ago

      you gotta love how Windows pegs a single thread to bottleneck the download🤣

  • @aacasd
    @aacasd 10 months ago +3

    Is there an architecture diagram that shows how a couple of apps or websites can be hosted using a Pensando DPU (cloud or on-prem)? It would be good to see a demo of how to deploy a couple of websites using this design.

  • @stclaws9580
    @stclaws9580 10 months ago

    gotta love the camera focusing on the brick wall instead of the face :)

  • @ericneo2
    @ericneo2 10 months ago

    When Wendell shows off tech. Feels good man.

  • @bayanzabihiyan7465
    @bayanzabihiyan7465 10 months ago +2

    Even though I graduated as a computer engineer, this high-level infrastructure, containers, layers, networking, etc. stuff still goes over my head.
    The only thing I really got was "DMA engine", because I understand how those work.
    Kinda embarrassing ngl.

  • @floodo1
    @floodo1 10 months ago

    That part feeling out the Docker stack

  • @frzen
    @frzen 10 months ago +3

    I'm finding this so interesting, but I haven't been able to fully grok what I need to do to use this. The Aruba switch is €35k, which doesn't seem crazy when it is also a firewall for all east-west traffic at line rate. Please do more videos on this topic

    • @rkan2
      @rkan2 10 months ago

      Switch? This is a router? A firewall? A lot more than a switch.

  • @j340_official
    @j340_official 9 months ago

    So, offload tasks from the host CPU onto accelerators on the PCIe bus? How does CXL factor into this equation? Can other companies besides Intel and AMD get in on the accelerator game? If so, is there a future where users don't need a 128-core monster from Intel/AMD but can instead get by with 32 cores and several programmable accelerators?

  • @LackofFaithify
    @LackofFaithify 10 months ago +6

    This makes much more sense than the NVIDIA-and-VMware-or-bust setup. Thank you, mon capitaine, for not being a blithering fanboy and for not acting like this isn't its own computer in the computer, unlike the NVIDIA BlueField evangelists.

  • @NinjaQuick
    @NinjaQuick 10 months ago +1

    dude you look great, good job with the weight loss! keep it up!

  • @duncanny5848
    @duncanny5848 10 months ago

    Adored the comment "Unless you're a member of the Continuum"!! Only here, and only Wendell. Love it all.

  • @im.thatoneguy
    @im.thatoneguy 10 months ago +1

    So Windows requires Server or Workstation Edition $$$ for RDMA.
    Could a DPU handle it in the DPU OS (open source) and then pass the data through the driver? Or do you still need OS support for RDMA?

  • @LordSaliss
    @LordSaliss 3 months ago

    Thinking about the 9000-byte jumbo packets and LAN transfer speed: is there a way to set up the LAN side of the network to run at 9000-byte packets, but have the router configured so that whenever traffic goes out the WAN port it automatically takes that packet and re-assembles it into 6 complete packets that don't have the fragmented bit set? That way LAN traffic should be able to use jumbo frames successfully, but internet traffic would have no issues or risk of servers receiving fragmented packets and dropping them. If such a config is possible, maybe do a video on it for us?
    Also, I thought The Continuum was a great show.
    😂
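    What that router would actually have to do is classic IPv4 fragmentation, sketched below (editorial addition). One wrinkle: fragment offsets are carried in 8-byte units, so a 9000-byte datagram at a 1500-byte egress MTU becomes 7 fragments, not 6; in practice TCP MSS clamping is preferred, since many endpoints drop fragments.

    ```python
    IP_HDR = 20                                   # bytes, no IP options
    MTU = 1500
    total_payload = 9000 - IP_HDR                 # 8980 bytes of L4 payload
    max_frag_payload = (MTU - IP_HDR) // 8 * 8    # 1480, must be 8-byte aligned

    offset = 0
    frags = []
    while offset < total_payload:
        size = min(max_frag_payload, total_payload - offset)
        frags.append((offset // 8, size))         # (offset field, payload bytes)
        offset += size

    for i, (off, size) in enumerate(frags):
        print(f"fragment {i}: offset field={off}, payload={size} bytes")
    # 6 fragments of 1480 bytes plus a 100-byte tail fragment = 7 packets.
    ```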

  • @TheObiwantoby
    @TheObiwantoby 2 months ago

    I love having all my volumes and services handled in Docker so I can wipe and rebuild on demand. I have to settle for the poor man's version of this and use CPUs with AES extensions or similar: a Pi 5 with crypto extensions... but this would be so cool to play with at home, even if overkill.
    However, are these DPUs even available for purchase?

  • @joshuawaterhousify
    @joshuawaterhousify 10 months ago

    The focus on some shots is a little distracting, as the focus is behind Wendell on the brick wall. The DoF on Wendell may be subtle, but it's enough to bother me a little.
    That said, the content itself is great as always, and while this isn't something I'm ever going to be playing with, it's great to see what's coming and learn the different ways it can be used.

    • @zorbakaput8537
      @zorbakaput8537 10 months ago

      Agreed, he looks a little out of focus on my 42" 4K monitor. I find it off-putting. I enjoy his presentations though; he always provides cogent examples.

  • @maxwellsmart3156
    @maxwellsmart3156 10 months ago +2

    Everything old is new again! Working with Wang minis in the '80s, the relatively weak CPU was surrounded by dedicated controllers to offload functionality. Intel then wanted to use their CPU to perform all functions, so we got soft modems and dumb NICs, etc. Now we've come full circle. When are we going to start PXE-booting the OS?

  • @ZhangMaza
    @ZhangMaza 10 months ago

    It's like hyperconverged, but specialized for networking. Nice

  • @jannegrey593
    @jannegrey593 10 months ago +9

    They're finally going to use that Pensando buy.

  • @GizmoFromPizmo
    @GizmoFromPizmo 10 months ago

    I get about 430MB/s throughput on a large file copy between servers with old Mellanox ConnectX 10Gb NICs in them (point-to-point using SFP cabling between them). The computers are Server 2012 Enterprise machines with 32GB RAM each. I have the Mellanox settings configured for:
    o 9000-byte jumbo packets
    o 4096 send buffers
    o 4096 receive buffers
    One server is configured as the network SAN (iSCSI target) and has 4x 1TB SSDs (Inland/Microcenter) configured in a RAID 0 array using MS Storage Spaces, residing in my nifty little 4-bay Icy Dock internal enclosure.
    430MB/s is about what the controllers can do in these rigs. SATA transfer speeds are often overstated. (BTW, that's the ~420 MB/s reported by Microsoft during copying, multiplied by 1024.)
    I copy my VHDs from the Hyper-V host every Friday to the iSCSI SAN for backup. Back in the old days, this would take hours. Now it's maybe 30 minutes. 10Gb makes a big difference. SSDs make a big difference. These backups used to go onto HDDs. Ouch.
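    An editorial sanity check on those numbers: 430 MB/s is only about a third of 10GbE line rate but sits right around a single SATA III device's streaming ceiling, which fits the theory that the controller side, not the network, is the cap here.

    ```python
    TEN_GBE = 10e9 / 8            # 1.25 GB/s line rate
    SATA3 = 6e9 / 8 * 0.8         # 8b/10b encoding leaves ~600 MB/s of payload
    observed = 430e6

    print(f"10GbE ceiling   : {TEN_GBE / 1e6:.0f} MB/s")
    print(f"SATA III ceiling: {SATA3 / 1e6:.0f} MB/s (per device, theoretical)")
    print(f"observed        : {observed / 1e6:.0f} MB/s "
          f"({observed / TEN_GBE:.0%} of the link)")
    ```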

    • @TrueThanny
      @TrueThanny 9 months ago +1

      I think Windows is acting as a bottleneck for you there. I've found it's much more difficult to saturate a 10Gbps link in Windows (jumbo frames and all) than in Linux. I never bothered to nail down exactly why.
      I don't think it's your storage. Even consumer SATA SSDs will easily saturate the link at rated speed.

    • @GizmoFromPizmo
      @GizmoFromPizmo 9 months ago

      @@TrueThanny I would love to get faster throughput, but 430 MB/s is pretty good. I watched a Linus video where the dude was getting like a gigabyte per second and I was like, "No way..."
      I know the SATA controller throttles down after it heats up, so I've got some stick-on copper heat sinks on order. I'll see if a little ad hoc passive cooling can take some of the pressure off.
      But I watch Performance Monitor, and as these files are copying, the RAM in the sending computer fills up until there's like 3GB free, then the copy speed plummets. As long as the file is under about 30GB, I can get away without the speed tanking like that. I have a couple of 40GB+ VHDs that copy fine for a while, but once the buffer maxes out, BOOM, the speed is cut to less than half.
      Also, smaller files take longer in this scenario. I have a folder that contains a lot of support files, and when the copy routine gets to those, the speed drops to the floor. It's copying a lot of files, but there is so much file open/file close going on that it takes extra time. I mean, it's not terrible, but it's a big difference.
      Overall, I'm pretty satisfied. The move from HDDs to SSDs was (obviously) tremendous. Also, that Mellanox 10Gb NIC is the tits. Certainly, I'd love to see the Linus 1GB/s performance, but I'm not working with enterprise-level hardware in any of this. I guess the Mellanox cards are enterprise, but that's about it.

  • @cinemaipswich4636
    @cinemaipswich4636 10 months ago

    It was only a week ago that I was going to buy an ITX motherboard/CPU/RAM and multiple Ethernet cards to do what is now on offer. Perhaps these DPU units can be priced in the NUC price range.

  • @einrealist
    @einrealist 10 months ago

    That's why I bought more AMD stock when the Xilinx buyout was announced. :D

  • @GlobalTommy
    @GlobalTommy 9 months ago

    Why is that first Nginx in a Docker container and not just installed on the OS? Why don't you use Certbot with its cron jobs for certificate renewal?

  • @shammyh
    @shammyh 11 months ago +2

    Quantum mechanics jokes... And Star Trek references... AND DPUs?? Is it my birthday??!

  • @coder543
    @coder543 10 months ago +5

    What is the benefit of using Varnish instead of Nginx's built-in caching?

    • @stepansigut1949
      @stepansigut1949 10 months ago

      Varnish lets you write much more elaborate caching policies and request processing. Its VCL is a full-blown programming language that lets you do pretty much anything: custom hashing, cache eviction, request/response modification, merging two backend fetches into a single response, cache bypass depending on request content, etc. If you need to configure your caching more granularly, then Varnish is the way to go. Unfortunately the learning curve is pretty steep, the documentation is suboptimal, and a lot of public VCL examples are either wrong or outright dangerous to deploy. Nevertheless, it is a powerful tool once you get the gist of it.

  • @NegativeROG
    @NegativeROG 9 months ago

    I want a new AMD processor line to be called "Gorlami".

  • @robertpearson8546
    @robertpearson8546 10 months ago

    I have been talking about this for 40 years. As the demand for speed increases, algorithms get implemented in specialized hardware. Look at the Novix CPU: Moore implemented the Forth interpreter (a virtual machine) in hardware (an actual machine) and could execute up to 4 instructions per clock cycle.
    To speed up processes: 1) Use better algorithms (look at Newton's method for square roots vs. the garbage taught in schools). 2) Implement the algorithm directly in hardware. 3) Hire better design engineers. (I once did a stepper motor drive circuit and could only get 9 times the specified ratings with my design. Look at the Ćuk-buck2 vs. the 1920 buck regulator. Look at the bridgeless power-factor corrector vs. the 1920/1988 PC PSU garbage.)
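    For the curious, the Newton's-method square root mentioned above, as a tiny editorial sketch: each iteration roughly doubles the number of correct digits, which is why it beats digit-by-digit schoolbook methods.

    ```python
    def newton_sqrt(n: float, tol: float = 1e-12) -> float:
        if n == 0:
            return 0.0
        x = n if n > 1 else 1.0          # crude initial guess
        while abs(x * x - n) > tol * n:
            x = 0.5 * (x + n / x)        # Newton step: average x and n/x
        return x

    print(newton_sqrt(2.0))              # 1.4142135623730951
    ```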

    • @prashanthb6521
      @prashanthb6521 9 months ago +1

      True, sir, but software engineers are a lethargic lot; they will still push computations through the CPU even if they are given specialized accelerators.

  • @dvdavid888
    @dvdavid888 10 months ago

    So servers will replace L3 switches? Isn't this similar to how they were offloading packet inspection to the GPU? Just trying to get a better understanding

  • @levygaming3133
    @levygaming3133 10 months ago +2

    I thought the AMD video encoding accelerator card was an ASIC, not an FPGA? I remember them mentioning Xilinx in its announcement, but it didn't sound like it was an FPGA.

    • @memory_stick
      @memory_stick 10 months ago

      The one shown in the video (Alveo U30) is based on the Xilinx Zynq platform, which is in fact an SoC with programmable logic (PL, i.e. the "FPGA" part) integrated alongside a quad-core ARM processor block, memory, DSPs, and various high-speed I/O. These devices are not pure FPGAs like some bigger/older parts. There do seem to be pure FPGA-based solutions in the Alveo lineup, though.
      Also, the newest generation of devices from Xilinx is the Versal line, which they call ACAP (Adaptive Compute Acceleration Platform): basically a much bigger Zynq with AI accelerator engines in there too (think high-performance ARM cores + a large FPGA block + AI accelerators + all the I/O and memory stuff in one device).

    • @LtdJorge
      @LtdJorge 10 months ago

      @@memory_stick All true; however, they have ASICs inside to do the video encode/decode. You'll see it referenced as Xilinx IP, which are the "something-on-hardware" blocks.

  • @declanmcardle
    @declanmcardle 10 months ago

    So, is this like an AWS Nitro card on ketracel-white (keeping with the Star Trek references)?

  • @PanduPoluan
    @PanduPoluan 10 months ago

    17:19 "that's under the most extreme load, one that doesn't even exist in the world today"
    whoahhh

  • @MrMpp81
    @MrMpp81 10 months ago +1

    Interesting that Intel is bringing QAT and related accelerators on-chip (and enabling them for subscribers only), while AMD is disaggregating/offloading this workload to Pensando DPU add-in cards. AMD's approach is more customizable because it allows sysadmins/devops to install and configure their choice of generic software on the DPU, but Intel's approach could theoretically also be leveraged for related, well-defined, non-network workloads like ZFS compression and erasure codes. Of course, Intel also offers their own IPUs and PCIe QAT add-in cards. @Level1Techs, I'm curious which approach you think will win in the market?

    • @LtdJorge
      @LtdJorge 10 months ago +1

      Pensando's should be able to do those ZFS tasks; look at Wendell's video on the BlueField DPU. It also means there would be DMA from the disks to the network if using a network share on top of ZFS.
      I'm now wondering if these would be good for offloading Ceph 🤤

  • @joshhardin666
    @joshhardin666 10 months ago +1

    I loved this video, and I'm a big fan of your work, but the A-roll on this video is focused really soft. I realize that lots of people are watching this on a phone and will NEVER notice, but I watch most YouTube videos on my 48" 4K LG CX 120Hz OLED (I use 3 of them on my primary workstation as my standard displays) and it bugged me a bit, so I thought I'd give you some feedback. You might want to enable zebras on your camera (which I assume has zebras, because it's at least 4K, given you use the "trick" of punching into the same frame every once in a while to keep your A-roll fresh, which is a good trick that I like). Thank you for all your hard work. I really do love your videos, and software-defined networking in this way is going to be a total game changer... this technology looks amazing! I can't wait until it trickles down to homelab users like myself!

  • @Kurukx
    @Kurukx 10 months ago +1

    Broader Appealin' :P

  • @AlexDemskie
    @AlexDemskie 10 months ago +1

    Okay, so I gather that NGINX supports this accelerator. That's great for webservers sitting behind the proxy. But for everything else we'll need to embed support for DPDK in the backend apps. Practically speaking, this means writing your app in C/C++. I guess it's possible to write it in Rust/Go and use FFIs, but ain't nobody got time for that.
    Nowadays most devs aren't writing their backends in C++, and getting packets in/out efficiently isn't the bottleneck for servers running interpreted languages like Python and JS.
    I guess what I'm trying to say is that we shouldn't trivialize gaining widespread support in everyday applications.
    Network appliances are the exception, however. This is game-changing for proxies, firewalls, switches, routers, controllers, video processors, etc. They'd be crazy not to take advantage of this. If these accelerators sell well enough, I expect open-source implementations of common network appliances to really gain market share and appeal.

    • @paulie-g
      @paulie-g 10 months ago

      This is not for soy devs or peasants, it's for serious people working at a large scale. No one is going to hire JS 'devs' to write DPDK code.

  • @OKuusava
    @OKuusava 9 months ago

    "Today" is a relative thing. As we know, USB 4 and Thunderbolt have been around a long time, and actually, where are they? I have not seen either in any machine yet. Only broken USB-C ports and plugs, those I have seen many of, so something has changed rather fast ;-)

  • @yourma2000
    @yourma2000 2 months ago

    0:03 Yep, that's DEFINITELY San Francisco.

  • @lavavex
    @lavavex 10 months ago

    When will Level1Techs level up to Level2Techs? I would say they are so good that they are already Level100Techs

  • @nakos-zj6lq
    @nakos-zj6lq 3 months ago

    Can you do a benchmark of the fastest possible network you can get with jumbo frames and sensibly priced stuff, like an older Threadripper or AM5? I won't buy 100GbE if it's bottlenecked to the point of being unusable on, e.g., a 7950X3D.

  • @robertpearson8546
    @robertpearson8546 10 months ago

    I thought TCP/IP packet sizes were limited to 4k due to router limitations.

  • @LA-MJ
    @LA-MJ 10 months ago

    Have you tried Caddy?

  • @jfkastner
    @jfkastner 10 months ago

    Security? How is the traffic isolated inside the DPU?

  • @ShowXTech
    @ShowXTech 10 months ago

    I mean, with PCIe Gen 5 you only need a little over 3 lanes for one 100-gigabit connection

  • @seantellsit1431
    @seantellsit1431 10 months ago +2

    The caveat is that AMD and Aruba are charging out the nose for these switches. The last quote I got for 4x 10Gb switches from Aruba with this tech in them was nearly 30k each, depending on configuration. Sure, this tech is awesome, but it's way too early to be useful to anyone in enterprise or even small datacenters. This tech is only useful for big cloud and medium/large datacenters.

    • @wmopp9100
      @wmopp9100 10 months ago +1

      Aruba has a very weird pricing policy. 80% discounts even for medium-sized customers are not unheard of

  • @prashanthb6521
    @prashanthb6521 9 months ago

    I knew the future belonged to accelerators!

  • @SciPunk215
    @SciPunk215 10 months ago

    Nobody brings this kind of content like Wendell and the L1T team.

  • @PrivateUsername
    @PrivateUsername 10 months ago +4

    CXL, anyone?

  • @aflury
    @aflury 10 months ago

    I remember MTUs/IRQs being a bottleneck back with gigabit Ethernet, when old HPC systems (Cray/SGI) with like 200MHz CPUs couldn't keep up. We had to use the slower (800Mbps) HiPPI/GSN, which was faster in practice because of an effective 64KB MTU. But jumbo frames were supposed to fix that... in the '90s... wait, why are we still using 1500-byte MTUs?

  • @TheRealSwidi
    @TheRealSwidi 10 months ago

    I am just here for the Trek references.

  • @IIARROWS
    @IIARROWS 9 months ago

    6:15 not over 9000... Vegeta is pleased?

  • @0M9H4X_Neckbeard
    @0M9H4X_Neckbeard 10 months ago +3

    I wonder if this will hurt Fortinet; it looks like their ASIC-accelerated networking will be coming to everyone

    • @LackofFaithify
      @LackofFaithify 10 months ago +2

      Their customers don't seem to be irked by all of their repeated security screw-ups, so I doubt this will change their sales.

  • @aterentyev
    @aterentyev 10 months ago

    Wendell, please make a video running a game that supports DirectStorage from a network drive on a NAS over a 100GbE LAN.
    Will games loading off RAID-Z2 with 6x PCIe NVMe drives be fast? Is it stupid? Yes. But really, I think there's a practical application in a world with 10-gig home internet: streaming games with multiple terabytes of high-res textures directly off the cloud.

  • @gxtoast2221
    @gxtoast2221 6 months ago

    Might be the nail in the coffin for TCP... hello UDP and QUIC+ or whatever tech/standard it becomes.

  • @WesFelter
    @WesFelter 10 months ago

    Doesn't TSO/LRO solve the 1500 MTU problem?

  • @datapro007
    @datapro007 10 months ago

    Wendell is out of focus. Try a higher f-stop for greater depth of field?

  • @VelcorHF
    @VelcorHF 10 months ago +1

    Terraform?

  • @PhilippHaussleiter
    @PhilippHaussleiter 10 months ago +1

    One question I'm still trying to get answered: how does a router perform when the public NIC uses a 1.5k packet size and the internal NICs use 9k? And also the other way around (e.g. a VM host with an external 10G/9k link, but internal non-10G VM NICs).

    • @Level1Techs
      @Level1Techs 10 months ago +2

      With the DPU, packet size doesn't matter anymore. The P4 pipeline is so fast that it matters much less, I guess I should say. Even wireline SSL offload is faster than a simple SMB file copy with Pensando.

    • @0LoneTech
      @0LoneTech 10 months ago

      There are routers that can defragment as well as fragment (splitting large packets into smaller ones). Path MTU discovery will aim to use the lowest common MTU for particular routes. Each has costs in memory, latency, and metadata overhead; higher-performance networks like DTM can move the planning out of the frame level.

    • @PhilippHaussleiter
      @PhilippHaussleiter 10 months ago

      @@0LoneTech Thank you for the answer. Do you by any chance have a link that provides a good summary of this topic?

    • @0LoneTech
      @0LoneTech 10 months ago +1

      @@PhilippHaussleiter You could start with Wikipedia's "Maximum transmission unit" article, then follow the link to "IP fragmentation" and its subheading "Impact on network forwarding".
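      A minimal Linux-only probe of the path MTU discovery mentioned above (editorial sketch; the constants are raw Linux ABI values, since the socket module does not export them on every build, and the host is a placeholder):

      ```python
      import socket

      IP_MTU_DISCOVER = 10      # Linux setsockopt option
      IP_PMTUDISC_DO = 2        # always set Don't Fragment
      IP_MTU = 14               # getsockopt: kernel's current path MTU estimate

      def probe_path_mtu(host: str, port: int = 443) -> int:
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
          s.connect((host, port))
          try:
              # An oversized send with DF set fails with EMSGSIZE once the
              # kernel knows a smaller MTU (locally or via ICMP Frag Needed).
              s.send(b"\x00" * 9000)
          except OSError:
              pass
          return s.getsockopt(socket.IPPROTO_IP, IP_MTU)

      print(probe_path_mtu("192.0.2.10"))   # placeholder host
      ```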

  • @Liqtor
    @Liqtor 9 months ago

    Focus... FOOOCUUUUSSSSS!!

  • @goblinphreak2132
    @goblinphreak2132 10 months ago

    Broader appeal? Would you say the plan is bananas? I'll see my way out

  • @udirt
    @udirt 10 months ago

    For the sake of being fair: F5 had stellar infrastructure-automation interfaces long, long before people even started saying "IaC"

  • @dexxeve9420
    @dexxeve9420 10 months ago

    Not overly important, Wendell, but the camera was not focused on your face; it seemed a bit off

  • @waterflame321
    @waterflame321 10 months ago

    That is Epyc, bro

  • @thehristokolev
    @thehristokolev 10 months ago +1

    @Level1Techs It's not enough for certbot (or whatever you are using to get your certificates) to overwrite them on disk. You need the nginx process to reload its configuration for the new certs to apply. I just have it set to restart itself every month. :D
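    A sketch of one way to automate that (editorial addition; the certificate path is an assumption): watch the cert file and trigger nginx's graceful reload, which swaps in the new cert without dropping connections.

    ```python
    import os
    import subprocess
    import time

    CERT = "/etc/letsencrypt/live/example.com/fullchain.pem"  # assumed path

    last = os.stat(CERT).st_mtime
    while True:
        time.sleep(3600)                  # check hourly
        mtime = os.stat(CERT).st_mtime
        if mtime != last:
            last = mtime
            # "nginx -s reload" re-reads config and certs; old workers
            # drain gracefully, so no clients are dropped.
            subprocess.run(["nginx", "-s", "reload"], check=True)
    ```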

    • @rpm10k.
      @rpm10k. 10 months ago

      Nginx Proxy Manager FTW. Does all this on its own... and has a nice GUI

    • @LtdJorge
      @LtdJorge 10 months ago

      You can reload the service without dropping clients in Nginx

  • @dgo4490
    @dgo4490 10 months ago +4

    "People need security built into the network equipment" - I'd say it is network equipment that needs that by default. People don't stand to benefit much from learning to blindly rely on black-box security solutions that may or may not work as expected. As a rule of thumb, I always assume I can't trust the infrastructure for security, especially in this day and age when it is all in the cloud and even hardware purchases can be intercepted and implanted with nefarious firmware, so I handle it on the software side as thoroughly as possible. If the hardware is secure, that's an added bonus, but in case it is not, the vulnerability is minimized.

    • @wiziek
      @wiziek 10 months ago +1

      That's a lot of conspiracy theory. Will you lay your own fiber, routers, and switches between the locations you want to connect?

    • @dgo4490
      @dgo4490 10 months ago

      @@wiziek No, as I mentioned already, I will use whatever is available, presume it unsafe, and do my best to provision for that. I will not write software that relies on magical black-box security, like, say, SSL. Of course I also use SSL, but I don't rely on it as a magic catch-all solution that lets me safely do insecure programming underneath. My code would in fact be just as safe if it didn't use SSL at all; I am only using it because browsers complain about it, and some outright refuse non-SSL connections. There's zero reliance on SSL for security on my end.
      It is no different than relying on your software's users to maintain safe systems. You can't possibly know what a user does with a system in addition to running your software, what sites they visit or what software they install, or how much malware they might have running. I do not presume that my software will run on a safe system, whether that's an end-user system or a cloud instance.

    • @LackofFaithify
      @LackofFaithify 10 months ago +1

      @@wiziek How is saying you won't implicitly trust something whose inner workings you can't actually know "conspiratorial"? From Intel ME to Cisco backdoors, the list of untrustworthy devices doing bad things the user has zero visibility into is pretty long. I would be far more curious as to why anyone WOULD trust at this point.

    • @jmwintenn
      @jmwintenn 10 months ago +1

      Just saying, if there's a chip allowing a backdoor, no amount of software can stop that. Look at the people with fully rooted Pixel phones that still ended up with the cough tracker getting loaded on their phone.
      If it makes you feel better, that's fine, but know that software loses to hardware.

  • @destrozar
    @destrozar 10 months ago

    This is very interesting stuff

  • @pedroferrr1412
    @pedroferrr1412 10 months ago

    "Pensando" means exactly "thinking" in Portuguese.

  • @supercompooper
    @supercompooper 10 months ago

    I am always uncomfortable dropping end-to-end encryption. I always encrypt between the containers.

    • @JosiahBradley
      @JosiahBradley 10 months ago

      You're adding unneeded overhead to the system, and it doesn't actually add security, since the data is completely decrypted in memory anyway. If a regular user can see your internal TCP or socket streams, your system is already hosed. A proper SELinux setup and ACLs make this unnecessary.

    • @supercompooper
      @supercompooper 10 months ago

      @@JosiahBradley What do you mean, sees internal TCP socket streams? How would they do that? If I use SELinux they can't trace the calls. I am mainly worried about tcpdump, if someone is root outside the container?

    • @JosiahBradley
      @JosiahBradley 10 months ago

      @@supercompooper Root outside a container can't violate an SELinux boundary directly, but they can totally read your encrypted traffic if they are root locally; it'll just get audited. There's literally nothing to save you on the same box. If you need that level of security you should be air-gapping already and have root fully disabled.

  • @wahdangun
    @wahdangun 10 months ago

    the better Killer NIC?

  • @jairo8746
    @jairo8746 10 months ago +1

    The blurriness made it hard to watch.

  • @nagi603
    @nagi603 10 months ago

    You *could* shove it in the server... if you manage to get one.