Kubernetes Networking Intro and Deep-Dive - Bowei Du & Tim Hockin, Google

  • Published: 3 Sep 2020
  • Don’t miss out! Join us at our upcoming events: EnvoyCon Virtual on October 15 and KubeCon + CloudNativeCon North America 2020 Virtual from November 17-20. Learn more at kubecon.io. The conferences feature presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
    Kubernetes Networking Intro and Deep-Dive - Bowei Du & Tim Hockin, Google
    Networking is less complicated than you think! This session is a combined intro and deep dive. This talk will start with some background on Kubernetes networking. Attendees who are not already comfortable with the "hows and whys" of basic networking in Kubernetes can get a bit of a primer before we dive deep on a few of the more recent developments and efforts in the networking space.
    sched.co/ZewF
  • Science

Comments • 11

  • @justinkim7202
    @justinkim7202 2 years ago +3

    Learned a great deal from this one. Good job and thank you.

  • @motdde
    @motdde 1 year ago

    New to k8s. Thanks for creating these videos.

  • @rahulbhatija1680
    @rahulbhatija1680 1 year ago +1

    Great video, great effort.

  • @francoisgervais1
    @francoisgervais1 2 years ago +15

    The deep dive is a bit too deep for me at the moment; I’ll be back later.

  • @jerikho04
    @jerikho04 2 years ago +2

    Thank you for the great video, gentlemen.

  • @karmavil4034
    @karmavil4034 3 years ago

    3:09 😎🕶🎵🎶 Las llaves del reino.. The keys to the kingdom

  • @michaelutech4786
    @michaelutech4786 1 year ago

    Is the only purpose of EndpointSlices to mitigate this scalability issue? If so, wouldn't it be a better solution to propagate state changes instead of the entire state?
    There are so many problems with slicing that I'm surprised it would be the favoured solution:
    The partitioning needs to predict the locality of changes in order to actually reduce the required bandwidth. If a set of changes to be propagated crosses multiple slices, the benefit may not be all that great (with 10 slices and 10 changes to deploy, you would need to transmit at best 10%, at worst 100%, and on average somewhere in between, depending on how slices and changes correlate; if you propagated only the changes, you would need to transmit only the minimal set of information).
    Users will probably be tempted to attach semantics to slices beyond their purpose as an optimisation mechanism, a temptation reinforced by the need to predict related changes (slice locality). Once that happens, especially over the life cycle of an application, the semantic partitioning may well conflict with the optimisation objective.
    Am I misunderstanding the problem?
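
    A rough back-of-the-envelope sketch in Go of the bandwidth trade-off described in the comment above; the endpoint count, per-endpoint size, and slice count are illustrative assumptions, not figures from the talk:

    // Model of the slicing argument above: re-sending one monolithic
    // Endpoints-style object vs. re-sending only the slices that changed.
    // All constants are illustrative assumptions, not numbers from the talk.
    package main

    import "fmt"

    const (
        totalEndpoints   = 1000 // assumed endpoints behind one Service
        bytesPerEndpoint = 100  // assumed serialized size of one endpoint
        sliceCount       = 10   // assumed number of EndpointSlices
    )

    // monolithicCost: any change re-transmits the whole object.
    func monolithicCost(changedEndpoints int) int {
        if changedEndpoints == 0 {
            return 0
        }
        return totalEndpoints * bytesPerEndpoint
    }

    // slicedCost: only the slices containing a changed endpoint are re-sent;
    // changedSlices is how many distinct slices the changes fall into.
    func slicedCost(changedSlices int) int {
        perSlice := (totalEndpoints / sliceCount) * bytesPerEndpoint
        return changedSlices * perSlice
    }

    func main() {
        // Best case for slicing: 10 changes all land in one slice (~10%).
        fmt.Printf("monolithic, 10 changes:        %d bytes\n", monolithicCost(10))
        fmt.Printf("sliced, changes in 1 slice:    %d bytes\n", slicedCost(1))
        // Worst case: 10 changes spread across all 10 slices, no saving.
        fmt.Printf("sliced, changes in 10 slices:  %d bytes\n", slicedCost(10))
    }

    Under these assumed numbers the slice-local case transmits a tenth of the monolithic cost and the fully spread-out case transmits the same amount, which is the best/worst-case range the comment describes.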

  • @Lom_PC
    @Lom_PC 1 month ago

    Nice

  • @ramprasad_v
    @ramprasad_v 1 year ago

    26:19

  • @dvsakg
    @dvsakg 3 years ago +5

    Good content, but the shuttling between presenters is bad.

    • @daroay
      @daroay 3 years ago +7

      I actually like it; it's best effort, but it breaks up possible monotony.