Failing Better - When Not To Ceph and Lessons Learned - Lars Marowsky-Brée, SUSE

  • Published: 26 Jul 2024
  • Failing Better - When Not To Ceph and Lessons Learned - Lars Marowsky-Brée, SUSE
    Talks on where and how to utilize Ceph successfully abound; and rightly so, since Ceph is a fascinating and very flexible SDS project. Let's talk about the rest.
    What lessons can we learn from the problems we have encountered in the field, where Ceph may even have ultimately failed, or where Ceph's behaviour was counter-intuitive to user expectations? And in those cases, was Ceph itself suboptimal, or were the expectations off?
    Drawing from several years of being the engineering escalation point for projects at SUSE and community feedback, this session will discuss anti-patterns of Ceph, hopefully improving our success rate by better understanding our failures.
    About Lars Marowsky-Brée
    SUSE
    Distinguished Engineer
    Berlin Area, Germany
    Lars works as the architect for Ceph & software-defined storage at SUSE. He is a SUSE Distinguished Engineer and represents SUSE on the Ceph Foundation board.
    His speaking experience includes various Linux Foundation events, Ceph Days, OLS, linux.conf.au, Linux Kongress, SUSECon, and others. Previous notable projects include Linux HA and Pacemaker.
    Lars holds a Master of Science degree from the University of Liverpool. He lives in Berlin.
  • Science

Comments • 5

  •  5 years ago +17

    Sorry for the two typos! I cannot for the life of me explain the "plural apostrophe" on the title slide, and my colleague's name is properly spelled "João" instead. If you've got any feedback, please reach out to me by email so I can continue improving the talk!
    I'm also happy to answer any questions or follow-ups.

    • @kelownatechkid  3 years ago +5

      Was a great presentation, much appreciated.

  • @ytdlgandalf  3 years ago +6

    The availability aspect of Ceph is unbeatable. I'd hate to go back to DRBD. That's why I run Rook/Ceph in my 4-node k8s cluster. The future is now.

    • @orsonc.badger7421  2 years ago +1

      TOTALLY agree! Nothing out there that I have looked at is even close to as decent as Ceph. I have tried quite a few; the best part is that rook-ceph works very well in the cloud on bare metal!

  • @yourjjrjjrjj  1 month ago

    Is a 3-node Ceph cluster (with, let's say, 5 OSDs per node) OK for production? Or is it significantly better to have 5 nodes with 3 OSDs per node? I only care about performance.