What's Spinnin' with 45Drives - Episode 7 with Jeff Reinke

  • Published: 23 Aug 2024

Comments • 2

  • @Anonymous______________ • 8 months ago

    It's a shame Ceph is such a pain to set up and manage compared to MooseFS, SeaweedFS, GlusterFS, BeeGFS, or MinIO. One thing I have noticed is that none of your videos demonstrate failure, recovery, or scaling with Ceph. I have deployed several multi-petabyte clusters with Ceph, and in nearly all cases erasure coding in conjunction with the CRUSH algorithm has been abysmal at handling drive failures, recovery, and actual scaling (i.e. adding new drives and PGs). Oftentimes there were issues with transient corruption of files or entire volumes. Even worse, the developers removed things like ceph-deploy and offline deployment capabilities in favor of containers, which is complicated for air-gapped environments that require an offline registry. Ceph has way too many constraints for any legitimate production environment, and I suspect the only reason your company uses it is that it's one of the only truly free distributed file systems that supports erasure coding. Most of the others either have restrictions (e.g. they only stripe or replicate data without EC) or performance issues (e.g. GlusterFS).
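    (For context on the setup being criticized, here is a minimal sketch of how an erasure-coded pool with a CRUSH failure domain is typically created from an admin node. The profile name "ec-4-2", the pool name "ecpool", and the PG count are placeholders, and the wrapper simply shells out to the standard ceph CLI; this is an illustration, not the commenter's or 45Drives' actual configuration.)

        import subprocess

        def ceph(*args: str) -> None:
            # Thin wrapper around the ceph CLI; assumes an admin keyring on this node.
            subprocess.run(["ceph", *args], check=True)

        # Hypothetical names: "ec-4-2" profile, "ecpool" pool.
        # k=4 data chunks + m=2 coding chunks; crush-failure-domain=host tells CRUSH
        # to place each chunk on a different host so a whole-host failure is survivable.
        ceph("osd", "erasure-code-profile", "set", "ec-4-2",
             "k=4", "m=2", "crush-failure-domain=host")

        # Create the pool with 128 placement groups (pg_num and pgp_num). Growing the
        # cluster later generally means adding OSDs and raising pg_num, which triggers
        # the rebalancing the comment above refers to.
        ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure", "ec-4-2")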

    • @mitcHELLOworld • 8 months ago • +1

      I'm not going to lie, our experience with Ceph has been almost literally the exact opposite. Ceph can literally be bootstrapped in a single command. We, however, do not containerize our Ceph clusters; we build our own packages and have our own in-house tool for building Ceph clusters on 45Drives hardware. I am the Chief Architect here at 45Drives, and we have successfully deployed and currently support thousands of Ceph clusters. I don't want to insinuate this has anything to do with your knowledge and skills with Ceph, but sir, much of the world's infrastructure runs on Ceph on the back end. You said "Ceph has way too many constraints for any legitimate production environment" - yet Digital Ocean, CERN, AMD, and the list goes on. We have Ceph clusters in over 80 of the top 100 universities in the USA. The fact of the matter is simply that if all of the things you said were true, we would not have a business.
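      (As a rough illustration of the "single command" claim: the upstream cephadm path boots a first monitor and manager from one invocation. This is a sketch only; the monitor IP is a placeholder, and 45Drives' own package-based tooling mentioned above takes a different, non-containerized route.)

          import subprocess

          # Hypothetical monitor IP for the first node; substitute your own.
          MON_IP = "10.0.0.1"

          # cephadm's one-command bootstrap: stands up the first monitor and manager on
          # this host and writes /etc/ceph/ceph.conf plus the admin keyring; additional
          # hosts and OSDs are then added through the orchestrator.
          subprocess.run(["cephadm", "bootstrap", "--mon-ip", MON_IP], check=True)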