What's Spinnin' with 45Drives - Episode 7 with Jeff Reinke

  • Published: 15 Jul 2024
  • We want to leverage the power of "community" and encourage the spirit of innovation and collaboration when it comes to data storage. What better way to do that than with a podcast appropriately titled "What's Spinnin' with 45Drives"?
    In episode 7, we will be talking to Jeff Reinke, Director of Content & Audience Development for Industrial Media. Jeff has over 15 years of experience creating and managing strategic B2B content. We wanted to chat with Jeff about his favorite industries for content creation, his podcast hosting duties, and strategies for dealing with cyberattacks, with a particular focus on the manufacturing sector.
    Episodes also available on #Spotify and #Apple later in the week.
  • Science

Comments • 2

  • @Anonymous______________ • 7 months ago

    It's a shame Ceph is such a pain to set up and manage compared to MooseFS, SeaweedFS, GlusterFS, BeeGFS or MinIO. One thing I have noticed is that none of your videos demonstrate failure, recovery, or scaling with Ceph. I have deployed several multi-petabyte clusters with Ceph, and in nearly all cases the erasure codes in conjunction with the CRUSH algorithm have been abysmal at dealing with drive failures, recovery, and actual scaling (i.e. adding new drives and PGs). Oftentimes there would be issues with transient corruption of files or entire volumes. Even worse, the developers removed things like ceph-deploy and offline deployment capabilities in favor of containers, which is complicated for air-gapped environments that require an offline registry. Ceph has way too many constraints for any legitimate production environment, and I suspect the only reason your company utilizes it is that it's one of the only truly free distributed file systems that supports erasure codes. Most of the others either have restrictions (i.e. they only stripe or replicate data without EC) or performance issues (e.g. GlusterFS).
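
    For context, a minimal sketch of the erasure-coded pool and PG-scaling workflow the comment is describing, using standard Ceph CLI commands; the profile name "ec42", pool name "ecpool", and PG counts are placeholder examples, not anything specific to 45Drives' deployments.

        # Define a 4+2 erasure-code profile and create a pool that uses it
        ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
        ceph osd pool create ecpool 128 128 erasure ec42
        # After adding new OSDs, grow the placement-group count (the scaling step referenced above)
        ceph osd pool set ecpool pg_num 256
        # Watch recovery/backfill progress while data rebalances
        ceph status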

    • @mitcHELLOworld • 6 months ago • +1

      I'm not going to lie, our experience with Ceph has been almost literally the exact opposite. Ceph can literally be bootstrapped in a single command. We, however, do not containerize our Ceph clusters; we build our own packages and have our own in-house tool for building Ceph clusters on 45Drives hardware. I am the Chief Architect here at 45Drives, and we have successfully deployed and currently support thousands of Ceph clusters. I don't want to insinuate this has anything to do with your knowledge and skills with Ceph - but sir, much of the world's infrastructure runs on Ceph on the back end. You said "Ceph has way too many constraints for any legitimate production environment" - I mean Digital Ocean, CERN, AMD, the list goes on. We have Ceph clusters in over 80 of the top 100 universities in the USA. The fact of the matter is simply that if all of the things you said were true, we would not have a business.
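
      A rough sketch of the single-command bootstrap being referenced, shown via the upstream containerized cephadm path; 45Drives' in-house, non-containerized tooling is not public, so treat this as the generic upstream equivalent. The monitor IP and hostname are placeholders.

          # Bootstrap a new cluster on the first node
          cephadm bootstrap --mon-ip 10.0.0.1
          # Enroll an additional host and let the orchestrator create OSDs on available disks
          ceph orch host add node2
          ceph orch apply osd --all-available-devices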