Tuesday Tech Tip - Intro to Ceph Clustering Part 4 - Self Balancing and Self Healing

  • Published: 26 Jul 2024
  • Each Tuesday, we will be releasing a tech tip video that will give users information on various topics relating to our Storinator storage servers.
    This week, co-founder Doug Milburn finishes his series about Ceph storage clustering. Part 4 looks at the magic of Ceph with self-balancing and self-healing.
    Check out part 1 that gives an overview of when you should consider moving from a single server infrastructure to a cluster: • Tuesday Tech Tip - Int...
    Check out part 2 that gives an overview of the components that make up a starter Ceph storage cluster: • Tuesday Tech Tip - Int...
    Check out part 3 that looks at how Ceph works in terms of providing your cluster setup with data security: • Tuesday Tech Tip - Int...
    The scalability, availability, and future-proof nature of clusters come into play whether it is a large, multi-petabyte deployment or an entry-level cluster under 150TB. With 45 Drives, you can get a (US)$20,000 storage cluster that will continually grow with your business needs and gives you access to possibly the most well-rounded storage software ever designed.
    Visit our website: www.45drives.com/
    Check out our GitHub: github.com/45drives
    Read our Knowledgebase for technical articles: knowledgebase.45drives.com/
    Check out our blog: 45drives.blogspot.com/
    Have a discussion on our subreddit: / 45drives
    Be sure to watch next Tuesday, when we give you another 45 Drives tech tip.
  • Science

Comments • 12

  • @hasarangam3045
    @hasarangam3045 11 months ago +1

    Watching this video series is the simplest way for a beginner to learn Ceph. Thank you very much.

  • @IvayloKrumov
    @IvayloKrumov 2 years ago +2

    Complex things explained in such an easy-to-digest way shows true patience and understanding of the topic.

  • @stephenjohnston746
    @stephenjohnston746 2 years ago +1

    Really well done. Very approachable. Thank you!

  • @oachkatzlschwoaf01
    @oachkatzlschwoaf01 3 years ago +2

    Simple and well explained, thank you for the content!

  • @Alpha725_
    @Alpha725_ 4 years ago +4

    Thanks again for the great content. Such an exciting concept.

    • @TmanaokLine
      @TmanaokLine 2 years ago +1

      Better than just a concept! Tried and true in many, many datacentres. Much better than Gluster, Lustre, or MooseFS.

  • @geoffgates8264
    @geoffgates8264 2 years ago

    Well done, getting to love Ceph.

  • @caseyknolla8419
    @caseyknolla8419 5 months ago

    Really appreciate this series. I'm looking to learn Ceph in a homelab environment, and I just got my HL15, so I've got a clean-slate storage server to work with. I don't like the limitations that ZFS has when it comes to arbitrary expansion; in contrast, I really like the hybrid RAID that Synology provides, which self-balances across drives as you add them, similar to what you described Ceph doing. I'd like to move away from the closed Synology ecosystem but keep that powerful feature. I'm considering making 3 VMs on a single server and having each one manage 5 drives. I realize this eliminates much of the high availability against a whole-server outage, since the VMs share the same motherboard and power supply, but being a homelab, I'm willing to accept that risk. I should still get RAID5-like protection against hard drive failure if I understand correctly. But, best of all, after what I learned from this video, I could add another physical server down the line, add it to the cluster, let it self-balance, then intentionally kill off one of my VMs, consume the now-unused hard drives with the other 2 VMs, and return to a 3-system cluster, but now with 2 physical servers. I think this means I could scale indefinitely and gradually achieve true high availability once I reach 3 physical servers. Is this concept sound?

    • @45Drives
      @45Drives 5 months ago +1

      Hey Casey, love to hear that you are combining your knowledge of homelab and clustering. I think these questions would be perfect for our homelab forum. Do you mind posting in there? I think between our team and the rest of the community, we can get you started on the right foot!
      Head on over to the forum here: forum.45homelab.com/

    • @caseyknolla8419
      @caseyknolla8419 5 months ago

      @45Drives will do, thanks

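The expansion path sketched in the thread above can be illustrated in miniature. The toy Python below is not Ceph's CRUSH algorithm; it uses plain rendezvous hashing just to show the two behaviours this video covers: replicas spread onto a newly added node (self-balancing), and lost replicas get re-created on the surviving nodes when one is retired (self-healing). The host names (vm1, vm2, vm3, server2), the object count, and the 3-replica setting are illustrative assumptions chosen to mirror the comment, not anything from the video.

```python
# Toy model of replica placement, rebalancing, and re-replication.
# NOT Ceph's CRUSH algorithm -- just a minimal sketch of the behaviour
# discussed above: objects spread across hosts, and the placement
# converges back to a balanced, fully replicated state when hosts
# are added or removed. Host names and counts are made up.

import hashlib
from collections import Counter

REPLICAS = 3  # like "size = 3" on a replicated Ceph pool


def place(obj: str, hosts: list[str], replicas: int = REPLICAS) -> list[str]:
    """Pick `replicas` distinct hosts for an object using rendezvous
    (highest-random-weight) hashing, so most placements survive when
    hosts come and go. Ceph uses CRUSH for this; the idea is similar."""
    scored = sorted(
        hosts,
        key=lambda h: hashlib.sha256(f"{obj}:{h}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]


def distribution(objects: list[str], hosts: list[str]) -> Counter:
    """Count how many object replicas land on each host."""
    counts = Counter()
    for obj in objects:
        counts.update(place(obj, hosts))
    return counts


objects = [f"obj-{i}" for i in range(10_000)]

# Start like the plan above: three nodes (VMs) in one chassis.
hosts = ["vm1", "vm2", "vm3"]
print("3 nodes:", distribution(objects, hosts))

# Self-balancing: add a fourth node and some replicas move onto it.
hosts.append("server2")
print("4 nodes:", distribution(objects, hosts))

# Self-healing: retire vm3; its replicas are re-created on the survivors
# so every object is back to three copies.
hosts.remove("vm3")
print("vm3 gone:", distribution(objects, hosts))
```

In a real cluster, the rough equivalent of those three print statements would be watching `ceph -s` and `ceph osd df` while backfill runs after each membership change.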
  • @kiransonawane2940
    @kiransonawane2940 1 year ago

    Awesome.

  • @orthodoxNPC
    @orthodoxNPC 3 years ago

    thanks!