A Conversation About Storage Clustering: Gluster VS Ceph (PART 3)

  • Published: 26 Jul 2024
  • In this 3-part video series, co-founder Doug Milburn sits down with Lead R&D Engineer Brett Kelly to discuss storage clustering. More specifically, they take a deeper look into two open-source clustering platforms: Ceph and Gluster.
    In part 3, Brett sums everything up, talks about best practices, and introduces a new piece of 45 Drives hardware that can accompany your Ceph clustering setup.
  • Science

Comments • 17

  • @tincoandringa4630
    @tincoandringa4630 3 years ago +7

    At 4:37 begins the most concise, well-argued Gluster vs Ceph comparison I've found on the net so far.

  • @TorstenSeemann
    @TorstenSeemann 5 years ago +6

    Great series of videos. Clarified various things for me.

  • @davidg4512
    @davidg4512 5 years ago +1

    Thank you guys so much. Appreciate these 3 videos. Please make more. Thanks.

  • @henrik2117
    @henrik2117 4 years ago +2

    Thank you for publishing this three-video series!
    I really like the "techie + human" way of approaching the topic. Some things are best explained in tech terms, and then everything is wrapped up in a complete and understandable way at the end 👍
    Also, the highlighting of each platform's strengths is great!

  • @skaltura
    @skaltura 4 years ago

    These were good. I have been planning to build a Ceph cluster for years!

  • @davidstievenard6313
    @davidstievenard6313 4 years ago

    Nice and clear explanations, Brett!!!

  • @just1689
    @just1689 2 years ago +1

    Can't believe Brett landed such a sweet gig after attacking Jim in The Office

  • @syakoob18
    @syakoob18 5 years ago

    Nice video. We have started on Ceph and, I agree, love what it is able to do for us.

  • @torbenhrup9759
    @torbenhrup9759 5 years ago

    Great videos. Please make more.

  • @midnightblack7616
    @midnightblack7616 2 years ago

    Nice discussion. For my use case (huge numbers of inodes/small files in a flat directory structure imposed by software), you've clearly pointed me towards Ceph, for which I thank you.
    I wish you had added a very rough overview of resource demands for Ceph and Gluster, if there are any such generalizations. That is, for some arbitrary data store, say 10 TB vs 1,000 TB, or 500k files vs 500M files, would there be a typical difference in CPU/RAM/fast-disk usage/network bandwidth/etc. for Gluster vs Ceph? Ceph more or less demands a separate, dedicated 10G+ data network for just a 3-node cluster; does GlusterFS require similar for decent performance?
    How do system resource requirements scale with the data set for each? That is, if growing from a 3-node cluster to a 9-node cluster (tripling the data store), how much additional CPU/RAM/disk/network will Ceph vs GlusterFS need? For example, one Ceph monitor needs X CPU cores and Y RAM and so on, but the number of Ceph monitors does not increase until Z nodes, and only RAM usage will grow with data, or whatever; and, as you mention, it's easy to migrate a Ceph monitor if it's using too many resources on a compute node. Don't need a really detailed analysis, just a "from our experience" ballpark of where there are very clear differences between Ceph and Gluster (plus the underlying FS, since ZFS would add its own demands).
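
    A rough, hedged answer to the sizing part of this question (a back-of-envelope sketch, not 45 Drives' numbers): the commonly cited Ceph rules of thumb are about one CPU core and roughly 4 GB of RAM per OSD (4 GB being the BlueStore osd_memory_target default), while the monitor quorum stays at 3-5 members regardless of node count. The Python sketch below just turns those assumed constants into cluster totals to show how tripling the node count scales.

        # Back-of-envelope Ceph sizing sketch -- every constant is a
        # rule-of-thumb assumption (~1 CPU core and ~4 GB RAM per OSD;
        # 4 GB is the BlueStore osd_memory_target default), not a benchmark.

        def estimate_ceph_resources(nodes, osds_per_node,
                                    ram_gb_per_osd=4.0, cores_per_osd=1.0):
            """Cluster-wide totals for the OSD layer; tune the constants."""
            osds = nodes * osds_per_node
            return {
                "osds": osds,
                "osd_ram_gb": osds * ram_gb_per_osd,
                "osd_cpu_cores": osds * cores_per_osd,
                # The monitor quorum is flat (typically 3, sometimes 5),
                # so it barely grows as the data set triples.
                "monitors": 3 if nodes < 10 else 5,
            }

        # Growing from 3 nodes to 9 (tripling the store) roughly triples
        # OSD CPU/RAM, while the monitor footprint stays put.
        print(estimate_ceph_resources(3, 8))
        print(estimate_ceph_resources(9, 8))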

  • @oah8465
    @oah8465 2 years ago

    What about database workloads, say Oracle, Postgres, MySQL? Which one do you recommend?
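
    For context on how a database usually lands on Ceph (a hedged sketch, not a recommendation from the video): databases generally want low-latency block storage rather than a shared POSIX filesystem, so the common fit is an RBD image per database host. The pool and image names below are made up; the commands are the standard ceph/rbd CLI driven from Python.

        # Hypothetical sketch: provision a Ceph RBD block device for a
        # database. "db-pool" and "pg-data" are placeholder names.
        import subprocess

        def sh(cmd):
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        sh("ceph osd pool create db-pool 128")          # 128 placement groups
        sh("rbd pool init db-pool")                     # mark the pool for RBD
        sh("rbd create db-pool/pg-data --size 102400")  # 100 GiB (size in MiB)
        sh("rbd map db-pool/pg-data")                   # exposes /dev/rbdX
        # From here: mkfs the device, mount it, and point the database at it.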

  • @leadiususa7394
    @leadiususa7394 5 years ago

    Has this been tested with application-level KRON/KSH scripting samples, in relation to storage-access redundancy TTL scripts mostly?

  • @marcello4258
    @marcello4258 1 year ago

    Well, the question of the day:
    When you are bold enough to build a 3-node Ceph cluster:
    What happens when one node goes down (blows up, or is taken down for maintenance)? Since Ceph requires you to have 3, I assume it will block and you cannot access your data anymore. So you should have at least 4, and a plan for what you do when one is borked.
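
    For what it's worth, the blocking behaviour depends on the pool's replication settings rather than on node count alone: with the defaults of size=3 and min_size=2, a replicated pool keeps serving I/O with one of three nodes down, and 2 of 3 monitors still form quorum; writes block only once fewer than min_size copies remain. A hedged sketch of how to check this with the standard ceph CLI ("mypool" is a placeholder name):

        # Hypothetical check of the settings that govern a one-node outage.
        # With size=3 / min_size=2 (the defaults), I/O continues after one
        # of three nodes fails; it blocks only below min_size copies.
        import subprocess

        def ceph(args):
            out = subprocess.run(["ceph"] + args.split(),
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()

        pool = "mypool"  # placeholder pool name
        print(ceph(f"osd pool get {pool} size"))      # e.g. "size: 3"
        print(ceph(f"osd pool get {pool} min_size"))  # e.g. "min_size: 2"
        print(ceph("quorum_status --format json"))    # 2 of 3 mons = quorum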

  • @francomartin6531
    @francomartin6531 5 years ago +3

    Waiting for the Ceph and VMware integration video.

  • @fluffyfloof9267
    @fluffyfloof9267 5 years ago +1

    Hi, are you still using ZFS?

    • @45Drives
      @45Drives  5 years ago +3

      We do still use ZFS, but only underneath Gluster. When using Ceph, there is no traditional underlying filesystem; Ceph uses what's called "BlueStore" to store objects directly on the disks.
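
      A hedged way to see this distinction on a live cluster (my sketch, not 45 Drives tooling): each Ceph OSD reports its objectstore backend via "ceph osd metadata", whereas a Gluster brick is just a directory on a local filesystem such as a ZFS dataset.

        # Sketch: confirm Ceph OSDs use BlueStore rather than a POSIX
        # filesystem. "ceph osd metadata" is a real command; the parsing
        # around it is my own.
        import json, subprocess

        raw = subprocess.run(["ceph", "osd", "metadata"],
                             capture_output=True, text=True, check=True).stdout
        for osd in json.loads(raw):
            # Expect "bluestore" on any modern cluster.
            print(f"osd.{osd['id']}: objectstore = {osd.get('osd_objectstore')}")

        # Contrast with Gluster, where a brick is a plain directory on,
        # e.g., a ZFS dataset, created with something like (hypothetical):
        #   gluster volume create myvol replica 3 host{1,2,3}:/tank/brick1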