Tuesday Tech Tip - Intro to Ceph Clustering Part 1 - When to Consider It

  • Published: 26 Nov 2024

Comments • 24

  • @Jordan-hz1wr
    @Jordan-hz1wr 2 years ago +6

    "Maintenance? Pull the plug on the thing!"
    SOLD

  • @atrowell
    @atrowell 3 years ago +4

    Extremely relevant, it basically told my organization's history. :-)

  • @Game_Rebel
    @Game_Rebel 1 year ago +1

    incredibly helpful video, thanks!

  • @rapidscampi
    @rapidscampi 3 years ago +3

    Thanks, I found this really useful. I actually had a pretty good grasp of how Ceph works before watching this. What I was interested to understand was how the architecture translates to real-life benefits, what types of organisations would qualify as candidates for adoption, and how the features are reflected in actual use cases. I got all of that from this short video, and also took note of your whiteboarding style, which is top drawer. Except for the line-drawing. Bit wonky on that front :-)

  • @nickway_
    @nickway_ 1 year ago +1

    I am a homelab user looking at building an 8-12 bay TrueNAS server. Why am I watching this? Because it's awesome, that's why. I love the idea behind this. I'm already thinking about how I can do a micro cluster. Hmm....

  • @MikeDent
    @MikeDent 8 months ago

    Nice intro, thank you.

  • @davem3564
    @davem3564 4 years ago +1

    great introduction - thanks

  • @dkimmortal
    @dkimmortal 4 years ago +1

    pretty awesome introduction, thanks

  • @Spooferish
    @Spooferish 10 months ago

    Question: so you don't need RAID on the servers? Will you be adding normal drives, without any RAID or virtual volumes?
    Because if you do RAID 0, a single disk failure means the whole Ceph node has to be rebuilt. And in the case of RAID 5, you lose one HDD's worth of capacity on each server.
    Also, does it support a cache disk as well?
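
    For context on the no-RAID question: the usual Ceph layout is one OSD per raw disk with no RAID underneath, and the "cache disk" role is typically filled by putting the OSD's metadata (DB/WAL) on a faster device. A minimal sketch with ceph-volume; the device names are placeholders:

        # one OSD per raw disk, no RAID or virtual volumes underneath
        ceph-volume lvm create --data /dev/sdb

        # optional: place the OSD's RocksDB/WAL on a faster NVMe partition
        ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p1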

  • @dayo
    @dayo 2 years ago +2

    Hello 45Drives, thank you for this video!
    I'm considering creating a small Ceph cluster (3 nodes, 4 OSDs per node) to separate my compute and storage stacks.
    I understand that Ceph clustering requires at least 10GbE, but what about the client access network (from the compute stack, like Proxmox, to the Ceph cluster)? Is 1GbE enough, or should I have dual 10GbE NICs on each Ceph node (access and cluster)?
    Thank you 😃

    • @45Drives
      @45Drives  2 years ago

      Hey! You can get away with 1GbE on the client access network for Ceph.
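
      (For reference, the split between the access and cluster traffic is just two settings in ceph.conf; the subnets below are placeholders:)

        [global]
        public_network  = 192.168.1.0/24    # client/access traffic (1GbE can suffice for light use)
        cluster_network = 10.10.10.0/24     # OSD replication/heartbeat traffic (10GbE recommended)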

  • @BigBadDodge4x4
    @BigBadDodge4x4 1 year ago

    What about power consumption and heat production? Does Ceph keep ALL the spinners spinning, or does it only spin them up as needed? I started with UNRAID because it allowed me to have a server with 30 spinners and a cache pool of SSDs. All new data gets written to the cache, then once a day it moves data to the spinners, basically keeping the spinners off until needed. (In my use case, data that gets written is hardly ever accessed again 95% of the time.) Let's say I have 5 Ceph servers, all with spinners, and after a year the data is spread equally across them all. Now when a backup runs, it needs to spin up almost all the drives to confirm what data is there. 3.5" drives produce most of the heat, so the less they are on, the better. Does Ceph have a solution for limiting disk spin-up, such as pools or a file cache in RAM? THANKS!

  • @Randall363
    @Randall363 4 years ago

    Excellent talk

  • @slesru
    @slesru 1 year ago

    I don't see backup here...

  • @RayZde
    @RayZde 4 months ago

    You really need a minimum of 4 nodes for high availability.

  • @Unselfless
    @Unselfless 1 year ago

    So, you're RAIDing servers instead of just RAIDing drives. Sounds like a good idea to me.

  • @ragtop63
    @ragtop63 2 years ago +1

    Very interesting. What do you have for small businesses that don't necessarily have $20K to spend on a storage solution? Please don't tell me to reach out to one of your people. This video is posted on a public site with a public comments section. Be transparent and share the info with everyone here. Thanks.

    • @45Drives
      @45Drives  2 years ago +5

      Thanks for the comment!
      Budget is a critical part of every storage admin's decision, and matching your needs to an appropriate solution is critical. Sometimes Ceph may not be the right answer, as it was designed for large scale and for as many 9s of availability as you can get. That's overkill for a lot of businesses.
      With that in mind, we're more than just a Ceph company! We're an open-source storage solution company, so for the many use cases/environments that Ceph does not fit, we still offer our ZFS single-server options, which are much more economical and still provide a solid framework to grow your company's infrastructure. We have a variety of videos that cover ZFS and its capabilities, such as snapshotting, encryption, RAID standards, and caching, all of which can help with designing a storage infrastructure that will work for you and your budget.
      Of course, "homebrewing" a system is always an option; if you have the time and dedication to do so, putting together a NAS yourself is a great way to keep to a budget, and it will also give you a much more intimate understanding of your storage solution.
      Your storage solution can be these 3 things: good, fast, cheap. Pick 2. What we mean by that is: if you want something good AND fast, it won't be cheap. If you want something cheap and fast, it is not going to be that good (it may use some cheap caching mechanism that is not safe or reliable). If you want something cheap and good, it is not going to be fast. We can build very redundant Ceph clusters that are extremely reliable and on a budget by using 3 base units with a few disks each; it just won't be fast. It is our job at 45Drives and as storage architects to take all of the information you give us about what you need, what you want, and what you'd like, and deliver the best possible solution within the budget you set forth.
      Hope some of this helps your decision-making. Thanks again for the question.

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 3 years ago +1

    OMFG, you want $20,000 for 140 TB of usable space? Highway banditry. I can get three servers from HP or Dell and install Ceph myself for less than half of that. I can do it for maybe $6K using new drives but used eBay servers, and hell, I'd even throw in 100GbE NICs at that price. And I still don't understand why you need 3 as a minimum instead of 2.

    • @dariusbucinskas1880
      @dariusbucinskas1880 3 years ago +2

      It's a steep number for us average people, but experience, stability, and reassurance have their price. You need 3 so you have an odd number of voting members to verify that the data is correct and fix any corruption. If it's 2, there's a stalemate: who has the uncorrupted data? The vote is 1 vs 1. New storage bays are expensive; yes, you can get used ones, but there's no warranty, and some businesses care about that. That being said, I'd take your approach, because that's all I can afford, but with the recommended 3 nodes.
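
      (As a worked example of the voting math: quorum needs a strict majority, floor(n/2) + 1. With n = 3 monitors, quorum is 2, so the cluster keeps running with 1 node down. With n = 2, quorum is also 2, so losing either node halts the cluster; a second node adds hardware but no extra availability, which is why 3 is the minimum.)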

    • @zyxwvutsrqponmlkh
      @zyxwvutsrqponmlkh 3 years ago

      @@dariusbucinskas1880 Can't you tell who is corrupt by looking at a checksum? Do you really read the data on all three servers for every read request and then compare them all? I can't believe that's how it works down in the nuts and bolts. I imagine a checksum is created per file or block of files (for small files), and you only have to ask the other servers if you fail a checksum on read.

    • @mitcHELLOworld
      @mitcHELLOworld 2 years ago +2

      @@zyxwvutsrqponmlkh Please look up split brain, sir!
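
      (For what it's worth, the checksum intuition above is roughly how Ceph behaves: BlueStore verifies a per-block checksum on every read rather than comparing replicas, and periodic background scrubs compare the stored copies. A check or repair can also be triggered by hand; the placement-group id below is a placeholder:)

        ceph pg deep-scrub 2.1f    # compare the stored copies of one placement group
        ceph pg repair 2.1f        # repair an inconsistency found by scrubbing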

    • @samcan9997
      @samcan9997 1 year ago

      I mean, my home setup has an old HP system I bought for £400; it has 60+2 drive bays, so that thing can take a few PB. But that's not really the point.

    • @python3.x
      @python3.x 10 months ago

      3 MONs are needed to elect a MON leader, but if you don't want that, you can set up a single node to run the MON, MGR, OSDs, and CephFS.
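
      (A minimal sketch of that single-node setup using cephadm; the IP is a placeholder, and --single-host-defaults relaxes the replication settings that normally assume multiple hosts:)

        # bootstrap a one-node cluster running a MON and MGR
        cephadm bootstrap --mon-ip 192.168.1.10 --single-host-defaults

        # then create OSDs on all unused disks in the node
        ceph orch apply osd --all-available-devices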