All you need to know about the Crush Map

  • Published: Jan 29, 2025

Comments • 15

  • @SeandonMooy • 2 years ago • +3

    Awesome! Thank you for this. Ceph/Rook needs more content - although you might get tons of viewers - still - great video!

  • @damiendye6623 • 2 years ago • +1

    Thanks very much for this. It will help when combined with the other classes you have given.

  • @LampJustin • 2 years ago • +1

    Thanks for the insights! The device classes are great! That would be really good for HDDs with a write cache, for example. Cool.

  • @arka6302 • 8 days ago • +1

    What is meant by 1.0 in sudo ceph osd crush set osd.0 1.0 host=n3?
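
    For context, the 1.0 here is the CRUSH weight assigned to osd.0 when it is placed under host n3; by convention the weight roughly equals the device capacity in TiB. A small illustration (osd.1 and the weight 4.0 are made-up example values):

      # Place osd.0 under host n3 with a CRUSH weight of 1.0
      sudo ceph osd crush set osd.0 1.0 host=n3
      # A hypothetical 4 TB drive would typically get a weight around 4.0
      sudo ceph osd crush set osd.1 4.0 host=n3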

  • @Digalog • a year ago • +1

    I don't understand moving an OSD to another host with that command. There is no disk allocated for the OSD to run on that other node, right? I like your style! Thanks for the demonstration.

    • @DanielPersson • a year ago

      Hi Digalog
      Well, moving an OSD from one host to another can only be done virtually, and that is not a good use of resources. An OSD is usually tied to hardware: a hard drive or a solid-state drive of some sort.
      But you could change your hierarchy by stating where your hardware is located, and the names of these places could change as your infrastructure grows. For instance, we will move the hardware to another location when we set up new data centers.
      I hope this helps.
      Best regards
      Daniel
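
      A rough sketch of the kind of hierarchy change described above; the bucket names (dc-south, rack-1) and the host n3 are made-up examples:

        # Create a new datacenter and rack bucket and attach them to the tree
        sudo ceph osd crush add-bucket dc-south datacenter
        sudo ceph osd crush move dc-south root=default
        sudo ceph osd crush add-bucket rack-1 rack
        sudo ceph osd crush move rack-1 datacenter=dc-south

        # Relocate an existing host (and the OSDs under it) into the new rack
        sudo ceph osd crush move n3 rack=rack-1

        # Check the resulting hierarchy
        sudo ceph osd tree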

  • @deba10106 • a year ago • +1

    Thank you for the video. As you have shown in your tutorial, once a CRUSH rule is updated for a pool, Ceph immediately starts updating. However, can the data protection rule for a pool also be updated? Let's say I have a pool with 3x replication and I would like to change the rule for that pool to an erasure-coded pool. Is that possible?

    • @DanielPersson • a year ago

      Hi Debasis
      Thank you for watching my videos. You can change some of the rule properties at runtime, but I would never recommend switching from replication to erasure coding, as they are so different and require a different setup. So my suggestion would be to migrate to a new pool when you make these drastic changes.
      I don't really see the use case where this would be appropriate.
      Best regards
      Daniel
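
      To illustrate the two options mentioned above: changing a CRUSH rule on an existing pool versus migrating to a new erasure-coded pool. The pool, profile, and rule names are placeholders, and rados cppool has known limitations (it does not copy omap data and is not suitable for RBD or CephFS pools), so treat this only as a sketch:

        # Changing the CRUSH rule of an existing pool at runtime is fine
        sudo ceph osd pool set mypool crush_rule replicated-ssd

        # Switching to erasure coding means creating a new pool and migrating
        sudo ceph osd erasure-code-profile set ec-4-2 k=4 m=2
        sudo ceph osd pool create mypool-ec 64 64 erasure ec-4-2
        sudo rados cppool mypool mypool-ec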

  • @paulfx5019 • 2 years ago

    Many thanks for the awesome tutorials related to Ceph! If I were to build a 5-node cluster and have a pool with a replication of 3 and no erasure code, will the CRUSH map balance data across the 5 nodes?

    • @DanielPersson • 2 years ago • +1

      Hi Paul
      Yes, it definitely will. That is one of the good parts of Ceph: you will always get a good distribution of your data. I currently have 4 nodes with a replication of 3 and it balances pretty well. At work we have more than 100 OSDs with a replication of 3 and it balances out over these nodes.
      So if you have a cluster with 3 nodes and add 2 more, it will rebalance the data over the extra nodes so you get an even distribution. That does not mean that you will save the same amount of data per node, but the Placement Groups will be spread as evenly as possible. And depending on how much data is saved in each group, you might have a bit more data on some nodes than others.
      If you have a bad configuration you could end up in a situation where one disk fills up to 87% while another is just at 17%, which is not optimal.
      Another thing that is taken into consideration is the size of each drive. If you have different sizes, it will try to put more on larger devices. For my cluster with 2x5TB and 2x4TB disks it's almost perfectly even percentage-wise, but the larger drives hold more data than the smaller ones.
      I hope this helps.
      Best regards
      Daniel
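
      A quick way to see this balance and the per-device weights on any cluster (the OSD id and weight in the last command are made-up example values):

        # Show utilization and CRUSH weight per host and OSD
        sudo ceph osd df tree

        # The WEIGHT column is the CRUSH weight, by convention roughly the
        # capacity in TiB, so a 5 TB drive gets a larger share than a 4 TB one.
        # If one OSD fills up much faster than the rest, its weight can be
        # nudged down, e.g.:
        sudo ceph osd crush reweight osd.7 3.4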

    • @paulfx5019 • 2 years ago

      @DanielPersson Many thanks for the valued feedback. Are you using Ceph cache tiering on your work cluster?

    • @DanielPersson • 2 years ago

      Hi Paul
      Yes and no :)
      We are using it for some legacy pools where we need quick access to the filesystem. But we have other pools where a cache tier is overkill and not needed.
      Cache pools are more rigid and harder to change, so you should only use them if you really need to.
      Best regards
      Daniel
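
      For reference, attaching a cache tier to an existing pool follows roughly this pattern; the pool names are placeholders, and newer Ceph releases discourage or deprecate cache tiering, so check the documentation for your version first:

        # Attach cache-pool in front of base-pool in writeback mode
        sudo ceph osd tier add base-pool cache-pool
        sudo ceph osd tier cache-mode cache-pool writeback
        sudo ceph osd tier set-overlay base-pool cache-pool
        sudo ceph osd pool set cache-pool hit_set_type bloom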

    • @paulfx5019 • 2 years ago • +1

      @DanielPersson Great feedback and many thanks for your Ceph videos. I've struggled with doing a PoC and now have one up & running thanks to your tutorials. Looking forward to the next installment of your OpenStack series as I would like to tick this off my bucket list. Cheers

    • @paulfx5019 • 2 years ago • +1

      @DanielPersson Are you considering producing a tutorial on performance tuning Ceph with or without a cache tier?