Architecture & Build - Building a Petabyte Ceph Cluster for a Veeam Repository

  • Published: 17 Oct 2024

Comments • 27

  • @michaelmauer1385 • 7 months ago +1

    Thank you for sharing the Windows part, this can be very useful!

  • @jasonfehr2978 • 2 years ago +4

    Great vid, seeing real environments and use cases is awesome.

  • @nyx3492 • 2 years ago +2

    Great vid! A follow-up video with more advanced topics on Veeam + Ceph would be nice.

  • @jasonfehr2978 • 2 years ago +3

    Also, 100% agree on Windows Terminal. I run everything out of there.

  • @Scania73 • 2 years ago +3

    *This is the new whiteboard like* =P

  • @kindred27 • 2 years ago +1

    Please do a more detailed explanation of the actual disk space used on the repository, as well as maybe an in-depth talk about deduplication on a ReFS-formatted Ceph drive. Pros / cons.
    Thank you for an awesome video 🙏

  • @andrewjohnston359 • 2 years ago +3

    Awesome - have been waiting for a video on this stuff. Tried to figure out Veeam Cloud Connect connecting to a Ceph cluster about a year ago...didn't know about the RBD driver for Windows. Any chance you could go a bit further and replicate a Veeam Cloud Connect environment with scale-out to S3/block storage, and what it looks like from both the provider's and the customer's point of view?
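    For anyone chasing the Windows RBD driver mentioned above: with Ceph for Windows (which bundles the WNBD driver), an RBD image can be mapped as a local disk on the repository server and then formatted for Veeam. A minimal sketch; the pool name `veeam-pool` and image name `repo01` are illustrative, and the commands assume a working `ceph.conf` and keyring on the Windows host. These need a live cluster, so treat them as a sketch rather than a tested recipe:

    ```shell
    # Create the backing image on the cluster (size is illustrative)
    rbd create veeam-pool/repo01 --size 100T

    # On the Windows repository server (Ceph for Windows + WNBD driver):
    # map the image; it appears as a new raw disk in Disk Management,
    # where it can be brought online and formatted (e.g. ReFS, 64K clusters)
    rbd device map veeam-pool/repo01

    # List current mappings / unmap when finished
    rbd device list
    rbd device unmap veeam-pool/repo01
    ```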

  • @TayschrennSedai • 2 years ago +4

    Now I almost feel like it's overkill doing 300 TB of SATA SSD for my Ceph Veeam repository 🤣
    Hopefully it will be running by next week.

  • @thecruzader4882 • 2 years ago +1

    One day they will get the funding for both a larger whiteboard and some cardboard templates to help draw square blocks.

  • @Melpheos1er • 2 years ago +1

    Leaving a like so you get a better whiteboard

  • @paulfx5019 • 2 years ago +1

    Hi guys, great video....what is better for a Proxmox VM-storage-only Ceph cluster? Replication or erasure coding? Cheers

    • @45Drives • 2 years ago +2

      Hey Paul,
      Replication is going to be your go-to for any random, latency-sensitive workloads (databases, VMs, etc.). EC parity calculations mean additional latency for your workload, which is not ideal for OS disks, databases, etc. that require more IOPS. EC is fantastic at large streaming reads and writes (throughput-based), which typically are not the main workload of a VM OS disk.
      Hope this helps. Thanks!
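      The split described above maps directly onto Ceph pool creation. A minimal sketch; pool names, PG counts, and the k=4/m=2 profile are illustrative choices, not anything from the video, and the commands require a running cluster:

      ```shell
      # Replicated pool (3 copies) for latency-sensitive VM/RBD workloads
      ceph osd pool create vm-pool 128 128 replicated
      ceph osd pool set vm-pool size 3
      ceph osd pool application enable vm-pool rbd

      # Erasure-coded pool (k=4 data + m=2 parity chunks) for large
      # sequential backup streams such as a Veeam repository
      ceph osd erasure-code-profile set veeam-ec k=4 m=2 crush-failure-domain=host
      ceph osd pool create veeam-pool 128 128 erasure veeam-ec
      ceph osd pool set veeam-pool allow_ec_overwrites true   # required for RBD on EC
      ceph osd pool application enable veeam-pool rbd
      ```

      Note that RBD image metadata cannot live in an EC pool: an image is created in a replicated pool with `--data-pool veeam-pool` so only the bulk data lands on EC.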

    • @paulfx5019 • 2 years ago

      @@45Drives Many thanks for the feedback. I've noticed that you haven't mentioned Ceph cache tiering pools in any of the tutorials; is that because your experience to date has proven that it doesn't really improve performance?

    • @45Drives • 2 years ago

      @@paulfx5019 That is correct. From our understanding, Ceph cache tiering was created to make EC pools perform well enough back when Filestore OSDs were the only option.
      Since Bluestore OSDs, the benefit of cache tiering pools disappeared pretty quickly. We do not use them in production, and interest in the community has dwindled.
      Red Hat deprecated the feature back in RHCS 2.0, and no active development has occurred since.
      Thanks!

    • @paulfx5019 • 2 years ago

      @@45Drives Many thanks for the feedback and look forward to more videos like this.

  • @Vikashkapoor70044 • 2 years ago +1

    Great

  • @לידורמ • 1 year ago +1

  • @kranstopher • 2 years ago +1

    Hey, does Veeam support Proxmox?

    • @45Drives • 2 years ago +2

      You can certainly set up the Veeam agent on a Proxmox server, or on the VMs themselves.
      Proxmox also offers their own backup server software, which works quite well and is more integrated.

    • @kranstopher • 2 years ago +1

      @@45Drives cool! Thanks guys

  • @marcellogambetti9458 • 2 years ago

    Guys, just watch out for ReFS...sometimes corruption can occur...seen it at various customers.

  • @marcuss1526 • 2 years ago

    Moved not too long ago

  • @damiendye6623 • 1 year ago

    oh dear this no longer works
    root@wlsc-pxmh01:~/temp# crushtool -c text_map -o new_map
    WARNING: min_size is no longer supported, ignoring
    WARNING: max_size is no longer supported, ignoring

    • @samcan9997 • 11 months ago

      It's not that it doesn't work; it was disabled, as the min and max for EC pools are now set to the exact amount for the stripes + parity.

    • @damiendye6623 • 11 months ago

      @@samcan9997 So how is it done now then?

    • @samcan9997 • 11 months ago

      @@damiendye6623 You don't, for EC at all; it has a fixed size, which is what you set. That's why it says unsupported.
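      For reference, the current CRUSH-map round trip the thread is discussing looks roughly like this. The warnings are harmless: `min_size`/`max_size` lines in rules are simply ignored, since an EC pool's size is now fixed at k+m. File and rule names here are illustrative, and the commands require a running cluster:

      ```shell
      ceph osd getcrushmap -o crush.bin     # export the binary CRUSH map
      crushtool -d crush.bin -o crush.txt   # decompile to editable text

      # Edit crush.txt: rules no longer take min_size/max_size.
      # A typical EC rule now just selects k+m OSDs, e.g.:
      #   rule ec-rule {
      #       id 2
      #       type erasure
      #       step set_chooseleaf_tries 5
      #       step take default
      #       step chooseleaf indep 0 type host
      #       step emit
      #   }

      crushtool -c crush.txt -o crush.new   # recompile (min_size/max_size
                                            # warnings can be ignored)
      ceph osd setcrushmap -i crush.new     # inject the updated map
      ```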