Tuesday Tech Tip - The Simplest Way to Build a Ceph Cluster

  • Published: 16 Aug 2021
  • Each Tuesday, we release a tech tip video giving users information on various topics relating to our Storinator storage servers.
    This week, Mark from our R&D team is here to walk you through our latest Houston module, which will greatly simplify building your next Ceph cluster.
    As with our previous Houston module releases, the ongoing theme behind all of these new features is to get everyone out of the command line and make setting up and managing your storage infrastructure super simple. Still using Ansible under the hood, this new Ceph Deploy module graphically takes you through the steps of setting up your own Ceph storage cluster.
    Visit our website: www.45drives.com/
    Check out Ceph Deploy on our GitHub: github.com/45Drives/cockpit-c...
    Check out Ansible: github.com/ceph/ceph-ansible
    Check out our GitHub that includes our Ansible playbooks: github.com/45drives
    Read our Knowledgebase for technical articles: knowledgebase.45drives.com/
    Check out our blog: www.45drives.com/blog/
    Have a discussion on our subreddit: /45drives
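
Since the module drives Ansible under the hood, a graphical runner like this ultimately shells out to ansible-playbook. A minimal Python sketch of that invocation (the playbook and inventory names here are hypothetical, not 45Drives' actual files):

```python
import subprocess

def build_command(playbook, inventory, extra_vars=None):
    """Assemble the ansible-playbook command line a GUI runner would execute."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]
    return cmd

def run_playbook(playbook, inventory, extra_vars=None):
    """Run the playbook and return its exit code (0 means every task succeeded)."""
    return subprocess.run(build_command(playbook, inventory, extra_vars)).returncode
```

A frontend then only has to collect the inventory and variables from the user and surface the exit code and output.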

Comments • 27

  • @intheprettypink · 2 years ago · +19

    Lol someone had fun editing this video.

  • @heechanlee6589 · 2 years ago · +9

    The video editing quality is like... LTT! XD
    Great editing.

  • @GrishTech · 2 years ago · +7

    Love this style haha

  • @TmanaokLine · 2 years ago · +1

    Very cool video! Well made! I'm really impressed with the Ansible playbooks and this nifty Cockpit module you guys have made! I'd never explored Cockpit modules; turns out I've been missing out big time!

  • @benjaminl1173 · 2 years ago · +8

    Don't care what this video is about but I see a Mayhem shirt 🤘

    • @constamentaldrift7016 · 2 years ago · +4

      I knew I made the right call wearing that shirt to work the day we shot this. 🤘

  • @kennibal666 · 5 months ago

    Nice Mayhem shirt!

  • @DaymauhGaming · 1 year ago · +2

    Hey, same question as others: can this work on hardware other than 45Drives'? I'm stuck at the Device Alias playbook (core) step. Would love to hear from you. Huge work on your side, guys, thanks a lot.

  • @lickerishstick · 2 years ago · +3

    Can I suggest something: after each run is done and the Done option becomes available, would it not be best to grey out the Run option? Otherwise someone might click Run again by mistake.

  • @UnleashingMayhem · 2 years ago

    Should we deploy Ceph inside a Kubernetes cluster?
    I mean, if something happened to the cluster, then we wouldn't have access to our PVs and PVCs.
    Is it possible to integrate Ceph with Proxmox and then use it inside Kubernetes?
    What is the best practice here?

    • @45Drives · 2 years ago · +1

      Thanks for the question! We believe Ceph should be standalone from the Kubernetes cluster. Ceph and Kubernetes are both complex systems, so sticking with our theme of keeping it simple and having them work independently of each other is ideal.
      You can consume Ceph storage (external to Kubernetes) through the use of the ceph-csi drivers (github.com/ceph/ceph-csi). Hope this helps!
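
To consume an external Ceph cluster from Kubernetes via ceph-csi, you point a StorageClass at the cluster's ID and an RBD pool. A minimal sketch that builds such a manifest in Python — the clusterID and pool values below are placeholders, and the field names follow the ceph-csi RBD examples rather than anything 45Drives ships:

```python
import json

def rbd_storageclass(name, cluster_id, pool):
    """Build a Kubernetes StorageClass manifest for the ceph-csi RBD provisioner."""
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": "rbd.csi.ceph.com",
        "parameters": {
            "clusterID": cluster_id,   # fsid of the external Ceph cluster (placeholder)
            "pool": pool,              # RBD pool to carve persistent volumes from
            "imageFeatures": "layering",
        },
        "reclaimPolicy": "Delete",
        "allowVolumeExpansion": True,
    }

print(json.dumps(rbd_storageclass("ceph-rbd", "<cluster-fsid>", "kubernetes"), indent=2))
```

PVCs that reference this StorageClass then get their volumes provisioned on the external cluster, so Kubernetes failures don't take the storage down with them.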

  • @dwieztro6748 · 10 months ago

    Hi, what happens if the admin node crashes and needs to be reinstalled from scratch?

  • @MrTrever1969 · 1 year ago

    Will this work on Debian?

  • @execration_texts · 1 year ago

    Nice shirt

  • @alfarahat · 10 months ago

    Do you have the same deployment for Ubuntu or Debian?

  • @johnhaight6142 · 2 years ago

    Wait though... building is one thing, adding on is another. Maintenance and repair, just in case, should be the priority, perhaps?

  • @Varengard · 2 years ago

    Fuck the marketing team, this is YOUR studio now. Non-negotiable.

  • @whatwhat-777 · 1 year ago

    Where the hell is the second part? I want it.

  • @Carlos-Rodrigues · 2 years ago · +2

    It took me many nights to install 4 servers one by one by hand with Ceph. Now you do this in a video that takes no longer than 13 minutes... Am I too old?

  • @TrifonKalchev · 1 year ago

    What's the default password for the web interface of Ceph Deploy?

    • @TrifonKalchev · 1 year ago

      I figured it out, you can ignore that post.

  • @johnhaight6142 · 2 years ago

    You get into the quantum matrix? I'm going to stay retired.

  • @tekknokrat · 2 years ago · +1

    If Ansible inventory management is such a tedious task for you, why don't you invest a little time into dynamic inventories or the add_host module? You can get rid of all this inventory management when you make the classification on the machines themselves, using the daughterboard or some characteristics of the hardware. Also, I would ship this boring manual pinging as part of your playbook 😉
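
    As this comment suggests, a static inventory can be replaced by a dynamic one: Ansible accepts any executable that emits the expected JSON when called with --list. A minimal sketch under that convention (the group name and hostnames below are placeholders):

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory script: prints inventory JSON on --list."""
import json
import sys

def inventory():
    # A real script would classify hosts here by probing hardware characteristics.
    return {
        "storage_nodes": {
            "hosts": ["osd1", "osd2", "osd3"],
            "vars": {"ansible_user": "root"},
        },
        "_meta": {"hostvars": {}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(inventory()))
    else:
        print(json.dumps({}))  # per-host lookups are covered by _meta above
```

    Pointed at with `ansible-playbook -i inventory.py site.yml`, this removes the need to hand-maintain a hosts file.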

  • @steveo6023 · 2 years ago · +2

    I don't get why they make Ceph so complicated. I mean, come on: an rpm/deb package for each service (osd, mon…) and one simple config file in /etc/ceph (osd.conf, mon.conf…) should be enough. But all this cephadm/ceph-deploy madness is just unnecessary.

  • @ArthurOnline · 2 years ago

    Does not seem to work; error: nothing provides python3-dataclasses needed by cockpit-ceph-deploy-1.0.2-2.el8.noarch

    • @45Drives · 2 years ago

      Hey Arthur, this looks like it's just a dependency/repo issue. Which OS are you trying to run on? We only officially support Ubuntu and Rocky at this time. If you are using Rocky or Ubuntu, just make sure that your packages are all up to date. If you are still running into issues, feel free to reach out to us at info@45drives.com and we can get you in touch with support. Thanks!