Ceph Tutorials - Install Ceph Cluster from Scratch
- Published: 7 Aug 2024
- Hi everyone, this video explains how to set up a Ceph cluster (mon, mgr, osd & mds) manually from scratch.
Note:
This step is optional. (in the video at 2:09: /etc/ceph/ceph.conf)
The configuration in the video already supports cephx authentication.
To enable cephx auth in the cluster, change the three values below:
auth cluster required = none
auth service required = none
auth client required = none
To:
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
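For reference, a minimal sketch of what the relevant [global] section of /etc/ceph/ceph.conf might look like with cephx enabled. The fsid and mon host values here are placeholders, not values from the video; use your own cluster's fsid and monitor addresses.

```ini
[global]
# placeholder values -- substitute your own cluster's fsid and monitor list
fsid = 00000000-0000-0000-0000-000000000000
mon host = 10.0.0.11,10.0.0.12,10.0.0.13

# cephx enabled, as described in the note above
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
```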
Can you please provide links to the commands or any documentation? It would be really helpful.
thank you so much you saved my life
Tested on Ubuntu 20.04 and Ceph Octopus; apart from the package installation, everything else works.
I typed every single command; it helps me understand things even better.
Thank you for this tutorial, you are a hero
Thank you for watching this video.
Great tutorial, Dimzrio. Could you please make a tutorial video for manual RGW deployment (Nautilus) as well?
Which distribution are you using on these nodes? Also, I would like to create a CRUD operation service for my Ceph cluster in Python 3. Is there anything you can refer me to, so that I can check it out? Thanks!
Would have been nice if you started by explaining what storage was on each node and what you were trying to achieve.
Thank you Dimzrio.
I know that in v16.2 you can mount Ceph storage on Windows, but can the Ceph server itself be installed natively on Windows?
Thanks
May I know the hardware specs you are using for the Ceph cluster in this tutorial?
Hey man, this video has been a lifesaver.
Just wondering: if I use a custom name for the cluster instead of the default name (i.e., ceph), will it still work?
I'm trying to configure an RBD mirror setup, and if I'm right, it requires distinct cluster names.
Your help will be much appreciated.
Thanks in advance.
Thanks for watching this video.
Did you rename the Ceph configuration to your cluster name? For example, the default ceph.conf to ceph01.conf: that would mean your cluster name is ceph01.
Refer to manual install doc:
"For example, when you run multiple clusters in a multisite configuration, the cluster name (e.g., us-west, us-east) identifies the cluster for the current CLI session. Note: To identify the cluster name on the command line interface, specify the Ceph configuration file with the cluster name (e.g., ceph.conf, us-west.conf, us-east.conf, etc.). Also see CLI usage (ceph --cluster {cluster-name})."
docs.ceph.com/docs/master/install/manual-deployment/
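As a rough illustration of the point in the doc quoted above (the us-west/us-east names are the documentation's examples; ceph01 here is just a stand-in for a custom cluster name, and these commands assume a running cluster):

```shell
# The config file name sets the cluster name:
# /etc/ceph/ceph01.conf -> cluster "ceph01"

# Pass the cluster name explicitly so the CLI reads ceph01.conf
# instead of the default ceph.conf:
ceph --cluster ceph01 -s

# Equivalent: point directly at the configuration file
ceph --conf /etc/ceph/ceph01.conf -s
```

Note that the daemons also need to be started with the matching cluster name, otherwise the CLI will fail to connect even though the file is renamed.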
@@dimzrio yes, I tried renaming my ceph.conf file to my custom cluster name, but after this step I get an "Unable to connect to cluster" error.
excellent!
thank u
very good tutorial, thanks for that.
Would you mind telling me which multi-user terminal that is?
What program are you using to simultaneously control multiple ssh sessions?
iTerm terminal
Dimzrio Tutorials thank you!
Your mds did not actually start; you should fix the keyring ownership: chown ceph:ceph /var/lib/ceph/mds/ceph-node1/keyring
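A sketch of the fix this comment describes, assuming the mds id is node1 as in the video and that systemd manages the daemon:

```shell
# Ceph daemons run as the "ceph" user; a root-owned keyring
# makes the MDS unable to read its key and fail to start.
chown ceph:ceph /var/lib/ceph/mds/ceph-node1/keyring

# Restart the daemon and confirm it comes up
systemctl restart ceph-mds@node1
systemctl status ceph-mds@node1
```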
Sorry, I am a Ceph beginner; I cannot find "nano /etc/yum.repos.d/ceph.repo".
Create one yourself; it is missing by default.
I decided to take up Ceph again. At the moment, on CentOS 7, I get a timeout during "ceph auth get-or-create mgr.ceph3"; "ceph -s" does not respond, and "ceph -h" shows the help and then ends with "timed out".
daemonperf {type.id | path} list | ls [stat-pats] [priority]
Get selected perf stats from daemon / admin socket
Optional shell-glob comma-delim match string stat-pats
Optional selection priority (can abbreviate name):
critical, interesting, useful, noninteresting, debug
List shows a table of all available stats
Run <count> times (default forever), once per <interval> seconds (default 1)
timed out
[root@ceph3 ~]# ceph -s ^C
ubuntu 18.04.4 - Luminous - health good ... )
I had to delete the last part of the ceph user's line in /etc/passwd and use the full path /usr/bin/ceph-mon in the command. I also had to move ceph.mon.keyring out of the root user's /tmp folder, put it somewhere else like /home, and give the ceph user full permission and ownership of it. Then run it like this: sudo -u ceph /usr/bin/ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /home/ceph.mon.keyring
I think there is something wrong with the ceph user creation.
You might also need to add the ceph user to sudoers.
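The workaround above, written out as a command sequence. The paths and the node1 id follow the comment; treat this as an example of the commenter's fix, not a verified procedure.

```shell
# Move the mon keyring out of root's /tmp and hand it to the ceph user
mv /tmp/ceph.mon.keyring /home/ceph.mon.keyring
chown ceph:ceph /home/ceph.mon.keyring

# Initialize the monitor store as the ceph user,
# using the full binary path and the relocated keyring
sudo -u ceph /usr/bin/ceph-mon --mkfs -i node1 \
    --monmap /tmp/monmap --keyring /home/ceph.mon.keyring
```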
what's the terminal you used in this video? thanks
I'd like to know it too.
@@fernandodellatorre4709 I think it’s iTerm
@@ggyun oh thanks. I think it works only for MacOS, isn't it? I run Linux :(
@@fernandodellatorre4709 yeah… sorry for ur loss…
@@ggyun lol nevermind. Thanks anyway! Cheers
Why don't you try ceph-deploy?
Yes, I already tried setting up the cluster using ceph-deploy, but it failed for Ceph Nautilus. If you enjoy setting up a cluster with ceph-deploy, it's the same thing. But the purpose of this video is to help us understand, step by step, how to configure a Ceph cluster with a manual deployment.
@@dimzrio why did you write
auth cluster required = none
auth service required = none
auth client required = none
Why do you have authentication set to "none" in the configuration? According to Ceph's documentation, to enable it you need to do all the steps with the keys that you did, but at the end you did not change "none" to "cephx".
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap mgr 'allow *' --cap osd 'allow *' --cap mds 'allow *'
"ENABLING CEPHX
When cephx is enabled, Ceph will look for the keyring in the default search path, which includes /etc/ceph/$cluster.$name.keyring. You can override this location by adding a keyring option in the [global] section of your Ceph configuration file, but this is not recommended."
@@dimzrio it's just that the command there looks a little different; why is that?
ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
Thanks for your advice.
I already noted it in the video description.
@@dimzrio have you thought about creating a video along the lines of "let's say one of the nodes fails; to restore the existing cluster, follow this procedure", or "let's say two nodes fail and one remains active"? This would be very useful, since the order in which services are started matters; the cluster recovery process is no less important than its creation. I know that in the second case it is best to start the servers one at a time, but I would like to see the practice from you. Sorry for the syntax errors; I am writing through Google Translate.
iTerm saves it
Not really a tutorial if all you're doing is playing loud music while just doing it. A tutorial contains dialog on what you are doing and why.
Not a very good video. It doesn't hurt to talk about what you are doing while you are doing it. You will get the whole thing started, but you will have to read the documentation anyway, since you'll have no clue what anything does.
LOL, I watched and still know nothing xD
Waste of my time, and the music sucks.