How can this be performant and efficient? So many (slower) network connections between servers (compared to SATA and M.2/NVMe), and so many different I/O speeds. It's magic
Hmm, I have just quickly watched the video so I may have missed it, but my experience with Ceph in Proxmox was that it used a rather large amount of memory (not like ZFS, though), and while it worked on a single 1G NIC (three-node HP EliteDesk cluster), I have seen many arguments against this approach. They claimed that if a significant amount of data is being written to the Ceph pool, it may not be able to keep up over the 1G NIC, so if you suffer a power outage, you may suffer data loss. In addition, Ceph will wear out consumer storage devices rather fast due to the large number of read/write operations.
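For anyone weighing the 1G NIC concern above, a rough back-of-envelope estimate (approximations, not measurements, and ignoring protocol overhead):

```shell
# 1 Gbit/s is about 125 MB/s of raw line rate. With Ceph's default size=3
# replication, the primary OSD re-sends each write to two peers, often over
# the same link, so a pessimistic estimate divides the ceiling by three.
echo $(( 1000 / 8 ))      # line rate in MB/s
echo $(( 1000 / 8 / 3 ))  # rough client write ceiling in MB/s with size=3
```

That is why 10G (or at least a dedicated replication network) is the usual recommendation for anything write-heavy.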
Hey Chuck! Thanks for showing open-source projects like CEPH, which is amazing for clustering storage. I’m curious if you've come across any open-source projects that focus on clustering AI-required resources, similar to how CEPH handles storage? I think it could be smart and save cost in local compute resources. Would love to hear your thoughts or if you know of any projects in this space! Great work!
It is always a pleasure to watch your videos! This is why I subscribed a long time ago... I have always been a fan of ZFS. Now you are showing something totally new. I am not sure if the speed of CephFS is as good as ZFS, due to the fact that Ceph will mix/match any storage devices. As you say, USB devices, IDE devices, SATA devices, etc. I would think it all comes down to the lowest-speed device forcing the pace? Also, as you need a full device to pass to Ceph, it will unlikely work with some of the VPSes around (like Oracle free tier, Azure, Amazon, and other retailers like Contabo, etc.)? Unless it is possible to create file-backed block devices and bring those in as devices? Is this possible? Then set them up via their IP addresses? Can we do that as well? Please don't hesitate to make more videos on this subject, as I believe this is really big. Goodbye UNRAID, welcome CEPH!
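On the file-backed block device question in the comment above: Linux loop devices can do exactly this, though it is a lab trick rather than anything supported for production Ceph (the filename and size here are made up for illustration):

```shell
# Create a sparse 10 GiB file to act as the backing "disk"
truncate -s 10G ceph-osd-0.img

# Attach it as a block device (needs root); losetup prints the device name,
# e.g. /dev/loop0, which could then be handed to Ceph like any other disk:
#   sudo losetup -f --show ceph-osd-0.img
#   sudo ceph orch daemon add osd "$(hostname):/dev/loop0"

ls -s ceph-osd-0.img   # sparse file: allocated blocks stay near zero
```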
That Memory Reboot hits hard. Reminds me of my childhood, where I would try to make things work with each other when clearly they were not meant to. Even though I never could, I at least tried.
Great Ceph concepts explanation. So good that it also highlights the overwhelming complexity of such a system. Personally, I tried it for a couple of years (manually deployed and with Rook in Kubernetes) and found it painfully hard to debug issues, and lost data. It is quite slow and it hurts to manage in production. Cryptic errors, rabbit holes and whatnot. Granted, it was more than 5 years ago. Also played around with other distributed storage systems (Gluster, OpenEBS/Mayastor, Linstor, MooseFS), all very, very complicated to set up and maintain. This forced me to reduce my need for this type of storage and redesign the way I'm storing data (a combination of databases and S3/MinIO). And I'm waiting for SeaweedFS to become more stable as a product.
Hey Network Chuck, I'm a cybersecurity tester, and I have to say your videos are spot on. It's great to see someone spreading accurate information, unlike many YouTubers nowadays who often share misleading content. I really appreciate your in-depth content on cybersecurity. I have a couple of questions for you: Have you ever considered making a video on how to reverse the connection to a scammer's PC? I know it might be outside your usual scope, but I'm curious about your thoughts on this. Also, what is your operating system of choice for cybersecurity work? Mine is Linux.
Learn the skills to make IT your job: 30% off your first month/first year of any ITPro personal plan with Code “CHUCK30” - ntck.co/itprotv
NetworkChuck is building a FrankeNAS using CEPH, an open-source software-defined storage system. Learn how to turn a pile of old hardware into a powerful, scalable storage cluster that rivals enterprise solutions. This video covers Ceph architecture, installation, and configuration, demonstrating how to create a flexible, fault-tolerant storage system using a mix of devices. Whether you're a home lab enthusiast or IT professional, discover the power of decentralized storage and how Ceph can revolutionize your data management approach.
Commands/Walkthrough: ntck.co/403docs
🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
****00:00**** - Intro: the FrankeNAS
****00:45**** - Why you NEED CEPH
****04:45**** - How CEPH works
****19:55**** - TUTORIAL - CEPH Cluster Setup
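The tutorial section follows the cephadm workflow; as a hedged outline only (hostnames and IPs below are placeholders, and the exact commands used in the video are at ntck.co/403docs):

```shell
# On the first node: bootstrap a new cluster with one monitor + dashboard
cephadm bootstrap --mon-ip 192.168.1.10

# Add the other FrankeNAS nodes to the cluster
ceph orch host add node2 192.168.1.11
ceph orch host add node3 192.168.1.12

# Turn every empty, unused disk on every host into an OSD
ceph orch apply osd --all-available-devices

# Check cluster health and watch the OSDs come up
ceph -s
ceph osd tree
```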
👍
Me [Less content, but inspired by you Chuck. Thanks. Hope you read this cos a daemon splurged it :) keep up all the hard work]
So I have an M2 Mac mini collecting dust and I need a NAS. So can I use it to make a NAS?
@@Banzir_Ahmmed_Raj
Absolutely! The M2 Mac mini is actually perfect for a NAS. Use something like Plex or TrueNAS. Go digging bro / brotjie / brodess
100%
The M2 Mac mini is perfect for a NAS. Use software like Plex or TrueNAS for your setup. Do some research yeah
Dude... I was recently learning Proxmox and was like... damn, this Ceph documentation is so complicated to wrap my head around. BOOM, NetworkChuck to the rescue.
It took me a while to really wrap my head around ceph
I’m just sitting here this morning in bed like “I really need a new NAS” and network chuck has convinced me he’s hacked into my brain and he’s like “I got you man, don’t worry. Video inbound now.”
For me it was different. I didn’t need a storage solution, now I really need a CEPH cluster
I was looking for storage solution and now I can utilize my crapware, my current NAS likes to fail
Same been thinking about getting a NAS for a few weeks now.
Ceph looks fantastic for enterprise storage, or situations where you have multiple servers with lots of storage in them and lots of people who need to access that storage. But up to a few hundred TB, I would strongly recommend ZFS (RAIDZ2/3), preferably under TrueNAS Scale, if you just want a big NAS that's easy to manage and fault-tolerant for pretty cheap.
I was thinking I needed a microsd card for my future pi 5 k3s cluster. Now I'm tempted to make that a USB SSD for CEPH purposes.
Yes please, part two. I have had the CEPH website open for several weeks now, and here it is, a video from NetworkChuck about CEPH. Thanks Chuck, love your videos.
I am currently a trainee on the job; just yesterday my CEO wanted to explain CEPH to me. I swear today I will walk into the office with a very big smile and knowledge of Ceph 😅
Huge W!!!
Lol
Do you work in a coffee shop? What kind of serious company uses this?
@@jacob1000 it's about the technology that comes with CEPH; it brings redundancy for your infrastructure. And I work in IT; our company uses CEPH to host data for hospitals all around Germany, with 800TB of total storage.
@@janbredow2662 I'd use a way more reliable storage platform for that! Cheaping out on license costs?
Ceph comes from cephalopod: creatures that can repair/heal themselves, just like one of the main pillars Ceph was founded on.
That is why versions are named after different cephalopods
Source - 12 years running Ceph in production ;)
I am curious how redundant it could be
Seems like a really safe bet
@@Foxy10-b6n it can be as redundant as you want it to be. Or as little as you want. I've done levels from disk redundancy to whole-DC redundancy on Ceph. It can be made to be aware of all of this.
so where's the alpha ceph
I suppose it's also a play on the many arms cephalopods have.
Lovely to see more videos about Ceph. I found it when we ran out of space on our production environment. I had dabbled with it a bit at home, but I had to put it into production like yesterday, so that was a harrowing week at work. Now, 2 years later, we have learned a lot and it's still rock solid.
Hey everyone, @DanielPersson makes great content on Ceph. I am a long-time subscriber of his; make sure you subscribe to his channel.
He also has done many tutorials on installing ceph on his channel.
I actually stumbled upon your videos when I was trying to set up storage for my k8s cluster. Little did I know, it is deprecated, and my architecture struggled with it. Great videos though!
Hey Chuck, this is by far THE BEST video I have seen from your channel in the last 3 months. GREAT video! Totally innovative content. Make this a series please if you can. Follow-up this video with the CEPH tiering tutorial (pool for SSDs and pool for HDDs). Everything related to CEPH is SO COOL! Pump as much content as you can about CEPH. Cheers and thanks!
[ceph]
a part 2 = yes
include proxmox = yes
coffee = yes
When you said "Ready, set, go!" at the 34 minute mark, you missed the chance to do a Dad joke with "Ready, Ceph, GO!"
I'm totally down for a second part/follow up on this video! Thank you for the great explanation! I'm also looking forward to seeing how this could be setup with Ansible and connecting Proxmox with Ceph + some best practices!
Make a part 2 and show how to separate the SSD and HD drives like you mentioned at the beginning of your video Please =)
Yes please
Yes please. I've used CEPH in the past on my previous ProxMox setup. Definitely like to know about setting up the tiers.
Concur, had experimented with it as well about 6 months ago for about 6 months and reverted back to NFS shares for proxmox cluster and just did scheduled rsyncs between my unraid NASes.
This video made me fall in love with Ceph, seriously. Part 2 please, with tiering, Ansible integration and Proxmox. There aren't a lot of tutorials on the web, and you explain everything in a simple and approachable way! I would even pay to see part 2 if needed. Sad to not see this video pop off.
Yes, Part 2 please! Or make it a series?
Would like to see how to do all this with Ansible, include Proxmox, and separate SSD and HDD.
Btw, awesome video! The density of information per minute was pretty high, but you really explained this complex topic as easy as I can imagine, without skipping any steps. Thank you for that!
Wow... This video is particularly useful for anyone considering investing in a NAS for personal or small business use. Chuck's hands-on approach and detailed explanations make it easy to understand the benefits and potential challenges of using a NAS. His enthusiasm and practical advice can help viewers make informed decisions about their storage solutions. If you're new to NAS or looking to upgrade your current setup, this video is a great resource. Thanks a lot Chuck!
I’ve always wanted to make a storage cluster out of my older unused equipment. This is perfect! Please do a part 2!
Cool video! I will definitely return to it once I add a 2nd node to my infrastructure.
When showing things like OSDs being added, you can use "watch !!" to rerun the last command once every 2 seconds (adjustable with the -n flag), instead of repeatedly hitting "up" + "enter". Changes are often more visible, because watch draws the output on a single screen from the top down, so you don't have weirdness due to the terminal scrolling. The optional "-d" flag also highlights differences between command runs.
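The tip reads like this in practice (using `ceph osd tree` as the example command being repeated):

```shell
# `!!` is interactive shell history expansion for "the previous command",
# so after running `ceph osd tree` once, `watch -n 2 -d !!` expands to:
watch -n 2 -d 'ceph osd tree'   # redraw every 2s, highlight what changed
```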
I've been procrastinating about diving into CEPH for a couple of years. Thanks to this video, I'm finally going to do it. (Like Kubernetes, similar time procrastinating, but NetworkChuck to the rescue again with the Kubernetes video!)
Amidst all the storage talk, I heard "storge" once, and now I can only hear it like that... thanks NetworkChuck!
Oh gd it boo this commenter
... I just started watching and now I'm left with 'ikea chuck,' and his new Storj collection
This guy can sell ice to penguins.
As a penguin I could really use a guy, It's 40 degrees here today and I'm overheating.
To be honest, as global warming continues to worsen selling ice to penguins could be a pretty good business model if so many penguins weren't broke lazy bludgers. It's not racist for me to say that because I'm a penguin.
I've been procrastinating about diving into CEPH for a couple of years. Thanks to this video, I'm finally going to do it. (Like Kubernetes, similar time procrastinating, but NetworkChuck to the rescue again with the Kubernetes video! Hmm... Maybe I need a NetworkChuck video about tidying up my home office to get that done too!)
Such a great individual! Everything is explained and demonstrated clearly and effectively. I find it fantastic for getting started with various topics, tools, and software.
A good thing for organizing all of our storage; we're waiting for part 2. Good job, bro.
Great Video and well explained, as always 👍 would love to see a follow up going deeper covering Performance & Tuning, Snapshots / Replication, Disaster Scenarios etc. Thank you 👍
Excellent job explaining this. I've been needing to figure it out, but have been procrastinating until I had a full weekend to sort it out. Now, in 3 hours, I have it sorted in my head and a 5-node cluster spooled up in my lab.
Chuck, you rock!!! I have never seen a Ceph explanation plus hands-on that is this easy to understand. Great job!!! Please keep going.
Great stuff, please do more on Ceph. Can't believe I watched more than 40 minutes in one shot.
Was I the only one to notice his alien eye at 3:24?
What the heck, man. Chuck is an alien. Well, that explains a lot about the coffee and the weird mix of interests.
I was about to comment the same thing! Was that edited?
@@thanEay Good question! I noticed too!
No bro, this is ai. he is dead
yes HQHAHAHAHHAHA
who then looked to the side and winked? I did
Thank you NetworkChuck, I have been trying to understand CEPH and you made it easy for this old man.
I have had a mix of unraid, truenas, proprietary OSs, but I was looking for something like CEPH. I would def like a more in depth video on CEPH commands and making things work together.
The best thing about this video for me: you're a Dad of 6 daughters. I love that. Huge respect.
I use Ceph at work; I honestly didn't know you could map Ceph directly in Linux remotely. I've always used SMB/NFS.
So, to pay it forward, one or two things I've learned over the years:
1. Power Loss Protection, or PLP, is a game changer on SSDs. TL;DR: it allows the SSD to "say" it finished writing while the data is still in its cache, safely, because onboard capacitors guarantee the cache gets flushed if power is lost. This has a massive effect on speed.
2. If you set it up properly, you can have a "third" speed tier. So not just SSD/HDD, but the ability to WRITE to SSD and STORE the data on HDD, much like a read/write cache in normal ZFS.
Thank you @networkchuck
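On the SSD/HDD tier point above, one hedged sketch using Ceph device classes (the pool and rule names are made up for illustration; Ceph auto-detects each OSD's `ssd`/`hdd` class):

```shell
# CRUSH rules restricted to one device class each
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated slow-rule default host hdd

# Pools pinned to those rules: writes to fast-pool land only on SSD OSDs
ceph osd pool create fast-pool 64 64 replicated fast-rule
ceph osd pool create slow-pool 64 64 replicated slow-rule
```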
Hey Network Chuck,
thanks for the awesome video! Your content is always impressive and super informative. Keep it up!
yes part 2....ceph looks pretty awesome
I just watched this video and I'm quite new to self-hosting, primarily in Linux. I would like to give this a try; it looks complicated, but you made it look easy. Thank you Chuck!!!
I'm a computer/tech guy in general; computers and tech got me into networking and IT, so this is basically my dream setup right here, with little pieces of tech that are awesome. I want this so bad 'cause it looks so nice. 😊
Mind blown. Now I have a use for those old machines I still have. Thank you.
Would love a Ceph and Proxmox video.
This is such an amazing video and exactly what I've been working toward for quite some time now.
Thanks for always explaining things so well Chuck, great job as always!
Sometimes, I could absolutely swear you’re snooping on my search history - it’s absolutely uncanny how frequently you put out a video on something I’ve been actively researching. 😂 (Great minds think alike I guess.)
Yes plz - More of this. I am learning so much.
Please do a follow-up video explaining the bits you talked about at the end of the video: Proxmox, block, etc. Thanks!
Brilliant, finally I have some understanding of Ceph. You should definitely make a part 2, thank you.
Do a part 2, I truly love this kind of stuff...
I never really understood CEPH until now. Thanks for the great tutorial!
I have no IDEA what you are doing, but I really enjoy your videos!!
Ceph is one of the things that made me realize the beauty of Kubernetes operators. One of the OpenShift Data Foundation's upstreams is Rook Ceph. You basically install the operator, then just define which nodes and which drives to use on those nodes and it does the rest for you. However... It will not be as performant as Ceph itself because it's not as tweakable. For high performance clusters usually you use Rook to connect to an external Ceph cluster.
This is the type of tutorial I have been waiting for. I've been wanting to use Ceph for my home lab so my storage has infinite growth and I don't have to build a new server every couple of years. I would like to see how to configure a share using erasure coding, and setting up NFS and iSCSI shares.
I've been trying to work out what to do for a NAS for a while. At the moment I just have a 6TB USB drive plugged into my Proxmox server and forwarded to where it needs to be. I've been looking at TrueNAS and retail NAS drives and counting the cost... Now I'm just going to pull out my Raspberry Pis, maybe get a couple more mini PCs and/or a Zima board, and go nuts from there :)
Thank you
please make a proxmox version of this and ceph!!
Thank you for this in depth video. Ceph seems very nice.
You are constantly a source of knowledge and inspiration.
wow, awesome! we need more ceph videos from you! :)
Thank you so much for sharing!
Yes second part please.
Yes, integration with proxmox would be very helpful.
Best regards from MX.
I didn't know how powerful Ceph can be!
Thanks!
Great explanation! I have a hybrid HA cluster :) For example, InfluxDB running as a pod in k3s, pinned to a specific node which is a VM on an HA Proxmox cluster. The pod uses a PVC/PV that points to a mounted directory, which in turn is a virtual disk in Proxmox on a Ceph storage pool, along with a k3s LB svc to reference the pod. So, in theory, I can access the DB using the same IP externally, or internally by svc name, regardless of which of the 3 Proxmox nodes it's running on.
Really looking forward to a part 2. As a GlusterFS user, I'm especially curious about things like: how are failures handled, can you also shrink the number of available OSDs later, and how hard is it to recover files if the cluster really fails?
This was one of best things I could've seen in my timeline
hell yeah, more on ceph! great video - thank you!
💯 Great Ceph content! More please! I implemented Ceph many years ago, way before it supported containers. 😎
Awesome, love that you used cephadm and not ceph-ansible or Proxmox to deploy Ceph. It's so much easier and better supported!
More videos on this would be great.
Maybe touching some use cases for the average home user?
I’ve literally been looking for this exact type of software!
I’d love to see that network setup you talked about.
Well, I know what I'm going to be doing for the next couple of years. Mind blown.
This was awesome and has me thinking about reconfiguring my current setup. Can you go over in a future video whether it's possible to set up docker containers for Plex, etc., and how you would swap a defective hard drive?
Teaches practical skills better than most college professors.
My brain is chewing your explanation Chuck🙈, very informative vid! Thanks!
Very cool, I see the obvious data uses for it. Just FYI, these corps use it:
Geico
Comcast
Ford Motors
SpaceX
Openstack
AMD
I'd love it if you did a full video explaining how Ceph works 😂 I find it interesting.
Yea just do more videos about ceph in general please
ceph might be the flavor of the month
You still need to manage the underlying operating system on those nodes. I'd think you would go for a used disk shelf or storage server. You can find pretty cheap used storage arrays for up to 36/48 drives.
I'd love to see a follow up video on using all the same hardware for services that utilize the storage. Webserver on one, media on another, etc. or are all these hosts now locked down to being only storage?
Part 2, part 3, more, more, more! This is just what I need for my setup at home. I enjoy the content.
This is exactly what I was looking for, thanks!
P.s.: I want more pleassseeeeee !
Very useful. Would love to see more vids on Ceph. Using Ceph with Proxmox would be great.
Thanks, please make a part 2: adding more storage, removing storage, handling a hard drive failure, etc.
Another awesome video! Please make part 2. What about security, does it support snapshots?
Love CEPH! Need to know more about performance tweaks.
I never comment, but this was incredible. Great job!
You Need To Make More Content On CEPH! RIGHT NOW!!
Damn, this video is so awesome. I really like the deep-dive parts: how things work and stuff. What would interest me is how exactly Ceph reacts to a failure event, like a server or disk(s) breaking. As I understood it, the NAS is not in RAID with this configuration, so if one disk goes down, am I able to replace it? Or would I just replace it, delete the old drive from the cluster, and add the new one?
Does Ceph go immediately into emergency mode when a disk goes down, step up its replication game, and bring the replicas back up to 3 again?
Also, the additional setup for the professional system would be interesting, with the network connections specified like you had in the overview.
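A toy simulation can illustrate the recovery behavior asked about here. This is a hypothetical sketch using rendezvous hashing, not Ceph's actual CRUSH implementation, but it shows the same qualitative behavior: when an OSD disappears, only the objects that had a replica on it get a new placement, and every object ends up back at 3 replicas.

```python
# Hypothetical sketch (NOT Ceph's real CRUSH code): a rendezvous-hash
# toy model of what happens to replica placement when an OSD dies.
import hashlib


def score(obj: str, osd: int) -> int:
    # Deterministic pseudo-random score per (object, OSD) pair.
    return int.from_bytes(hashlib.sha256(f"{obj}:{osd}".encode()).digest()[:8], "big")


def place(obj: str, osds: set, replicas: int = 3) -> list:
    # Pick the `replicas` highest-scoring OSDs that are still up.
    return sorted(osds, key=lambda o: score(obj, o), reverse=True)[:replicas]


osds = set(range(6))                       # six healthy OSDs
objs = [f"obj-{i}" for i in range(1000)]
before = {o: place(o, osds) for o in objs}

osds.discard(3)                            # OSD 3 dies
after = {o: place(o, osds) for o in objs}

# Every object is back at 3 replicas, and only the objects that had a
# replica on OSD 3 were touched; the rest keep their exact mapping.
moved = [o for o in objs if before[o] != after[o]]
print(f"{len(moved)} of {len(objs)} objects re-replicated")
```

In real Ceph the equivalent flow is automatic: the monitors mark the OSD down, and after a grace period the cluster backfills the missing replicas onto surviving OSDs without operator intervention.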
I would really love a second or third video.
keep it up and thank you
This is cool! Thanks Chuck!
Hey man... you saved me a lot of money. This is exactly what I needed! Thank you very much.
We converted our old Dell servers that have 8+ bays of disks into Ceph nodes. Suffice to say, the results were so amazing that we are scrapping our Synology expansion.
So cool!!! I can't wait to be back home and try this! Can you please please PLEASE make a part 2 with all the rest of the goodies???? Like integrating with Proxmox, for instance?
Cheers
If you think this is hard, go and care for 6 daughters. @NetworkChuck has my biggest respect for this!
How can this be performant and efficient? So many (slower) network connections between servers (compared to SATA and M.2/NVMe), so many different I/O speeds. It's magic.
This was so interesting, man. It's got me wanting to try this out just for my first Proxmox rig too.
Hmm, I have only quickly watched the video, so I may have missed it, but my experience with Ceph in Proxmox was that it used a rather big amount of memory (not as much as ZFS, though), and while it worked on a single 1G NIC (a three-node HP EliteDesk cluster), I have seen many arguments advising against this approach. They claimed that if you have a significant amount of data being written to the Ceph pool, it may not be able to keep up over a 1G NIC, so if you suffer a power outage, you may suffer data loss. In addition, Ceph will wear out consumer storage devices rather fast due to the large amount of read/write operations.
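The 1 GbE concern can be sanity-checked with back-of-the-envelope arithmetic (assumed numbers, not benchmarks): in a replicated pool of size 3, each node's NIC receives roughly the client payload plus two replica copies forwarded between primaries, so a single shared gigabit link caps sustained client writes far below what the disks could do.

```python
# Rough ceiling on sustained client writes when public and cluster
# traffic share one link. Assumed numbers, not measured benchmarks:
# with replication size 3, each byte of client payload arrives once
# from the client and ~twice more as replica copies, so the link's
# receive side carries ~3x the client write rate.

def max_client_write_mbps(link_mbit: float, size: int, efficiency: float = 0.94) -> float:
    """Approximate per-node client write ceiling in MB/s.

    link_mbit  -- raw link speed in megabits/s (e.g. 1000 for 1 GbE)
    size       -- pool replication size (copies of each object)
    efficiency -- fraction of raw bandwidth left after protocol overhead
    """
    usable_mbytes = link_mbit * efficiency / 8   # Mbit/s -> usable MB/s
    return usable_mbytes / size                  # payload + (size-1) replica copies

print(f"1 GbE,  size=3: ~{max_client_write_mbps(1000, 3):.0f} MB/s")
print(f"10 GbE, size=3: ~{max_client_write_mbps(10000, 3):.0f} MB/s")
```

This is why the Ceph docs recommend a separate, faster cluster network for replication traffic: the gigabit number lands in the tens of MB/s, which matches the "can't keep up" experiences people report.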
Really enjoyed this video. Cool stuff!
It'd be pretty sweet to show how to set up Ceph with Proxmox, but that's for both education and practical use on my end as well.
Hey Chuck! Thanks for showing open-source projects like CEPH, which is amazing for clustering storage. I’m curious if you've come across any open-source projects that focus on clustering AI-required resources, similar to how CEPH handles storage? I think it could be smart and save cost in local compute resources. Would love to hear your thoughts or if you know of any projects in this space! Great work!
keyboard's sound ❤🔥❤🔥
That’s NASty! 🤣
It is always a pleasure to watch your videos! This is why I subscribed a long time ago...
I have always been a fan of ZFS. Now you are showing something totally new. I am not sure if the speed of CephFS is as good as ZFS, given that Ceph will mix and match any storage devices. As you say: USB devices, IDE devices, SATA devices, etc. I would think that it all comes down to the lowest-speed device, which will limit the speed?
Also, as you need to pass a full device to Ceph, it will likely not work with some of the VPSes around (like Oracle free tier, Azure, Amazon, and other providers like Contabo, etc.)? Unless it is possible to create file-backed block devices and present these as devices? Is this possible? Then set them up via their IP addresses? Can we do that as well?
Please don't hesitate to make more videos on this subject, as I believe this is really big. Goodbye UNRAID, welcome Ceph!
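On the "lowest-speed device" question: Ceph does not stripe every write across all devices the way RAID does. CRUSH places data in proportion to each OSD's weight (typically its capacity), so a small or slow disk holds proportionally less data; only the writes that actually land on it are gated by its speed. A toy weighted rendezvous-hash model (not real CRUSH, with made-up device names) shows the proportional spread:

```python
# Toy model (NOT real CRUSH): weighted rendezvous hashing, showing
# that placement spreads objects in proportion to each OSD's weight,
# so a 1 TB USB disk gets ~1/7 of the data, not an equal share.
import hashlib
import math

OSDS = {"1tb-usb": 1.0, "2tb-sata": 2.0, "4tb-nvme": 4.0}   # weight ~ capacity (hypothetical)


def pick_primary(obj: str) -> str:
    def key(osd: str) -> float:
        digest = hashlib.sha256(f"{obj}:{osd}".encode()).digest()
        h = (int.from_bytes(digest[:8], "big") + 0.5) / 2**64   # uniform in (0, 1)
        return OSDS[osd] / -math.log(h)                         # weighted HRW score
    return max(OSDS, key=key)


counts = {osd: 0 for osd in OSDS}
for i in range(7000):
    counts[pick_primary(f"obj-{i}")] += 1
print(counts)   # roughly proportional to 1 : 2 : 4
```

So mixed hardware mostly costs you uneven latency rather than collapsing the pool to the slowest disk, though a replicated write still waits for the slowest OSD in that object's acting set.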
That Memory Reboot hits hard, reminding me of my childhood, where I would try to make things work with each other when clearly they were not meant to. Even though I never could, I at least tried.
Great Ceph concepts explanation. So good that also highlights the overwhelming complexity of such a system. Personally I tried it for a couple of years (manually deployed and with Rook in Kubernetes) and found it painfully hard to debug issues, lost data. It is quite slow and it hurts to manage it in production. Cryptic errors, rabbit holes and whatnot. Granted, it was more than 5 years ago.
Also played around with other distributed storage systems (Gluster, OpenEBS/Mayastor, Linstor, MooseFS); all very, very complicated to set up and maintain. This forced me to reduce my need for this type of storage and redesign the way I'm storing data (a combination of databases and S3/MinIO). And I'm waiting for SeaweedFS to become more stable as a product.
Looking forward to the Hack the Box videos 😊
Can you do a part 2 on connecting a Mac and an iPad?
Hey Network Chuck,
I'm a cybersecurity tester, and I have to say your videos are spot on. It's great to see someone spreading accurate information, unlike many YouTubers nowadays who often share misleading content. I really appreciate your in-depth content on cybersecurity.
I have a couple of questions for you: Have you ever considered making a video on how to reverse the connection to a scammer's PC? I know it might be outside your usual scope, but I'm curious about your thoughts on this. Also, what is your operating system of choice for cybersecurity work? Mine is Linux.
As a pro, maybe you would be better able to make a vid on that topic???
Please make part 2 😊 - Do you know GPFS?