the FrankeNAS - (Raspberry Pi, Zima Board, Dell Server, Ugreen) // a CEPH Tutorial

  • Published: Jan 31, 2025
  • Science

Comments • 580

  • @NetworkChuck
    @NetworkChuck  6 months ago +33

    Learn the skills to make IT your job: 30% off your first month/first year of any ITPro personal plan with Code “CHUCK30” - ntck.co/itprotv
    NetworkChuck is building a FrankeNAS using CEPH, an open-source software-defined storage system. Learn how to turn a pile of old hardware into a powerful, scalable storage cluster that rivals enterprise solutions. This video covers Ceph architecture, installation, and configuration, demonstrating how to create a flexible, fault-tolerant storage system using a mix of devices. Whether you're a home lab enthusiast or IT professional, discover the power of decentralized storage and how Ceph can revolutionize your data management approach.
    Commands/Walkthrough: ntck.co/403docs
    🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
    00:00 - Intro: the FrankeNAS
    00:45 - Why you NEED CEPH
    04:45 - How CEPH works
    19:55 - TUTORIAL - CEPH Cluster Setup

    • @1bustudios
      @1bustudios 6 months ago

      👍

    • @0nezNzero
      @0nezNzero 6 months ago

      Me [Less content, but inspired by you Chuck. Thanks. Hope you read this cos a daemon splurged it :) keep up all the hard work]

    • @Banzir_Ahmmed_Raj
      @Banzir_Ahmmed_Raj 6 months ago +2

      So I have an M2 Mac mini collecting dust and I need a NAS. So can I use it to make a NAS?

    • @0nezNzero
      @0nezNzero 6 months ago

      @@Banzir_Ahmmed_Raj
      Absolutely! The M2 Mac mini is actually perfect for a NAS. Use something like Plex or TrueNAS. Go digging bro / brotjie / brodess

    • @0nezNzero
      @0nezNzero 6 months ago

      100%
      The M2 Mac mini is perfect for a NAS. Use software like Plex or TrueNAS for your setup. Do some research yeah

  • @mignochrono
    @mignochrono 6 months ago +177

    Dude... I was recently learning Proxmox and was like... damn, this Ceph documentation is so complicated to wrap my head around. BOOM! NetworkChuck to the rescue

    • @NetworkChuck
      @NetworkChuck  6 months ago +38

      It took me a while to really wrap my head around ceph

  • @Kaotix_music
    @Kaotix_music 6 months ago +154

    I’m just sitting here this morning in bed like “I really need a new NAS” and network chuck has convinced me he’s hacked into my brain and he’s like “I got you man, don’t worry. Video inbound now.”

    • @nazzak2093
      @nazzak2093 6 months ago +8

      For me it was different. I didn’t need a storage solution, now I really need a CEPH cluster

    • @rafal9ck817
      @rafal9ck817 6 months ago +1

      I was looking for a storage solution and now I can utilize my crapware; my current NAS likes to fail

    • @robpalomo
      @robpalomo 6 months ago +1

      Same, been thinking about getting a NAS for a few weeks now.

    • @Cpgeekorg
      @Cpgeekorg 6 months ago

      Ceph looks fantastic for enterprise storage or situations where you have multiple servers with lots of storage in them and lots of people who need to access that storage. But up to a few hundred TB, I would strongly recommend ZFS (raidz2/3), preferably under TrueNAS Scale, if you just want a big NAS that's easy to manage and fault tolerant on the cheap.

    • @lua-nya
      @lua-nya 6 months ago

      I was thinking I needed a microsd card for my future pi 5 k3s cluster. Now I'm tempted to make that a USB SSD for CEPH purposes.

  • @CHSgtHondo
    @CHSgtHondo 6 months ago +25

    Yes please, part two. I've had the CEPH website open for several weeks now, and here it is, a video from NetworkChuck about CEPH. Thanks Chuck, love your videos

  • @janbredow2662
    @janbredow2662 6 months ago +26

    I am currently a trainee on the job. Just yesterday my CEO wanted to explain CEPH to me. I swear, today I will walk into the office with a very big smile and knowledge of Ceph 😅

    • @glitchinglive
      @glitchinglive 6 months ago +1

      Huge W!!!

    • @PixelatedError
      @PixelatedError 6 months ago

      Lol

    • @jacob1000
      @jacob1000 6 months ago

      Do you work in a coffee shop? What kind of serious company uses this?

    • @janbredow2662
      @janbredow2662 6 months ago +4

      @@jacob1000 it's about the technology that comes with CEPH; it brings redundancy for your infrastructure. And I work in IT: our company uses CEPH to host data for hospitals all around Germany, with 800TB total storage

    • @jacob1000
      @jacob1000 6 months ago

      @@janbredow2662 I'd use a way more reliable storage platform for that! Cheaping out on license costs?

  • @peeejayz
    @peeejayz 6 months ago +176

    Ceph comes from cephalopod, creatures that can repair/heal themselves, just like one of the main pillars Ceph was founded on.
    That is why versions are named after different cephalopods.
    Source: 12 years running Ceph in production ;)

    • @Foxy10-b6n
      @Foxy10-b6n 6 months ago

      I am curious how redundant it could be

    • @Foxy10-b6n
      @Foxy10-b6n 6 months ago

      Seems like a really safe bet

    • @peeejayz
      @peeejayz 6 months ago +1

      @@Foxy10-b6n it can be as redundant as you want it to be, or as little as you want. I've done levels from disk redundancy to whole-DC redundancy on Ceph. It can be made to be aware of all of this.

    • @nushankodikara
      @nushankodikara 6 months ago +1

      so where's the alpha ceph

    • @am0rpheus
      @am0rpheus 6 months ago

      I suppose it's also a play on the many arms cephalopods have.
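
The "as redundant as you want" knob in this thread comes from Ceph's replication size combined with CRUSH, the algorithm that lets every client compute an object's placement from the cluster map alone, with no central lookup table. A toy Python sketch of the idea, using rendezvous hashing rather than Ceph's actual straw2 buckets (host names are made up):

```python
import hashlib

def draw(obj: str, host: str) -> int:
    # Deterministic pseudo-random score for an (object, host) pair: a toy
    # stand-in for the weighted "straw" each CRUSH bucket item draws.
    return int.from_bytes(hashlib.sha256(f"{obj}:{host}".encode()).digest()[:8], "big")

def place(obj: str, hosts: list[str], size: int = 3) -> list[str]:
    # Replicas go to the `size` highest-scoring hosts. Every node computes
    # the same answer, and adding a host only remaps a fraction of objects.
    return sorted(hosts, key=lambda h: draw(obj, h), reverse=True)[:size]

hosts = ["pi-1", "zima", "dell", "ugreen"]
print(place("backup.tar", hosts))  # three distinct hosts, same on every client
```

Real CRUSH adds per-device weights and a failure-domain hierarchy (disk, host, rack, DC), which is what lets you dial redundancy from "survive a disk" up to "survive a datacenter".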

  • @DanielPersson
    @DanielPersson 6 months ago +11

    Lovely to see more videos about Ceph. I found it when we ran out of space in our production environment. I had dabbled with it a bit at home, but I had to put it into production like yesterday, so that was a harrowing week at work. Now, 2 years later, we have learned a lot and it's still rock solid.

    • @whatwhat-777
      @whatwhat-777 6 months ago +2

      Hey everyone, @DanielPersson makes great content on Ceph. I am a long-time subscriber of his; make sure you subscribe to his channel.
      He has also done many tutorials on installing Ceph on his channel.

    • @HNGLT
      @HNGLT 6 months ago +1

      I actually stumbled upon your videos when I was trying to set up storage for my k8s cluster. Little did I know, it was deprecated and my architecture struggled with it. Great videos though!

  • @parl-88
    @parl-88 6 months ago +10

    Hey Chuck, this is by far THE BEST video I have seen from your channel in the last 3 months. GREAT video! Totally innovative content. Make this a series please if you can. Follow up this video with the CEPH tiering tutorial (pool for SSDs and pool for HDDs). Everything related to CEPH is SO COOL! Pump out as much content as you can about CEPH. Cheers and thanks!

  • @johndoughto
    @johndoughto 6 months ago +84

    [ceph]
    a part 2 = yes
    include proxmox = yes
    coffee = yes

  • @eveypea
    @eveypea 6 months ago +40

    When you said "Ready, set, go!" at the 34 minute mark, you missed the chance to do a Dad joke with "Ready, Ceph, GO!"

  • @bedar89
    @bedar89 6 months ago +7

    I'm totally down for a second part/follow-up on this video! Thank you for the great explanation! I'm also looking forward to seeing how this could be set up with Ansible, and connecting Proxmox with Ceph + some best practices!

  • @ThrivingInLife
    @ThrivingInLife 6 months ago +46

    Make a part 2 and show how to separate the SSD and HD drives like you mentioned at the beginning of your video. Please =)

    • @davidclift5989
      @davidclift5989 6 months ago +3

      Yes please

    • @Darkk6969
      @Darkk6969 6 months ago +4

      Yes please. I've used CEPH in the past on my previous Proxmox setup. I'd definitely like to know about setting up the tiers.

    • @justindeemer6260
      @justindeemer6260 6 months ago +1

      Concur. I had experimented with it as well, starting about 6 months ago for about 6 months, then reverted back to NFS shares for my Proxmox cluster and just did scheduled rsyncs between my Unraid NASes.

  • @malborboss
    @malborboss 4 months ago

    This video made me fall in love with Ceph, seriously. Part 2 please, with tiering, Ansible integration and Proxmox. There aren't a lot of tutorials on the web and you explain everything in a simple and approachable way! I would even pay to see part 2 if needed. Sad to not see this video pop off.

  • @DerTypOfficial
    @DerTypOfficial 5 months ago +2

    Yes, Part 2 please! Or make it a series?
    Would like to see how to do all this with Ansible, include Proxmox, and separate SSD and HDD.
    Btw, awesome video! The density of information per minute was pretty high, but you explained this complex topic as simply as I can imagine, without skipping any steps. Thank you for that!

  • @spottechnologies
    @spottechnologies 6 months ago

    Wow... This video is particularly useful for anyone considering investing in a NAS for personal or small business use. Chuck's hands-on approach and detailed explanations make it easy to understand the benefits and potential challenges of using a NAS. His enthusiasm and practical advice can help viewers make informed decisions about their storage solutions. If you're new to NAS or looking to upgrade your current setup, this video is a great resource. Thanks a lot Chuck!

  • @jamesbrown8766
    @jamesbrown8766 6 months ago +3

    I’ve always wanted to make a storage cluster out of my older unused equipment. This is perfect! Please do a part 2!

  • @marvinnicorode1209
    @marvinnicorode1209 6 months ago +2

    Cool video! I will definitely return to it once I add a 2nd node to my infrastructure.
    When showing things like OSDs being added, you can use "watch !!" to rerun the last command once every 2 seconds (adjustable with the -n flag), instead of repeatedly hitting "up" + "enter". Changes are often more visible, because watch draws the output on a single screen from the top down, so you don't have weirdness due to the terminal scrolling. The optional "-d" flag also highlights differences between command runs.

  • @ScottKeene70
    @ScottKeene70 a month ago

    I've been procrastinating about diving into CEPH for a couple of years. Thanks to this video, I'm finally going to do it. (Like Kubernetes, similar time procrastinating, but NetworkChuck to the rescue again with the Kubernetes video!)

  • @ledoynier3694
    @ledoynier3694 6 months ago +3

    Amidst all the storage talk, I heard "storge" once, and now I can only hear it like that... thanks NetworkChuck!

    • @dudedavid522
      @dudedavid522 6 months ago +1

      Oh gd it boo this commenter
      ... I just started watching and now I'm left with 'ikea chuck,' and his new Storj collection

  • @GasperRomih
    @GasperRomih 6 months ago +14

    This guy can sell ice to penguins.

    • @SineN0mine3
      @SineN0mine3 27 days ago

      As a penguin I could really use a guy, It's 40 degrees here today and I'm overheating.
      To be honest, as global warming continues to worsen selling ice to penguins could be a pretty good business model if so many penguins weren't broke lazy bludgers. It's not racist for me to say that because I'm a penguin.

  • @RiDDiX93
    @RiDDiX93 5 months ago

    Such a great individual! Everything is explained and demonstrated clearly and effectively. I find it fantastic for getting started with various topics, tools, and software.

  • @paiwandfaruq
    @paiwandfaruq 6 months ago +2

    A good thing for organizing all of our storage. We're waiting for part 2, good job bro

  • @elmeromero303
    @elmeromero303 3 months ago +1

    Great video and well explained, as always 👍 Would love to see a follow-up going deeper, covering performance & tuning, snapshots/replication, disaster scenarios etc. Thank you 👍

  • @thegreyfuzz
    @thegreyfuzz 6 months ago

    Excellent job explaining this. I've been needing to figure it out, but have been procrastinating until I had a full weekend to sort it out. Now, in 3 hours, I have it sorted in my head and a 5-node cluster spooled up in my lab.

  • @tsunin
    @tsunin 4 months ago

    Chuck, you rock!!! I've never seen a Ceph explanation plus hands-on that's this easy to understand. Great job!!! Please keep going

  • @alfarahat
    @alfarahat 6 months ago +1

    Great stuff, please do more on Ceph. Can't believe I watched more than 40 minutes in one shot.

  • @sarveshwarsharma990
    @sarveshwarsharma990 6 months ago +51

    Was I the only one to notice his alien eye at 3:24?
    What the heck man, Chuck is an alien. Well, that explains a lot about the coffee and the weird mix of interests

    • @thanEay
      @thanEay 6 months ago +1

      I was about to comment the same thing! Was that edited?

    • @WiSPMusic.
      @WiSPMusic. 6 months ago

      @@thanEay Good question! I noticed too!

    • @parsagp
      @parsagp 6 months ago +3

      No bro, this is AI. He is dead

    • @nnktv28
      @nnktv28 6 months ago

      yes HQHAHAHAHHAHA

    • @briguymaine
      @briguymaine 6 months ago

      who then looked to the side and winked? I did

  • @mytime34
    @mytime34 6 months ago +1

    Thank you NetworkChuck, I have been trying to understand CEPH and you made it easy for this old man.
    I have had a mix of unraid, truenas, proprietary OSs, but I was looking for something like CEPH. I would def like a more in depth video on CEPH commands and making things work together.

  • @danielclark6033
    @danielclark6033 6 months ago

    The best thing about this video for me: you're a Dad of 6 daughters. I love that. Huge respect.

  • @KILLERTX95
    @KILLERTX95 6 months ago +7

    I use Ceph at work. I honestly didn't know you could map Ceph directly in Linux remotely? I've always used SMB/NFS.
    So, to pay it forward, one or two things I've learned over the years:
    1. Power Loss Protection or PLP is a game changer on SSDs. TLDR: it allows the SSD to "say" it finished copying when it actually hasn't. This has a massive effect on speed.
    2. If you set it up properly, you can have a "third" speed tier. So not just SSD/HDD, but the ability to WRITE to SSD and STORE the data on HDD, much like read/write cache in normal ZFS.
    Thank you @networkchuck
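
Point 2 above is a write-back tier: acknowledge on fast media, migrate bulk data to cheap media later. A toy Python model of just that shape (Ceph itself does this with cache-tier pools, or by putting BlueStore's DB/WAL on an SSD; nothing here is Ceph code):

```python
class TieredStore:
    """Toy write-back tier: ack writes from the fast (SSD) tier,
    migrate to the slow (HDD) tier in the background."""

    def __init__(self):
        self.fast = {}  # stands in for the SSD pool
        self.slow = {}  # stands in for the HDD pool

    def write(self, key, value):
        self.fast[key] = value  # ack as soon as the fast tier has the data

    def read(self, key):
        # Serve from the fast tier when possible, fall back to the slow tier.
        if key in self.fast:
            return self.fast[key]
        return self.slow[key]

    def flush(self):
        # Background migration: cold data moves to the big, cheap tier.
        self.slow.update(self.fast)
        self.fast.clear()
```

The trade-off is the same as with PLP-less SSDs: anything acked from the fast tier but not yet flushed is at risk, which is exactly why enterprise drives add power-loss protection.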

  • @DN9KNM
    @DN9KNM 6 months ago +2

    Hey Network Chuck,
    thanks for the awesome video! Your content is always impressive and super informative. Keep it up!

  • @RockTheCage55
    @RockTheCage55 6 months ago +11

    yes part 2....ceph looks pretty awesome

  • @xRaydah
    @xRaydah 5 months ago

    I just watched this video and I'm quite new to self-hosting, primarily in Linux. I would like to give this a try; it looks complicated, but you made it look easy. Thank you Chuck!!!

  • @Nathan15038
    @Nathan15038 6 months ago

    As a computer/tech guy in general, computers and tech got me into networking and IT, so this is basically my dream setup right here, with little pieces of tech that are awesome. Like, I want this so bad cause it looks so nice.😊

  • @davidclift5989
    @davidclift5989 6 months ago

    Mind blown. Now I have a use for those old machines I still have. Thank you.
    Would love a Ceph and Proxmox video.

  • @Pedrocas
    @Pedrocas 6 months ago

    This is such an amazing video and exactly what I've been working toward for quite some time now.
    Thanks for always explaining things so well Chuck, great job as always!

  • @konkelen
    @konkelen 6 months ago

    Sometimes, I could absolutely swear you’re snooping on my search history - it’s absolutely uncanny how frequently you put out a video on something I’ve been actively researching. 😂 (Great minds think alike I guess.)

  • @Kylle812
    @Kylle812 6 months ago +2

    Yes plz - More of this. I am learning so much.

  • @squalazzo
    @squalazzo 6 months ago +3

    Please do a follow-up video explaining the bits you talked about at the end of the video: Proxmox, block, etc. Thanks!

  • @aros243
    @aros243 6 months ago

    Brilliant, finally I have some understanding of Ceph. You should definitely make a part 2, thank you

  • @RazoBeckett.
    @RazoBeckett. 6 months ago +3

    do a part 2, I truly love this kind of stuff ...

  • @OK_ACME
    @OK_ACME 6 months ago

    I never really understood CEPH until now. Thanks for the great tutorial!

  • @CalebStewartPhoto
    @CalebStewartPhoto 4 months ago

    I have no IDEA what you are doing, but I really enjoy your videos!!

  • @FlexibleToast
    @FlexibleToast 6 months ago

    Ceph is one of the things that made me realize the beauty of Kubernetes operators. One of OpenShift Data Foundation's upstreams is Rook Ceph. You basically install the operator, then just define which nodes and which drives to use on those nodes, and it does the rest for you. However... it will not be as performant as Ceph itself because it's not as tweakable. For high-performance clusters you usually use Rook to connect to an external Ceph cluster.
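
For anyone curious what "define which nodes and which drives" looks like in practice, here is a sketch of a Rook CephCluster resource (node/device names and the image tag are placeholders; the Rook docs have the authoritative schema):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # pick a supported Ceph release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # three monitors for quorum
  storage:
    useAllNodes: false
    nodes:
      - name: worker-1             # placeholder node name
        devices:
          - name: sdb              # whole disk handed to an OSD
```

The operator watches this resource and bootstraps the mons, mgr, and OSDs for you, which is the "it does the rest" part of the comment.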

  • @ryant7392
    @ryant7392 6 months ago +1

    This is the type of tutorial I have been waiting for. I've been wanting to use Ceph for my home lab so my storage has infinite growth and I don't have to build a new server every couple of years. I would like to see how to configure a share using erasure coding, and setting up NFS and iSCSI shares.

  • @dancoon7653
    @dancoon7653 5 months ago

    I've been trying to work out what to do for a NAS for a while. At the moment I just have a 6TB USB drive plugged into my Proxmox server and forwarded to where it needs to be. I've been looking at TrueNAS and retail NAS drives and counting the cost... Now I'm just going to pull out my Raspberry Pis, maybe get a couple more mini PCs and/or a Zima board, and go nuts from there :)
    Thank you

  • @lilloca
    @lilloca 6 months ago +2

    Please make a Proxmox version of this with Ceph!!

  • @jonathanchevallier7046
    @jonathanchevallier7046 6 months ago +1

    Thank you for this in-depth video. Ceph seems very nice.

  • @BeerBytesandBarbells
    @BeerBytesandBarbells 6 months ago

    You are constantly a source of knowledge and inspiration.

  • @korseg1990
    @korseg1990 4 months ago

    wow, awesome! we need more ceph videos from you! :)

  • @alfonsocastellanosbalderas9842
    @alfonsocastellanosbalderas9842 5 months ago

    Thank you so much for sharing!
    Yes second part please.
    Yes, integration with proxmox would be very helpful.
    Best regards from MX.

  • @4Xsample
    @4Xsample 6 months ago

    I didn't know how powerful Ceph can be!
    Thanks!

  • @DavidC-rt3or
    @DavidC-rt3or 6 months ago

    Great explanation! I have a hybrid HA cluster :) For example, InfluxDB running as a pod in k3s, pinned to a specific node which is a VM on HA Proxmox. The pod uses a PVC/PV that points to a mounted directory, which in turn is a virtual disk in Proxmox on a Ceph storage pool, along with a k3s LB svc to reference the pod. So in theory? I can access the db using the same IP externally, or internally by svc name, regardless of which one of the 3 Proxmox nodes it's running on.

  • @nalixl
    @nalixl 6 months ago +1

    Really looking forward to a part 2. As a GlusterFS user, I'm especially curious about things like how failures are handled, whether you can also shrink the number of available OSDs later, and how hard it is to recover files if the cluster really fails?

  • @hosseintarighatimomtaz3298
    @hosseintarighatimomtaz3298 6 months ago +1

    This was one of the best things I could've seen in my timeline

  • @firstspar
    @firstspar 2 months ago

    Hell yeah, more on Ceph! Great video - thank you!

  • @AlexKidd4Fun
    @AlexKidd4Fun 6 months ago

    💯 Great Ceph content! More please! I implemented Ceph many years ago, way before it supported containers. 😎

  • @LampJustin
    @LampJustin 6 months ago

    Awesome, love that you used cephadm and not ceph-ansible or Proxmox to deploy Ceph. It's so much easier and better supported!

  • @guitaristtom
    @guitaristtom 5 months ago

    More videos on this would be great.
    Maybe touching some use cases for the average home user?

  • @SolninjaA
    @SolninjaA 6 months ago +12

    I’ve literally been looking for this exact type of software!

  • @brentfuchs5501
    @brentfuchs5501 6 months ago

    I’d love to see that network setup you talked about.

  • @JeremyBardwellGR
    @JeremyBardwellGR 3 months ago

    Well, I know what I'm going to be doing for the next couple of years. Mind blown.

  • @Omn1Slash
    @Omn1Slash 6 months ago

    This was awesome and has me thinking about reconfiguring my current setup. In a future video, can you go over whether there's any possibility of setting up Docker containers for Plex, etc., and how you would swap a defective hard drive?

  • @Lmanyakaa
    @Lmanyakaa 6 months ago

    Teaches practical skills better than most college professors.

  • @pujisetiadi1141
    @pujisetiadi1141 4 months ago

    My brain is chewing on your explanation Chuck🙈, very informative vid! Thanks!

  • @mrmotofy
    @mrmotofy 5 months ago

    Very cool, I see the obvious data uses for it. Just FYI, these organizations use it:
    Geico
    Comcast
    Ford Motors
    SpaceX
    Openstack
    AMD

  • @CatCread16
    @CatCread16 6 months ago

    I'd love it if you did a full video explaining how Ceph works 😂 I find it interesting.
    Yeah, just do more videos about Ceph in general please

  • @romayojr
    @romayojr 6 months ago +1

    ceph might be the flavor of the month

  • @iankester-haney3315
    @iankester-haney3315 6 months ago

    You still need to manage the underlying operating system on those nodes. I'd think you would go for a used disk shelf or storage server. You can find pretty cheap used storage arrays for up to 36/48 drives.

  • @itzteajay
    @itzteajay 6 months ago

    I'd love to see a follow-up video on using all the same hardware for services that utilize the storage. Webserver on one, media on another, etc. Or are all these hosts now locked down to being storage only?

  • @Ehrlichia1
    @Ehrlichia1 6 months ago

    Part 2, part 3, more more more! This is just what I need for my setup at home. I enjoy the content

  • @therealwill2k
    @therealwill2k 6 months ago

    This is exactly what I was looking for, thanks!
    P.S.: I want more pleassseeeeee!

  • @JA-ii5vp
    @JA-ii5vp 6 months ago

    Very useful. Would love to see more vids on Ceph. Using Ceph with Proxmox would be great

  • @fortedexe8273
    @fortedexe8273 6 months ago

    Thanks, please make part 2: adding more storage, removing storage, hard drive failure, etc ...

  • @papand88
    @papand88 4 months ago

    Another awesome video! Please make part 2. What about security, does it support snapshots?

  • @Darkk6969
    @Darkk6969 6 months ago

    Love CEPH! Need to know more about performance tweaks.

  • @danclark106
    @danclark106 5 months ago

    I never comment, but this was incredible. Great job!

  • @whatwhat-777
    @whatwhat-777 6 months ago +1

    You Need To Make More Content On CEPH! RIGHT NOW!!

  • @Holzf43ller
    @Holzf43ller 6 months ago +1

    Damn, this video is so awesome. I really like the deep-dive parts: how things work and stuff. What would interest me is how exactly Ceph reacts in any failure event, like a server/disk(s) breaking. As I understood it, the NAS is not in RAID with this configuration, so if one disk goes down, am I able to replace it? Or would I just replace it, delete the old drive from the cluster and add the new one?
    Does Ceph go immediately into emergency mode when a disk goes down, step up its replication game and bring the replica count back up to 3?
    Also, the additional setup for a professional system would be interesting, with the network connections specified like you had in the overview.
    I would really love a second or third video.
    Keep it up, and thank you

  • @gamereditor59ner22
    @gamereditor59ner22 6 months ago +4

    This is cool! Thanks Chuck!

  • @jonathanmartins7744
    @jonathanmartins7744 6 months ago

    Hey man... you saved me a lot of money. This is exactly what I needed! Thank you very much.

  • @jdg7327
    @jdg7327 6 months ago +3

    We converted our old Dell servers that have 8+ bays of disk into Ceph nodes. Suffice to say, the results were so amazing that we are scrapping our Synology expansion.

  • @VincentLeBourlot
    @VincentLeBourlot 6 months ago

    So cool!!! I can't wait to be back home and try this! Can you please please PLEASE make a part 2 with all the rest of the goodies???? Like integrating with Proxmox, for instance?
    Cheers

  • @RxZ95sssPG
    @RxZ95sssPG 6 months ago +3

    If you think this is hard, go and care for 6 daughters. @NetworkChuck has my biggest respect for this!

  • @WWSchoof
    @WWSchoof 2 months ago

    How can this be performant and efficient? So many (slower) network connections between servers (compared to SATA and M.2 / NVMe), so many different IO Speeds. It's magic

  • @InsaiyanTech
    @InsaiyanTech 5 months ago

    This was so interesting, man. Got me wanting to try this out, just for my first Proxmox rig too

  • @bluesquadron593
    @bluesquadron593 6 months ago

    Hmm, I have just quickly watched the video so I may have missed it, but my experience with Ceph in Proxmox was that it used a rather big amount of memory (not like ZFS though), and while it was working on a single 1G NIC (three-node HP EliteDesk cluster), I have seen many arguments advising against this approach. They claimed that if you have a significant amount of data being written to the Ceph pool, it may not be able to keep up over a 1G NIC. So if you suffer a power outage, you will suffer data loss. In addition, Ceph will wear out consumer storage devices rather fast due to the large amount of read/write operations.

  • @TinTalon
    @TinTalon 6 months ago +1

    Really enjoyed this video. Cool stuff!

  • @Deadphyre
    @Deadphyre 2 months ago

    It'd be pretty sweet to show how to set up Ceph with Proxmox, but... that's for both education and practical use on my end as well.

  • @djscanlon
    @djscanlon 5 months ago

    Hey Chuck! Thanks for showing open-source projects like CEPH, which is amazing for clustering storage. I'm curious if you've come across any open-source projects that focus on clustering AI-required resources, similar to how CEPH handles storage? I think it could be smart and save costs on local compute resources. Would love to hear your thoughts or if you know of any projects in this space! Great work!

  • @sabid.mahmud
    @sabid.mahmud 6 months ago +2

    keyboard's sound ❤‍🔥❤‍🔥

  • @timteske3255
    @timteske3255 6 months ago +29

    That’s NASty! 🤣

  • @marcq1588
    @marcq1588 6 months ago +1

    It is always a pleasure to watch your videos! This is why I subscribed a long time ago...
    I have always been a fan of ZFS. Now you are showing something totally new. I am not sure if the speed of CephFS is as good as ZFS, due to the fact that Ceph will mix/match any storage devices. As you say: USB devices, IDE devices, SATA devices, etc. I would think that it all comes down to the lowest-speed device, which will limit the speed?
    Also, as you need a full device to pass on to Ceph, it will be unlikely to work with some of the VPSes around (like Oracle free tier, Azure, Amazon, and other retailers like Contabo, etc.)? Unless it is possible to create file-backed block devices and bring these in as devices? Is this possible? Then set them up via their IP addresses? Can we do that as well?
    Please don't hesitate to make more videos on this subject, as I believe this is really big. Goodbye Unraid, welcome CEPH!

  • @hyperninjaprox
    @hyperninjaprox 6 months ago

    That Memory Reboot hits hard, reminding me of my childhood, where I would try to make things work with each other when clearly they were not meant to. Even though I never could, I at least tried.

  • @coocoobau
    @coocoobau 5 months ago

    Great Ceph concepts explanation. So good that it also highlights the overwhelming complexity of such a system. Personally, I tried it for a couple of years (manually deployed and with Rook in Kubernetes) and found it painfully hard to debug issues; I lost data. It is quite slow and it hurts to manage it in production. Cryptic errors, rabbit holes and whatnot. Granted, it was more than 5 years ago.
    Also played around with other distributed storage systems (Gluster, OpenEBS/Mayastor, Linstor, MooseFS), all very, very complicated to set up and maintain. This forced me to reduce the need for this type of storage and redesign the way I'm storing data (a combination of databases and S3/MinIO). And I'm waiting for SeaweedFS to become more stable as a product.

  • @ZambeziSentinel
    @ZambeziSentinel 6 months ago +2

    Looking forward to the Hack the Box videos 😊

  • @tiboore
    @tiboore 5 months ago +1

    Can you do a part 2 with Mac and iPad connecting?

  • @hilex867
    @hilex867 6 months ago +2

    Hey Network Chuck,
    I'm a cybersecurity tester, and I have to say your videos are spot on. It's great to see someone spreading accurate information, unlike many YouTubers nowadays who often share misleading content. I really appreciate your in-depth content on cybersecurity.
    I have a couple of questions for you: Have you ever considered making a video on how to reverse the connection to a scammer's PC? I know it might be outside your usual scope, but I'm curious about your thoughts on this. Also, what is your operating system of choice for cybersecurity work? Mine is Linux.

    • @mrmotofy
      @mrmotofy 5 months ago

      As a pro, maybe you would be better able to make a vid on that topic???

  • @jianlux4661
    @jianlux4661 6 months ago

    Please make part 2 😊 - Do you know GPFS?