ProxMox High Availability Cluster!

  • Published: Feb 7, 2025
  • Do you want to centrally manage your ProxMox servers, or configure your virtual machines for High Availability? If so, you've come to the right place.
    But first... What am I drinking???
    Today's brew is a collab between Torn Label Brewing and Thou Mayest Coffee... It's Mansion Brew, an Imperial Wheat Stout with Coffee, clocking in at 8.8%. Unlike a lot of other coffee stouts, this one trades many of the roasted notes for a smooth malt, without becoming too sweet. Definitely recommended.
    Links to items below may be affiliate links for which I may be compensated
    Check out the parts from my HomeLab servers:
    Rosewill RSV-Z2700: amzn.to/33vlltl
    Machinist X79 uATX motherboard: ebay.to/34oJeBV ali.ski/W2Gvc8
    Intel Xeon E5-2648L: ebay.to/2F1mxuZ
    8GB DDR3-REG 1866MHz: ebay.to/2F1ekqx ali.ski/_KDx3
    Sunbow 32GB m.2 SATA SSD: amzn.to/34noL0l ali.ski/YyZV_
    Even better deal for Video - ATI HD 2400 LP card: ebay.to/30AAck7
    PowerMan 350W Power Supply: ebay.to/3niUfNO
    24-Pin PSU Extension: amzn.to/2HRIu0p
    8-Pin PSU Extension: amzn.to/3cZSErB
    10pk LiCB CR2032 PC Batteries: amzn.to/36AHeJs
    Find the parts I recommend on my Amazon store: www.amazon.com...
    Follow me on Twitter @CraftComputing
    Support me on Patreon or Floatplane and get access to my exclusive Discord server, as well as other premium content. Chat with me and the other hosts on Talking Heads all week long.
    / craftcomputing
    www.floatplane...
    Music:
    Bossa Antigua by Kevin MacLeod
    Link: incompetech.fi...
    License: creativecommons...

Comments • 245

  • @MartinPaoloni · 8 days ago

    After investigating a lot about Linstor and Ceph, I went back to this 4 year-old video because I knew it was EXACTLY how I wanted to set up my 3xN100 cluster. I don't need zero downtime. I don't mind losing a few hours/days of data. Your guide worked perfectly with Proxmox 8.3.3. Thank you!
    It's hilarious that this little 6W CPU has a higher Passmark than those Xeons you used!

  • @Henry00 · 4 years ago +23

    I immediately jumped with enthusiasm upon seeing this video and had to show all my friends that I had asked for a clustering tutorial only a week ago! Thank you very much for taking the time to actually make it happen! This was once again quick and to the point and so revealing that I now finally understand what this feature is all about and the benefits it offers. Thanks again!

  • @coletraintechgames2932 · 3 years ago +10

    Thanks for all you do, you have been a big help to me.
    Now to cause some "trouble" and hopefully you take it in a fun way.
    At 7:35 you talk about a shared storage video, it's been 8 months! Where is it? Either drink some more beer or less, whichever helps and get going on this! 😉

  • @Ne0_Vect0r · 3 years ago +8

    I just need to tell you how much I love this video :)
    This speed and simplicity is just perfect!
    But what about a video using Ceph?

  • @MichaelGauthreaux · 8 months ago +1

    Great video as always! Glad I found this one while searching for how to cluster Proxmox servers. I am just starting my journey into self hosting and server work in general, since as a dedicated network engineer I don't get to play with servers at work.

  • @rklauco · 8 months ago

    This is the best video on the topic by far. Important things - minimum 3 nodes (with workaround!) and ALLOW the ZFS to be accessible to other nodes - those 2 things were missing from all other videos I've seen! Thanks a LOT!
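
    (For anyone wanting a rough CLI sketch of that "same pool name on every node, available to all nodes" idea; the pool, device and node names here are placeholders, so adapt them and double-check against the Proxmox storage docs:)
      zpool create -f vmdata mirror /dev/sdb /dev/sdc          # run on every node, same pool name everywhere
      pvesm add zfspool vmdata --pool vmdata --content images,rootdir --nodes pve1,pve2,pve3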

  • @zparihar · 2 years ago +2

    Mr. Craft! Thank you for this!
    Helped me understand the shared storage and the importance of giving them the same name.
    Unfortunately, the first time I did it, it wasn't with the same name and then I tried removing nodes from the cluster... and I ended up decimating everything.
    If you are up for it, consider creating a video on how to:
    1. Remove a Node Cleanly from a cluster (without re-installing)
    2. Remove the Entire Cluster and start from scratch (without re-installing)
    I think these 2 would be invaluable!
    Cheers!
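
    (For reference, a rough sketch of those cleanup commands based on the Proxmox clustering docs; node names are placeholders and this is worth double-checking against the current documentation before running. To remove a node that has been permanently shut down, run from one of the remaining nodes:)
      pvecm delnode pve3
    (To separate a node or tear down the cluster config without reinstalling, run on the node itself:)
      systemctl stop pve-cluster corosync
      pmxcfs -l
      rm /etc/pve/corosync.conf
      rm -rf /etc/corosync/*
      killall pmxcfs
      systemctl start pve-cluster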

  • @lanealucy · 4 years ago +80

    I would like to see Ceph as shared storage for HA.

    • @lanealucy · 4 years ago +2

      @Gareth Me too, and with the built-in Ceph it is very easy.

    • @cryptout · 4 years ago +6

      Ceph is so complex that data loss in the future is almost guaranteed. Stay away from it for home use.

    • @t4b3m4S · 4 years ago

      May I ask: if I use the Ceph method, what about the hard disks if I'm using a RAID controller in my hardware? Let's say I'm using RAID 10. Is that supported with Ceph?

    • @lanealucy · 4 years ago

      @@t4b3m4S Ceph is like software RAID, so you don't need hardware RAID anymore; another RAID layer in hardware just adds overhead. You can still do it if you want, but it's pointless.
      Try setting your RAID controller to JBOD, or make every HDD a single RAID 1.

    • @pjbramsted · 3 years ago

      Move this comment to the top!

  • @randallsmith2521 · 4 years ago +15

    Fantastic, thank you for this video. I'll be more interested in the shared storage setup because of what my future plans are, but this gets me started.

    • @CraftComputing · 4 years ago +14

      I just ordered a storage server to add to the homelab, so more content coming soon!

    • @randallsmith2521 · 4 years ago

      @@CraftComputing that will probably be somewhat perfect for me. I'm in a situation where I'm going to be moving soon. If all works out like we think it will, we will be living next door to a family member. Our goal is to set up a shared network between the houses. I'm hoping to be able to replicate a backup server there. Your tutorials are helping me on my way (though I realize I still have a ton of tutorials and documentation to pore over online to get where I want to be).

    • @tracer888 · 4 years ago +1

      @@CraftComputing With Proxmox, couldn't you just add all that storage to a Ceph pool as the shared node? Then you wouldn't need another storage server like a FreeNAS?

  • @endlessoul · 2 years ago +1

    Hugely helpful and is how my cluster was born! Thank you Jeff.

  • @chrisumali9841 · 3 years ago

    Thanks for the demo and info. Managed to set up HA with no issues, thanks to this video. Cheers

  • @darklocksly3615 · 4 years ago +5

    Thanks for your love for Proxmox :) I had it running in production, and was a very happy sysadmin. 100% uptime in a year's time, without HA.

  • @RoxzinGaming · 1 year ago

    Awesome guide, thanks! Will try to replicate it in my homelab.

  • @operation-0158 · 3 years ago

    I love Proxmox (and you!) because it's genuinely free for virtual servers, like VMware... love it

  • @PhG1961 · 4 years ago +1

    Great video. Very informative, both the IT and the beer info. Btw, don't forget to breathe now and then during your explanation...

  • @prasadsawool · 4 years ago +3

    This is one of the most detailed videos I have seen on Proxmox on YouTube.

  • @egoruderico3038 · 2 years ago

    Great video, thanks for the detailed explanation and the editing work on each step. However, I had to slow the playback down a bit to take notes :)

  • @kirksteinklauber260 · 4 years ago +3

    Nice video!!! Can you elaborate on how to build the quorum disk, or where to find more info on how to set it up?
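
    (For what it's worth, newer Proxmox versions let an external machine provide that extra vote as a QDevice instead of a full third node; a rough sketch, with the IP as a placeholder — check the current pvecm docs:)
      apt install corosync-qnetd        # on the external quorum host (a small VM or a Pi will do)
      apt install corosync-qdevice      # on every cluster node
      pvecm qdevice setup 192.168.1.5   # run once from any node
      pvecm status                      # should now show the additional vote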

  • @system247 · 4 years ago +1

    Great video! I'd love to see more Proxmox videos.

  • @only_mango · 4 years ago

    I watch all of your videos. My favorite is the vGPU one with the GRID K1. You should do a review of Nutanix Community Edition, which is awesome for clusters and VMs. Thank you for the knowledge videos, as always.

  • @mauldus · 4 years ago

    I just upgraded to Proxmox 6.2 to get the native encryption on my ZFS pools. Glad to get some more ideas for future home HA designs.

  • @ls240ftw · 4 years ago +2

    Excited to see the video on shared storage/HA. Any ETA on when this might come out?

  • @markusgoebbels6022 · 4 years ago

    Thank you for sharing this with us. If you could share some information about power consumption, that would be fantastic.

  • @VladyslavKudlai · 4 years ago

    Cheers for Proxmox! It was a great idea for your 3 nodes :)

  • @rGunti · 4 years ago +3

    Good stuff man! I'm currently working on my own new cluster (with only 2 nodes though) and I was struggling with the storage setup.

  • @vnkamalov · 4 years ago

    Hi! Looking forward to hearing from you about iSCSI shared storage.

    • @ChrisCookTech · 4 years ago

      Yeah, shared storage between the hosts is easier than each host having its own. Live migration is a thing of beauty.

  • @GabrielFoote · 4 years ago +2

    Thank you for more ProxMox content. 🙏

  • @louissanjohkala2784 · 2 years ago

    Two minutes into the video, I just had to click the subscribe button. Thanks

  • @mads205 · 2 years ago +2

    I've done all this and had fun doing it, but now that I've set it all up I have no idea what to use it for...

  • @GuillermoPradoObando · 4 years ago +2

    Excellent content. Could you please cover the Proxmox HA setup but with shared storage, like a SAN or NAS? Thanks.

    • @CraftComputing · 4 years ago +1

      I have a storage server on the way to do exactly that :-)

    • @rednax007 · 4 years ago

      @@CraftComputing would love to see this

  • @ironskilit · 2 years ago

    Subbed to the channel after watching 2 videos!

  • @bryannagorcka1897 · 3 years ago +1

    Hmm, now I want a beer.

  • @TheRogueBro · 4 years ago +2

    Yay homelab stuff! The one thing I want to see is Ceph and hyper-convergence with Proxmox. I have been trying to play with it myself, but all my servers at work are being used and I don't have enough equipment at home.

  • @gaby1491 · 4 years ago +7

    This is great, now I need to convince my wife to let me run up the electric bill and acquire two more servers lmao

  • @lanzhao · 3 years ago

    Thanks. Great tutorial...is your shared storage vid dropping soon?

  • @ImARichard · 4 years ago

    Eventually I'd like to have a setup like this. For now I've got dedicated NFS storage for my various Docker containers across devices. Similar outcome, but not as clean, I'm sure lol

  • @derrydobbie8375 · 3 years ago +2

    You should re-do this video but use Ceph. I followed this tutorial and was running with it great for a while, but a migrate task requires migrating a snapshot of the disk to the other nodes between replications. You also lose anything written since the last replication task in the event of a hard node failure. With Ceph the replication happens in real time, which makes migration instant for VMs/CTs that are shut down, and for live VMs/CTs it only requires moving the RAM image, which is pretty quick. On a hard node failure, the HA migration only results in maybe a few seconds of data loss instead of minutes. Revolutionized how I host game servers for friends, as I can quickly shut down a node to upgrade it or do maintenance on it, and it's so much faster with Ceph than using standard ZFS pools.

    • @decorbett · 1 year ago

      Thank you for this! I thought the two technologies were similar in function and you clearly explained why I would choose Ceph over HA. Great video - also this comment should be pinned!
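
    (For anyone curious what the hyper-converged Ceph route looks like, a rough sketch of the pveceph workflow; the network, disk and pool names are placeholders, and subcommand names vary a little between Proxmox releases, so check the current docs:)
      pveceph install                        # on every node
      pveceph init --network 10.10.10.0/24   # once, on the first node
      pveceph mon create                     # on each monitor node (3 is typical)
      pveceph mgr create
      pveceph osd create /dev/sdb            # per data disk, per node
      pveceph pool create vm-ceph            # then add it as RBD storage under Datacenter -> Storage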

  • @Nickscrazylips · 4 years ago +4

    I've been reading about CephFS. Have some experience with Gluster. The big disk shelves and iSCSI PCI expansion cards are purdy cheap rn. I know a few MSPs that have adopted both open source solutions. The cost of HA continues to rise 😎.

    • @JoaoSilva-gs5jb · 4 years ago

      like what?

    • @JoaoSilva-gs5jb · 4 years ago

      xcp-ng?

    • @andrewjohnston359 · 4 years ago

      I've got a 5-node Proxmox cluster with Ceph running, cobbled together with old servers, HBA/RAID controllers flashed into JBOD mode, etc., and some second-hand SAS drives. I'm in love with Ceph, but I haven't gotten my hands dirty with journalling on SSDs or configuring SSD cache tiers. But yeah, I can migrate VMs across nodes without dropping a file transfer or even a VoIP call. Read performance is good at about 350MB/s, but writes and the small IO that VMs want are not great without SSDs.

  • @osamaa.h.altameemi5592 · 3 years ago

    Any updates on the HA with shared storage on Proxmox? Thanks for the awesome video.

  • @kpricepc · 4 years ago

    Great channel, really enjoy your content! A video idea that I'm sure others would be interested in is a redundant networking backend for this setup. Maybe stacking the MikroTik 10G switches with bonding on the server side?

  • @JorgeOvalles1980 · 4 years ago +1

    Hi everyone. Craft, you need to try Polar beer from Venezuela, pilsner type. A question regarding replication: is the total space of a VM/CT occupied on each node? I mean, would the 2GB storage of your Pi-hole be on all 3 nodes all the time: 2GB on node1, 2GB on node2 and 2GB on node3? Thanks

  • @austinwebdev · 2 years ago

    You're awesome. Thank you for this.

  • @TheAnoniemo · 4 years ago +2

    Nice video! Will you also be doing a video on TrueNAS Scale when that comes out? It has similar functionality to Proxmox, but with FreeNAS storage management. I'm considering changing my FreeNAS box over to it so I can run better VMs and Docker.

    • @CraftComputing · 4 years ago

      Absolutely will be looking at TrueNAS Scale :-)

    • @philliphs · 3 years ago

      Second this. I'm also wanting to take the plunge into Proxmox with a 2-node HA setup, but found out about TrueNAS Scale and decided to wait.
      Your video on TrueNAS Scale HA capabilities would be very much appreciated.

  • @sticky42oh · 3 years ago

    Did you ever do that follow-up for shared storage? I searched through your channel but couldn't find it.
    Great video though, it helped me get my 3x Dell 3060 cluster up and running. Thanks

  • @Blond501 · 4 years ago +3

    Good to know. But I'm curious: wouldn't it be better to set up a server like a SAN and have every VM use that as storage, instead of copying every VM to every server every 15 minutes?

    • @pawelhener5338 · 4 years ago +1

      By using a SAN for storing the VMs you will end up with a single point of failure - to avoid that you need to use two SAN servers which will still have to replicate the data.

    • @JeremyMarkel · 4 years ago

      Gluster would work instead of Ceph but I found Ceph to be more reliable in my lab.

    • @derrydobbie8375 · 3 years ago

      Yup, Proxmox uses Ceph for this and it works great! I even have pools tied to replication rules that ensure certain VMs run on certain classes of storage (HDD, SSD or NVMe).

  • @johnwashifi · 2 years ago

    Hello, could you make a tutorial on how to spin down HDDs in Proxmox? Thanks in advance!

  • @shaikhrecommends4912 · 2 years ago +1

    Interesting stuff, very well explained. I'm planning to create an HA web server cluster using VMs or Raspberry Pis for my college project, the same as shown in the video. But can someone please highlight the scope of this, like where it can actually be used in real-world scenarios? What are the applications? I'm new to this stuff 😅

    • @samithaqi2379 · 2 years ago

      Maybe I'm late answering this, but it can be used for pretty much everything, from web servers to gaming servers to Facebook, etc. (maybe not all of them, but they use the same clustering technology to do this job).

  • @ruwn561 · 4 years ago +11

    Given the amount of space 'wasted' with 3x mirrors, you may want to use Ceph to get more space and more resilience.

    • @Rickety3263 · 3 years ago

      yeah... you only need 6 nodes for a ceph network.

    • @Keneo1 · 3 years ago

      @@Rickety3263 why can’t you do it with 3?

    • @buzzz6118 · 3 years ago

      @@Keneo1 Yes you can

    • @buzzz6118 · 3 years ago +1

      @@Rickety3263 - that's incorrect - 3 is the minimum.

    • @Bpinator · 1 year ago

      @@buzzz6118 You can, but isn't it slow as hell?

  • @johncarter2383 · 3 months ago

    It would be great if the documentation were open source so these nuggets of wisdom could be absorbed into the main repository.

  • @Keneo1 · 3 years ago

    What are you using for fencing hardware? And how did you configure it? Proxmox requires hardware fencing since there is no software fencing, according to their docs.
    This is to avoid a split brain, I think: the unplugged node may still be connected to some clients but not to the other 2 nodes in the cluster, so it could keep serving replies to those clients. You want fencing to be sure it is completely disconnected from all clients when the cluster thinks it is down.

  • @mikeloose9270 · 1 month ago

    Hi, I wanted to ask if the RAID 1 arrangement for the local-zfs pool on each machine has worked well after a few years. Thinking of doing the same.

  • @Alexcide007 · 2 years ago

    I messed up! I followed this process with a system where all my Proxmox servers used IT-flashed storage controllers connected to a shared JBOD. Ended up corrupting my ZFS pool and losing my data. Each server had the ability to access the storage without replication; I was under the impression that Proxmox would be aware of this and lock access to the drives, preventing two servers from trying to access the same data. I come from the VMware world, where access to storage between hosts is managed at the cluster level, and assumed it was the same way here. I am wishing for a time machine right about now. I am rebuilding my home lab, as I had taken on too much and lost my backups as well due to a different project I was working on...

  • @Propotus · 4 years ago +1

    Can you make a video on the best way to create documentation for your physical and virtual appliances/applications? Also how to best understand how to configure permissions/users for applications. All my stuff is read/write, but I know it’s not great. But reconfiguring where everything points and reconfiguring the permissions is a little terrifying.

    • @bentheguru4986 · 4 years ago

      I've been asking the same. Most machines are Windows-based and migrating to ProxMox sucks so badly that I just stick with Hyper-V.

  • @MichaelStrautz · 4 years ago

    Hey, would you be able to cover this same topic but with Docker containers? Possibly even a brief note while covering shared storage?

  • @zippytechnologies · 3 years ago +1

    Can you do a bit on ZFS vs. Ceph pools for HA speed and reliability?

  • @dimitristsoutsouras2712 · 4 years ago +2

    Nice informative video. A couple of observations...
    - I noticed that on all your nodes the devices that make up the ZFS pool are sdb and sdc. I guess that is irrelevant to the correct operation of the shared storage (for instance, if the second node had its ZFS pool created from disks sdc and sdd)?
    - In the case of 2 nodes with different storage each, if you add them to a cluster only to be able to migrate VMs, is that possible without shared storage? (Because you mention the path during migration being different, which of course it will be.) In other words, is there any benefit to adding two nodes with different storage each to a cluster, other than being able to control both from one GUI?
    - Is the shared storage option mandatory only for ZFS pools, or for LVM as well?
    - 8:48 which model box is this?
    PS You should make a video on the setup of the machine (a VM, maybe, you meant) for the quorum votes. Not everyone has the ability to run a 3-server setup in a home lab environment.

  • @MichalCanecky · 2 years ago

    What happens when you plug the cable back in? Do you then have two instances running, or what?

  • @mithubopensourcelab482 · 4 years ago +1

    Better to choose servers with 3 Ethernet cards: the first card for accessing VMs, the second card exclusively for cluster traffic between nodes, and the third card to attach to the shared storage. The storage node can be a Linux machine with ZFS: create a zpool, create the required datasets, and expose those datasets over NFS. If you put all the HDDs on a headless Linux server with ZFS RAIDZ2, you get excellent results. At my place, with this setup and high availability configured [if you have shared storage, you do *not* have to configure replication separately], in the event anything happens to the host running a VM configured for high availability, another node starts the VM in just a few seconds.
    The implementation you have shown is not at all what I would suggest. Reasons: 1. You have to create 3 pools, one on each host, with the same name. 2. There will be a network bottleneck. 3. In the event of any outage, the user stands to lose the data written since the last replication.

    • @GuillermoPradoObando · 4 years ago

      Hi, could you tell me how to expose the storage over NFS? Should all nodes connect to the storage?

    • @gglovato · 4 years ago

      Saying "it's not suggested" is not really true. What Jeff has done is essentially a form of HCI, which is currently suggested and "in vogue": each node has processing + storage and everything is replicated in real time.
      Your setup has a single point of failure in the single storage server, so not much HA in that case (I've seen super-expensive "HA" setups like yours utterly destroyed by a storage failure, with very, very expensive downtime).

    • @mithubopensourcelab482 · 4 years ago

      @@GuillermoPradoObando - Install an NFS server on the host that is supposed to be the shared storage. The /etc/exports file will have the Proxmox nodes' IPs... once that is done, one can add the NFS storage in the Proxmox UI very easily.
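
      (A rough sketch of that, with paths and IPs as placeholders; verify the export options against your distro's NFS documentation:)
        # on the storage host, /etc/exports:
        /tank/vmstore 192.168.1.0/24(rw,sync,no_root_squash)
        # then apply the exports: exportfs -ra
        # on any Proxmox node (or via Datacenter -> Storage -> Add -> NFS):
        pvesm add nfs vmstore --server 192.168.1.50 --export /tank/vmstore --content images,rootdir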

    • @mithubopensourcelab482 · 4 years ago +1

      @@gglovato Not true, sir. The storage server can be easily replicated using ZFS commands, so two storage servers [master and replica] can support a Proxmox cluster of any size. In fact, you can attach more than one Proxmox cluster as well. Albeit you need to make sure that the traffic between the Proxmox nodes and the storage node is on a separate Ethernet adapter.
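
      (The replication described here is presumably incremental zfs send/receive between the two storage boxes; a rough sketch, with dataset, snapshot and host names as placeholders:)
        zfs snapshot tank/vmstore@repl-0101
        zfs send -i tank/vmstore@repl-0100 tank/vmstore@repl-0101 | ssh backup-box zfs recv -F tank/vmstore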

  • @martynayshford4318 · 3 years ago

    Hi Jeff. What does that "Add Storage" tick box in ZFS create actually do? And why does it have to be unticked for subsequent pools? I am using iSCSI/LVM and NFS for VM failover, and whilst failover is great it doesn't actually help my availability in the round, so lately most VMs are on local ZFS storage and backed up to an NFS box. A VM move, though, is a PITA. However, I'm thinking simple storage replication might be the pragmatic way to go. Hence I need to add ZFS pools, and thus my question.

  • @ciaduck · 4 years ago +4

    Now do it again using Ceph! :P

  • @kirksteinklauber260 · 3 years ago

    How do you roll the VM that moved to node 2 back to node 1 once node 1 is available again, without unplugging the cable on node 2 to force it via the HA manager?
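
    (If the VM is HA-managed, a rough sketch of moving it back by hand; the service ID and node name are placeholders, and an HA group with node priorities can automate the failback, so check the ha-manager docs:)
      ha-manager migrate vm:100 node1    # live-migrate the HA resource back
      ha-manager relocate vm:100 node1   # or stop it, move it and restart it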

  • @nethfellearnspiano9655 · 4 years ago

    Have you noticed any performance issues using ZFS mirrors for the VM storage? I'm seeing really sluggish performance on any VMs stored on a mirror (not doing a cluster, just a standalone VM host; in my previous cluster I used Ceph, and I haven't decided on the cluster storage for the future cluster build yet).

  • @WillFuI · 4 years ago +1

    Still don't have a second system, but love this stuff.

  • @QuentinStephens · 4 years ago +1

    Did I miss where you set the IP address for the cluster itself (rather than the nodes)? Doesn't the cluster need an IP address for management?

    • @ahmetemin08 · 4 years ago +1

      As long as the servers can reach each other and have enough bandwidth between them, it doesn't matter. So you could create a cluster between two continents. It is just a regular IP configuration.

  • @adilmehmoodbutt007 · 3 years ago

    Amazing, I love your way of guidance.

  • @greyshadow9498 · 4 years ago +2

    I watch all your videos, but watched this and the homelab setup with particular interest, as this is going to be a pet project of mine as well.
    I would like to go the way you did for cash reasons; plus, being able to build a rackmount server with off-the-shelf PC parts would be better for me, mainly because I always have parts lying around, and I can easily add Noctua fans to the Rosewill case for a bit of quiet vs standard server fans.
    My main issue is availability of X79/X99 boards in the US. I don't have all year to order from China, and I would also rather not spend $300 or more on a mainstream board like Asus.
    That is why I have been looking at the refurb server market, which unfortunately is littered with 1U and 2U Dells and HPs. While there is nothing terribly wrong with them, they do have a few drawbacks: cramped, odd PCIe placements mean no full-size cards, limited CPU cooling options, LOUD fans, even worse 10K drives, and finally they are WAY too deep for the typical off-the-shelf rack mounts you find these days.
    This rack has to go in my utility room just off of my living room; I can't have it sounding like 6PM at O'Hare.
    And tower servers just don't offer enough bang for my buck, and nothing in the way of expandability.
    I am looking to start with a dual X99 in the Rosewill case (or a single if the dual doesn't fit), and a 1500VA/900W battery backup in a 12U enclosed rack. That will leave me 6U for future expansion.
    I'd like to at least set up a second server at some point (for redundancy), as well as maybe a FreeNAS server built from a cheap HP server.
    In the past I have always rented dedicated server space for various projects. But my wife and kids (and yeah, maybe even me) want to get into streaming and YouTubing and podcasting and all that junk, so I need a home network that will be able to serve multiple VMs and be able to edit video using Resolve and do some graphic design (with the help of a mid-level Quadro I can get my hands on).
    Normally I would just rent a server, but I want everything in-house, and to be honest I don't even know if you could edit video remotely.
    TL;DR of my rambling: Thank you for the home lab vids, they have inspired me to actually do for myself what I used to pay others to do!

    • @eDoc2020 · 4 years ago +1

      My HP DL380p Gen8 can take full-size cards and has no problem cooling its CPUs, but yes, it can get loud. With Dell servers there's a way to override the fan curves, so an R710 or newer can be super quiet. I can't imagine any 2U server not accepting full-size cards.
      If you want something smaller, you don't need to use a Xeon E5. It seems lots of people these days are using multicore AMD Ryzen chips, which can sometimes even take ECC RAM.

    • @greyshadow9498 · 4 years ago

      @@eDoc2020 good to know thanks!

  • @linklink14 · 3 years ago

    I picked up some of the Hyve Zeus servers and on all of them the NIC ports do not work. Does anyone know how to fix this? Love the video!

  • @TritonB7 · 4 years ago

    I primarily use VMware, but appreciate the content. May try experimenting with Proxmox in the near future.

    • @mike406 · 4 years ago

      VMware is leagues more stable and enterprise-friendly. Proxmox is nice for a small/home business, but be prepared for it to break A LOT. We run a cluster at my company and I swear if you even breathe on it funny it goes down. It's great that it is free, but take backups frequently.

    • @gglovato · 4 years ago

      @@mike406 The issue is that VMware is stupidly, unholy expensive if you want anything even remotely like what PVE has for free. The cheapest VMware tier (Essentials) has NO HA capability, NO replication, a 3-server limit, and the vCenter VM eats resources like a hog. Plus there's no way to expand that Essentials license.
      If you go to the "a la carte" pricing, oh boy, you're going bankrupt just from licensing VMware, which will probably cost more than your 3 servers combined.
      Even paid PVE (where you only pay a subscription fee for support) is dirt cheap.

    • @pawelhener5338 · 4 years ago

      @@gglovato I am pretty happy with Hyper-V 2016: running 2x 2016 Hyper-V Core and 1x 2016 Server Standard. Windows and Ubuntu VMs are distributed amongst these machines utilizing either replication (file server) or HA features (AD DS, Exchange, Remote Desktop). I believe Hyper-V Core is really decent - and it's free!

  • @User_ML907 · 2 years ago

    Great content.

  • @aoikuroyuri6536 · 3 years ago

    Would this work with externally connected storage? As in, none of the nodes have physical storage, just network-attached storage.

  • @kevinbs05 · 4 months ago

    Is this cluster distributed (i.e. a bunch of containers spread out) with the ability to migrate? Or is it just 1 server doing stuff with 2 backups?

  • @diavuno3835 · 4 years ago

    At 8:48 he showed a Supermicro server... I need to know, what PCI card was that with the dual 2.5" SSDs mounted?!

  • @nnmbnmbnmnm · 3 years ago

    How do we manage split brain? When the old node comes back, will it kill its older copy of the VM and start re-replicating from the failover node, or is that a manual process?

  • @crimsionCoder42 · 2 years ago

    So does a Proxmox cluster increase availability and stability, or reduce the time for task completion by separating out tasks like, let's say, machine learning and data science calculations? Or is it both: tasks are separated across the nodes, and high availability is there by design?

  • @MrEric377 · 3 years ago

    Just a reminder for the shared storage video ☺️

  • @amirmujo1600 · 3 years ago

    Does Proxmox automatically give a static IP to every machine or container? Or do I have to rent an IP from the hosting provider for every machine?

  • @russtysandwiches208 · 3 years ago

    How does it handle all the system resources? Is it like a pool, or are they all separate?

  • @alireza2557-j3k · 3 years ago

    How did you manage the networking part? Apparently you did not explain how the networking works on failover.

  • @bikerchrisukk · 4 years ago

    Very useful, thank you very much 👍

  • @oah8465 · 3 years ago

    Fantastic video. When the VM is started on node 2, does it preserve its original IP address, or will it be started with a different IP?

  • @djvincon · 4 years ago +1

    Very cool! So just a question: could you make the storage highly available as well, so that if one storage node fails the Proxmox nodes switch storage automatically?

  • @junaidij3683 · 2 years ago

    The problem is that every time the VM moves to another server via HA, the OS restarts. How do I solve this?

  • @fixking200 · 4 years ago

    You're a master chef of Proxmox ;D

  • @firewall6810 · 4 years ago

    What about local ZFS storage mounted directly into a virtual TrueNAS server? Any possibilities? Regards from Austria (no kangaroos) 🤪

  • @razredge68 · 3 years ago

    Any update as to when the shared storage for HA video will come out?

  • @kenzieduckmoo · 4 years ago

    My only dislike for ZFS (and LVM) in Proxmox is that it won't allow you to use qcow2 disks if your storage is block storage. I set up a 1TB NVMe on ext4 just for that on mine. Couldn't figure out how to do that in the GUI though (didn't really spend too long looking), so did it in the shell.

    • @davidbentham9586 · 4 years ago

      Yeah, it's a bit of a bugbear as well; the ZFS management for growing the pools is still done via the CLI.
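
    (A rough sketch of the shell route being described, i.e. an ext4 directory storage that allows qcow2; the device name and mount point are placeholders:)
      mkfs.ext4 /dev/nvme0n1
      mkdir -p /mnt/nvme-vm
      mount /dev/nvme0n1 /mnt/nvme-vm          # add an /etc/fstab entry to make it persistent
      pvesm add dir nvme-vm --path /mnt/nvme-vm --content images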

  • @CarlosGomes42 · 4 years ago +1

    Very nice video!
    Please do one on Proxmox networking, VLANs, VXLANs... o/

  • @gabrielporto.mikrotik · 2 years ago

    Hello there Jeff. Hope you're doing well. May I ask you a question? I'm having trouble using HA with my containers. The problem is that it migrates the container, but the container won't start on the other node because of the local PVE storage it was installed on on the origin node. And I couldn't find a way to install the containers on my iSCSI shared drives. Is it possible to do?

  • @marioStortuga · 4 years ago

    Nice, was waiting for this.

  • @hooami6245 · 4 years ago

    I have a Machinist X79 board as well; however, it does not see my 256GB M.2 drive. Did you have to configure the board for it to recognize the drive?

  • @wolfeman781992 · 3 years ago

    What happened to the centralized storage video?

  • @alexantony007 · 3 years ago

    Do the nodes have to be identical in specs and storage?

  • @MartinSeidel-i7f · 1 month ago

    I tried this on 2 servers and 2 Synologys (for quorum) but it didn't work. The 2 servers have different ZFS pool sizes (2.2 and 2.4 TB); is that a problem?

    • @MartinSeidel-i7f · 1 month ago

      Solved! Datacenter -> Options -> HA -> shutdown_policy=migrate
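
      (The same setting can also be put in /etc/pve/datacenter.cfg by hand; a rough sketch, worth verifying against the current datacenter.cfg docs:)
        ha: shutdown_policy=migrate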

  • @chltechnology2588 · 3 years ago

    My Proxmox HA VM 11 got corrupted and is showing "no boot record found". Any solution?

  • @guywhoknows · 4 years ago

    Ahh, I did this before and, well... it didn't go well.
    Three nodes.
    Removed one, and the cluster refused to start.
    Tried to remove the machine, which is not straightforward, and we had booting issues.
    So for high availability, or failover: once failed over, they won't boot if one node is damaged/unavailable. At that point it's much faster to set the whole lot up again and restore from backup than to go through the removal process.
    This issue with Proxmox was so off-putting, along with the storage control and management, that I dumped the lot and only run it as a single-system VM host.
    Also, this isn't the most economical way.
    A NAS or SAN which is addressed allows powered-off metal to take over in the event of a failure.
    Being joined at the storage level makes the failover fairly seamless, depending on the boot times.
    The software-to-hardware standalone setup is very simple to build and program for automatic availability.
    And as you don't have additional hardware running, it saves on power bills for hungry servers...
    #instances.

  • @martinpalenik · 4 years ago

    Great video, Thank You.

  • @DiyintheGhetto · 3 years ago

    If I have a second Proxmox system at my brother's house, how can I do the same thing with High Availability clustering to my home system? Since we are on different networks, I have no clue.

    • @derrydobbie8375 · 3 years ago

      If you only have 2 nodes total you can't due to quorum requiring at least 3 nodes and preferably an odd number if you have more.
      As far as your particular issue, you may be able to use a VPN to do a site-to-site connection between your two networks on a virtual IP address space. That's some tricky business though and I'm not sure how you'd go about it; plus your VPN then becomes a single-point-of-failure for cluster communication. Would probably be better to co-locate the nodes and maybe have a proxmox backup server at another location if you're worried about disaster scenarios.

    • @DiyintheGhetto · 3 years ago

      @@derrydobbie8375 Hello, all of us have static IP addresses, so connecting to each other is not an issue. I can set up a third node and put it in my basement or something if 3 nodes is what's really needed.

  • @soam8175 · 3 years ago

    Nice video. Keep calm and create a new one about Proxmox Backup.

  • @nickkayser3729 · 3 years ago

    Can you or someone on here point me in a direction?
    I'm trying to find out if it would be worth my while to set up a Proxmox cluster of 3.
    Also, would the following hardware configuration work for it:
    Node 1: a single large server with lots of RAM and a heavy-hitter CPU
    Nodes 2 & 3: NUC-like systems with less RAM and CPU
    All storage other than the OS drives would be in Node 1. I do have a NAS that backups are sent to, but I want to keep the VMs local to Proxmox.
    I don't have space for rack-mount anything. My NAS is also currently on old desktop hardware; I'm looking to upgrade both the NAS and the Proxmox systems, but slowly, so I'm just trying to see my options.
    My current Proxmox setup is my old desktop system; it runs Emby, Nextcloud, a media editor, Guacamole, Home Assistant, and maybe another one or two servers for tinkering.

  • @pawitwahib886 · 4 years ago

    What happens if I use the same ZFS pool name but with different-sized hard drives on each node? I have 2x 2TB running on my main server and 2x 300GB SAS drives on the second server. Is that possible?