Very informative, thank you. And this is the first video where the presenter asked for a "like" at a point where I already knew whether or not I liked it. Every other YouTuber asks for a like before I've seen the quality of the video.
This field controls the algorithm that decides which car is placed in which lane (to stay with your example). Also, if you use LACP, the switch can negotiate this with your server. If you use an active-passive LAG or a switch-independent bond type, they can't negotiate.
Thanks so much Tim! This is just great. I'm level 0 at this stuff, but I followed this setup, and after a hard drive failure, as soon as the server connected to the NFS share, there were the backups.
Thanks for the useful list. Did you ever consider creating an Ansible playbook for it? I believe most of the things you showed could be easily automated with Ansible. Not only would it be simpler and safer, you'd also have it automatically documented as IaC.
I only have a very small clue about what you are talking about, and probably will not put it into practice, but I find your videos highly informative and they really spark my curiosity. Keep it up!
There are a whole bunch of features you get with ZFS, and IMO better performance than LVM/ext4. I run a ZFS mirror with two 2TB NVMe drives and have been extremely happy with it so far. I bought two PCIe NVMe cards that hold 2 drives each, so I have room to grow. One note: some containers don't work well with ZFS because they want to use swap files. I had major issues with Kasm because of this, including not being able to use Proxmox Backup to back up containers (LXC and Docker). I had to create ext4 storage for those containers. This is with the current PVE 7.2 and PBS 2.2. I will say one issue with ZFS is over iSCSI: when you reboot, the ZFS pool import process runs before iSCSI, so I have to manually activate my Synology-based iSCSI volumes. Still working on that issue. Thanks for the video Tim!
1. Virtio drivers for Windows: I suggest downloading the latest driver, not the stable one. The stable driver doesn't always work.
2. LACP: miimon is the interval at which the server checks link status to enable/disable ports. It doesn't need to be changed unless your switch isn't working with it.
3. LACP hash: I suggest using layer3+4 mode. It balances better if you only have a single source or destination IP, because it also uses TCP/UDP port numbers in the calculation.
I liked this, then disliked it, just so I could like it again. The NIC team information was especially helpful, but I'm mostly leaving this comment so the channel can grow and more people can find you. Thank you for all that you share with the community.
@@TheAnoniemo That works for MBR but GPT keeps a backup at the end of the disk so you'd have to either overwrite the entire thing or calculate the start sector of the GPT backup. wipefs takes care of all that.
This video helped me a lot. I got a couple of used servers from the Porsche dealership in Jakarta last week: an IBM System x3650 M2, fully upgraded, and an IBM DS3400 storage server. Yes, I know it's an old system, but it was only $355 :D
Good stuff Tim. I've found that a balance-alb bond works pretty well on a quad-port NIC and a basic switch that doesn't support LACP. I've gotten a couple hundred megs when transferring between VMs.
I love you man, you are both clear and knowledgeable... I just dig you. I have been piecing together Proxmox information for over a year, at a sometimes painful rate. Where the hell have you been? More content please!!!
@@TechnoTim Hey, does the 4.99/mo go directly to you, or does YouTube get a cut? It won't change my decision to support, but I just want to know if it goes to you. Second question: I need to look through your video posts, but have you done one on a syslog server? Third: do you use one? Fourth: any opinion on Graylog?
You got it right on the money for link aggregation. It's the same thing as CPU vs. RAM: RAM doesn't speed up your system, it allows you to do more multitasking because there's more memory space. Same thing with a LAGG: you don't get faster speeds per se, but you can transfer more data at once.
First of all, thanks for sharing this video. In my opinion, among the top 10 initial post-installation actions are setting up Postfix and setting up crontab for your rsync jobs, smartctl, logwatch, etc.
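For example, a couple of crontab entries along those lines; this is a rough sketch only, and the device, paths, and backup host are placeholders:

# weekly short SMART self-test, Sundays at 03:00
0 3 * * 0 /usr/sbin/smartctl -t short /dev/sda
# nightly rsync of the ISO store to a backup host at 02:30
30 2 * * * /usr/bin/rsync -a /var/lib/vz/template/iso/ backup-host:/srv/pve-iso/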
Sysprep is a great tool. The P2V tool from Sysinternals can do a hot/live copy, using Volume Shadow Copy data if I recall. Then you can import the output VHD into Proxmox.
Two things I'm interested in knowing about: 1) How can the config of the Proxmox VE server get backed up? 2) Can a cluster be used to restore another Proxmox VE server?

Background: I'm currently migrating off of ESXi and noticed that backing up the Proxmox VE config would require either backing up the entire drive or knowing which files (directories) to back up.

Also, for anyone who uses distributed switching in vSphere and is wondering if Proxmox can do it, the answer is yes. In the System > Network page, Linux VLANs need to be created first (bond0.10, for VLAN 10), then a Linux Bridge with that VLAN port assigned to it needs to be created (vmbr10, for example). From there you can assign the bridge to the VM. Another tip for ESXi migrants: if you want to LAG "uplinks" at the hypervisor level, you simply create a Linux bond listing the ports you want to LAG (without commas, space separated). Then you can create the distributed switches using the information I shared earlier.

Again, I'll be curious to know if there is an easier way to back up the Proxmox VE "configs"; it would be cool to be able to back it up to Google Drive or Nextcloud like OPNsense. Thanks again Tim, great video.
Of course, yes, but manually or via cron jobs. Checklist: 1] Back up your /etc/network/interfaces file. 2] Back up the /etc/pve folder. This is more than enough. You can zip these files, put them on Google Drive or Dropbox, and use them whenever required. Additionally, you can refer to the installation checklist I posted elsewhere in this thread. With that, Proxmox is super stable [never failed] and delivers a production-class, enterprise solution. 10000 times better than ESXi.
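To illustrate the VLAN-on-bond bridging described in the question above, a minimal /etc/network/interfaces sketch; the interface names and VLAN 10 are just assumptions, and it presumes an existing bond0 (see the bond sketch further down the thread):

auto bond0.10
iface bond0.10 inet manual

auto vmbr10
iface vmbr10 inet manual
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0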
OMG, thank you so much! Been quite a while since I installed Proxmox! Deploying a server at my boss's house in the new year (HP Z440 with lots of nice stuff inside) and it'll be running Proxmox. Thanks again :)
Not sure if anyone answered it below, but the reason the ports need to be next to each other is that internally they're basically using breakout lanes, and the internal aggregator requires the link agg to be connected within one group. This is common in switches that have, say, 100G internally but break it out to four 25G ports. Similar for other topologies.
Great vid Techno Tim. Your comments on NIC bonding were great, and I'd have liked to have seen the Network section in the Proxmox web interface after you'd configured it. It was also great to know that you can't have any VMs on a Proxmox server you're joining to the primary. I'm about to do this myself in a couple of weeks.
Excellent video. Hopefully I can get my server and fresh Proxmox install to work. I appreciate all the work you put into the videos. You have helped me get my networking skills up to date. I cannot believe how much I have forgotten in over 10 years, and how much I still remember. Cheers!
I'm a little late, but I didn't see it in the comments: a linked VM from a template has one big advantage: space. If you install one Windows VM and clone it, every full clone requires the complete space; a linked VM just needs the space of the changed data (like snapshots).
The hash policy L2+L3 means the protocol splits connections based on MAC addresses and IPs. I use L3+L4, which means connections are distributed based on IPs and TCP/UDP port numbers. Much better, but the switch must support it.
Thank you very much. I have learned a lot about Proxmox in a pretty condensed way, the way I like it. There might be a difference in version, as I run 7.1-7 now, but...
1. Repos can be added and enabled/disabled from the GUI. However, a pve-no-subscription repo manually added to pve-enterprise.list is not recognized by the GUI, and if I disable the pve-enterprise repo from that list (leaving only pve-no-subscription enabled) then the GUI complains that I would have no PVE updates at all, regardless of pve-no-subscription being enabled in pve-enterprise.list. When the pve-no-subscription repo is added (manually or via the GUI) to sources.list, so at the Debian level, the GUI is fine and stops complaining.
2. For clustering you said a node needs to be clean, with no VMs on it. Maybe that's for secondary nodes only, as my first node joined as the first cluster member with VMs and a template on it. Nothing seems to be lost so far. So the first node, which auto-joins the cluster, seems to be safe and logically exempted.
Hey Tim ! As always, that was a great video ! Very instructive and a pleasure to watch ! But, let me add something, at 18:54, when you talk about things that we might change on our virtual machine, I think you forgot to mention the "Machine-id", which is the unique id for the machine. As this is an important detail, I think it must be told at least twice 😂 ! Keep doing such high quality content !
Tim, wanted to let you know that although the Proxmox documentation has always said that host names, IP addresses, etc. cannot be changed after the node is added, that is not correct and has NEVER been correct. What they should say is "We would rather you not do this, because you have to do it right or the world is sucked into a black hole and everyone dies". So: the node name cannot be changed later easily (it can be, but it's much easier to delete and re-add). But you can simply edit the hosts file and /etc/network/interfaces to change IPs and take total control of the network. One reason for this is to force cluster (inter-server) traffic onto a specific high-speed network while keeping public IPs and VM traffic on a different network, and to force Ceph traffic onto another high-speed network.

I run a phone company serving government offices (including three E911 centers) and thousands of businesses. Downtime is a big no-no. I run multiple 5-server clusters with multiple 100G networks, using balance-xor bonding for Ceph and sometimes cluster traffic. Switches are connected via LACP to dual high-availability Peplink SDX-PRO routers, which are each connected to dual data center router ports. A pair of 32 x 100G (QSFP28) Dell Z9100 switches are connected by 4 100G ports for a 400G trunk. The servers are Dell R730s: dual mid-speed v4 14-core processors (best balance of speed/cores/cost), 384G RAM, 16 drive bays, and a quad 1G NIC in the dedicated NIC slot; those are used for management, back-door access, etc. In some clusters the VM public IP bridge is on a 1G NIC.

Each server has 2 dual-port 100G Mellanox NICs:
NIC 1 port 0 is part of the CEPH balance-xor bond, connected to switch #1
NIC 1 port 1 is part of the CLUSTER balance-xor bond, connected to switch #1
NIC 2 port 0 is part of the CEPH balance-xor bond, connected to switch #2
NIC 2 port 1 is part of the CLUSTER balance-xor bond, connected to switch #2

So, two or three vmbridges depending on when I built the cluster. I used to keep NAS/backup traffic on separate NICs, but once I started using 100G instead of 10G I found that backups cannot congest the cluster network, so why not take advantage of the speed? The overhead of the backup process seems to limit it to about 25G.

BOND0 for CEPH
BOND1 for CLUSTER and possibly backup/NAS and public VM traffic, or also BOND2 for backup/NAS traffic

OR, if you want to separate the cluster and backup traffic, then instead of a cluster bond you use two cluster rings: ring 0 (primary cluster) on NIC 1 port 1, and ring 1 (cluster failover) on NIC 2 port 1. Ring 0 and ring 1 have DIFFERENT SUBNETS AND IPS, i.e. Proxmox is handling the failover instead of Linux. But now you can create an extra vmbr for backup traffic on the same NIC as the failover cluster. Then vmbr0 uses BOND1, possibly also a vmbr1 for public IPs and a vmbr2 for backups/NAS.

Each server has 14 Samsung PM1643 12G SAS SSDs (960GB): 2 for Proxmox, mirrored, and 12 for Ceph. The last few bays on two servers are filled with 4TB drives for NAS use. So I have great performance and NO single point of failure. The Ceph distributed file system delivers about the same speed as a 12G SAS SSD used locally, but I can lose a drive, NIC, switch, router, or uplink port and nothing goes down.

But, to make any of the above work, you really need to get into the hosts table and rename a few things, by making "node05" an alias on a host entry with IPs on the NICs/bonds you want, plus manually adding host entries for the nodes on all the NICs/vmbrs. It takes some thought and planning, but it is not actually complicated.
@@TechnoTim I have... several times. I have been using Proxmox since... 2007? Whenever Proxmox 1 came out. Ping me if you want to see more detail. It is possible to tune a cluster to give VMs the same performance they would have on dedicated machines with fast SSDs. We were upgrading RAM last night, so: bulk migrate from node 1 to nodes 2 and 3, shut down 1, pull and add RAM, put it back in, wait for Ceph to recover (maybe a minute), put the VMs back, do the next one. Zero downtime; we upgraded 5 servers in a couple of hours.
Excellent content! Immediate subscription :) Was just looking at hardware for home lab. Did not know about the IOMMU. Gonna check it out. You saved me a lot of headache!
I've set up Proxmox several times and used it in a test environment, and all went well in that application. What I'm not sure about is how to re-create the machine if I had a hardware failure or the system got corrupted in some form or shape. I would love to see a video on how to recover quickly in such an event if I had a Proxmox server running multiple hosts in a production environment. I probably know just enough to be dangerous... :)
For #2, using the nice utility 'wipefs' to remove the existing partition table is more convenient than running fdisk manually. It also removes LVM and RAID signatures from the disk.
What's the first thing you do after installing Proxmox?
I end up installing a Home Assistant VM
Log in... funny I know right :D!
I'm too new to proxmox to really have a routine of what I'm doing, I just jump straight in installing machines. Need to start playing with containers more though...
Spent 2 hours getting 2 disks/storage spaces to show in the interface, 1 for storing ISOs, 1 for storing VMs... What a ride xD
Check in with Techno Tim as to what I *really* should do first...
Thank you for all your guidance and humor along the way.
tell my parents I love them.
This is such a tremendous video. I've been in IT for over 22 years now and had a Proxmox question today. Heading down the Google rabbit hole led me to this video, which not only answered my question, but taught me 15 other things. I love how the creator of the video is able to clearly explain in 30 seconds what other videos struggle to explain in 10 minutes, and this happens over and over. This 23 minute video contains more actionable information than any 23 *hours* of content I usually consume. I'm truly glad I found this channel today and I can't wait to see what else I'll learn from your channel.
Thank you so much for the kind words! Outside of a few jokes (mainly laughing at myself) I try to keep it strictly business!
I'm a newb to all this, and this happened to me 😂 This had everything I needed.
Johnny Depp alternate version where he became a tech instead of an actor, good video.
Hehehe was just thinking that. He's like the Johnny Depp of Tech!
I said this long time ago, and I'm glad I'm not the only one who thinks that. I love Timm Depp 😍
Thought the same looked in the comments and saw it. Lol
Wow, Now I can't see him in another way 😅
Watch out for Amber Harddrive
Found this channel recently as I decided to setup a home server. In the space of a week I've gone from having zero idea of what to buy and how to set it up, to having a basic parts list and deep diving on Proxmox installations 😂 Can't thank you enough Tim 🙏 you keep making vids, I'll keep watching them mate 👍
The command for number 4???
On the fdisk step to clear out disk partition information, you can avoid all the extra steps (p, d, etc. for each partition) by just hitting g (create a new empty GPT partition table) followed by w (write it to disk)... saves lots of keystrokes when clearing out multiple drives.
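For reference, a non-interactive version of that sequence; a sketch only, and /dev/sdb is a placeholder, so double-check the device name before running it:

# WARNING: wipes the partition table on the target disk.
# 'g' creates a new empty GPT partition table, 'w' writes it out.
printf 'g\nw\n' | fdisk /dev/sdb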
As a systems and network guy I can tell you your LACP extra-lane explanation was spot on! I'm going to steal that because it's so simple yet so clear.
In a little more detail, the LACP participants like the switch and the hypervisor choose which of the physical gigabit ports in the bond to use ("hashing"), based on a number of things such as source IP+port, destination IP+port (and possibly others such as MAC addresses). This means a single TCP connection, which has a static set of IPs and ports, will always use the same switch port, and will never surpass the speed of that one port (i.e. 1Gbit) while another connection may be using the other port, also maxing it out at 1Gbit. Plus of course, if you lose/unplug one, you'll have instant failover.
Thank you!
I'm studying networking with Cisco, and that's not what I'm reading in the NetAcad. EtherChannel (LACP or PAgP) creates a logical link from two or more physical links, and one of the advantages is increased bandwidth. So with 2 x 1Gb ports, the network should use both ports because it sees them as one logical link; the frames are sent with 2Gb of bandwidth, and if one of the cables fails, the other continues to operate and the bandwidth decreases to 1Gb.
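Whichever way the bandwidth math works out, for anyone who wants to try this on Proxmox, a minimal LACP bond plus management bridge in /etc/network/interfaces might look like the sketch below. The NIC names and addresses are assumptions, and the switch ports must be configured for LACP:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0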
In my quest for finding an OS to put on my soon-to-be-built server, I came across your Proxmox tutorials. Absolutely LOVE them.
Thank you so much!
The best homelab content on YT atm. I appreciate that you learn as you go and are upfront with things that you might not fully understand 👍
Wow, thank you!
@@TechnoTim i agree
@@TechnoTim Ya it's really good. Thank you. I've watched 10+ vids and agree. I suggest a Wendel at level1tech's colab!
@2:50 you should be using the command "pveupgrade" instead of "apt dist-upgrade". The former is a wrapper that calls the latter but it also will sync other things like EFI boot partitions, etc.
Be careful however: pveupgrade on some versions of Proxmox automatically runs autoremove as well.
For those running non-PVE kernels or wanting backup kernels, this automatically removes them. It also removes support libraries for packages built from source. Many don't correctly specify their runtime dependencies, expecting build dependencies to remain post-packaging.
I've now spent the better part of the day fighting with a lost boot partition after trying to get iommu running. After hours of trying to fix the bootloader, I reinstalled pve 7.2. I have backups of my VMs so I figured it was a good last resort - no dice, it didn't install an EFI for some reason 👿
One more home-lab related item I do: set the BIOS not to auto-start after power restore (auto-start tends to break a lot of electronics if your grid is a bit unstable, or your lovely neighbor turns the whole building's breakers on-off-on-off 5 times in a row in 10 seconds). Then, to be able to start the machine once the power is back on, I also set the Wake-on-LAN mode to 'g' to enable it. This allows me to start the server using a simple WoL command from any machine, even one without an IP address, as long as it's on the same switch and VLAN.
And then you can go and try to talk some sense to your neighbor, before you find out he is completely drunk and barely stands on his feet :)
Good advice if your electric supply can be a bit iffy
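A rough sketch of the commands involved in that Wake-on-LAN setup; the interface name and MAC address are placeholders:

# On the server: enable magic-packet wake on the NIC
ethtool -s eno1 wol g
# Verify; the output should include "Wake-on: g"
ethtool eno1 | grep Wake-on
# From another machine on the same switch/VLAN:
wakeonlan aa:bb:cc:dd:ee:ff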
The best part was where you talked about the VLANs. I would love to see a shorter form video that has an easy to find name for people looking to do this specific thing. This is the best tutorial I was able to find. ❤❤
I like how you don't sugarcoat the things you don't know/understand. Still, you give what you have learned is the recommended selection.
That reminds me of my data communications professor from grad school. I was in his office one day, worried that there was simply too much material to cover and that I was feeling overwhelmed. He chuckled to himself, and explained that in his 20 years in the industry he felt the same way every single day. The tech industry is a massive behemoth of ideas and concepts, and it grows much faster than any single human can keep up. The important thing is to drill down on a specific sector and get good enough to build a career around it. He also told me to never trust someone who spends their time trying to teach/convince you of some new technology without ever admitting to not knowing some small detail or another, like how we see repeatedly in this video. A few months later he was killed in a school shooting, and I really appreciate you pointing this out and reminding me of this anecdote; he was a great dude who taught me some valuable lessons as I transitioned into a career in tech
As a Proxmox noob, this has been the most helpful starter vid I've come across. thank you so much for taking the time!
One thing you might want to think about:
When creating a template, make sure the main disk is small. You can always expand the disk, but you can't shrink it. Take a large main disk, and you're stuck with it.
> but you can't shrink it
You actually can if you are using ZFS datasets:
0. Free up space in the dataset so it is smaller than the size you want to shrink to
1. Set new smaller quota with zfs for that dataset
2. Edit config file for that container to reflect new (smaller) size.
Also, if you are wondering, this is not implemented in the GUI (atm) since (afaik) ZFS is special in being able to shrink datasets like that. A concrete sketch of the steps follows below.
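A sketch of those steps for an LXC container; the dataset name and container ID are made up, and note that Proxmox may use refquota rather than quota on container subvols, so check with 'zfs get quota,refquota' first:

# 1. (after freeing space inside) set a new, smaller quota
zfs set quota=8G rpool/data/subvol-101-disk-0
# 2. reflect the new size in the container config:
#    edit the rootfs line in /etc/pve/lxc/101.conf, e.g. size=8G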
@@gamerbene19 You can manually edit the size of a disk in the VM config after shrinking it inside the VM. Just like in VMware. This works not only on ZFS but on regular LVM too.
So I think I may have made this mistake. My LVM partition is 2TB. Do I need to shrink this? If so, how?
I take the following care while installing Proxmox.
1. Boot drives - 120 GB SSD x 2 - select ZFS - super useful if something breaks while updating: take a boot-drive snapshot using the zfs command before updates. Plus you get added redundancy. No other spinning disks are added to Proxmox. It's not needed, and the compute power of Proxmox can be fully utilised if you have a separate storage server [explained in the following points].
2. Keep an NFS server ready - I normally go with 5 spinning disks on a separate host [a physical machine with at least 6 cores], again with 2 x 120 GB SSDs as boot drives [in RAID 1] and the 5 spinning disks on ZFS [choose RAIDZ2 for the RAID]. Create a dataset and export it as NFS. I call this the storage server. Keeping a separate storage server is super useful: you get full flexibility to do many things. With ZFS you can set automatic snapshots with the cron utility. I generally create a dataset for each virtual machine. To a separate NAS I export the ZFS snapshots. [My snapshot policy: every 15 mins with an hour's life = 4 snapshots; every hour with a day's life = 24 snapshots; every day with a 1-week life = 7 snapshots; every month with a 4-month life = 4 snapshots.] So you will never feel sorry if, God forbid, something happens to your Proxmox server, VMs, storage, etc.
Only with a separate storage server can you have very smooth, fast live migration of VMs if you have a cluster. The I/O overheads are taken by the storage server instead of Proxmox.
3. Create a separate and exclusive network between Proxmox and the storage network, without any gateway, and connect the storage with a separate, unmanaged, cheap gigabit switch.
4. Edit the /etc/hosts file, add the storage server IP, and add the storage host as storage.myoffice.local. This lets me change the storage server IP later if required.
5. Install a few packages on Proxmox - zip, mlocate, net-tools, fail2ban, rkhunter, vim, git, ifupdown2.
6. Install a proxy manager and expose Proxmox on port 80 instead of port 8006. You can also apply a Let's Encrypt certificate.
7. I choose to install containers for Linux VMs [they are super cool and barely take up RAM], and you can reset the password and IP address from the Proxmox UI itself.
8. For backups, I install a separate VM with Proxmox Backup Server, integrate it with the Proxmox host, and get differential backups, which are very fast.
9. I generally disable the updates. I carry out a Proxmox update once a month, after taking a live snapshot of my boot drives (see the sketch after this list).
10. I have tried cloud-init for Windows VMs - but not very successfully.
11. Enable 2FA for the admin UI [with this you can expose the Proxmox server safely].
12. I do not overprovision any of the VMs [in terms of cores and RAM]. Use max 80% of the host.
13. Install virtio drivers for every Linux and Windows VM.
14. Create a separate user for UI management. Never log in with root.
15. Create separate datasets for ISO files and backups. All VM conf files are backed up [very small - generally 1 KB] and kept on Google Drive.
With this, Proxmox is super stable [never failed] and delivers a production-class, enterprise solution.
If you like these tips, you can add them to your Git repo - with due credit.
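The pre-update snapshot from item 9 and the hosts entry from item 4, sketched out; the pool, dataset, and IP below are assumptions for a default ZFS install:

# snapshot the root dataset before updating
zfs snapshot rpool/ROOT/pve-1@pre-update-$(date +%Y%m%d)
# list snapshots, and roll back if an update goes wrong
zfs list -t snapshot
# zfs rollback rpool/ROOT/pve-1@pre-update-20240101

# item 4: pin the storage server by name
echo "192.168.100.10 storage.myoffice.local storage" >> /etc/hosts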
interesting thx
The xmit hash policy configures how to reduce (hash) the outgoing network packet to generate a number (which is then used to determine which port to send it out over). Having a stable distribution makes sure that a flow of packets all goes out over the same interface (which avoids reordering). layer3+4 means IP addresses and ports are hashed. This is good for load balancing to multiple clients, but also to the same device with multiple parallel TCP streams, as they randomly use different interfaces. Lower layer2 hashes might be needed if you use tunnels, where the hash does not know the packet content or cannot see the actual flow details. They will send all traffic to the same device over the same port (less parallel, but this can be desirable to avoid one client getting all the bandwidth, and is usually good for servers with many clients). BTW, each side determines this only for its outgoing packets. When testing such LAGGs with iperf, make sure to use multiple TCP streams.
Thank you for the explanation!
Thank you! This should be pinned IMHO.
@@pskry thanks, fixed some typos ,)
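To test a LAGG with multiple TCP streams as suggested above, iperf3 works; the server address is a placeholder:

# on the server:
iperf3 -s
# on the client: 4 parallel streams, so the hash can spread flows
iperf3 -c 192.168.1.10 -P 4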
Not sure if someone pointed this out, but the reason it gives you a range that has to be physically "near" is that each block of 4 ports is its own little cluster. In an 8-port switch there are two boards with 4 ports each, and those are then tied to the backplane; in a 16-port switch there are 4 boards with 4 ports each, etc. It dawns on me now that I have never tried to bond ports that weren't on the same switch group on the big-boy switches we use at work, but I wouldn't be entirely shocked if they had a similar restriction (though port groups larger than 4 definitely exist).
Great video. One thing I discovered when trying to enable IOMMU on Proxmox running on ZFS is to add this (root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on rootdelay=10) to /etc/kernel/cmdline and then run (update-initramfs -u -k all) and (pve-efiboot-tool refresh), as ZFS somehow ignores the default GRUB config when running update-grub. Hope this helps those who are having trouble enabling IOMMU.
Thank you!
From what I understand (and read on the wiki) this is for UEFI boot, while grub is for legacy/bios boot. The guides also usually mention both places. So which you need depends on your boot method I think?
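Whichever boot method applies, a quick sanity check after rebooting; nothing here is specific to ZFS or GRUB:

# look for "DMAR: IOMMU enabled" (Intel) or "AMD-Vi" messages
dmesg | grep -i -e dmar -e iommu -e amd-vi
# IOMMU groups should be populated once it's active
ls /sys/kernel/iommu_groups/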
GREAT content, going down the Proxmox rabbit hole myself, self-taught. Good tips on cloning and clustering - cheers!
Great job Tim on your videos. I learned several things, as I'm new to Proxmox itself. Did you ever get your answer on the LAG? miimon is basically how often the bond checks link state (in milliseconds), which determines how quickly traffic switches to the other link(s) when one goes down; 1000 is 1 sec. For the hash part you selected, L2+L3: it takes the src/dst MAC and IP, does an XOR on the bits and a modulo of the port count to get a number from 0 to however many links are in the LAG, and uses that to determine the port link that transmits that specific MAC/IP src/dst pair. As you can see, it does not "load balance" between the links; you can still oversubscribe a link. In your freeway scenario, think of a toll station before the freeway asking your destination and then directing you to the lane to use. There are other options like TLB/ALB that get closer to true load balancing, but that is a much deeper topic and also depends on what your switch side supports and its balancing algorithms. Anyway, keep up the great work, and you have me as a subscriber.
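A toy illustration of that XOR-and-modulo idea in shell; the MAC bytes are made up, and real bonding drivers hash more fields than this:

# last bytes of the src and dst MAC, XORed, modulo the link count (2)
echo $(( (0x3A ^ 0x21) % 2 ))   # prints 1 -> this flow uses link 1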
11:00 ...Choose Life. Choose a job. Choose a career. Choose a family. Choose a... sorry but I couldn't help myself not to drop Trainspotting reference. Excellent video. Got my like and sub.
Great movie!
I spent the first few minutes of the video thinking about what didn't fit... Then I noticed you aren't wearing a cap today 😂 Thank you for your work and the videos. Greetings from Germany
Just trying to mix up the YouTube algorithm. Plus, I left it in the car and didn't want to go grab it.
Kudos. Straight to the point, so you don't get the unnecessary, cumbersome, and sometimes pointless guide filler where you just watch a progress bar, etc.
Love your videos Tim, the fact that you recently started creating videos and just like that we all depend on them is amazing, I love the way you explain things, the content of your videos and even your background, keep up the amazingly detailed work, the key for me is on the details that others ignore..!
First thing I do is pull up a @TechnoTim video and follow along, pausing as needed, to set up my Proxmox and VMs. You're a lifesaver, man. Between you and @NetworkChuck I have repurposed some older equipment to the point it is now invaluable. Proxmox running pfSense + TrueNAS Scale + Ubuntu + Plex server + Samba + home security video/audio/alarm linked directly to my cell phone.
Hi Tim,
Thanks for this video. I have to say that I really like your layout and how you explain what you do and the reason why. Many tutorials advise people how to, but don't ever touch on why.
Terrific stuff!
Wow, thanks!
@@TechnoTim I agree! Thank you for all this, best videos! If I may just one thing... I find your tutorials sometimes too fast for people not that familiar with linux, command line, etc. :-)
@@vti5 tim is the goat
Thank you Sir! New to Proxmox, but I've built labs for myself for 20 yrs. Having used VMs, I followed on to this, and hence I am here. Cheers!
May I suggest doing a video with Lawrence Systems Tech on the network side of link aggregation. He has a lot of knowledge about setting up a Ubiquiti network.
I'd love to! (If he knows who I am)
@@TechnoTim send him an email, with a link to your channel, and then you can talk to the guy and see what he says
Just installed Proxmox for the first time, two hours ago, while watching your howto-video. Got my first ubuntu VM up easily.
I've got a home lab with three old HP DL380 G6/G7, currently running VM's with KVM on Ubuntu Server 18. Got it nicely scripted, via virsh, including cloning and creating custom netplan and hostname files. Never knew about that machine-id and DHCP issues, but I use static IP anyway, so it's never been a problem.
Just wanted to thank you for your channel. Really high on content!
Nice!
Hey man, could you do a video about monitoring your rancher cluster with prometheus, grafana etc, thanks!
Hi Tim, this is the first time that I fully understand how much work is put into making videos like this. Mate, absolutely Bloody Awesome
Been building a homelab, and your content has been really nice. One of the better channels I've come across. You're easy to understand, don't ramble, don't get too better than thou. Just useful and solid info.
Strikes me because of another Proxmox guide video I watched. The dude was some big-time Linux user. Old-school type of guy, lives in the command line. Which is fine; that's his gig. I get pretty deep into my own topics. But from a noob's perspective, it was just an elitist interjecting his own views/issues into it, as opposed to just giving a good general overview. Basically he would command-line stuff because he didn't like the GUI. But that doesn't help me understand. Is this a command-line task? Or is it a GUI task and you just don't like the GUI? Ya know. What is it about the GUI you don't like? What options is it missing? Why would we need that option? Stuff like that.
So, just realized I've been gravitating towards your videos and wanted to give a little nod.
Thank you so much for the kind words! Glad you enjoyed it!
The Unifi Pro "physically next to each other" for teaming ports is maybe because they use more than one switch chip at the hardware level, so teaming only works well for ports close to each other. Ports further apart would have to traverse through a longer logical path and take more cycles.
It may not be a huge problem for performance but they may have made that logical rule to keep performance high in the specs.
I'll come back to this video in a few months when I finally buy my rackmount server
We'll be here!
smartctl -a doesn't say anything about whether SMART monitoring is enabled via a daemon or other monitoring service. It is a manual read of a device's SMART data, and it can run independently of smartd or other daemons. It does say something about whether reading SMART is possible at all, though, as some older RAID devices don't pass that data through to the host.
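In other words, the one-off read and the daemon are separate things; /dev/sda is a placeholder:

# manual, one-off read of SMART data:
smartctl -a /dev/sda
# the continuous monitoring piece is a separate service:
systemctl status smartd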
Bork: (Verb) The highly technical term for messing something up. "After deleting the wrong line from my config file, I borked my system."
fun fact: the word "borked" was named after a man. Imagine your last name being used to describe fucking something up lol.
I am glad my name is not Bork. 🙂
I always thought it was just an internetism. TIL 😁
This training video is dense and incredibly useful. Thank you for producing this! I just bought a $150 Chuwi LarkBox X mini PC for Proxmox to run the Wazuh open-source SIEM and whole-house ad blocking, plus pfSense if the package will fit in 12GB of DDR5 RAM. Proxmox is running fine. Now I'm ready to add the Wazuh server ("Whaaat's up" is what I call it). Again, thank you. Great info.
You're very welcome!
Great video and excited to see how you'll implement HA with Kubernetes
Done! youtu.be/UoOcLXfa8EU
Cool! Much appreciate your honest approach to "I don't know how this works." and especially the fact that you took the time to share, man! Means you're a genuinely good guy! Cheers!
I learn from you a lot bro.. keep sharing for everyone.. love your tutorials bro..
Thanks, will do!
Wow. This is Super helpful. You setup everything I was wanting to setup, and honestly to best IT practices. Fantastic work. Thank you so much for making this terrific video!!
Glad it helped!
A lovely little "glitch in the Matrix" moment at 18:55 😃 Great stuff! Since PVE is under the AGPLv3 license, which permits any kind of modification without redistributing, I also remove the "free version" nag screen when deploying for my private use.
Haha! I took the red pill 💊
Dude I thought I had a mini stroke or something
Wowie Wowie, been in this business 50+ years, this is awesome and I now understand our VM tech people a lot more. More comments to follow
And this one slipped by me: running a performance benchmark for a baseline, to ensure expectations are aligned with the hardware capacity.
Thank you!
What benchmark software do you run?
Excellent video!
I have another tip for Windows VMs that accelerates the installation process.
Add two CD drives for installation; that way you don't have to switch CDs when you have to install the virtio drivers (if you use virtio). Once Windows is installed, shut it down, remove the drives, and you are done.
This is typically a one-time-only thing, since once you have your first "base" Windows installation you can clone it very fast or convert it to a template.
You can also have a pendrive with vm backups for starting new deployments very quickly
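If you prefer the CLI over the GUI for the two-CD trick, a rough qm sketch - the VMID (100) and ISO file names are placeholders for whatever you have uploaded:
qm set 100 --ide2 local:iso/windows.iso,media=cdrom     # Windows installer
qm set 100 --ide0 local:iso/virtio-win.iso,media=cdrom  # virtio driver disc
qm set 100 --delete ide0,ide2                           # detach both CD drives once Windows is installed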
Couple things to add. I just reinstalled Proxmox and found the vfio modules were loaded by default. It's also worth noting that apt and Aptitude aren't the same thing. Those are little things. Overall, I'm a big fan of what you're doing here.
Your video made me even more excited to get a home server and set up Proxmox. Thanks for the great content.
At this point, the smartest thing to do is to automate those tasks. Because it is the check list you follow on every install.
Be lazy. Don't repeat yourself. And use your energy wisely.
I am trying to set up some Ansible scripts to prepare a new proxmox server before joining it to the cluster and to manage scheduled updates over time. A bit of work required but much better than having to rebuild everything from scratch in the future.
Hey Tim. Very good video and you brought up some great points, I go through the same thing. I have installed Proxmox at least 20 or more times and every time I forget something so I started keeping a list as well. Thanks for your insight.
For everyone who uses Proxmox 8 and is watching this video:
Proxmox 8 is based on Debian 12 (and ships a newer kernel), so you have to use bookworm instead of buster when changing the sources.list.
Does the command have to change as well? The version he had, with bookworm substituted for buster, doesn't seem to work.
After the change to bookworm, the update always fails.
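For reference, the repo lines on Proxmox 8 should look like this (a sketch of the two files; comment out the enterprise repo if you have no subscription):
# /etc/apt/sources.list.d/pve-enterprise.list
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
# /etc/apt/sources.list - add the no-subscription repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription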
So +1 for tip #10. Things I would add: tuning ZFS to your dataset, configuring the UPS daemon, and configuring ZED and email alerts. I wish the Proxmox devs would add a GUI to configure email alerts.
Totally agree!
I like it; I just found that fdisk doesn't always remove ZFS metadata from the disk. I prefer to use "wipefs -a" to clean a disk.
Good call!
The most consistent one I found was gdisk. I created a script that I can run on all disks when playing around with a rebuild.
#!/bin/bash
# Zap GPT and MBR structures on the given disk, e.g.: ./zap.sh /dev/sda
(
echo x # enter gdisk's expert menu
echo z # zap (destroy) GPT data structures
echo Y # confirm the zap
echo Y # also blank out the MBR
) | gdisk "${1:?usage: $0 /dev/sdX}"
haha probably should have looked a lil more before I posted this exact comment pretty much... D'oh! haha great call tho :) I work at 45Drives in R&D/Engineering and we do a lot of work with ZFS.
really is as simple as running wipefs -a /dev/sd[a-z], or whatever disk range you want, to wipe all those disks in a single command.
@@mitcHELLOworld wipefs erases the 'file system signature'
dd if=/dev/zero . . .
Very informative, thank you. And this is the first video where the presenter asked for a "like" at a point where I knew whether or not I liked it. Every other youtuber asks for a like before I've seen the quality of the video.
I believe layer 2+3 means it uses a combination of MAC address hashing (layer 2) and IP address (layer 3)
Thank you so much!
This field controls the algorithm that decides which car is placed in which lane (to stay with your example).
Also, if you use LACP the switch can negotiate this with your server. If you use an active-passive LAG or a switch-independent bond type, they can't negotiate.
This is a fantastic initial checklist for building a PROXMOX server... thank you!
Thank you!
jack sparrow teaching me tech now hell yeah!
Thanks so much Tim! This is just great. I'm level 0 at this stuff but i followed this setup and after a hd failure, as soon as the server connected to the NFS share ----- there were the backups.
Thanks for the useful list. Did you ever consider creating an Ansible playbook for it? I believe most of the things you showed could be easily automated with Ansible. Not only it would be simpler and safer, you'd have it automatically documented as IaC.
Yes! Soon!
I only have a very small clue about what you are talking about, and probably will not put it to practice, but find your videos highly informative and they really spark my curiosity. Keep it up!
Thank you! I know it's a lot to take in, but if you start with a small project and build from there you have know this in no time!
very cool man, how do you backup the vmhost/proxmox itself?
Wow, this was so clear and helpful. You're a great speaker. Subscribed and I'll see you on Twitch!
Hey Tim, amazing video with great content. Can you tell if there any performance benefits of using ZFS instead of LVM?
Hey! I don't think there are, at least for my VM workload.
There are a whole bunch of features you get with ZFS and, IMO, better performance over LVM/EXT4. I run a ZFS mirror with two 2TB NVMe drives. I have been extremely happy with it so far. I bought two PCIe NVMe cards that will hold 2 drives each, so I have room to grow. One note: some containers don't work well with ZFS as they want to use swap files. I had major issues with KASM because of this, including not being able to use Proxmox Backup to back up containers (LXC and Docker). I had to create EXT4 storage for those containers. This is with current PVE 7.2 and PBS 2.2. I will say one issue with ZFS is over iSCSI: when you reboot, the ZFS pool import process runs before iSCSI, so I have to manually activate my Synology-based iSCSI volumes. Still working on that issue. Thanks for the video Tim!
1. virtio drivers for Windows: I suggest downloading the latest driver, not the stable one. The stable driver doesn't always work.
2. LACP: miimon is the interval at which the server checks link status to enable/disable ports. It doesn't need to be changed unless your switch isn't working with it.
3. LACP hash: I suggest using layer3+4 mode; it balances better when you only have a single source or destination IP, because it also uses TCP/UDP port numbers to calculate the load balancing.
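As a concrete sketch, such a bond in /etc/network/interfaces could look like this - the NIC names (eno1, eno2) are placeholders for your hardware:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad              # LACP
    bond-miimon 100                # link-check interval in ms
    bond-xmit-hash-policy layer3+4 # hash on IPs + TCP/UDP ports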
4:04 `sfdisk --delete /dev/sdX` might save you some time by skipping all that interactive stuff.
I liked this then disliked it just so I could like it again. The nic team information was especially helpful, but mostly leaving this comment so the channel can grow and more people can find you. Thank you for all that you share with the community.
easier to prep disks with 'wipefs -a /dev/sd{X..Z}'
Thank you!
I usually just dd /dev/zero to the disk to overwrite the boot section, but I think that's kinda dirty...
@@TheAnoniemo That works for MBR but GPT keeps a backup at the end of the disk so you'd have to either overwrite the entire thing or calculate the start sector of the GPT backup. wipefs takes care of all that.
If I'm not mistaken, gdisk's z (zap) does the same thing non-interactively: 'sgdisk -Z /dev/sd{X..Z}'
This video helped me a lot. I got a couple of used servers from the Porsche dealership in Jakarta last week: an IBM System x3650 M2, fully upgraded, and an IBM DS3400 storage server. Yes, I know it's an old system, but it was only $355 :D
😡 Why do I always find these AFTER the fact? 😂
Turn off postfix setting.
Good stuff Tim. I’ve found that a balance-alb bond works pretty good on a quad port NIC and a basic switch that doesn’t support LACP. I’ve gotten a couple hundred megs when transferring between VMs.
Thanks for the info!
I have no idea why people use truenas or ZFS (on home servers at least). They're INSANELY resource-hungry.
Sir you are awesome and I appreciate your generosity on sharing your experiences. As I start this journey I am really grateful!
apt install ifupdown2
thank you!
I love you man, you are both clear, and knowledgeable.. I just dig you I have been piecing together proxmox information for over a year at a sometimes painful rate/experience.. Where the hell have you been.. more content please!!!
Awesome, thank you!
@@TechnoTim hey, does the 4.99/mo go directly to you or does YouTube get a cut? It won't change my decision to support, but I just want to know if it goes to you. 2nd question: I need to look through your video posts - have you done one on a syslog server? 3rd: do you use one? 4th question: Graylog - any opinion?
You got it right on the money for Link aggregation.
It's the same thing as CPU vs RAM.
RAM doesn't speed up your system; it allows you to do more multitasking because of more memory space.
Same thing with a LAGG: you don't get faster speeds per se, but you can transfer more data at once.
Thank you!
First of all, thanks for sharing this video. In my opinion, among the top 10 initial post-installation actions are setting up Postfix and setting up crontab for your rsync jobs / smartctl / logwatch / etc.
Thanks for sharing
SysPrep is a great tool. The P2V tool from Sysinternals can do a hot/live copy, using Volume Shadow Copy data if I recall. Then you can import the output VHD into Proxmox.
A few years later, I still watch this video when I revisit proxmox.
One of the most interesting, helpful, and insightful channels on the Tube. Thanks so much for sharing, and keep ‘em coming
Two things I'm interested in knowing about:
1) How can the config of the Proxmox VE server get backed up?
2) Can a cluster be used to restore another Proxmox VE server?
Background
I'm currently migrating off of ESXi and noticed that backups of the Proxmox VE config would require either backing up the entire drive or knowing which files (directories) to back up.
Also for anyone who uses Distributed switching in VSphere and is wondering if Proxmox can do it, the answer is Yes. In the System > Network page Linux VLANs would need to be created first (bond0.10, for vlan 10), then a Linux Bridge with that vlan port assigned to it would need to be created (vmbr10 for example).
From there you can assign the bridge to the VM. Another tip for ESXi migrants: if you want to LAG "uplinks" at the hypervisor level, you simply create a Linux bond listing the ports you want to LAG (space-separated, no commas). Then you can create the distributed switches using the information I shared earlier.
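A sketch of that VLAN-on-bond setup in /etc/network/interfaces - bond0 and VLAN 10 are example names:
auto bond0.10
iface bond0.10 inet manual

auto vmbr10
iface vmbr10 inet manual
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0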
Again, I'll be curious to know if there is an easier way to backup the Proxmox VE "configs", it would be cool to be able to back it up to Google Drive or Next Cloud like OPNSense.
Thanks again Tim, great video.
Of course, yes - but manually or via cron jobs.
Checklists -- 1] Backup of your /etc/network/interfaces file.
2] /etc/pve - folder
This is more than enough.
You can zip these files and put it on googledrive or dropbox. Use it whenever required.
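A minimal sketch of that backup, assuming you just want a dated tarball to push to cloud storage (note /etc/pve is the pmxcfs mount, so take this while the node is running):
tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve /etc/network/interfaces
# then copy the tarball to Google Drive / Dropbox with your tool of choice, e.g. rclone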
Additionally, you can refer to the following tips.
This is the care I take while installing Proxmox.
1. Boot drives - 2 x 120 GB SSD - select ZFS - super useful if something breaks while updating: take a boot-drive snapshot with the zfs command before updates. Plus you get added redundancy. No other spinning disks are added to Proxmox; they're not needed, and Proxmox's compute power can be fully utilised if you have a separate storage server [explained in the following points].
2. Keep an NFS server ready - I normally go with 5 spinning disks on a separate host [a physical machine with at least 6 cores], again with 2 x 120 GB SSD as boot drives [in RAID 1] and the 5 spinning disks under ZFS. Create a dataset and export it as NFS - I call this the storage server. Keeping a separate storage server is super useful; you get full flexibility to do many things. With ZFS you can set automatic snapshots with cron (see the sketch after this list). I generally create a dataset for each virtual machine. To a separate NAS I export the ZFS snapshots. [My snapshot policy: every 15 mins with a 1-hour life = 4 snapshots; hourly with a 1-day life = 24 snapshots; daily with a 1-week life = 7 snapshots; monthly with a 4-month life = 4 snapshots.] So you will never feel sorry if, God forbid, something happens to your Proxmox server, VMs, storage, etc.
Only with a separate storage server can you have very smooth, fast live migration of VMs if you have a cluster. IO overheads are taken by the storage server instead of Proxmox.
3. Create a separate, exclusive network between Proxmox and the storage server without any gateway, and connect the storage over a separate, cheap unmanaged gigabit switch.
4. Edit /etc/hosts and add the storage server IP with a hostname like storage.myoffice.local. This lets me change the storage server IP later if required.
5. Install a few packages on Proxmox - zip, mlocate, net-tools, fail2ban, rkhunter, vim, git
6. Install a proxy manager and expose Proxmox on port 80 instead of port 8006. You can also apply a Let's Encrypt certificate.
7. I choose to install containers for Linux "VMs" [they are super cool and barely take up RAM], and you can reset the password and IP address from the Proxmox UI itself.
8. For backups, I install a separate VM with Proxmox Backup Server - Integrate with Proxmox Host - get differential backups, which are very fast.
9. I generally disable updates. I update Proxmox once a month, after taking a live snapshot of my boot drives.
10. I have tried Cloud-Init for Windows VMs - but wasn't very successful.
11. Enable 2FA for admin UI [ With this you can expose Proxmox Server safely ]
12. I do not overprovision any VMs [in terms of cores and RAM]. Use at most 80% of the host.
13. Install virtio drivers in every Linux and Windows VM.
14. Create a separate user for UI management. Never log in with root.
15. Create separate datasets for ISO files and backups. All VM conf files are backed up [very small - generally 1 KB] and kept on Google Drive.
With this, Proxmox is super stable [never failed] and delivers a production-class, enterprise solution.
10000 times better than ESXi
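A minimal cron sketch for the snapshot schedule mentioned in point 2, assuming a hypothetical dataset tank/vms (in crontab, % must be escaped as \%; pruning old snapshots is not shown):
# /etc/cron.d/zfs-snapshots
*/15 * * * * root zfs snapshot -r tank/vms@auto-$(date +\%F-\%H\%M)
# prune snapshots older than your retention window with a script or a tool like zfs-auto-snapshot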
OMG, thank you so much! Been quite a while since I installed Proxmox! Deploying a server at my boss's house in the new year (HP Z440 with lots of nice stuff inside) and it'll be running Proxmox. Thanks again :)
I would recommend setting up that bond, even if you're only using one NIC in case you need to add another one later.
Not sure if anyone answered it below, but the ports needing to be next to each other is because internally they're basically using breakout cables, and the internal aggregator requires the link agg to be on the same group. This is common in switches that have, say, 100G inside but break it out to four 25G ports. Similar for other topologies.
Great vid Techno Tim. Your comments on NIC bonding were great, and I'd have liked to have seen the Network section in the Proxmox web interface after you'd config'd that. It was also good to know that you can't have any VMs on a Proxmox server you're joining to the primary. I'm about to do this myself in a couple of weeks.
Great suggestion!
Excellent video. Hopefully I can get my server and fresh ProxMox install to work. I appreciate all the work you put into the videos. You have helped me get my networking skills up-to-date. I cannot believe how much I have forgotten in over 10 years, and how much I still remember. Cheers!
Thank you! You got this!
I'm a little late, but I didn't read it in the comments:
A linked VM from a template has one big advantage: space. If you install one Windows VM and clone it, every full clone will require the complete space; a linked clone just needs the space of the changed files (like snapshots).
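A quick qm sketch of the difference - the template VMID (9000) and clone IDs are placeholders:
qm template 9000                          # convert the base VM into a template
qm clone 9000 101 --name win-linked       # linked clone (default from a template), near-zero extra space
qm clone 9000 102 --name win-full --full  # full clone, copies all disks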
The hash policy L2+L3 means connections are split based on MAC addresses and IPs. I use L3+L4, which distributes connections based on IPs and TCP/UDP port numbers - much better, but the switch must support it.
Thank you!
I had the Problem with the Machine-ID and the DHCP Server today... Perfect timing, now I know how to fix it, thx!
nice!
i think this guy deserves way more recognition. Until now I've avoided getting into proxmox. sorry Jeff @ craftcomputing...
Thank you! I love CraftComputing and Jeff has inspired me in many ways and almost daily with his awesome videos!
Never realized Johnny Depp is so good in IT!
Thank you for the awesome video and channel! 😉
Thank you! 😀
20:34 "Then we'll name this cluster, then you'll wanna name this cluster"
lol thank you for this tutorial on things though! Great help!
Need to have a talk with the editor 😂
Thank you very much, I have learned a lot about Proxmox in a pretty condensed way - the way I like it.
Might be a difference in versions, as I run 7.1-7 now, but...
1. Repos can be added and enabled/disabled from the GUI.
- However, a pve-no-subscription repo manually added to pve-enterprise.list is not recognized by the GUI, and if I disable the pve-enterprise repo in that list (leaving only pve-no-subscription enabled) then the GUI complains that I would have no PVE updates at all, regardless of pve-no-subscription being enabled in pve-enterprise.list.
- When the pve-no-subscription repo is added (manually or via the GUI) to sources.list, so at the Debian level, the GUI is fine and stops complaining.
2. For clustering you said you need a clean node, with no VMs on it. Maybe for secondary nodes only, as my first node joined as the first cluster member with VMs and a template on it. Nothing seems to be lost so far. So the first node, which will auto-join the cluster, seems to be safe and logically exempted.
Hey Tim ! As always, that was a great video ! Very instructive and a pleasure to watch ! But, let me add something, at 18:54, when you talk about things that we might change on our virtual machine, I think you forgot to mention the "Machine-id", which is the unique id for the machine. As this is an important detail, I think it must be told at least twice 😂 !
Keep doing such high quality content !
Thank you so much! I do have instructions on how to remove this in the documentation in the description. Thank you for the clarification!
@@TechnoTim lecatou is joking on you, because you said machine id 2 times in sequence at 18:54. :)
NICE!!! Very clear, to the point, hitting the got cha's, etc. etc. I just found your channel. You're Rock'n it Buddy. thanks for sharing!
Tim, wanted to let you know that although all the Proxmox documentation has always said that host names, IP addresses etc cannot be changed after the node is added, that is not correct and has NEVER been correct. What they should say is "We would rather you not do this because you have to do it right or the world is sucked into a black hole and everyone dies".
So - the node name cannot be changed later easily (it can be, but much easier to delete and re-add). But you can simply edit the hosts file and /etc/network/interfaces to change IP and take total control of the network. One reason for this is to force cluster (inter server) traffic onto a specific high speed network, while keeping public IPs and VM traffic on a different network, and to force CEPH traffic onto a different high speed network.
I run a phone company serving government offices (including three E911 centers) and thousands of businesses.
Downtime is a big no-no.
I run multiple 5 server clusters with multiple 100g networks. using balance-XOR bonding for CEPH and sometimes cluster traffic
Switches are connected LACP to dual high availability Peplink SDX-PRO routers, which are each connected to dual data center router ports
pair of 32 x 100g (QSFP28) Dell Z9100 switches connected by 4 x 100g ports for a 400g trunk.
Dell R730 dual mid-speed v4 14 core processors (best balance of speed/cores/cost), 384G ram, 16 drive bays.
quad 1G NIC in dedicated NIC slot. these are used for management, back door access etc. In some clusters the VM public IP bridge is on 1gb nic
2 dual port 100g mellanox NICs
NIC 1 port 0 is part of CEPH balance-XOR bond, connected to switch #1
NIC 1 port 1 is part of CLUSTER balance-XOR bond, connected to switch #1
NIC 2 port 0 is part of CEPH balance-XOR bond, connected to switch #2
NIC 2 port 1 is part of CLUSTER balance-XOR bond, connected to switch #2
so - two or three vmbridges, depending on when I built the cluster. I used to keep NAS/backup traffic on separate NICs, but once I started using 100g instead of 10g I found that backups cannot congest the cluster network, so why not take advantage of the speed? The overhead of the backup process seems to limit it to about 25g.
BOND0 for CEPH
BOND1 for CLUSTER and possibly backup/NAS and public VM traffic
or
also BOND 2 for backup/nas traffic
OR - if you want to separate the cluster and backup traffic, then instead of a cluster bond you use two cluster rings - ring 0 (primary cluster) on NIC 1 port 1, ring 1 (cluster failover) on NIC 2 port 1. Ring 0 and ring 1 have DIFFERENT SUBNETS AND IPS, i.e. Proxmox is handling the failover instead of Linux.
But now you can create an extra vmbr for backup traffic on the same NIC as the failover cluster.
Then vmbr0 uses BOND1
possibly also a vmbr1 for public IPs and a vmbr2 for backups/NAS.
each server has 14 samsung PM1643 12G SAS SSDs (960gb). 2 for proxmox mirrored, 12 for CEPH. last few bays are filled on two servers with 4TB drives for NAS use.
So I have great performance and NO single point of failure.
The CEPH distributed file system delivers about 11Gb/s (i.e. the same speed as a 12G SAS SSD used locally), but I can lose a drive, NIC, switch, router, or uplink port and nothing goes down.
But - to make any of the above work you really need to get into the hosts table and rename a few things, making "node05" an alias on a host entry with IPs on the NICs/bonds you want, plus manually adding host entries for the nodes on all the NICs/vmbrs.
It takes some thought and planning, but it is not actually complicated.
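As a rough illustration of that hosts-table trick (all names and subnets below are hypothetical):
# /etc/hosts sketch: pin each role to the network you want its traffic on
10.0.0.15    node05 node05-cluster   # corosync resolves node05 here (cluster ring)
10.0.1.15    node05-ceph             # CEPH network
192.168.1.15 node05-mgmt             # management/GUI network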
Thanks for the detailed info! You should make a request to update their documentation!
@@TechnoTim I have...several times. I have been using Proxmox since...2007? whenever proxmox 1 came out.
Ping me if you want to see more detail. It is possible to tune a cluster to give VMs the same performance they would have on dedicated machines with fast SSDs. We were upgrading RAM last night, so bulk migrate from node 1 to node 2 and 3, shutdown 1, pull and add ram, put back in , wait for CEPH to recover (maybe a minute), put the Vms back, do the next one. Zero down time, upgraded 5 servers in a couple of hours.
About to spin up a Proxmox box on an HP ProDesk 400 G4 Mini I just got for 200 bucks with a 6-core i7. This video was immensely useful! Many thanks!
Nice!
Definitely going to book mark this for the future. Such an informative video in ~20 minutes. Thanks for the beast content!
I really appreciate this list. I've already used it a few times.
Excellent content! Immediate subscription :) Was just looking at hardware for home lab. Did not know about the IOMMU. Gonna check it out. You saved me a lot of headache!
I've set up Proxmox several times and used it in a test environment, and all went well in that application. What I'm not sure about is how to "re-create" the machine if I had a hardware failure or the system got corrupted in some form or shape. I would love to see a video on how to recover quickly in such an event if I had a Proxmox server running multiple hosts in a production environment. I probably know just enough to be dangerous... :)
For #2 using a nice utility 'wipefs' to remove existing partition table is more convenient than running fdisk manually. It also removes LVM and RAID signatures from the disk