I understand what you're trying to say, but I'm afraid your point of view is two years behind. Sure, we can talk about containers being hypervisor-agnostic, but we definitely can't avoid the fact that they aren't hardware-agnostic - things like x86 vs. ARM, NIC passthrough, GPU/NPU passthrough and so on. Also, containers are merely an application layer, so you still need a proper storage/snapshot/network/security backend. Take a simple example: run Docker on your casual desktop PC but allocate full dedicated GPU capability to your Docker environment - it will be more tedious than running the whole Docker stack of apps afterwards :D
Yeah, at least when I ran containers they were easy enough to set up... initially. But when it came to data backups, networking, and AD integration, some of it I got to work; other parts, not at all.
You're running services on ARM CPUs? Also, for running GPUs in containers, you install the container toolkit from NVIDIA (or your vendor of choice) and make sure your compose file has the right GPU flags.
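For illustration, here's a minimal sketch of those compose GPU flags, assuming the NVIDIA Container Toolkit is already installed on the host; the service name and CUDA image tag are just placeholders:

```yaml
# docker-compose.yml - minimal GPU reservation sketch (assumes the NVIDIA Container Toolkit is installed)
services:
  gpu-test:                                      # hypothetical service name
    image: nvidia/cuda:12.4.1-base-ubuntu22.04   # example CUDA base image
    command: ["nvidia-smi"]                      # prints the GPU table if passthrough works
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1             # reserve one GPU; "count: all" reserves every GPU on the host
              capabilities: [gpu]
```

If the toolkit and driver are set up, `docker compose up` should print the nvidia-smi output from inside the container; if not, this is usually where the tedium the parent comment describes begins.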
@@gatolibero8329 That's the point. Taking time to troubleshoot things that should be simple isn't worth it. Everything you listed should work seamlessly. I stopped trying to figure out LXCs in Proxmox because of the networking. It was too much of a hassle.
...storage/network/security, etc.? This is implicit in containers (and most modern applications) and does not really need to be pointed out - even so, he does mention the file system in the video.
This is closer to the truth: it comes down to the implementation, and hypervisors provide a hard boundary that can simplify edge cases, GPU passthrough for example.
LLMs love GPU passthrough, just ask one.
The cons I've run into containerizing prod are more points of failure, more monitoring and health checks, namespace confusion and sometimes collision, and of course the dreaded lack of realistic benchmarking for scaling lab prototypes to prod. Sometimes, with really big clients, it just made more sense to use bare-metal silos and clusters based on topology requirements and/or region. The big gain is security, but as we've seen in the news, not too many are leveraging that lately 😆
Thanks for bringing out some of the cons. I have seen a lot of misuse of containers. They are not replacements for processes or threads. They require a set of skills to manage.
It would be great to see a video of when you need containers and when you can skip them.
I think the LXC containers that Proxmox makes use of could have gotten some attention as well. Great video!
Definitely helpful too for people who are new to containers
Incus
When I nuked my VMware env I moved a huge chunk of my services to LXC containers. It is not a bad way to go.
I think there are different points of view. I do use a VM for development, because the company assigns me resources depending on need; I can get a machine with 4-40 CPUs. Production is a different subject, with concerns like deployment and delivery (CI/CD) times.
I have been working on PC hardware for decades, but never really had a need for elaborate networking until like 5 years ago or so. I am very thankful to have docker available when trying to patch together a decent lab with low cost, older hardware. It made it really easy for me to keep up. It has also made the Zimaboard more useful imo.
Kubernetes is awesome... until you have to troubleshoot something and things go bad!
yes + 1
yeah, same here, I was stuck on a small issue for weeks
+2
true for all complex software
Thank you for the informative video. You've convinced me that it is probably time to start looking at containerization more in depth. My main concern with containers is security and isolation. I've typically been one to build from the ground up in a VM so that I know the integrity of the build. Pulling various containers from various sources just doesn't give me that warm fuzzy feeling of knowing there's nothing extra in them. I do agree that the benefits you mentioned are very compelling and have convinced me to start small with an additional Pi-hole instance and go from there. If you think it worthwhile, I'd love to see a video with your take on container vs. hypervisor security. Thanks again.
Might be worth taking a look at Podman if security is a prime concern.
You could always just build your own container image so you are confident of its build integrity, then you get the benefits of containerization.
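A minimal compose sketch of that approach - building your own image from a Dockerfile you control instead of pulling a prebuilt one; the service name and tag are placeholders:

```yaml
# docker-compose.yml - build and run your own image rather than trusting a prebuilt one
services:
  myapp:                       # hypothetical service name
    build:
      context: .               # directory containing your own Dockerfile
      dockerfile: Dockerfile
    image: local/myapp:1.0     # tag the result so you know exactly which build you're running
    restart: unless-stopped
```

`docker compose build` then produces an image whose contents you have assembled and audited yourself.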
Good essay👍 vlog
Thank you, happy new year
I work with containers all the time. They are great in small lab environments, but I have seen very few work great at scale. It's so much harder to debug the dependencies that are packaged with containerized applications. I can point to several times we got a container from a vendor and it had issues running under our production security requirements. We even had a well-known vendor stop supporting Docker because they did not want to help fix an issue. I cannot go to leadership and say, well, we spent all the money on a tool license and now have to change our tech stack to deploy the tool. Container tech is still maturing and only works when the company that provides the containers actually understands them.
I have used Docker on my OpenMediaVault NAS. I hope to use it in more of a separate, dedicated server environment in the future once I set up Proxmox or Nutanix in my home lab 😊
Thank you... looking forward to those projects...
This is one of the most insightful videos I have seen on these technologies.
I use both paradigms. Containers for “narrow” services, and VMs where compute, GPU and/or storage matters. Some are “mildly” clustered, and some have failover - some hosts and some services.
ZFS is the common storage & cloning backbone, as it is “ignorant” of my various architectures, FS, OS and sharing shenanigans.
A homelab mess - but very stable over roughly four years (please, no jinxing...) 😅
VMs migrate with their exact memory state within seconds, with virtually zero interruption to a service, from one host to another in a hypervisor cluster. How do you approach migration from one host to another using containers in cases where you need to provide a highly available service?
Well, usually it's just another container taking the load. Containers by definition can't migrate with memory state.
If you need to keep the server up to have high availability... You don't have HA.
The container approach would be to have multiple containers sharing the load, with the system able to tolerate nodes going in and out (e.g. the nodes share nothing, or have some way of dealing with dead nodes and syncing with live nodes).
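As a rough sketch of that load-sharing approach in Docker Swarm terms, assuming a stateless placeholder image:

```yaml
# stack.yml - deploy with "docker stack deploy -c stack.yml web"; swarm spreads replicas across nodes
version: "3.8"
services:
  web:
    image: nginx:1.27            # stands in for any stateless service
    deploy:
      replicas: 3                # if a node drops out, swarm reschedules its replicas elsewhere
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"                  # the routing mesh load-balances requests across live replicas
```

No memory state moves anywhere; availability comes from having more than one copy running at a time.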
BS on that migration without it impacting production. I carry the scars to prove otherwise.
Linux-based containers can all do that with CRIU. The kernel added several features to support it, e.g. new prctl operations for setting procfs state.
There are some who, maybe rightfully, brag about not using containers at all. They have extremely lightweight VMs and they do it for isolation. Containers share the host's kernel, so technically, if an application in a container is vulnerable to a stack overflow or has some other vulnerability that can lead to root access on the container host, then all other containers on the host can be compromised.
VMs are completely isolated except in a very few cases like Spectre and Meltdown. If your BIOS/UEFI, firmware, and hypervisor OS are up to date and patched, VMs are the safer and more stable option.
Thanks for this talk sir. I am a total noob and maybe I will learn containerisation in the coming months.
I've worked for 3 midsize (1000+ employee) companies since 2013 and have yet to see an actual use case for containerization... they have been traditional verticals: healthcare, insurance and banking... so other than development houses, who is actually using it? I've discussed it with all of my contemporaries over the years, but not a single production system has been moved over from VMs to containers...
Same. At the last place I worked, I deployed Xibo and CheckMK in a Docker swarm. Nothing crazy. It was all good and well, but then I was going to leave, and they realized they didn't have anyone who knew how the stuff worked. So I re-deployed on VMs. Which made me realize we should have just done that in the first place. Containers are a cool technology, but they require very specific use cases, and we were struggling to come up with scenarios that would work long term. Even running Xibo and CheckMK on containers was kind of silly at the end of the day. 😅
Netflix uses containers apparently
@ I would expect all the FAANG companies to use containers…I’m curious if your local xyz mid to large size company is.
@druxpack8531 I use them for memcached behind a load balancer
In my last two jobs, we used containers everywhere. I’ve even moved multiple entire systems into containers.
Systems move slowly, because who really wants to reinvent the wheel? However, if you were to remake your network, I would highly recommend exploring containerisation.
I would suggest Nomad over Kubernetes (especially for homelab enthusiasts) as it's better in just about every way, but other than that, solid vid.
@kspfan I'm glad to hear this...I have this on my list this year. Let me know what you feel the benefits of nomad are in your experience.
I work in Industrial Automation and Controls. No one is using Docker at all. Docker is for applications and services, though. We need to run programming software, historical trending, alarm servers, and SCADA. I don't think any of that will ever be on Docker.
What do you recommend for mounting CephFS subvolumes in VMs or Docker containers?
Thanks in advance.
I don't understand why people use a hypervisor and deploy a container environment on top of it. I have worked with container environments for years now, and we just use bare-metal servers with Ubuntu to join (or build) a cluster. No need for a hypervisor at all. For us this works just fine.
Hello,
1) mixed-mode VMs and containers on one host
2) if you have no KVM like HP iLO etc., it is simpler to solve problems like a failed boot remotely
3) backup and restore is super simple
Yes, I know Terraform, Ansible and tons of other tools, but a lot of users out there aren't so well trained.
@RalfP-v3s got it! Thanks!
Then you shouldn't comment if you can't think about simple situations or use google.
@@wiziek Man, I don't think you understand why the comment section exists at all. I had some doubts and someone had the answers. Life goes on. Get some help.
We run only DCs on dedicated physical machines (and only some of them at that); everything else is virtual, so even as we move into containers, those will run on virtual machines, whether natively on ESX or some other hypervisor implementation. We have plenty of Windows and Linux VMs, so we need somewhere to run them for the foreseeable future.
Lots of sound tips and suggestions. Containers are on my list of things to learn. One thing holding me back in the work environment is whether the pre-built containers you get from a distribution place (not sure of the correct term) on the web are safe. For example, can/should I run Veeam in a container I download from someplace? Is building containers easy, so I don't have to rely on/trust someone else? Are government contractors using containers successfully? Thanks.
In a work environment you should have a proper network, with the right firewall rules, IDS and IPS.
Of course, any serious deployment should only be done after proper testing in an isolated environment first.
I started using VirtualBox in 2009; during my last year before retirement it allowed me to run Windows XP and MS Office in a VM for compatibility with work. After retirement I used VirtualBox for distro hopping. I now use VirtualBox to separate application areas into secure areas and areas more vulnerable to hacking. For example, I have a VM for email and (a)social media, and another one exclusively for banking, which is encrypted by VBox. My host OS runs OpenZFS, and when I received an infected email from an ex-colleague I simply restored the snapshot from before the hack. I don't think containers add much to my security.
The main purpose of a container, for me, is that you can run an app's latest stable version in each (Linux) VM. So I run the latest stable snaps of Firefox, Thunderbird and LibreOffice in Ubuntu 16.04 ESM, and sometimes the almost 9-year-old Ubuntu runs newer versions than many other Linux distros. Ubuntu 16.04's LibreOffice version even occasionally beats Ubuntu 24.04's LibreOffice deb version. The snap in Ubuntu of course has a security advantage because of its integration with Ubuntu's firewall.
Do keep in mind docker compose has its own OS requirements for running too
Are you working with Podman as well?
You are mostly right... but nah, I'm allergic to Docker.
Docker allergies are common. I presume you also break out in hives when considering Kubernetes?
LXC might be worth considering if you need something easier to get into
@@Unselfless docker-compose is about as "easy" as it gets. I don't like docker on principle. The convoluted overlays and weird networking hide how services really work.
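For reference, a working compose file really can be this small; the image, ports, and volume below are just an example single-container app:

```yaml
# docker-compose.yml - about as small as a useful compose file gets
services:
  pihole:                          # example service; most single-container apps look like this
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"                  # web UI exposed on the host's port 8080
    environment:
      TZ: "Etc/UTC"
    volumes:
      - pihole-data:/etc/pihole    # named volume so config survives container recreation

volumes:
  pihole-data:
```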
The problem with containers is that they are not live-migratable. You can "migrate" a container, but it will disconnect any active connections. This is a problem if you are streaming or doing something similarly sensitive. Also, VMs can be built much like Docker images: you can use Ansible or scripts to prepare the VM the same way you would prepare a Docker image.
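A rough sketch of that "prepare the VM like an image" idea as an Ansible playbook, assuming an Ubuntu guest; the inventory group, packages, and paths are placeholders:

```yaml
# provision-vm.yml - make the VM build declarative and repeatable, much like an image build
- name: Prepare application VM
  hosts: app_servers               # hypothetical inventory group
  become: true
  tasks:
    - name: Install runtime packages
      ansible.builtin.apt:
        name:
          - nginx
          - python3
        state: present
        update_cache: true

    - name: Deploy application config
      ansible.builtin.copy:
        src: files/app.conf        # hypothetical config file shipped with the playbook
        dest: /etc/nginx/conf.d/app.conf
        mode: "0644"

    - name: Enable and start the service
      ansible.builtin.systemd:
        name: nginx
        enabled: true
        state: started
```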
or NixOS
Live migration is a vestige of grandpa's virtualization, and a hack at that. It was a gimmick designed as a stopgap for what we now do with application containerization, high availability, and scalability within the CI/CD pipeline. Whatever application you are running doesn't need live migration... your server does, and that's why application containers are better. Now you have to update your early-2000s application codebase... but my guess is that can't be done because the talent is somehow too expensive. I guess you could be like the banks nursing along COBOL applications from the 1960s on big iron at an exorbitant cost.
@@BandanazX Yeah, they have to learn to clone and create multiple instances and then switch/route the user to the newer container -- we've been doing that since 1999 (in Unix/Solaris). That's how live migration is supposed to be done, but someone tried to turn it into a gimmick. I was literally doing FreeBSD jails and Solaris zones like that in web hosting in 2002, while in college.
How do I deal with storage for my containers? Let's say I've got a 3-node k3s cluster, or a Docker swarm, and I want Home Assistant to be running in the cluster. If I use local storage on a node, is it tied to that node? Do I need a dedicated NIC for Ceph storage, and if so, does that need to be faster than gigabit Ethernet? Should I just use NFS or CIFS?
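One common answer, if dedicated Ceph networking feels like overkill, is to put the state on NFS so it isn't tied to any one node. A rough Kubernetes sketch, with the NAS address, export path, and names as placeholders:

```yaml
# nfs-storage-sketch.yaml - keep Home Assistant data off the node so the pod can reschedule anywhere
apiVersion: v1
kind: PersistentVolume
metadata:
  name: homeassistant-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10           # placeholder NAS/NFS server address
    path: /export/homeassistant    # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homeassistant-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # bind to the static PV above instead of a dynamic class
  resources:
    requests:
      storage: 5Gi
```

With plain local storage, yes, the data is effectively pinned to that node unless you add a replicated storage layer (Longhorn, Ceph/Rook, etc.) underneath.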
What a controversial yet brave thing to say
The differences between LXCs and VMs really are best described by comparing and contrasting, which is done well in this video. Ask an AI to describe the benefits too!
Containerized software offers cleaner deployment than traditional installations. Docker containers isolate services and dependencies, preventing system-wide conflicts and making resource monitoring easier. This approach reduces system pollution compared to direct VM installations of custom scripts.
We are just this week migrating on-prem, Hyper-V-hosted OpenHPC to EC2, the core issue being a requirement to be on GovCloud. I feel like HPC may be the Achilles heel of containers. At the core of high performance is access to all available CPU features, and abstraction, it seems to me, cripples core CPU features. The issue I have now is that Amazon has pulled Parallel Computing from GovCloud. How effective are containers in massively parallel workloads?
Thank you
Can you please tell us what the prerequisites are for learning containers (Docker or Kubernetes) as a network administrator?
What do you think about managing firewall policy in Kubernetes (k8s)?
With a hypervisor, a VM can be controlled by iptables, but with containers, how do you make sure the network policy is deployed smoothly?
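For what it's worth, in Kubernetes this is usually handled with NetworkPolicy objects enforced by the CNI plugin (Calico, Cilium, and similar) rather than host iptables. A minimal sketch, with the namespace and labels as placeholders:

```yaml
# netpolicy-sketch.yaml - deny all ingress in a namespace, then allow only named traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app                # placeholder namespace
spec:
  podSelector: {}                  # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: api                     # placeholder label on the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend        # only pods carrying this label may connect
      ports:
        - protocol: TCP
          port: 8080
```

Whether it is enforced "smoothly" depends entirely on the CNI: without a policy-capable CNI installed, these objects are accepted by the API server but silently ignored.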
Why would Docker/Kubernetes containers, inside a VM, care about the hypervisor? This video isn't making sense; I'm so confused about what you're trying to say. These containers talk to the VM, which the containers see as a legitimate computer, so to speak. The VM, which runs on its own kernel, talks to the hypervisor. So obviously, Docker containers are not going to care about the hypervisor. And? You might have a point if you talked about LXCs instead of VMs, as they share a kernel with the hypervisor and cannot just be any OS you like, like you can with full VMs.
Cheers, Brandon. Your vids and examples are very inspirational!
I'm a NW who likes messing around. I've been tinkering with nginx and PowerGSLB and set up a fake web shop which I load-balanced between my home and DigitalOcean (cloud). I set up a WireGuard VPN connection and then ran Docker Swarm, then set up Portainer with agents to monitor it. Finally, I brought it all together in a single Docker Compose file (template). I needed to use volume and mount commands to tailor containers during spin-up (e.g. a MariaDB init SQL file).
I'm now interested in porting this to Kubernetes. From your experience, how simple is this? What are the best/easiest tools? Are they free, or are there trial ones I could use?
Sadly, it's harder than it should be. If you want to do it: start with k3s, and start by just creating Deployments, which run pods, which run containers, and Services, which point to the containers that the Deployments deployed. Those are the basics. When you can do that, look at how people use Helm to deploy existing applications, and use it a bit. Eventually you'll want to learn how to automate it with GitOps tools like ArgoCD or Flux.
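To make those basics concrete, a minimal sketch of a Deployment plus a Service that points at its pods; the names and image are placeholders:

```yaml
# web-sketch.yaml - apply with "kubectl apply -f web-sketch.yaml" on a k3s cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27        # placeholder stateless image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                       # the Service finds the Deployment's pods by this label
  ports:
    - port: 80
      targetPort: 80
```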
Thank you for the high-quality videos. Why Docker and not Podman? When do you think a hypervisor built specifically for convenient work with containers and k8s will appear? Maybe you could make a video about the differences between Docker and Podman? There are many tools for Docker, but for Podman there is only Podman Desktop.
I think your conclusion is wise: hypervisors are here to stay. I would say they are the Z mainframe of our generation. Back in the early 2000s, lots of large enterprises tried to migrate their Z applications to Java on distributed systems, with some successes and some very expensive failures.
In my organisation we started the k8s journey on VMware, but we are already moving to k8s on bare metal for heavy production workloads. There is a very good reason for it: there are different tools for different use cases. By the way, we also migrated some workloads to IBM z15.
As a veteran home labber who started with 3 Linux Red Hat 7 (not RHEL 7) boxes running VMware GSX 3, I think: to be relevant in IT, you'd better have many arrows in your quiver. Don't underestimate infrastructure and networking, but don't neglect the cutting edge either; if you choose well, that can be your tomorrow's job.
Cheers Brandon, love your channel and your ethos!
Exactly. Old-fashioned, full-fat virtualization is a relic of the past. And just like companies did in the late 90s and early 2000s, paying IBM a pretty penny so they didn't have to bring their code base forward, companies will be paying VMware so they can repeat the failure.
OpenNebula is where it's at.
My use case is downloading and running virus-laden software packages, so I really need the isolation that a VM provides. I don't know much about containers, but do they provide the same level of isolation? Could a virus break out of a container? I'm pretty confident the software can't break out of the VM.
Containers are generally more insecure than VMs. Malware can "break out" of a container easier than from a VM.
It doesn't matter; with the B2B problems from Broadcom, VMware will be dropped regardless.
Companies generally do not like sudden moves and the associated problematic costs, both short and long term, regardless of how good the products are.
E.g. if you're running, say, a Fortinet firewall with other Cisco-managed products and VMs, and one of those critical links in the chain gets zero support and zero compensation out of the blue, and you have only one year to go... it's every admin's worst nightmare.
Once management gets hit with the news, it's really impossible to salvage apart from throwing more money at it [which some idiots do], and when the costs go up threefold... you will see how fast those VMs get dropped.
And from the comments, folks still do not know how bad it was when Broadcom took over.
It's not containers vs. VMware; it's your data and your support being swapped, or dropped, at any time.
Hi, if I have an app which runs only on Windows 7, and I containerize it, would I be able to run it on Windows 11?
You forgot to mention LXC. Also, with Canonical releasing MicroCloud LTS, it seems that offering is ready for prime time. The open-source version of MicroCloud is Incus, and it can deploy clusters quite easily, just like MicroCloud.
I was also unaware of this. Thanks for mentioning that, the timing is perfect.
I’m going through a “rebuilding my home lab” phase, and this may fill a use case for me if it is what I’m thinking it is.
Have to check it out ty
as ever the job defines the tool
Good video, but you're way behind the curve here. I first started running containers in production AT SCALE on geo-distributed systems in 2016. That's now nine years ago. And there was never any war for the reasons that you mentioned. The only reason to use a VM any more is as a host for your containers.
One thing that I didn't hear you mention is the ephemeral nature of containers. Because containers are deployed and instantiated as self-contained units, the need for an upgrade path in your deployment system no longer exists. That means that tools like Ansible are useless. When it comes to upgrading a machine, simply change your Dockerfile, rebuild the container, and deploy. When done right, a single container can be deployed in any environment, so test, staging, and production run the same container with only config changes. The days of "it works on my machine" are long since over, since all of your dependencies come in the container. And speaking of dependencies, tools like Python's venv are history too. Every container has its own copy of your tools, so it doesn't matter which version of your toolchain is installed. In fact, the version of Linux doesn't matter much either. I've run Ubuntu back to 16.04 on some systems that couldn't be upgraded for whatever reason, and they continue to run. This means that critical software can be kept current on the OS without requiring a dedicated VM and the resources to run it.
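A small compose sketch of that "same container, only config changes" idea, where the image tag is identical across environments and only an env file differs; the registry path and variable names are placeholders:

```yaml
# docker-compose.yml - identical image in test, staging, and prod; only the env file changes
services:
  api:
    image: registry.example.com/myorg/api:1.4.2   # placeholder image; the same tag everywhere
    env_file:
      - ./${DEPLOY_ENV:-test}.env                 # e.g. test.env, staging.env, prod.env
    ports:
      - "8080:8080"
```

Something like `DEPLOY_ENV=prod docker compose up -d` would then run the very artifact that was already tested, just with production configuration.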
@toddbu-WK7L I appreciate your comment and insights here. However, I respectfully disagree. I believe you were way ahead of the curve compared to most average enterprise and SMB clients. Most medium-sized enterprise clients that I see in consulting don't know much about containers in 2024-2025. Many in the home lab realm are only beginning to see what containers can do, from what I see. Many still run VMs and want to make the switch. I think many have had the mindset that it's so easy to run things in VMs on VMware, so why switch? I think the price hikes, though, are now pushing the issue at a massive scale. To your point, it really depends on what circles you are in. Some orgs have been on the cutting edge, while many are far behind.
@@VirtualizationHowto kudos to you for being able to find customers who need your services. You clearly have good understanding of containerization to help them make the transition.
Where I think we may be disagreeing is in what it means for a war to be over (per your video title). Once a vastly superior technology comes along, it's no longer a competition but rather a done deal that takes time to deliver. Just like git supplanted CVS and SVN, or Linux supplanted Windows on the server side, it's just a matter of time before adoption of the new technology is the norm. I remember the "good old days" when scaling your web infrastructure meant buying a faster machine. That is, until vertical scaling no longer worked. There were some false starts on how to scale web sites until stateless, horizontal scaling using load balancers became the norm. That doesn't mean there aren't small, single-server web sites that will buy a bigger server if needed. But to say that a war still exists between vertical and horizontal scaling paradigms just because there are a handful of sites that scale vertically seems to me to be a mischaracterization. They do compete with each other, but it's really no contest when it comes to scalable architectures. The word "scaling" is synonymous with horizontal scaling.
You don’t need containers if you statically link your binaries
Honestly, the two go hand in hand, especially in the cloud, since the middleware is virtualized hardware. Everything in the cloud is running on top of a VM that Docker runs on.
It's important to understand the limitations of containers, though: you can't run a different kernel from the host, of course. The fact that you're running on the same kernel as the host also means that a vulnerability in the host kernel can allow containerized processes to escape the container and infect the host. Rootless containers at least help a bit.
@BeOnlyChaos great points. Security is always a factor and must be considered. Securing container hosts is a must.
Here in my homelab the count is 15/4 in favor of LXC containers... they are a good middle ground between VMs and containers, and that's enough for my needs... if I decide to run Docker or Kubernetes here, it will only be for testing/training/learning and, of course, inside an LXC... Proxmox is a good tool and I'm thinking of sticking with it for the time being...
VMs Docker LXC. I use everything, with an emphasis on LXC.
Is it possible to containerise pfsense?
I’m using LXC’s in unraid 🎉
The fact that Docker runs as root makes me feel like it's just waiting to be broken out of to take over the host. I use Podman instead so it can run at the user level.
You can run Docker containers rootlessly, but the prime benefit of Podman is its forking process model.
I think hypervisors are going to morph more into the "kernels" of modern secure operating systems. Look at the architecture of Qubes OS, where each application runs in its own virtual machine, independent of everything else. Applications and core OS components can be isolated from the network and/or other applications. Additionally, this lets users choose the most appropriate OS to run a specific application that might be better supported there.
Thanks Brandon.
My favorite thing about containers is the ability to containerize the network as well!
The additional network abstraction adds another layer of security and isolation.
For example: being able to deploy a full-fledged ELK stack in any environment is just so cool to me.
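A minimal sketch of that containerized-network idea using user-defined compose networks; the image versions are placeholders, and security is disabled only to keep the demo short:

```yaml
# docker-compose.yml - Elasticsearch lives on an internal-only network; just Kibana is exposed
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"   # demo only; keep security enabled for anything real
    networks:
      - backend                         # no ports published to the host at all

  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.0
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - "5601:5601"                     # only the UI is reachable from outside
    networks:
      - backend
      - frontend

networks:
  backend:
    internal: true                      # containers on this network cannot reach the outside world
  frontend:
```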
Yes, being able to spin up an app in its own little container universe... with its own VPN out to the world... is pretty slick. Hooray for Gluetun! Sure, VMs can do that too... but in a much heavier way.
Not sure what you are trying to say; there is nothing uncertain about the future of VMware. It's very clear where it's going.
@autohmae Thanks for the comment! However, the video is not specific to VMware and more a general statement around VMs and containers. The VMware situation just helps to illustrate the need to start thinking towards modern application infrastructure.
@@VirtualizationHowto I was responding to one of the first things you said in the video: you said the situation with vmware is uncertain, I think it's very clear, so the opposite of uncertain: for homelab, smaller and medium businesses it has no future, Broadcom does not want your business or you will be paying more than you can probably afford. Anyway... my other comment is much more interesting/on topic, it's about containers.
I think in the video he was trying to be somewhat diplomatic and not offend some people that are VMWare mega-fans. The main message is that the hypervisor is not as important as it used to be. It’s a commodity really. Honestly, there are plenty of companies just using Ubuntu on bare metal with Kubernetes. Excellent performance and significantly cheaper.
Why not containers on baremetal OS install? Running containers in VMs is not as performant.
I always laugh at the container hype. It's such a small niche that's always making the pointy-haired bosses salivate. It's bloatware to the extreme in the largest sense.
Many companies I have worked for use containers every day without much issue. But I understand that there are also plenty of folks who like to keep doing things the way they have since 2004. No problem, keep on doing what you think works for you.
Literally launched Pi-hole using QNAP's Container Station after my Raspberry Pi died. Now I'm wondering what to do next lol
Containers are hypervisor-independent, yes, but isn't that obvious? Why was this video even required?
Kubernetes is a Google-originated open-source project, which means it's dead by design.
huh? :)
Legacy users will continue to matter since they invested so much they're hard put to escape Broadcom. Enthusiasts don't need to pay for software so they've no problem either way.
LXCs 4tw 🎉
I hate Docker
I don't like containers because a lot of vendors are using them to turn products that were open source into closed source. I have a number of them forcing you to use containers because they know a lot of folks can't view their code any more.
LXCs yes, docker/kubes hell no.
Next stage Cloud-init
I'd like to clarify that your containers run on the guest OS, not the host OS. So your hypervisor/hardware underneath is still there and thus still matters.
The main reason why HVs are used is to provide a *managed* split of physical resources.
As for "ease of boost", this is utter nonsense. You can create VM images that has everything included. As for containers? Easy go up, easy go down. From plenty of bitter experience.
As for the automation part... You know your Ansible code can be executed against a VM directly just fine, right?
And what the shit is "legacy workloads"?? Is this a DevOps guy going "OK Boomer"??
"MaKe YoUr LiFe So MuCh EaSiEr" Blah. Said by someone that must have never had to manage large mission-critical infrastructure. I never found it easier, half because VMware is just easy already.
So much empty container-hype fart speech in this video. I've heard all of this before from some guy who said "it will be so good! And everything is free and open source! And the resource usage will go down! And it will be so much more stable and easier to manage!" - and then left the company with a hot steaming pile of dogshit container nonsense.
When done right, they're fine. I wouldn't call them bad, certainly not as a concept, but stop with this bullshit trend of "containers are the future of everything! They are perfect" yada yada yada.
There are companies out there transitioning back from containers for a reason.
They are nice, but not the end all be all of how to do infra. You're just selling a nice sounding dream.
@zivunknown First of all, I manage thousands of VMs on a daily basis in production, so I am well aware of mission-critical infrastructure. Also, a container runs from the perspective of its "host," since it shares a kernel with that host. Yes, this can be a "guest" VM running on a hypervisor like VMware, but it can also be bare metal (no guest operating system involved). VMs and containers are both tools for a specific job. The goal of the video is to say let's use the right tool for the right job. I would much rather containerize a simple app than run that app inside a VM when there is no need to do so. Did you realize that Chick-fil-A serves 2800+ restaurant locations with edge Kubernetes clusters? Why did they not choose VMs for this? Because containers were the best tool for the job and provide agility and other advantages over VMs. Generally, when I see people mention that containers fail and are not reliable, it is because the infrastructure was not configured correctly. Containers are solid and run fine when architected that way, no less than a VM infrastructure that is architected correctly.
I hate clickbait. Can containers run Windows? No? Then things stay the same. With the rise of mandated cyber-security baselines and 48-hour patching requirements, out-of-date container images are even less likely to be used in production. Couple that with poor isolation and the lack of GUI management and robust backup systems, and no one is looking to use containers where business-critical data is kept. For homelab it's fine, but for business it's a big no.
@ericneo2 Thanks for the comment! However, no clickbait here. There is much more to enterprise architecture and infrastructure than Windows. Most Fortune 100 and 500 companies are using container-driven apps. I am currently working with companies that are looking to move away from legacy VMs and towards containerization for better development processes, easier deployments, and much more agile operations. I think this will accelerate now that Broadcom has raised licensing across the board and VMware has the lion's share of VMs in the enterprise. However, keep in mind there will always be a need for a few VMs around unless MSFT changes things: domain controllers, etc., as well as client operating systems.
Containers can run Windows. Virtualized desktops as a service have been around for years.
Windows is a joke. Nobody should be running it now, except maybe on the desktop. It's so ingrained that most people think they have no choice. You have everything you need in Linux containers.
1st🎉
Lol 2nd