Nice video. Something that should be brought up is that sometimes containers don't "just work" on different machines for hardware related reasons. I had a case where everything worked in the development and testing environments, but it kept crashing on startup on an environment another company provided us. Long story short, they had given us access to a VM which didn't have AVX instructions available.
Shouldn't there be safeguards to detect AVX support? Some computers don't support certain SIMD extensions. At least, I've found out the hard way that mine doesn't support AVX512. But maybe there is safeguarding, and that VM just doesn't report CPUID correctly, in which case the fault lies with the VM. If the VM decides not to support something, it should at least indicate that.
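A startup guard like the one this comment asks for can be sketched in a few lines. On Linux the kernel lists the extensions the CPU (or the VM's virtual CPU) reports on the "flags" line of /proc/cpuinfo; the function names below are my own, not from any library:

```python
# Sketch: fail fast with a clear error when the CPU lacks a required SIMD
# extension, instead of crashing with an illegal-instruction fault later.
# Flag names like "avx" and "avx512f" are the Linux kernel's names for
# those extensions as shown in /proc/cpuinfo.

def cpu_flags(cpuinfo_text: str) -> set[str]:
    """Extract the CPU feature-flag set from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.split(":")[0].strip() == "flags":
            return set(line.split(":", 1)[1].split())
    return set()

def missing_features(required: set[str], cpuinfo_text: str) -> set[str]:
    """Return the required features the CPU does not report."""
    return required - cpu_flags(cpuinfo_text)

# Example against a cpuinfo excerpt from a machine without AVX-512:
sample = "processor\t: 0\nflags\t\t: fpu sse sse2 avx avx2\n"
print(missing_features({"avx"}, sample))      # set() -> safe to start
print(missing_features({"avx512f"}, sample))  # {'avx512f'} -> refuse to start
```

In a real service you would read `/proc/cpuinfo` at startup and exit with a human-readable message when the set is non-empty, which turns a mysterious crash into an obvious environment mismatch.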
Please note that there are a lot of simplifications and some inaccuracies in this video.
The main benefit containers provide compared to VMs is that they start much faster. There's also some overhead when VMs access hardware like the disk and networking; however, if you're running the VM directly on a hypervisor rather than inside your normal OS, this overhead is relatively small. VMs, on the other hand, can run various operating systems, while containers run Linux inside a Linux host. (There are other options too, but this is the most common case.) If you want to run a Linux container on a macOS or Windows host, Docker Desktop actually runs a Linux virtual machine and runs the containers inside it. This is an option even on Linux, if you need a different kernel, for example.
The video seems to suggest putting many things inside the same container, e.g. Postgres and NGINX. This is generally considered bad practice. Normally you'd put those two in separate containers and let them talk to each other. This allows you, for example, to scale the two parts independently and to reuse the same containers for different projects.
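The separate-containers layout described here is usually expressed with Docker Compose. A minimal sketch, where the service names, image versions, and password are all illustrative placeholders:

```yaml
# One job per container; Compose puts both services on a shared network
# where they can reach each other by service name ("web", "db").
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder, not a real secret
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

With this split, either service can be upgraded, replaced, or scaled without rebuilding the other, which is exactly the flexibility the comment is pointing at.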
Speaking of scaling horizontally, the video mentions it as if it's something magically provided by containers. They do help in this regard, but your application needs to be written appropriately. For example, horizontally scaling SQL databases is hard: the DBMS itself needs to support it. You can't just make many copies of it, unless the DB is used only for reading.
You say (4:30 on) that "COPY . ." copies the things installed by the previous command into the image. This is not true. The "COPY . ." command copies the rest of the source code, pictures, etc. from your local folder into the image. It doesn't copy the things installed by "RUN npm install", as they are already installed inside the image.
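A minimal Dockerfile sketch of the layer ordering being described (the base image and entrypoint are assumptions, not from the video):

```dockerfile
FROM node:20
WORKDIR /app

# Copy only the dependency manifests first, then install. This layer is
# cached and only rebuilt when package*.json change.
COPY package.json package-lock.json ./
RUN npm install

# Now copy the rest of the source code, pictures, etc. from the build
# context into the image. node_modules already exists inside the image
# from the RUN layer above; this COPY does not create it.
COPY . .

CMD ["node", "index.js"]
```

Adding `node_modules` to a `.dockerignore` file keeps the final `COPY . .` from clobbering the installed layer with whatever happens to be on the developer's machine.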
Overall, to get the most out of containers one would need to understand their benefits and shortcomings and to design the application appropriately. Making it sound like they are better than they are only creates problems.
Thanks for the additional information! I was trying to explain a high level / basic approach on the benefits of docker. Your comment is more in depth and a great addition!
@@codepersist I know how hard it is to include just the right amount of info. I think you did a pretty good job with that.
@@anonymousalexander6005 I'm not sure I understand your point about the type 1 and type 2 hypervisors and containers. Can you please elaborate? P.S. I find the "bare metal hypervisor" and "hosted hypervisor" more descriptive, and thus better, than "type 1" and "type 2". I always have to check which was which :)
Putting several services in the same container only makes sense if they scale together. It might be better sometimes, but it is the special case. That is, if you put them together you can only scale them together. If you put them separately you have more options (including a 1 to 1 scaling). The special case might have some benefits, but the general one is still a better fit for this video IMO.
I love your channel, even as a cs student, it’s such a refreshing way of learning these things. Very interesting. Keep up the good work 🙏
I prefer to use nix, because you can specify dependencies and tools that will be locally installed without virtualization.
But nice video anyway
what's nix?! I've never heard of it!
Sure for local it's fine, but when you have 50 different software by 50 different devs, all needing port mapping and volume mapping, docker still kicks butt
"works on my machine" ---- ok so we will ship your machine
i use nixOS btw
I’ve long wondered if arch or nixos is more impressive
@@ARandomUserOfThisWorld arch is just linux with a package manager which receives updates every day. nix os has a whole philosophy around reproducibility.
Nix if you want your software to just work @@ARandomUserOfThisWorld
@@ARandomUserOfThisWorld I've been using arch for a while now and I find it really easy, and I came from Ubuntu. I also gave nixOS a shot a while ago; it has an install UI, so it is easy to install. But I found the config language difficult, and that might cause people to find it more impressive.
@@patrickkdev ah, makes sense. Also, you probably expected this, but i am legally obligated to say:
'I use Arch BTW'.
Also yeah arch does seem better at least for my needs
I know it’s not identical, but I had to deal with Python venvs for an API we had at my old job. One thing that we ran into quickly was that you couldn’t push the whole venv by default from the dev to prod servers. Turns out it defaulted to absolute paths for EVERYTHING. So if your python installs were in different places or were 0.0.1 version difference, it would break. Finding all those spots and fixing them was a pain in the butt. So we just only committed the actual code and had it populate INTO the venv on each server respectively
can you talk about Test Driven Development and Game Programming?
The whole point of the JVM is that you compile to it and it runs anywhere there's a JVM. I've never fully understood why people bother to put Spring apps in containers when they don't need the isolation
I'm working on an AEM project at work, and it too is Java/JVM based. I sometimes wish I could just use containers to reproduce our AEMaaCS (Cloud AEM) 😅, because due to some configuration - most times permissions and occasionally IP locks - stuff works differently locally, in cloud testing, and in cloud production.
Also our application is segregated with more pieces of different versions and code we can't access, just call, and that causes issues, too :/.
Because Kubernetes.
You’re back!! Awesome video!
please make the video about kubernetes
It worked on my machine.
We have the same container!
Virtualization software just casually stops working properly.
the next video needs to be about nix :D
What software do you use to make these videos?
4:32 when you are running npm install, where are you installing the packages? on you machine?
5:40 what are you running in the other tab with ffmpeg? are you recording this video lol
Containers are NOT ALWAYS reproducible, so the "doesn't work on my machine" problem still happens.
The moment you use `apt update` or similar, you lose your reproducibility altogether.
(The container has a different hash)
Nix is a better way to solve this issue and can be used with containers.
A container is a way to isolate your environment, and Nix solves the problem of building the container image deterministically (yes, Nix can have reproducibility issues too, but that's the exception rather than the rule)
do you mean Nix Flakes?
Nix itself doesn't solve update problems.
But tagged container images are the same: if you test on a tag, the same tag will work on the server
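The Nix reproducibility point in this thread is usually expressed with a flake. Unlike `RUN apt update`, which fetches whatever the mirrors serve that day, a flake's inputs are pinned to exact revisions in `flake.lock`, so the same inputs rebuild the same environment. A minimal sketch (the channel and package choices are illustrative):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.nodejs_20 ];
      };
    };
}
```

`nix develop` then drops you into a shell with exactly those tools, on any machine, and the same flake can be used to build a container image deterministically.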
The point of eliminating "it works on my machine" concerns the container itself, not the image-creation process. So if you're trying to build an image from a Dockerfile, that's the plain old process outside of container protection.
_To err is human; to contain, divine_
thank u
OR you could just use a "nontrash" language with cross compilation and static linking.
This will ACTUALLY work on any machine.
WITHOUT any virtualization, container, vm, whatever else.
This won't fix the issue completely, as you can see from the following answer on this video: "@catnt6511
5 hours ago
Nice video. Something that should be brought up is that sometimes containers don't "just work" on different machines for hardware related reasons. I had a case where everything worked in the development and testing environments, but it kept crashing on startup on an environment another company provided us. Long story short, they had given us access to a VM which didn't have AVX instructions available."
Your case would still have failed: you might have compiled with AVX enabled in your environment, and it would just crash in production.
Me, when I start open source: "It works on my machine!"
My fellow contributor: "It doesn't on mine!"
Also me: *UNINSTALLS GITHUB*
how are you getting code completion in your command line? is it Copilot CLI? trying to find a free or cheap alternative but couldn't for Windows. idk why Codex CLI doesn't work
I'm just using zsh with zsh-autosuggestions. You can look into oh-my-zsh along with zsh to get a similar setup. You may have to use WSL if you're on Windows, but I'm not 100% sure
What if the hardware is different?
That's why containers are so great! You can create a container on your computer, and as long as the other computer supports containers (they almost always can), the code will run the same on both your local environment and the external hardware! Containerization has streamlined a lot of production environments, allowing devs to develop and test code locally on an environment that is a 1-to-1 copy of the production environment
@@codepersist Mostly true but slightly inaccurate: if your program depends on CPU extensions like AVX, and your host machine or VM does not support them, it's not going to work
But for "normal mortals" different hardware isn't something that is gonna cause issues :)
Happy coding
What is your terminal setup? It’s gorgeous
Great, now I have to make the containers work on my machine. Imagine spending weeks learning a system rather than just reading the instructions in the readme?
just write good cross platform code, no need containers... easy.
containers can only run linux applications on a linux host. mac and windows need to spin up a vm and run the docker containers inside that vm. also, containerizing every single application component just to get it to run on different hosts is like the pinnacle of what is wrong with modern software engineering. something like nix is a much superior concept to containers.
"It works on my machine "
- "Send me your machine!"
do NOT use containers for Node development, because it will waste tons of your time... if you want to make sure everything is working, use CI/CD and deploy your app to your dev environment before deploying to production
so no more excuses?
Or you can just fix your code…
fix the code, bruh 😂. It's not the code but the environment