have never seen someone be sponsored by docker damn
Damn boss 😂😂
A 300x bigger yt channel (fireship) got sponsored as well
but I'm sticking with podman
@@NastyWicked it's yet to meet all the production demands, but I'd love to switch to it
Someone give this editor a raise
Maybe he's The Editor. 😂😅
It's all me still 🥹
@@dreamsofcode it's beautiful
@@dreamsofcode That's the first time I've seen someone good with low level programming + good with graphics. Fantastic!
@@clusterdriven thank you! I'm still woefully inefficient (this one took a long time). I'm hoping to take a course to improve my workflow.
I'm a firmware engineer. We use Docker for our 'automated test' stations - the devices under test are all connected to a host machine which runs multiple copies of the same Docker container - one for each set of hardware, with devices passed into the container (together with the station number).
Each instance is a Jenkins slave node, and has labels for the capabilities/configuration of each station.
Our automated tests (which number in the 1000s) run on the first available node with the required labels.
We can now run our tests overnight rather than the manual testing which took weeks.
It doesn't completely eliminate manual testing, but if a release candidate passes all the unit tests, integration tests, and automated tests, it only rarely has an issue beyond something aesthetic.
Wow! That is deeeeep!! May I ask, how many units are tested in your process?
sounds expensive lol
@@robinpipslayertekprofitsfa2644 we've got 30 stations split over 3 hosts - we found there's a limit on the number of USB devices a single host can handle; each host really doesn't need much horsepower otherwise.
The hardware for each station (including the device under test) runs about NZ$500, so not cheap, but we built up gradually. The ROI is reliability and reduced QA time - we have 1 part-time QA, whereas when it was manual we needed 2 full-time QAs and it took a week.
Interesting
@@Insomniatic9988 How does it sound expensive?
Mounting the source directory as a volume also works so you don't need to rebuild the image any time code changes.
I do the same and it's so simple. However, this will only work for applications like Node or others that auto-restart the server on code change. When working with SQL servers it's a bit harder to do, so in that case docker watch would be pretty cool...
I think the point was for the image to be self-contained and not reliant on the host data so that the image can then be deployed as-is without any additional dependencies.
@@vike1705 Moving away from a monolithic architecture might help. Make the SQL server another container that sees very few changes and keep the main app stateless. The SQL server can stay up whenever you make code changes to the app container.
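Something like this rough compose sketch (service names, paths, and image tags are made up) keeps the DB up while only the app syncs/rebuilds:

```yaml
services:
  app:
    build: .
    develop:
      watch:
        - action: sync        # copy source changes into the running container
          path: ./src
          target: /app/src
        - action: rebuild     # rebuild the image when dependencies change
          path: package.json
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    volumes:
      - db-data:/var/lib/postgresql/data   # DB state survives app rebuilds
volumes:
  db-data:
```

Run it with `docker compose watch`; the db service just keeps running while the app container churns.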
Good point!
This feature was likely created because of macOS's slow filesystem sync between the virtual machine and the host system. On Linux it's not a problem.
I think the most odd use of docker I've come across was in embedded systems. I worked for a company that manufactured single board computers and they needed to be tested during manufacturing. Normally that means making a specific test image for an SD card, burning it and shipping that out to the manufacturer.
Instead, we made a single test image which phones home with the board serial number and it then looks up and pulls down a specific test docker image to run the tests. Now no more burning and the SD cards don't wear out. The manufacturer can then use the same image for all the boards.
Is the image pulled for the board run as docker in docker?
This is quite interesting. Thanks for sharing this.
I'm confused. Why did they use SD cards in the 1st place? Is the shipping distance short enough that it's faster than ISP bandwidth? Why is Docker used instead of just letting them download+flash the image with an automated tool?
@Rudxain it's an SBC; SD cards are the primary disk
@@binaryblade2 oh, like a Raspberry Pi? I understand that, but I'm still confused.
I've read it again rn (this is the 5th cumulative time 💀), and I think I understand some more:
So each board model needs some specific files to do the tests, but all of the boards can use the same "base image". This means that instead of building+flashing custom images for each model, they release a "base" img for all boards and that img (when run) downloads model-specific files. Am I right?
One odd use of Docker I've written is publishing versioned images containing only files that another service in a much larger compose file needs (since docker doesn't have any concept of publishing a volume). That image simply copies its assets to a volume mount and exits, making the files accessible to other processes with access to that volume
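For the curious, a rough compose sketch of that pattern (image and volume names are made up) - the publisher copies its files into a named volume and exits, and the consumer waits for it to finish:

```yaml
services:
  assets:
    image: example/shared-assets:1.2.3   # hypothetical; its entrypoint copies files to /out, then exits
    volumes:
      - shared-assets:/out
  web:
    image: example/web
    depends_on:
      assets:
        condition: service_completed_successfully   # wait until the copy has finished
    volumes:
      - shared-assets:/srv/assets:ro
volumes:
  shared-assets:
```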
Init containers are pretty normal in the kubernetes world.
VSCode Devcontainers are my favourite use-case.
This is something I've been researching using docker for. This video has inspired me to go full send.
We've just started using test containers at my work; it's really nice for testing the integration with the database. In the past we tried to mock the database connections in the unit tests and only integration-test the whole service, but this lets us easily use a real database for the unit tests of the data access layer, which we have running live as we type. You can type SQL into a string and watch the test go green as you type, with no mocking.
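In case anyone wants to see the shape of it - Testcontainers ships libraries for several languages; here's a rough Python-flavoured sketch (API names per testcontainers-python):

```python
from testcontainers.postgres import PostgresContainer
import sqlalchemy

def test_count_users():
    # spins up a throwaway Postgres in Docker; torn down when the block exits
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text("CREATE TABLE users (id int)"))
            conn.execute(sqlalchemy.text("INSERT INTO users VALUES (1), (2)"))
            count = conn.execute(sqlalchemy.text("SELECT count(*) FROM users")).scalar()
        assert count == 2   # real SQL against a real database, no mocks
```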
Lol, once you have seeders and migrations you end up refreshing the test DB anytime you want, especially to test new code behaviour
If you use a database in tests then those are not "unit tests"
I don't see how many of these are odd. I've been doing a lot of them since I learned how to use Docker.
The "watch" feature is super nice though. You no longer have to rely on the various watch features of the different build tools.
Agreed, pretty sure we’ve all been using socket like this for yeaaars
I personally use containers as my main work machine. Meaning I have my dev tools in it and just mount a volume that is my home drive.
The reason is the same as you mentioned in your video: reliability and reproducibility. I don't have to wonder if an update screws over my work environment.
Also it can run everywhere, so all I have to do is make sure I have SSH and the image on any of my hosts.
That is really neat, I liked using Proxmox for that same purpose years ago. I wonder if there are any snags with using docker?
As I recall my hardware was not up to the task of network raid so I had to schedule regular syncing (that was surprisingly fast with rsync), and docker sounds almost ideal for that.
@@jameslynch8738 well, Docker-in-Docker is supported. The only thing I had to do was move my home folder into a volume, because at first I mounted a home folder from my host. Depending on the company antivirus that can be a problem 😅 - with me using nvim with plugins, and lazy being written in node, you get a lot of files and the antivirus freaked out. But after that it's pretty easy. I have the same image on my machine and the 2 other hosts I use the most, plus a docker-compose file with „restart always", so I always have a workstation and only need ssh.
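Roughly this shape, if anyone wants to copy the idea (image name and ports are placeholders):

```yaml
services:
  workstation:
    image: example/dev-env:latest   # hypothetical personal image with nvim etc. baked in
    restart: always                 # workstation comes back up after host reboots
    volumes:
      - home:/home/dev              # home lives in a volume, not on the host
    ports:
      - "2222:22"                   # ssh in from anywhere
volumes:
  home:
```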
Same here. I just add compose to new projects when I need to look through some code for someone, then remove the container once I'm done. Poof - dev environment back to pristine.
@@elmalleable yep, it also helps with version management. Version management in Python, for example, is horrible - even things like conda are just decent fixes for a still-bad situation. Way easier to have different images for different Python versions; devcontainers are specifically made for that. But I also use them in our CI/CD pipeline, i.e. I reuse the same image I use for my workstation as a pipeline target. The pipeline doesn't have my home directory, so it doesn't have all my crap that accumulates over time, and I have reproducible builds. If something breaks in the build step I can be almost 100% certain the error has something to do with the home dir. So much less headache 🫡
@@badrequest403 Wow, the simplicity of that just blew my mind. I had a delta backup system with one month of daily and weekly backups, but after an update something messed with the uid/gid numbering convention, and then a lightning storm hit - during recovery each drive failed one after the next. Ten years of R&D records and essential accounting information; I think I had a nervous breakdown at some point, caught pneumonia and just kept working. That was over ten years ago though 😕😅
Great video!
My oddest Docker use case: we have an application which works API-first, and as part of that workflow we want to generate libraries for the various services and clients to match the API specification.
One of those applications is a Flutter mobile application. What was challenging was ensuring we had a method of building the Flutter library on any machine or pipeline.
Our solution was to have a Dockerfile create an environment where we can build the Dart code; we then copy the volume/output from the container back into the project so it can be versioned.
I often use Docker containers as an isolated development environment. I'm currently taking a CS class that requires a set of Python modules provided by the textbook, but they don't install properly on macOS Sonoma with Python 3.12 and haven't been updated since 3.6 was current. Thankfully, Docker was there to save the day! A bit of scripting to automate the module installation, system updates, and setting my preferred PS1, and it works great.
Plus, not having to install development dependencies and SDKs for Python, Java, Rust, Node, and C (gotta try ‘em all!) all on one device is nice. If I’m done with a language, I can just nuke its image and volume!
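The Dockerfile for that kind of class environment can stay tiny - a rough sketch, assuming the modules are pinned in a requirements.txt (package list hypothetical):

```dockerfile
# pin the Python era the textbook modules were written for
FROM python:3.6-slim
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
# personal touches baked into the image
RUN echo 'export PS1="\w \$ "' >> /root/.bashrc
WORKDIR /work
CMD ["bash"]
```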
On a side note: so far, I’m not a big fan of Rust. It’s fine, but it’s more explicit than C. I know it helps with code safety, but it’s a bit much IMO. And why on earth does it have two different string types!? (…at least it has strings, C.)
One of the rare moments where I subscribed instantly. Really nice video! Nice editing, nice story telling and great content.
Docker could not have chosen a better partner on this video!
Clear and concise tutorial, every second is worth watching. Thanks a lot!
Amazing, the testcontainers package is super useful. As well as this whole video. Thank you!
I discovered testcontainers after watching this video. Would love to see more of how you are using it! I really like working with devcontainers and this seems like a natural companion. What are your thoughts?
what are devcontainers?
Really really cool to learn about these commands - I've been getting into docker more and more recently and I am SO excited to try docker init for some of my old projects that I want to run in containers!
Compose watch is an eye opener ❤
Yeah I really got used to only using docker compose for backend and services, not for the frontend because of the hot reload
1:39 there is an amazing feature you gloss over: you can run GUI applications in Docker and connect to them via VNC. In the video it looks like it opens up automatically; I assumed that you would manually start your VNC client, but maybe this is somehow automated as well?
You mean the Firefox example, right?
Ooooh!! So is Docker a Linux system?! 😵 @@smthngsmthngsmthngdarkside
installed podman just to try that, I don't know how he got this shit to work tbh
I think doing this with xpra would be more efficient. It's designed exactly for that kind of use case; you can automate it with bash scripts and .desktop files for whatever app you're using.
@@justahumanwithamask4089 looks like he is using this repo: jlesage/docker-firefox, which says to connect via VNC (or via a web interface, which is probably also a VNC wrapper).
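For reference, running it is basically a one-liner (ports from memory - check the jlesage/docker-firefox README):

```bash
# 5800 = web UI (VNC in the browser), 5900 = native VNC clients
docker run -d --name firefox -p 5800:5800 -p 5900:5900 jlesage/firefox
# then open http://localhost:5800, or point a VNC client at localhost:5900
```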
One cool use case is ripping Blu-ray contents using Docker. You can have it automated by simply inserting a Blu-ray disc into a drive on the machine the Docker container is running on.
Could you please explain more? How did you do that, and why use Docker for this task? What tool did you use, something like MakeMKV?
@@guitaripod we need answers
I second everyone’s curiosity, what container(s) are you using?
Docker is great but your editing is even better 🔥
There was way too much going on. I found it to be a bit overwhelming and too fast in terms of visual effects.
@@catfan5618 I'm familiar with Docker and I understood everything he explained, and I liked the editing. What bothered you? Too much information?
@@NeoDarkEther I was talking about the editing
I use docker as a Linux desktop vm. On linuxserver they have something called a webtop and it basically creates a Linux machine that your browser can connect to. So if I’m messing around in arch Linux and I break something, I can just do a redeploy and it’s no longer broken
Only a matter of time before that backfires. You shouldn't use docker containers as if they are VMs.
@@jake8217 why not? I don't do anything crazy, I don't need bare-metal performance, it definitely feels faster than VirtualBox, and I can wipe the state in seconds if something goes wrong. I usually just mess around with themes and check app compatibility for when I eventually make the switch to full Linux.
@@jake8217 even if I believe you, it would be interesting to hear the argument for that and where the line is. There must be a use case where containers aren't recommendable anymore.
Such a well produced and well edited video. I didn't really learn anything new, but I loved every second of watching it.
😂😂😂😂😂😂🤭
Rename this to "Using docker in usual ways"
i've used docker for programming languages and tools for a while. so much that i've made myself some aliases to run those images with mounted volumes. but the whole `docker compose run ` brings this to the next level, so i'll probably do some sort of asdf.
We deploy our high-speed, low-level C++ software to a lot of different platforms, ranging from embedded systems to cloud infrastructure.
To ensure our stuff works for all cases we have our CI/CD pipelines spin up containers for all possible OS, architectures, and compilers we could possibly work with. Then it will build and run our test cases in all of those containers to make sure our core libraries will behave for whatever deployment scenario we run into.
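The core of that kind of matrix can be as simple as a loop - a minimal sketch, with an illustrative image list and a plain make-based build:

```bash
#!/usr/bin/env bash
set -euo pipefail

# image list is illustrative; a real matrix would cover every target OS/compiler,
# and multi-arch runs need --platform plus binfmt/qemu set up on the CI host
for img in gcc:12 gcc:13 gcc:14; do
  docker run --rm -v "$PWD":/src -w /src "$img" \
    bash -c "make clean && make && make test"
done
```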
the last 3 ways are fairly standard usages of Docker in any IT company with more than 5 employees (that actually write tests, for that specific use case). But the details are still pretty interesting nonetheless
Instant subscription given after seeing a few seconds of the video, paired with the clear voice explaining it.
docker-compose watch looks amazing to use :D
This is a great breakdown of Docker's new features. I look forward to your video on test containers.
Testcontainers is pretty amazing; it's something I would never have expected from Docker.
It's only a recent move to join up with Docker; they've been around for a long time.
This is one of the best YouTube thumbnails I've ever seen.
'docker compose watch' is pretty neat. It reminds me of 'inotifywait', which is what I used when I would invoke shell scripts instead of using 'docker compose'
Been using docker in my day-to-day since its early days, and TIL it can run GUI apps 🤯 😊
The quality of these videos is amazing. Awesome job 😎
(From the Docker DevRel team) Great job with the video! I love all of the unique use cases you highlighted! Fantastic work!
You guys are awesome!! 🎉
2:27 Achieved badge "Works on my machine" 💪
Test containers definitely sounds like a good subject for a full video.
damn bro, this is my second or third video related to docker and I fckng understood everything! this is an underrated channel!!
Thanks!
Thank you so much! I really appreciate your support
The first use case has saved me a couple of times. The best, though, was when I needed to remote-debug an old BrightSign player, for which I needed Chrome v48 in order to run the debug tools.
I need to use docker to containerize legacy code...
Gotta deep dive into this.
Testcontainers seems like a great option for my test suite.
Didn't know about docker watch, very cool and useful
"Running old versions of software" is not an unusual use of docker at all
that's honestly its whole purpose - if you start a new project now, it gets old next week anyway
TestContainers is absolutely game changing!
great video! lots of valuable info
Please make a video on test containers, thank you.
Using testcontainers with pytest would make for an awesome video!
I really liked this video! I'm definitely interested in another one on test containers
Nice video! I learned some tricks, and saw how to use some of the latest features brought in by Docker.
This is a really good video, and I am excited to apply these to my workflows!
Please make a video about test containers, it sounds really interesting.
This video was quite insightful :)
p.s. You got a new subscriber.
I would love to see more about testcontainers!
At 4:43, the define-all-tooling-in-a-Dockerfile approach can give you one container at the end. But if you use docker compose, wouldn't that give you multiple containers instead? And what if you want one container with all your dependencies?
Hey, nice video. Was wondering if you could compare docker with nix flakes and the tradeoffs of using one over another.
The benefit of having programs installed alongside your running shell is a plus.
Thanks
Thank you so much!
Excellent uses of Docker. I should review it more calmly later.
That time machine use is a great idea
Yes please do a video on testcontainers 🙏
Your videos are great man, keep it up!
Great vid, would deffo be interested in seeing Test Containers in action.
Very impressive, thank you for giving me this alternative view on docker❤
I started using Docker back in 2014, and the reason was NodeJs 😄
Node was so heavy on my mac, even after uninstalling it. So I came up with an idea 💡
After a clean installation of macOS, I installed Docker and used NodeJs inside Docker and never installed it on my Mac
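Same trick still works today - a rough sketch of the aliases (image tag is arbitrary):

```bash
# route node/npm through a container instead of a host install
alias node='docker run --rm -it -v "$PWD":/app -w /app node:20 node'
alias npm='docker run --rm -it -v "$PWD":/app -w /app node:20 npm'
```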
I’ve been using Docker for years and especially the legacy code case is great. I maintain a legacy webapp that’s stuck at PHP 5.2 (the horror) and having crafted an PHP 5.2 image (patched with PHP FPM) I can still do some (minor) development on this app.
The same goes for some legacy apps that require Node.js tools (such as an old webpack, for example) that I've put into a container. The JavaScript world is hell-bent on breaking stuff with every release of a package or Node.js, so this way I don't have to fight the system every time I need to change something.
Yes please make a video on testcontainers and how to use in CI if possible
This is awesome. Every time you bring up a use case, I'm going through what I'm currently doing in my mind. And your suggestions are so much more elegant. Thank you.
A use that put Docker on my radar back in 2013 was the ability to manage dependencies independently of the working image. In my case, I was actively working on a Perl app. Perl’s package management is not what I would call “modern”, and was very picky about versioning. I was able to create a base image, which my working image inherited from, so that the versions of packages were very consistent, unless I tested and pushed a new version of the base container.
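A rough sketch of that layering (module names and tags are made up; the official perl image ships cpanm):

```dockerfile
# base image: pin the dependency versions once
FROM perl:5.36
RUN cpanm DBI@1.643 Mojolicious@9.35

# working image (separate Dockerfile): inherit the frozen dependency set
#   FROM example/perl-base:2023-10
#   COPY . /app
#   CMD ["perl", "/app/app.pl"]
```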
love your explanation, just on point. please also share more about test containers
My second most recent video is on test containers!
@@dreamsofcode thxxx
Hey man, subscribed just for the next episode "Test Containers". Please bring it up :) Good job with the video too
I use vs code dev containers for dev purposes, you can also combine them with docker compose and there’s no need to rebuild. If you make changes to the docker image you can always commit the changes with docker commit, although defining the changes in the file is always a must as well.
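For anyone who hasn't used it, the commit flow is just (names made up):

```bash
# snapshot a tweaked container into a reusable image
docker commit my-devcontainer example/devenv:snapshot
# ...then mirror the change in the Dockerfile so the image stays reproducible
```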
Great video! I had no idea about `watch`. Is there an action to just restart the container when files change? There are some services that need to be restarted when the code changes and that don't have their own file watchers. And I don't need to sync anything because I'm using bind mounts.
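Last I checked, compose watch has a `sync+restart` action, which sounds like what you want - the sync is redundant with a bind mount but harmless. A rough sketch:

```yaml
services:
  api:
    build: .
    develop:
      watch:
        - action: sync+restart   # copy the change, then restart the container
          path: ./config
          target: /app/config
```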
If you haven't done so already, could you make a video that shows how to use a GUI from within a Docker instance, please?
I haven't used docker before, but this helps me understand it a bit more.
Sounds kinda like the assembly thing Unity has. But I might be misunderstanding.
What we need is a high def version of 2:30 publicly available for all devs around the globe to tag in their profiles.
Should be a must for all devs who use Docker, at least
Didn't know about "docker compose watch", thanks 🙂 Now maybe I can ditch Tilt, which has been buggy for me
Really high quality and amazing video 👌
Love it! Waiting for the Testcontainer video 🤞🏽
How would docker help with testing on older browsers? An X server wouldn't be able to talk to the host environment, would it?
We use docker in embedded systems where I work, and honestly I don't know why it isn't more common in the embedded space. Embedded systems have a really long life, and the companies deploying them tend to minimize long-term changes, so being able to archive the build environment is a matter of necessity. Example: in the mid-2010s we had to fix a defect in a product released in the 90s, and the compiler company didn't even exist anymore. It also makes deploying the build environment to a distributed team easier: "Install docker, run scripts from inside this specific container." This makes onboarding, project hopping, and unit testing faster (no more "it works at my desk").
It's been night and day for us.
what terminal emulator is that? or is that just how docker looks? i dig the working directory and shell info
When I need to use some open source software that doesn't have any binary release available, I create a fresh Debian container, install the necessary dependencies, and compile it. When it finishes, I just copy the binary to my host machine using docker cp. That way I don't bloat my PC with the huge build requirements some projects have.
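The flow, roughly (repo URL and paths are placeholders):

```bash
# throwaway build box
docker run --name buildbox -it debian:bookworm bash
#   inside: apt-get update && apt-get install -y build-essential git
#   inside: git clone https://example.com/tool.git && cd tool && make
docker cp buildbox:/tool/tool ./tool   # grab just the binary
docker rm buildbox                     # toss the whole build environment
```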
This is the way 🫡
Those features are amazing, docker looks much easier now
So, you can also create a WSL distro out of a docker container: just export the container as a tarball and import it as a WSL distro. It's not as connected, but still.
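Roughly this (run the wsl command from Windows; names and paths are examples):

```powershell
docker export -o my-distro.tar my-container    # container filesystem as a tarball
wsl --import MyDistro C:\wsl\MyDistro my-distro.tar
```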
Real men daily drive linux 😂
Love your content. It's fast and easy to grasp.
you can just run the tests in docker as well: have a tests target in your Dockerfile, then set depends_on in the compose and docker will check it for you, then a Makefile to just “make tests” and done
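Rough shape of that wiring (stage and service names made up; the Dockerfile has a stage like `FROM build AS tests` whose CMD runs the test suite):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
  tests:
    build:
      context: .
      target: tests            # only build up to the tests stage
    depends_on:
      db:
        condition: service_healthy   # don't start tests until the DB is ready
```

The Makefile target then just wraps `docker compose run --rm tests`.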
Bro really thought to himself: "hmmmm, let's not rewrite this in the logical next step aka Python 3, rewrite it in Rust" 💀
Bro, could you please tell us what you use to create those beautiful animations? Your stack.
Which software are you using for the animations? They really look good
Nice video as always.
Using docker as a local stack or development environment is OK, but if it creates or modifies files you need to match the UIDs of the host and container. Otherwise, root-owned files will appear on your host file system.
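The usual fix, sketched (image and script names are placeholders):

```bash
# run as your host user so generated files aren't root-owned
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/work -w /work \
  python:3.12 python generate.py
```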
Is it just me, or is it very cumbersome to use Docker just to develop? I might use it to run integration and deployment tests, but that's about all.
Great video. Not sure how is this unusual though
Thank you so much for this very informative video.
I'd love to hear more about your test containers
Video so great I had to run it back from the top one more time
To execute UI applications with audio, e.g. a browser or movie player.
For isolation and testing across different distros and versions.
Amazing video - Docker is the best thing since sliced bread.
Man, I just love Docker
Outstanding and informative video