Backing Up Your Docker Configurations and Data

  • Published: 21 Oct 2024

Comments • 127

  • @ui4lh
    @ui4lh 1 year ago +4

    I have Syncthing pushing my docker folders from my server to a Raspberry Pi with an external HDD at my folks' house as an off-site backup, and since Syncthing does a one-way sync it keeps the backup current in real time.
    Lightning struck a couple of months ago and fried a whole bunch of equipment: my ISP's ONT, my router, and my server's mobo and CPU. The backup served me well, because I was able to get it all back up and running quickly.
    I had images of my HDD backed up, restored the image, restored the docker files to latest, and boom, as if nothing had happened :)

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      First, I'm sorry that happened to you at all, but I'm glad you're OK. What a great testament to the value and importance of backups, and also to the ease of use of Docker for quick recovery.

  • @tack-support
    @tack-support 1 year ago +7

    Been wanting to do something like this for a while. Thanks for laying the groundwork. I can integrate this into the rsync/restic backups that I already do.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +3

      Absolutely. If you're already using a tool for backups, adding this to that workflow should be pretty straightforward, I hope.

    • @toastytodd
      @toastytodd 1 year ago +1

      I've been using Timeshift for my server backups, but for some reason it won't back up my docker folders... I believe Timeshift uses rsync.
      So maybe I'll do this instead.

  • @DocBrown101
    @DocBrown101 1 year ago +3

    My backup strategy looks almost exactly like the one you showed us here. The only big difference is that I use restic instead of tar, and I can recommend that everyone here have a look at it!

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Nice. I have Restic on my list to look into. Thanks for the info!

  • @mehdiyahiacherif2326
    @mehdiyahiacherif2326 1 year ago +7

    Small adjustment: you can build a list and loop over it in bash, so you can easily add/remove containers from the backup, since the same list is used to stop and then start the compose files.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      Great tip! Didn't even think of doing it in a loop.

    • @laukhengsoon
      @laukhengsoon 10 months ago

      @@AwesomeOpenSource
      #!/bin/bash
      # Root directory containing one subdirectory per compose project
      root_directory="/var/dockers"
      backup_folder="/var/lib/docker/backup"
      backupDate=$(date +'%y%m%d%H%M%S')
      ipAddr=$(ip addr show "$(ip route | awk '/default/ { print $5 }')" | awk '/inet / {print $2; exit}' | cut -d'/' -f1)
      fileName="${ipAddr}_docker_backup_${backupDate}.tar.gz"
      # Stop the containers in each immediate subdirectory
      find "$root_directory" -mindepth 1 -maxdepth 1 -type d | while read -r directory; do
          echo "Stopping containers in: $directory"
          (cd "$directory" && docker compose stop)
      done
      # Archive the volumes while everything is stopped
      cd "$backup_folder" || exit 1
      tar -czvf "$fileName" /var/lib/docker/volumes/
      # Start the containers in each immediate subdirectory again
      find "$root_directory" -mindepth 1 -maxdepth 1 -type d | while read -r directory; do
          echo "Starting containers in: $directory"
          (cd "$directory" && docker compose start)
      done

  • @Robertjaymercer
    @Robertjaymercer 1 year ago +1

    Thank you for that tutorial! It's always a pleasure to learn things with you! I've been following your channel for quite some time now!

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Thanks for following along with me; I love to have folks learning with me!

  • @tonydematteis5029
    @tonydematteis5029 1 year ago +4

    Appreciate the straightforward reviews and advocacy for open source. Thank you! I offer a slight correction on the tar command options you gave. In -czvf, c is for create, not compress; z is for compress, not zip; v is for verbose; and f is for the target file. You are creating a compressed tar file: .tar.gz. If you omitted the z option you'd leave off the .gz. You don't technically have to add or omit the .gz in the filename; either way, it has no bearing on whether the file is compressed. That's all in the use (or not) of the z option. And tar -xzvf means x for extract, z for a compressed file (uncompress it), v for verbose, and f for the source file. Keep up the great work!
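
A quick way to see the flags described above in action, on throwaway files (safe to run anywhere; all paths are temp dirs created on the spot):

```shell
# Demonstrate tar's c/z/f (create) and x/z/f (extract) flags on scratch data.
work=$(mktemp -d)
mkdir -p "$work/src"
echo "hello docker" > "$work/src/config.txt"

# c = create, z = gzip-compress, f = target file (add v for a verbose listing)
tar -czf "$work/backup.tar.gz" -C "$work" src

# Same archive without z: a plain, uncompressed .tar
tar -cf "$work/backup.tar" -C "$work" src

# x = extract, z = uncompress, f = source file
mkdir "$work/restore"
tar -xzf "$work/backup.tar.gz" -C "$work/restore"

cat "$work/restore/src/config.txt"   # -> hello docker
```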

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I learn something new every day, and you sir have absolutely taught me something new. Thank you!

  • @i-am-you-tube
    @i-am-you-tube 1 year ago +1

    Thanks a lot Brian, for making this video and handout for the docker backup. I always like your clear explanation of doing things and more importantly, why you do things. Greetings and keep up the good work👍

  • @florealucianm
    @florealucianm 1 year ago +5

    Add this line at the end of the script to delete archives older than 3 days:
    find /backup/folder -type f -mtime +3 -delete

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Thanks. I added that to my show notes, just not in the video.

  • @nathanmcfarlane317
    @nathanmcfarlane317 1 year ago +1

    Thanks, I've been wanting to do something like this myself. I would love to know about deleting certain old backups as part of the process.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I'll do a follow-up and talk about that part. It's a one-liner in the script. I run it at the beginning of the script to make room before I run the backup and move it to long-term storage, but you can really run it anywhere in the script.
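
The age-based cleanup one-liner from the earlier comment can be tried safely on scratch files before pointing it at real backups (paths here are a temp dir; `touch -d` is GNU coreutils):

```shell
# Simulate an old and a new backup, then apply the retention one-liner.
work=$(mktemp -d)
touch "$work/new-backup.tar.gz"
touch -d "10 days ago" "$work/old-backup.tar.gz"

# Delete backups older than 3 days (same pattern as: find /backup/folder -type f -mtime +3 -delete)
find "$work" -type f -mtime +3 -delete

ls "$work"   # only new-backup.tar.gz remains
```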

  • @gotothemoon5108
    @gotothemoon5108 1 year ago +4

    My backup script is quite similar to yours, except that I use 'restic' instead of 'tar'. Since 'restic' backups are incremental, the backup is a lot faster than using 'tar'. 'restic' also does a lot better in terms of version control and storage usage.

  • @HelloHelloXD
    @HelloHelloXD 1 year ago +8

    To stop and restart all containers quickly use
    docker stop $(docker ps -a -q)
    docker restart $(docker ps -a -q)

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +3

      Great tip! Thanks for that!

    • @nicolasotero6424
      @nicolasotero6424 1 year ago

      Why don't you stop the docker service? (systemctl stop docker docker.socket).

    • @HelloHelloXD
      @HelloHelloXD 1 year ago +1

      @@nicolasotero6424 About a year ago, when I asked for a solution to stop all containers, that's what I was given. It works, so I use it...

    • @nigelholland24
      @nigelholland24 1 year ago +2

      Great video. I am new to all this. Question: how would you automate this at 3 am, when it asks for your sudo password when you run your script? Thank you.

    • @josemarialabarta
      @josemarialabarta 1 year ago +1

      @@nigelholland24 You can solve it using "public and private keys"

  • @samuelchampigny3481
    @samuelchampigny3481 1 year ago +3

    Hey, that is an awesome video! One thing though: I have a backup script very similar to yours. I found it helped with time and space to exclude temp and cache folders in some containers. Plex/Jellyfin/Emby have a lot of them, and they tend to be the most time consuming, same with the art and some metadata. I have the bandwidth and CPU power to rebuild pretty fast, so I also exclude those from backups. But that is just me being greedy with my storage space.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Totally valid things to do. You can definitely cut down on size by excluding logs, tmp, images, and so on that can be easily rebuilt or re-pulled from some other location if needed.

    • @samuelchampigny3481
      @samuelchampigny3481 1 year ago

      @Awesome Open Source Yep. Also, my backups are in 2 locations: one in a crypt, and the other on an older server that can take on some of the tasks in case my main server goes down.

  • @LampJustin
    @LampJustin 1 year ago +1

    Instead of starting and stopping all the containers, you could also do a sync and then pause the containers. It's not perfect, but faster and easier. If you want to be cool, you could even just use a fancy FS like Btrfs, ZFS, or thin LVM to snapshot the whole directory/filesystem. Then you can do a backup of that snapshot. Since snapshots are atomic, you have all the time you need to take your backup off of it.
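
A minimal sketch of the snapshot idea described above. The Btrfs subvolume path `/srv/docker` and the backup destination are made-up for illustration; the script is only written out and syntax-checked here, since actually running it needs root and a real Btrfs volume:

```shell
# Sketch of an atomic-snapshot backup (Btrfs at /srv/docker is an assumption).
# Written to a temp file and syntax-checked only; running it needs root + Btrfs.
cat > /tmp/snapshot-backup.sh <<'EOF'
#!/bin/bash
set -euo pipefail
SRC=/srv/docker                                   # subvolume holding the container data (assumption)
SNAP=/srv/.snapshots/docker-$(date +%y%m%d%H%M%S)

docker pause $(docker ps -q)                      # freeze writes briefly
btrfs subvolume snapshot -r "$SRC" "$SNAP"        # atomic, read-only snapshot
docker unpause $(docker ps -q)                    # containers resume almost immediately

# Back up from the snapshot at leisure; the live data keeps changing, the snapshot doesn't.
tar -czf "/backup/docker-$(basename "$SNAP").tar.gz" -C "$SNAP" .
btrfs subvolume delete "$SNAP"
EOF
bash -n /tmp/snapshot-backup.sh && echo "snapshot-backup.sh: syntax OK"
```

The win over stop/start is that the containers are paused only for the instant the snapshot takes, not for the whole archive run.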

    • @LampJustin
      @LampJustin 1 year ago

      I'd also like to suggest using Restic or Borg instead of tar. They are far more space efficient and can even mount the backup via FUSE, so you can restore individual files.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I actually do backups of all my VMs every night and move those to my NAS. I just wanted to show how to back up Docker specifically, for those that may not have that ability or functionality today. But 100%, those are all fantastic backup options.

  • @PapaSharmaJi
    @PapaSharmaJi 1 year ago +1

    I will give it a try. I had backup problems earlier. I used aaPanel, stopping the web server and other messy things, and used a backup cron to upload to my GDrive account. This method is also cool. Thank you.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      I hope it will help.

    • @PapaSharmaJi
      @PapaSharmaJi 1 year ago

      @@AwesomeOpenSource Running a bash script like this gives you fewer options for remote backups, but if you want to spare the VM's resources from continuous usage, your method is effective. Thank you. Big fan.

  • @Scranny
    @Scranny 5 months ago +1

    If I understood correctly, you demonstrated how to back up the volume dirs, but is it necessary to also back up the internal file system within the container? In other words, does the container state also need to be backed up?
    For example, you back up Jellyfin's media and config folders, but does the Jellyfin service save anything important within the virtual file system?
    I've seen somewhere the suggestion to run "docker cp / my_backup_folder" or something similar.

    • @AwesomeOpenSource
      @AwesomeOpenSource  5 months ago +1

      No, no need to back up the inner part of the container. It will pull down and start up on a new system as needed, so you only need to back up the data you want to persist, plus your compose file as a reference if you need to bring it up fresh on a new system.

    • @Scranny
      @Scranny 5 months ago

      @@AwesomeOpenSource thank you very much!

  • @FrontLineNerd
    @FrontLineNerd 1 year ago +1

    This was very helpful. Thank you. I will modify this script and try it. Your explanation is helping me understand why my Duplicati backups of my docker volume haven't been working. This is a big problem, though, and we haven't solved it. What if I had people truly relying on these services? I can't tell them we're shutting down for a break every 24 hours so we can back up.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      Now you're moving toward clustered containers (Swarm, Kubernetes, etc.), where you back up one node at a time and update in a staged manner that rolls through, so there is no perceivable downtime.

    • @FrontLineNerd
      @FrontLineNerd 1 year ago

      @@AwesomeOpenSource A video on that would be really great. I'm about to spin up my third Portainer node. I would love to learn how to cluster them.

    • @FrontLineNerd
      @FrontLineNerd 1 year ago

      @@AwesomeOpenSource I have two more machines coming in the mail. Looking forward to building a cluster next week.
      I'm very confused about your backup script, however. If you have a moment, can you help me understand? Why are you stopping each container individually like that? Why can't we just shut down compose and docker entirely with two simple commands? Or even one simple command to simply stop docker? The way this is scripted, I'll have to change the backup script every time I add or remove a container, right? Am I misunderstanding?

  • @lpseem3770
    @lpseem3770 1 year ago +1

    That is a perfectly valid backup plan. You can go fancy with incremental backups in Borg or Bacula, but never use backup tools that are smarter than you. (-: The easier, the better.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      Absolutely. There are tons of options, just pick the one that works best for you, and grow from there.

  • @AcielGamingTech
    @AcielGamingTech 1 year ago +2

    I have been using the bash-script-plus-tar style for more than 3 years. One thing to keep in mind is that it won't back up socket files.
    A few tips for reducing container downtime:
    1. Add 'p' to the tar arguments; p = preserve permissions and ownership of all the files.
    2. When doing "tar -cpzvf ...", don't include the 'z' so that it doesn't spend time compressing the files.
    3. Once your containers are back up, you can compress the tar into a tar.gz.
    (Keep in mind steps 2-3 will take additional reads/writes on the SSD/NVMe.)
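
The "archive fast, compress later" idea above can be sketched on scratch files (the docker stop/start steps are elided; the directory names are made up):

```shell
# Fast archive first (no z; p preserves permissions/ownership), compress afterwards.
work=$(mktemp -d)
mkdir -p "$work/volumes"
echo "data" > "$work/volumes/app.db"

# Downtime window: plain tar only, so no compression cost while containers are stopped.
tar -cpf "$work/backup.tar" -C "$work" volumes
# (containers would be started again here)

# After the containers are back up, compress at leisure.
gzip "$work/backup.tar"      # replaces backup.tar with backup.tar.gz

ls "$work"
```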

  • @kharmastreams8319
    @kharmastreams8319 1 year ago +7

    I did a _very very_ dirty version of this :)
    docker stop $(docker ps -a -q)
    tar -cvzf /data/backup/docker-backup-$backupDate.tar.gz -X exclude.txt /volume1/docker
    docker start $(docker ps -a -q)
    At least it works :)

    • @HelloHelloXD
      @HelloHelloXD 1 year ago +3

      #!/bin/bash
      FOLDER=/home/$USER
      now="$(date)"
      echo "Stopping containers at $now, YAY!"
      docker stop $(docker ps -a -q)
      sudo tar -cvzf $FOLDER/backups/docker.tar.gz $FOLDER/docker/
      echo "Restarting containers at $(date), YAY!"
      docker restart $(docker ps -a -q)
      That's what I have been using for a while now.

    • @kharmastreams8319
      @kharmastreams8319 1 year ago +4

      @@HelloHelloXD I added an exclude file to exclude a few folders with content I have backed up elsewhere.
      (All the book and comic bundles from Humble Bundle. They really do take up a lot of space and make the backup way too slow🤣) Static content I can easily copy back from the source, etc.

    • @BB-mq3nn
      @BB-mq3nn 1 year ago +1

      @@HelloHelloXD Also, for those using rootless podman, the script should largely be the same, except replace sudo tar with "podman unshare tar" so the backup runs in podman's user namespace. And obviously replace all the "docker" instances with "podman" too.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      As long as it works, then well done my friend!

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      Very nice!

  • @josemarialabarta
    @josemarialabarta 1 year ago +1

    Very good video - Thanks for the information - Regards

  • @pctechdr
    @pctechdr 1 year ago +2

    Thanks for this great video. Would it be possible to go through a restore in a future video?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Sure. Let me try to set up a test system to restore to. The thing about the restore is that if you have NGINX set up for apps like I do, you have to change those IPs for the different machine, whereas in the case of a failure, in theory you should be able to assign the same IP to the new machine. Just FYI, I have done this before: I literally hosed a VM, used the backup that I had from this same method, and was back up and running in less than an hour.

  • @donny_bahama
    @donny_bahama 1 year ago +1

    Slightly stupid/noob question, but as I understand things (and I could be totally wrong, since my understanding of this is very limited), there are 2 ways of doing your Docker volumes: one way points them toward a local directory (I usually use something like ./data). The other way puts them out in the Docker ether somewhere. (Yeah, my understanding breaks down at this point.) If I use the latter, does your backup method still work? Does the latter volume configuration still put them in (e.g.) ~/docker/nextcloud?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      So, if you don't specify that the mapped volume should be in the current folder (the ./ portion of the left side of the mapping), then docker generally puts the files in a standard place, but that place can differ from distro to distro. You just need to find that place and back up those volumes from there.

  • @phobes
    @phobes 1 year ago +2

    Portainer is a great tool for this; it's a crying shame it doesn't have a backup feature for compose files and volumes/binds.
    If it did, Docker would absolutely be my go-to vs. LXC.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +3

      Agreed, if portainer could do the volume backup and config / compose backup, it would be the perfect tool for this.

    • @rado415
      @rado415 1 year ago +1

      Make the next step and use Swarm with stacks.

  • @ydiadi_
    @ydiadi_ 10 months ago +1

    This is absolutely amazing, Brian, thank you soooo much! I just started using Linux and commands are scary for me... could you please do a restore video too?

    • @AwesomeOpenSource
      @AwesomeOpenSource  10 months ago +1

      Let me see what I can do.

    • @ydiadi_
      @ydiadi_ 10 months ago +1

      @@AwesomeOpenSource You're awesome, Brian. I'll be waiting. Amazing videos and content.

    • @ydiadi_
      @ydiadi_ 10 months ago +1

      @@AwesomeOpenSource Please reply... I deployed my stuff using stacks/docker compose files, and when I try to run "docker compose stop" (e.g. for Jellyfin) it doesn't stop.
      Please guide me on what to do to make the script work.

    • @AwesomeOpenSource
      @AwesomeOpenSource  10 months ago

      If Portainer is managing those containers it may be a little different, but maybe try just using "docker stop jellyfin" and see if that works instead of using the docker compose command.

  • @donny_bahama
    @donny_bahama 1 year ago +1

    I'm developing a python script that verifies the backup by extracting the gzip file to a temp directory, then compares the source files to the ones in the temp directory. (Point being, that _having_ a backup doesn't really do you any good if the backup is no good.) Core functionality works; I'm just working on the fancy stuff, like parsing the log files and sending myself a push notification to let me know that the backup completed and verified successfully. I also need to run some tests to see if there's a significant (time) difference between using diff, cmp, and sha256 checksums. (I may do it the fastest way on a daily basis and the most thorough/effective way weekly.)
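
The commenter's verifier is in Python; the checksum-comparison core of the same idea can be sketched in a few lines of shell, here run against throwaway data so it's safe to try anywhere:

```shell
# Verify a .tar.gz backup by extracting to a temp dir and comparing sha256 sums.
work=$(mktemp -d)
mkdir -p "$work/src"
printf 'alpha' > "$work/src/a.txt"
printf 'beta'  > "$work/src/b.txt"

tar -czf "$work/backup.tar.gz" -C "$work" src

mkdir "$work/verify"
tar -xzf "$work/backup.tar.gz" -C "$work/verify"

# Checksum every source file, then check the extracted copies against the manifest.
(cd "$work" && find src -type f -exec sha256sum {} + > manifest.txt)
(cd "$work/verify" && sha256sum -c "$work/manifest.txt" && echo "backup verified")
```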

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      Nice. Maybe you'll open source it, and I can cover it on my channel one of these days!

  • @dubas1974
    @dubas1974 1 year ago +1

    I don't store my docker volumes on the server. I use an NFS mount (at /mnt) to a Synology NAS, and all my docker compose files and volumes are there. I then back up my Synology to USB-attached storage and also do snapshots. This way I can fire up any Ubuntu server, install docker, then add my NFS mount to the Synology and just bring up my containers with my docker compose files. It's great to have backups, but test recovery to make sure you can restore from them. I found this to be the best solution: a central location for docker volumes on RAID, backed up multiple ways. I also keep a recovery document with all the commands to bring up a new Ubuntu server, attach the central storage, and fire 'em up.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      That's a great way to handle it. I do something similar on my Proxmox setup. I actually backup my CTs and VMs each night to my NAS.

  • @dmibnmg
    @dmibnmg 1 year ago +1

    I like the new music❤

  • @thomaspeelen9111
    @thomaspeelen9111 1 year ago +1

    I really needed this for my future homelab, thanks! :)
    Question: Would there be any downsides to having all those containers in the same compose file, so that you would only need to run the stop command and the run command once?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I separate mine for the organization I showed, but also so I can run docker-compose pull without having to specify different applications, which could be disastrous with all of them in one file. I'm sure there are mitigations either way, so just make sure you are good with how it all works with everything in one file.

  • @dslynx
    @dslynx 1 year ago +1

    I have another container for backing up my docker containers. I use offen/docker-volume-backup:latest and have it set up to back up each container's volumes to individual .tar.gz files on an SMB share on my data server, as well as to auto-back up its configs. It's been the best (and easiest) solution I have found.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I definitely need to check it out further, a few have suggested it.

    • @JeyZlp
      @JeyZlp 1 year ago

      I am interested. What does your environment look like?

  • @alexsinbb
    @alexsinbb 1 year ago +1

    This is where Unraid 6.12 really shines. This is all taken care of by the Appdata plug-in.

    • @KarlMeyer
      @KarlMeyer 1 year ago +1

      Still no encryption, though, sadly.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      Very cool.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      For encryption of the backups using my method, you can store the final archive in an encrypted partition or vault for sure.
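
The reply above suggests an encrypted partition or vault; an alternative is to encrypt the archive itself. This sketch uses `openssl enc` (`gpg --symmetric` works similarly); the file contents and the literal passphrase are placeholders, and in practice you would read the passphrase from a root-only file rather than putting it on the command line:

```shell
# Encrypt the backup archive itself instead of relying on an encrypted partition.
work=$(mktemp -d)
echo "secret volume data" > "$work/backup.tar.gz"   # stand-in for a real archive

# Encrypt with a passphrase ("changeme" is a placeholder).
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in "$work/backup.tar.gz" -out "$work/backup.tar.gz.enc" -pass pass:changeme

# Decrypt on restore.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in "$work/backup.tar.gz.enc" -out "$work/restored.tar.gz" -pass pass:changeme

cmp "$work/backup.tar.gz" "$work/restored.tar.gz" && echo "round-trip OK"
```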

  • @Lunolux
    @Lunolux 5 months ago +1

    Good idea, but since my docker is in an LXC container (Proxmox), I just back up the LXC. I prefer using an LXC for each app if possible, but I have some apps in docker.
    Why don't you just do your script like this?
    command to stop all the containers
    create the zip backup
    copy to external location
    command to start all the containers
    Thanks for the video.

    • @AwesomeOpenSource
      @AwesomeOpenSource  5 months ago +1

      I don't back up all of my containers, only most of them, so for me it's easier to edit and update the script as I have it. But your method would probably work fine.

  • @vamsikolluri29
    @vamsikolluri29 1 year ago +1

    Looks good, but not a feasible solution for a production environment... can we have a next video on log rotation and then backup? Thanks!!

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      For production, a tool like Kubernetes would probably be ideal; then pulling down one node of the cluster for a backup is likely less disruptive. I know there are some specific tools for doing these types of things for Kubernetes as well.

  • @tye595
    @tye595 10 months ago +1

    Is there a way to hide the tar output so you don't have a massive list in the terminal?

    • @AwesomeOpenSource
      @AwesomeOpenSource  10 months ago

      You can leave off the 'v' in the flags for the tar command, so it would be 'tar -czf' instead. The 'v' means verbose.

    • @tye595
      @tye595 10 months ago

      @@AwesomeOpenSource You are a champion. I am now using your script to back up volumes created in Portainer.

  • @Flackon
    @Flackon 1 year ago

    I migrated a lot of my docker container data to named volumes vs. binds, since they are more robust. Unfortunately this strategy seems to be aimed at backing up bind mounts, so I can't really use it.

  • @cicievie
    @cicievie 1 year ago +1

    Hi, I have a question about Nextcloud AIO. I'm using the backup from Nextcloud AIO; how can I restore my data to a new machine? Is it also from the AIO webpage, or do I need to run a command?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I believe you'll do it through the AIO interface, but I haven't had to restore yet.

    • @cicievie
      @cicievie 1 year ago

      @@AwesomeOpenSource How about SSL? Will it reconfigure automatically?

  • @fcamarota
    @fcamarota 1 year ago +1

    Thanks bro!

  • @Misi8
    @Misi8 2 months ago +1

    Simply use Volumerize.

  • @jwspock1690
    @jwspock1690 1 year ago +1

    Thanks for the vid.

  • @autohmae
    @autohmae 1 year ago

    1:37 you might be surprised!

  • @pepeshopping
    @pepeshopping 11 months ago +1

    NO! Absolutely NO!
    A PROPER database backup will NOT be corrupted, regardless of whether the DB is running or not!
    You may simply NOT get the latest updates from AFTER the backup, but "corrupted" it WILL NOT BE!!!

    • @AwesomeOpenSource
      @AwesomeOpenSource  10 months ago

      You are right. In this case, I'm not saying to make a db dump (a proper backup); instead, I'm talking about fully backing up the mapped volume where the db data is stored for the container, and in that case we don't want writes to be happening when we make that copy, if at all possible.

  • @SKtheGEEK
    @SKtheGEEK 1 month ago

    How do I restore these volumes? I have the tar in S3. My docker containers are corrupted. I moved the current volume files to another location, pulled the S3 tar file, and extracted it. Then what? Will it automatically start working, or do I have to do some setup after downloading the backup volume files?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 month ago

      @@SKtheGEEK If you pull down the latest archive and extract it into the same location where the docker container is looking, just start the container up and see if it's working. If your container data got corrupted, it's worth investigating how that happened, though, so it doesn't repeat.

    • @SKtheGEEK
      @SKtheGEEK 1 month ago

      @@AwesomeOpenSource For example, I had 2 Postgres containers, and they were creating volumes with random number strings in their names. Now that's my problem: I don't really know which volume belongs to which container, as they were auto-created by docker. So even if I have those volumes, I will not be able to mount them.

  • @SimionChis
    @SimionChis 1 year ago

    Is there a specific reason why you changed from rsync to scp? Thanks!

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +1

      I was just trying different things as I was making the video, and the last one was scp. Rsync works well too, but honestly, since we are talking about an archive in my case, rsync won't do an incremental backup; it would just move the new file over (same as scp).

  • @RedVelocityTV
    @RedVelocityTV 1 year ago +1

    Hi, how is this better than Duplicati?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      It's not better than any backup tool. The important thing is that when backing something up, you should try to make sure no information is being written during the backup process. If you can schedule that to coincide with the Duplicati backup, then Duplicati is a perfectly fine tool for the backup portion. I'm just trying to show folks how to 1. stop their containers, 2. create an archive to save space, and 3. restart their containers. The last step of backing up the archive can be done with a whole host of tools.

  • @DigitalIndependent
    @DigitalIndependent 1 year ago +1

    Since Perun, everybody seems to be going all-in on PowerPoint…

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago +2

      This is one of my only videos where I did any presentation. I just felt it was a simple way to get the points across before getting into the work, and that wasn't PowerPoint; it was LibreOffice Impress (open source).

    • @DigitalIndependent
      @DigitalIndependent 1 year ago

      @@AwesomeOpenSource I wasn't criticising :) 0.0 criticism here, just a joke. Loved it. Continue however you want and I'll watch. Thanks for inspiring, and for pointing me towards MeshCentral; that's saved my buttocks multiple times.

  • @zyghom
    @zyghom 1 year ago +2

    Nice, but sometimes stopping a database takes ages.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      It can, for sure, but generally it's a fairly fast operation. The alternative is to add a db dump to the script. A bit more involved, and depending on the database, maybe a bit more tricky to restore to a running container (MongoDB, I'm looking at you), but it is an option.
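
A hedged sketch of the db-dump alternative mentioned in that reply, for a Postgres container. The container name `postgres-db`, user `app`, and database `appdb` are all made-up; the script is only written out and syntax-checked here, since running it needs an actual container:

```shell
# Sketch: dump a database from a running container instead of stopping it.
# Container name "postgres-db", user "app", and db "appdb" are assumptions.
cat > /tmp/db-dump-backup.sh <<'EOF'
#!/bin/bash
set -euo pipefail
backupDate=$(date +'%y%m%d%H%M%S')

# pg_dump gives a consistent snapshot without stopping the container.
docker exec postgres-db pg_dump -U app appdb | gzip > "/backup/appdb-$backupDate.sql.gz"

# Restore later with something like:
#   gunzip -c /backup/appdb-<date>.sql.gz | docker exec -i postgres-db psql -U app appdb
EOF
bash -n /tmp/db-dump-backup.sh && echo "db-dump-backup.sh: syntax OK"
```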

  • @louisshade8624
    @louisshade8624 1 year ago +1

    Can you make a video on a pfSense/DD-WRT VPN VLAN setup?

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I have one on pfSense and DD-WRT VLAN setup now. Is that not what you are looking for?

  • @mk15895
    @mk15895 2 months ago +1

    How about restore?

    • @AwesomeOpenSource
      @AwesomeOpenSource  2 months ago

      You just move your zipped backup over, extract it anywhere you want, and you can move over one folder or the entire docker folder with everything in it.

  • @squalazzo
    @squalazzo 4 months ago +1

    Works? Yes... good bash programming? Hell no... no loops, no real automation in case folder names change or are added or removed, no knowledge of useful docker command-line switches... no retention policies, no removal of old backups, no rotation, no error handling, nothing, just a plain zip file moved off the system... no thanks.

    • @AwesomeOpenSource
      @AwesomeOpenSource  4 months ago

      I'm sure there are lots of better options out there. It suits my needs, and it's a place to start for anyone who wants to back up their stuff. There are full programs out there that do this far better, no doubt.

  • @berrabe3917
    @berrabe3917 1 year ago +1

    Maybe you can simplify compose up and down with the "find" command, then pass the results to "xargs". You can also speed up the process with parallel processing (multiprocess) in xargs via "-P0":
    find . -type f -regextype egrep -iregex '.*/docker-compose.yml$' | xargs -IX -n1 -P0 docker-compose -f X down

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I like where you're going, but I would definitely have to research this more to fully understand it all.

  • @Clarence-Homelab
    @Clarence-Homelab 1 year ago +1

    Great video!
    I can only recommend offen/docker-volume-backup.
    Happy to answer questions if someone wants to know how it works.

    • @AwesomeOpenSource
      @AwesomeOpenSource  1 year ago

      I saw that option, but wasn't sure it did everything I wanted, which is configs, volumes, and compose files.