Borg in the Terminal and OMV GUI - Setting Up Our Backup Solution

  • Published: 11 Sep 2024
  • 0:18 - Auto Logout
    0:35 - More on mergerfs and shared folders
    3:21 - User/File Permissions
    9:18 - Using Borg in the Terminal
    12:29 - Create the Borg Repo
    14:02 - Create a Simple Borg Script
    24:29 - Schedule the Borg Script
    25:29 - Restoring Data from a Borg Backup
    29:41 - Using Borg in the GUI Instead of the Terminal

Comments • 25

  • @mrhoratiu • 6 months ago

    Yup! This video needs to be backed up! 😁 Thank you!

  • @SomeDutchGuy66 • 4 months ago

    Thanks for the great tutorials! What I especially like is the additional info you give - info only gained by experience.
    One thing though... If I'm not mistaken, you're exporting the borg key to a subfolder on your data drive. What happens if your data disk crashes and for whatever reason you need the borg key to access your backup? For that reason I keep a copy in my password manager - accessible outside the backup and data disk(s). Or am I overthinking?

    • @somedaysoon33 • 4 months ago

      Good question. The borg key already exists on the backup drive that you created the repo on, so if the data drive with the exported key died you could still use your borg repo normally. Exporting it to the data drive or any other drive is just in case the key gets corrupted or lost for some reason. Basically you want it on two separate drives: the borg repo drive and one other. But I wouldn't say you are overthinking it... I think putting it in your password manager is a good idea too.
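      For example, a minimal sketch of exporting the key to a second location (the repo path and destination here are just placeholders, not the exact paths from the video):
      # Write the repo's key to a file you can copy anywhere (another drive, password manager, etc.)
      borg key export /srv/backup-drive/borg /srv/data-drive/borg-keys/borg-key-backup
      # Or print it as a paper backup
      borg key export --paper /srv/backup-drive/borg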

  • @Franceyou • 4 months ago

    Have you heard about the project called backrest? It is based on restic. I am interested to hear what you think. Perhaps it could be the topic of a video.

    • @somedaysoon33 • 4 months ago +2

      I hadn't heard of it, but I just checked it out now. It looks good, but personally I avoid these types of UI wrappers and prefer the command line and writing my own backup scripts. That's just my personal preference because I find it easier and more efficient that way. But if people are intimidated by the terminal or by scripting, these wrappers can be a great way to get them using these tools, because I think borg and restic are some of the best backup solutions and I highly recommend both of them. Borg also has a few UI wrappers available for it. I might do another video on backups that includes these types of tools, and also another way of doing proper backups for folks who might be using ZFS or Btrfs with snapshots. Thanks for commenting!

  • @zeljko2874 • 7 months ago

    If I understood correctly, following your instructions, if I just install and set up Timeshift as in the video OpenMediaVault for a Selfhosting Environment (part 2) and skip the backup steps from the next videos, Timeshift will back up only the operating system? Does this mean that I will have Docker and Portainer with the added containers and their settings in the backup? And of course I won't have other data backed up, such as media and so on?

    • @somedaysoon33 • 7 months ago +1

      Hi, good question. Timeshift only backs up the OS; it won't back up any of the data like docker containers, media, or home directory files. You would want to set up something like borg or restic to back up your containers and data. Thanks for watching!
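      As a minimal sketch of what that could look like (the repo and data paths are placeholders, not the ones from the video):
      # Initialize a repo on the backup drive once
      borg init --encryption=repokey /srv/backup-drive/borg
      # Back up the docker data and media directories into a dated archive
      borg create --stats /srv/backup-drive/borg::data-{now} /srv/data-drive/docker /srv/data-drive/media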

    • @zeljko2874 • 7 months ago

      @@somedaysoon33 I will continue to watch and hope you will continue to make new videos about omv. Great work so far and easy to follow. Keep up the good work.

  • @Franceyou • 4 months ago

    Thanks again for your great video and videos, a lot of help. I am struggling to make a good plan for my server. I have an HP MicroServer Gen8 with 4 HDDs, all formatted EXT4. I am using rsync to back up the data from disk 1 (6TB) to disk 2 (6TB) and from disk 3 (4TB) to disk 4 (4TB). I have bought another 8TB HDD for backup (essentially of disk 1 or 2, which hold my important stuff), which I plan to connect via USB using an HDD enclosure. Can you help me understand whether I am doing the right thing?
    1) Is EXT4 the right choice for the HDDs inside the server?
    2) What file system should I use for the external HDD? Is EXT4 right? The problem is that I cannot see EXT4 drives when connected to my laptop (MacBook).
    3) Can rsync be a good solution, or should I use backup software like Borg to back up disk 1 to the external disk? Maybe the main advantage is the checksumming done by the backup software?
    4) Do you advise using snapraid for the backup from disk 1 to 2 and from disk 3 to 4 in order to have checksums available? Or do you advise other solutions? Maybe rsync is good enough? I am just afraid of bit rot.
    Thanks so much again for your tutorials.

    • @somedaysoon33 • 4 months ago

      You should use a backup solution like borg or restic for proper backups. If rsync is just syncing changes on a schedule, what happens if you accidentally delete files, or a service accidentally deletes files? Those deletions will be synced and you will have lost your data. This is also why people say RAID is not a backup: it doesn't protect against accidental file deletions either.
      I would set up shares on the server instead of trying to move the drive to your Macbook. Your server should have physical access to the drive, and all the other devices can then access the files across the network. You can mount/map a drive on your Macbook and other devices.
      It's hard to say what you should do with your drives, but because you have mismatched drives you might consider doing snapraid, using one of the 6TB drives as parity, and then using the 8TB for a borg repo to make legitimate backups. It depends on how much free space you have on each drive and how quickly you think they will fill up. Good questions. Thanks for commenting.
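      The retention side is what protects you from synced deletions; a minimal sketch, assuming a borg repo at a placeholder path on the 8TB USB drive:
      # Keep enough history that an accidentally deleted file still exists in an older archive
      borg prune --list --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /srv/usb-8tb/borg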

    • @Franceyou • 4 months ago

      @@somedaysoon33 thanks for such a good answer. It is much clearer now. You are right, my plan is to connect the 8TB disk to the server via USB. However, I was thinking it could be useful to be able to connect the backup disk to the laptop too in case my server goes down (just a further safeguard). Is there any way to connect an EXT4 disk to a Mac, as far as you know? Or do you have a better idea? I had a read about snapraid but I did not understand how it works: if I create data with my 6+4+4TB disks and the parity with 6TB, how can I be protected from one disk failure with 6TB of parity vs 14TB of data? Am I missing something? Thanks again for your great work. I am planning to start watching from your first video and watch all of them. I am very sure I will learn a lot, your explanations are very clear to me 👌

    • @somedaysoon33 • 4 months ago +1

      @@Franceyou I just do everything over the network, no need to move the external drives. I run all my backups, media, documents, everything from the server to the laptops/desktops/firesticks. That's one of the great benefits of having a NAS. The only thing to consider is the speed of your LAN, best if everything is at least 1Gbps.
      As far as snapraid, as long as your parity disk is as big as your biggest data drive you are okay. One parity drive is good for up to about 4 data disks, but it really depends on how full your drives are. The parity drive isn't just copying the extra data from the other drives. If you lose a single disk, snapraid uses all the rest of the disks, not just the parity drive, to restore the lost disk. Hope this helps :)
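      A minimal sketch of what that layout could look like in snapraid's config (the disk names and mount paths are placeholders for your actual OMV mount points):
      # /etc/snapraid.conf
      #   parity  /srv/disk-6tb-2/snapraid.parity
      #   content /var/snapraid.content
      #   content /srv/disk-6tb-1/snapraid.content
      #   data d1 /srv/disk-6tb-1
      #   data d2 /srv/disk-4tb-1
      #   data d3 /srv/disk-4tb-2
      # Then run periodically:
      snapraid sync    # update parity after data changes
      snapraid scrub   # verify checksums, which is what catches bit rot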

    • @Franceyou • 4 months ago

      @@somedaysoon33 thanks again. Much clearer, and thanks for confirming I understood properly what I have read. I cannot work out how much space for data I have with 6+4+4TB disks for data and 6TB of parity. I cannot understand how 6TB of parity can recover from the failure of one of the three HDDs, as the total is still 14TB. My apologies, but I cannot get it. Does snapraid write parity onto the data HDDs too? What if the parity space runs out while there is still free space on the data disks? Sorry, I found this concept hard to understand; I am missing something for sure.

    • @somedaysoon33 • 4 months ago +1

      @@Franceyou Parity drives don't actually hold the entirety of the data from all the other drives. They only hold the parity information, which is essentially a calculation that describes how to restore data. Searching for "how does disk parity work" should give you a better idea. It's complicated, but that's why the parity drive doesn't have to be bigger than all the drives combined, just equal to or bigger than the biggest data drive.
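      A tiny illustration of the idea with XOR parity on single-byte "blocks" (just an example of the principle, not how snapraid literally stores things):
      d1=0xA5; d2=0x3C
      parity=$(( d1 ^ d2 ))            # this is all the parity drive stores
      recovered_d1=$(( parity ^ d2 ))  # rebuild disk 1's block from parity + the surviving disk
      printf 'parity=%#x recovered=%#x\n' "$parity" "$recovered_d1"   # prints parity=0x99 recovered=0xa5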

  • @Franceyou • 3 months ago

    May I suggest a video tutorial on offsite backup? I am really struggling to set it up. I am trying to use Twingate, but no success so far. Any advice? Thanks

    • @somedaysoon33 • 3 months ago +1

      I don't know anything about Twingate, but basically if you can SSH to the remote host then it's really easy to use borg to make backups to it. An example from my backup script that creates a backup on another server is this:
      borg create --stats root@192.168.1.125:/shared/storage/borg::common.$NOW /srv/dev-disk-by-uuid-3a8447fe-a660-4ac7-ae46-2de19b6d59c1/
      So you can see it runs the borg create command to make the backup; the only difference is that instead of giving it the local directory of the repo, you give it user@ip (or domain) for your remote server.
      You might also pass it the SSH key in the script through a variable. I give it the SSH key with: export BORG_RSH="ssh -i ~/.ssh/yoursshkey"
      More information can be found here:
      borgbackup.readthedocs.io/en/stable/quickstart.html#remote-repositories
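      Pulled together, a minimal remote-backup script sketch using the pieces above (the date format for $NOW is an assumption; the repo, key, and source paths are the ones quoted above):
      #!/bin/bash
      # Tell borg which SSH key to use for the connection
      export BORG_RSH="ssh -i ~/.ssh/yoursshkey"
      NOW=$(date +%Y-%m-%d_%H-%M-%S)
      # Create the archive in the repo on the remote server over SSH
      borg create --stats \
          root@192.168.1.125:/shared/storage/borg::common.$NOW \
          /srv/dev-disk-by-uuid-3a8447fe-a660-4ac7-ae46-2de19b6d59c1/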

    • @Franceyou • 3 months ago

      @@somedaysoon33 thanks again very much. In order to use SSH to the remote server, do I need to open any port? Is it OK to open ports?

    • @somedaysoon33 • 3 months ago +1

      @@Franceyou Your remote server would need to have SSH access. You wouldn't need to open any ports on your own network, but the remote server would need that port open so you can SSH to it. It's safe to open the SSH port as long as you turn off password authentication on the SSH service and use key-based authentication. What service are you using for the offsite backup? There are some services that make it easy to host a remote borg repo, like BorgBase.
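      A minimal sketch of setting that up on the remote host (the user, host, and key name are placeholders):
      # Copy your public key to the remote server so key-based login works
      ssh-copy-id -i ~/.ssh/yoursshkey.pub user@remote-host
      # Then in /etc/ssh/sshd_config on the remote server:
      #   PubkeyAuthentication yes
      #   PasswordAuthentication no
      # Reload SSH to apply it
      sudo systemctl restart ssh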

    • @Franceyou • 3 months ago

      @@somedaysoon33 I am trying to use restic via the backrest container, but I am still working on getting it to work properly... taking baby steps. What does setting up SSH key-based authentication mean? Is it in one of your videos? Thanks again

    • @somedaysoon33 • 3 months ago +1

      I show how in the OpenMediaVault setup videos 👍 ruclips.net/video/wHMrptwNz2I/видео.html

  • @zeljko2874 • 7 months ago

    For other noobs like me: it should be mentioned here that all shared folders need to be re-added, the same way as shown for the media folder, in order to use the mergerfs pool and be placed on the disk with the most free space. If we leave everything just as shown here, only media files will be distributed across the merged disks, and the rest will be added to the disk we selected at the beginning when we first set up the shares.

    • @somedaysoon33 • 7 months ago +1

      This is true if you want other folders set by OMV to use mergerfs, but you might not want it that way, and you might want to leave some folders using the drives directly. It's a good idea to think about what kind of data is in the directory, how it is used, and whether you really need to use mergerfs with it and span drives. For instance, I would recommend not using it on the docker directory where docker data is stored. Here's why: I don't like the idea of the extra layer and extra latency on the docker directory, because it will have a lot more random I/O from services using it while they run. And the home directory will likely have smaller files, so you are really not gaining anything from using mergerfs with it. Just something to think about... good comment. I wish I had mentioned something along those lines in the video about why you might or might not want to use mergerfs on different directories. Thanks again for commenting; it helps to know what people are seeing in my videos so I can make better videos in the future!
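      OMV's mergerfs plugin handles this in the GUI, but as a rough illustration of the idea (the UUIDs and pool path are placeholders): the pool is just another mount, and each shared folder either points at the pool or at a single disk directly.
      # fstab-style mergerfs pool spanning two data disks
      /srv/dev-disk-by-uuid-AAA:/srv/dev-disk-by-uuid-BBB  /srv/mergerfs/pool  fuse.mergerfs  allow_other,cache.files=off,category.create=mfs,minfreespace=10G  0 0
      # Shared folders like media can point at the pool:       /srv/mergerfs/pool/media
      # Folders like docker or home can point at one disk:     /srv/dev-disk-by-uuid-AAA/docker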

    • @zeljko2874 • 7 months ago +1

      @@somedaysoon33 From the start I did just that: put home and docker on a specific drive and the rest on mergerfs.