I publish a weekly exclusive podcast for all of my supporters. Check it out patreon.com/thelinuxcast
My backup strategy: don't get attached to data. I feel like it's unhealthy to be afraid about what happens to these fiddly ones and zeros. Not only that, you can't even define what a one and a zero are, except in a specific context, like how a one looks on a particular SSD. But we don't worry about the SSD, we worry about the data. What is it, actually? It's inherently flimsy. If you can't build a Stonehenge out of it, it will get lost. Now, I'm not above this myself; I do feel afraid about my data. That being said, every time I have lost data, I was sad for about a week, and then a refreshing calm flooded my mind. Every time I uninstall another social network app from my life. Every time I lose an account to some online game. Every time I lose a PC. And I get to build anew. The only thing I deem worthy of storing is real-life memories, photos and stuff, and legal documents. That is quite a small amount; you can just print it to a photo book, burn it to a few discs, and store it in some shed, basement and attic or something. (Talking about personal data, like personal personal. Not talking about corporate data, not talking about corporate data about people. I'm talking only about our own personal data.)
Maybe we should start a blockchain that would store data eternally, encrypted by the private key used as the wallet itself. That private key could then be etched in stone underneath your house or something.
I'm in 💪
Backup. That thing I should've done yesterday but I'll definitely do it tomorrow.
Nothing funner than taking a day to back EVERYTHING up and then somehow misplacing that backup when it's needed the most, orrrr finding out that the backup is dated and of little practical use.
BUT HEY! I plan on getting to it ASAP
"An incomplete ramble" ...isn't that...every video? :D :D :D
He likes to ramble😋
Jeez that must burn
Honestly, that's why I'm watching these vids regardless of the topic. No clickbait needed. Love it. Go ramble 😊
@@bk2bsc to ramble or not to ramble☺️
Ramble more, that's why we watch
You have 2 options: 1. separate the 'big stuff' from the important stuff. Back up everything big rarely and locally, and back up the important stuff encrypted in a multitude of places. The 2nd option is the reason you posted this video.
I'm looking into moving to ZFS cloning, and as far as I understand, ZFS and BTRFS backups are the "end-game" when you have a lot of data to back up. The benefit is that the filesystem doesn't have to evaluate every file in the list to back up; it just compares the disk blocks of the source and the target. If there is a changed block, it just copies that block and everything else is left alone. Until you have large amounts of files, rsync works fine. It's when rsync literally takes hours just to parse the list of files to back up that you wanna move to filesystem cloning.
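For anyone curious what that looks like in practice, here is a minimal ZFS sketch. The pool and dataset names (tank/home, backup/home) and the dates are just placeholders, not anything from the video:

# take a read-only snapshot of the dataset
zfs snapshot tank/home@2024-06-01
# first full copy to the backup pool
zfs send tank/home@2024-06-01 | zfs receive backup/home
# later runs only ship the blocks changed since the previous snapshot
zfs snapshot tank/home@2024-06-08
zfs send -i tank/home@2024-06-01 tank/home@2024-06-08 | zfs receive backup/home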
Definitely would love videos on backup options comparing strengths and weaknesses of each!
With AWS -- You can store backups there in a tiered fashion. So you can store them initially in more accessible (more expensive) storage. Then once it's of a certain age, it can be moved to Glacier.
You can use Glacier for old backups, or stuff you won't want to access super frequently, or where it doesn't need to be IMMEDIATELY available when you do need it.
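A rough sketch of how that tiering can be automated with an S3 lifecycle rule, assuming the aws CLI is already configured. The bucket name, prefix, and day counts are placeholders:

# lifecycle.json - move backups to Glacier after 90 days, expire after ~5 years
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 1825 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json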
Just write a shell program that runs a fleet of scripts that execute rsync -rva --delete on all of your directories. And add documentation.
Those flags mean: recursive, verbose, archive, and --delete removes stuff from the destination that isn't in the source anymore (if a file is no longer found in the source, it gets removed from the destination, keeping things in sync).
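Something like this, as a minimal sketch; the destination mount point and directory names are just examples:

#!/bin/sh
# back up a few directories to a mounted backup drive
DEST=/mnt/backup
for dir in Documents Pictures Videos; do
    # -r recursive, -v verbose, -a archive,
    # --delete removes files from the destination that no longer exist in the source
    rsync -rva --delete "$HOME/$dir/" "$DEST/$dir/"
done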
-a is already recursive, no need to add -r.
@@Fedor_Dokuchaev_Color This. The -a flag is a pretty failsafe method to get everything backed up. I usually add -x to avoid pulling in other filesystems that might be mounted inside the tree. Also PSA: NEVER mix up --delete and --del. Read up on it, they are VERY different things.
There's also versioning with rsync. Instead of overwriting it can move an old file to a name with a suffix.
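For anyone who wants the versioning variant, a hedged sketch: rsync's --backup with --backup-dir moves files that would be overwritten or deleted into a dated folder instead of discarding them. The paths here are placeholders:

# keep replaced/deleted files in a dated folder instead of losing them
rsync -ax --delete --backup --backup-dir=/mnt/backup/old/$(date +%F) \
    "$HOME/" /mnt/backup/current/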
I love this kind of video because this is how my mind works too. I’ve been using restic and rclone to back up to a bunch of different cloud services, RAID, etc. via an automated regular cron job.
It has versioning and encryption, is open source, and is very stable.
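For anyone wanting to try that combo, a minimal sketch, assuming an rclone remote named "gdrive" is already configured and RESTIC_PASSWORD is set in the environment (remote and paths are placeholders):

# one-time: create the repository on the rclone remote
restic -r rclone:gdrive:backups init
# nightly cron entry (crontab -e): back up, then prune old snapshots
30 2 * * * restic -r rclone:gdrive:backups backup /home/user/Documents && restic -r rclone:gdrive:backups forget --keep-daily 7 --keep-weekly 4 --prune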
Due to recent recurring Fedora issues when there were kernel updates, I have switched to OpenSUSE based on your videos. To date I love it. We will see longer term. Now to the point of this response: backups.
1) Start by setting up your own NAS (I have a Synology NAS).
2) Don't keep data/archive files on your local device. In my case I have an SD card in my laptop for my ISO's, etc... All data, archive files are on the NAS.
3) In my NAS the drives are mirrored.
4) The backup routine on my Synology NAS is done via two different USB drives: one that is online all the time and one that is periodically attached for offline backups.
5) The utility used is Synology Hyper Backup.
With Synology, on my devices I map to multiple shares, so files are saved directly to the NAS, which in turn is automatically backed up nightly. With Synology you can back up to the cloud or to other Synology devices.
My whole point here is that all is automated.
I hope this is informative.
I only see one valid option: more server HDDs and OpenZFS (read-only snapshots; compression; encryption; extra CRC checks on all your data; all RAID configurations work; replication of data to other disks, servers, or a cloud solution). Replication only transfers the changed records, not complete files, and for large files like a VM disk file that is a huge advantage. I've been using it since 2018.
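The replication part can be as simple as piping an incremental send over ssh. A sketch, with made-up host, pool and snapshot names:

# only the records changed between the two snapshots cross the wire
zfs send -i tank/vms@monday tank/vms@tuesday | ssh backup-host zfs receive -F backup/vms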
A backup failure on my part brought me to Linux, since after a hard drive crash, I couldn't find the driver that I'd downloaded to force Windows 7 to acknowledge my 1920 x 1080 monitor. But I store everything that matters on flash drives and in free cloud storage, saved anew whenever I make a change to the file, and I haven't had a problem with this so far.
My backup strategy is similar, but I also do a full backup to an external drive once a quarter or so, and rotate backup drives to a safe deposit box. But I don't have terabytes of YouTube videos to back up, so the backup itself goes fairly fast.
And yes, I would love to see a dedicated Borg/Vorta/Pika Backup video. I recently started using Borg, but I have not really looked at the pros and cons of Vorta vs Pika.
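For reference, the command-line core that Vorta and Pika Backup wrap is roughly this; the repository path and directory names are just examples:

# one-time: create an encrypted repository
borg init --encryption=repokey /mnt/backup/borg-repo
# take a deduplicated, dated archive of the directories you care about
borg create --stats /mnt/backup/borg-repo::home-{now} ~/Documents ~/Pictures
# thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo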
Matt, I hear you. I also tried using rsync. When I tested it, I had the same problem. I have been using rclone for a few years now with zero problems. I use rclone to back up to my pCloud account. My backup data is not as huge as yours, not even close. It is mostly my config files and some documents. Rclone works. I built scripts to do my backups. Every time I make any kind of change I back it up. Because my individual backups are so small, the time it takes is minimal. I would be at a loss for what to do with terabytes of data.
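A minimal version of that kind of script, assuming a pCloud remote was already created with "rclone config". The remote and folder names are placeholders:

#!/bin/sh
# mirror the important bits to pCloud; add --dry-run first to preview changes
rclone sync ~/.config/nvim pcloud:backups/nvim
rclone sync ~/Documents pcloud:backups/Documents --exclude "*.iso"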
For local copies I use BTRFS, so I can make a snapshot and use rsync with the option that deletes the missing things without worrying about actually deleting important stuff. Worst case scenario there's some duplicated stuff that can be deduplicated later.
For remote backups, I use S3QL, which is a FUSE filesystem that uses S3-compatible storage as its backing store. It has compression, encryption and, most important of all, deduplication. Every file that is copied is hashed to detect duplicates, so two equal files are never uploaded twice. Therefore you can just "copy" everything into a new folder and you don't have to worry about syncing.
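The local-copy part of that, sketched out, assuming /home is a btrfs subvolume and the /home/.snapshots directory and backup mount already exist:

# read-only snapshot of the home subvolume, named by date
btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)
# sync the frozen snapshot to the backup disk; --delete is safe here
# because the snapshot cannot change underneath rsync
rsync -a --delete /home/.snapshots/home-$(date +%F)/ /mnt/backup/home/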
Pretty good rant. I've been through many backups and restores over the past 40-odd years, on Windows, Mac and currently Linux. I've pretty much landed on Timeshift for the system stuff, and Vorta/Borg for everything else. I use Vorta for backups of selected directories to the cloud, as well as a NAS and a couple of USB drives directly connected to the PC. I also do a yearly backup to a 4TB drive that I store offsite.
Excluded directories on vorta include /downloads/iso and system files.
I also keep the vorta connection information backed up to a Joplin notes card backed up in the cloud. A copy is also on bitwarden.
This is for my home workstation. My notebook is backed up to the NAS when home, and an external drive while traveling. My new laptop will be running Fedora on btrfs; I'll need to see how this will all work out. It's never ending. Good luck finding your solution. I did find that using Vorta for everything but the system made life easier.
I back up my data encrypted with deja-dup to a local NAS. Then I mirror the encrypted backup files with FreeFileSync weekly to pCloud. In addition I use Timeshift to create snapshots on a separate hard drive. I like deja-dup because it not only encrypts the backup files but also does automated restore tests. The backups are also incremental and I can choose how long the system will keep older versions (3 months, 6 months, 1 year or forever).
I had to restore my data once after a hard drive failure and it worked perfectly fine, but to be honest, my data is very 'clean' and I have less than about 250 GB in total to back up.
Been backing up data since 1995. Yes, been backing up since my Windows days as well. Not once have I trusted a third-party app to do this task. I only back up one way and that's manually: copy and paste or drag n' drop. I also organize it as I go. Categorize them, organize them and date them. I can go back at any time and find exactly what I'm looking for: a photo, document, music, video, game files, FOSS applications that are abandoned now, etc. Everything I want backed up is backed up. And yes, I never ever back up anything like my whole system. I only save (back up) data I can't afford to lose. That's how I know exactly the date I switched to Linux: July 15, 2003. Because I back up all data I can't afford to lose. Nothing is encrypted either; I don't trust that, I want access to it with zero problems. Just like I don't trust third-party software to do this task. I only trust myself, and yes, I back up to more than one place. Never have everything in one basket.
My system as well for both Linux & Windows.
I just back up to a separate hard drive, although they aren't always up to date.
I recently realized I don’t need half the files I haven’t opened in five years. A giant purge helped my backup anxiety.
I appreciate these rambles, more people should do it.
Matt, I personally use five backups, three physical backups and two cloud-based backups. All encrypted of course.
I've replaced grub with systemd-boot and it's really great
I made a Python script that watches over my "Important" directory, and every time there's a change in that directory, it copies the new item from my "Important" directory to an external drive and overwrites the old one. I'm trying to make it watch several directories, but that part is still in the making. I have it running as a daemon, so I know that if I place something in that directory, it's automatically backed up. Granted, it's a very small directory.
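Not that script, but a rough shell equivalent of the same idea using inotifywait from inotify-tools, which can already watch several directories at once. The directory names and mount point are placeholders, and it re-syncs the whole directories rather than single files, which is simpler but heavier:

#!/bin/sh
# re-sync the watched directories to the external drive whenever something changes
inotifywait -m -r -e close_write,create,moved_to ~/Important ~/Projects |
while read -r _path _event _file; do
    rsync -a ~/Important ~/Projects /mnt/external/
done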
pika is great. even when i tried alpine and it had trouble mounting the backups i still could use borg to mount them manually through the command line, and it still seemed to back up correctly. i agree with having backups 100% automated being a requirement, in my windows machine i even set up a batch script to back up my art folder with rsync with just a click but i still neglect it lmao
I have four machines with identical synced file structures. I operate them in rotation, linked to a server. If an SSD goes down, I can just replace it, reinstall, and restore the synced directories. Tadaaaa! It’s identical to cloning the disk. Mail data, project files, general docs, vids, photos 'n all that. Love the rumbling ramble 😂. Ha!
I had a 4TB drive fail a while back. I even knew it was failing but didn't get around to backing it up before it failed. It's easy to forget about this stuff.
I solved my backup problem by investing in a NAS running ZFS. From there everything can be snapshotted and synced automatically to everywhere: the cloud, and manually rotated hard drives connected via USB.
I don't even do a backup, but at the same time I don't need one for the stuff I do, as I mainly use the PC for gaming.
@TheLinuxCast I know a lot about computers, and started using NixOS coming from Win11. Do you know if it has a built-in backup?
Haven't yet watched the video - I will - but my strategy is to manually move important files and directories to an external hard drive and use Git on more "system critical" stuff (e.g: bashrc and the like). It's not perfect but I use a laptop and don't want to mess with automated backup.....yet.
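One common way to do the Git-for-dotfiles part, in case it helps; the ~/.dotfiles path is just a convention and the remote named "origin" is assumed to exist already:

# bare repo holding dotfiles, with $HOME as the work tree
git init --bare "$HOME/.dotfiles"
alias dot='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
dot config status.showUntrackedFiles no
# track and push only the files you care about
dot add ~/.bashrc ~/.config/nvim/init.lua
dot commit -m "bashrc and nvim config"
dot push -u origin main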
Love the mugshot on the thumbnail Matt!🤣
My important personal data doesn't live on my PC, it lives on a NAS with redundant storage that I occasionally duplicate to external media that I keep in a fire safe. That said, nothing in my life is even remotely important enough to require that, but it gives me a mildly decent gut feeling about it.
Sorry for the silly question. What is the program/ app / desktop_env being used that is creating the neat red rectangle around the terminal and showing the time temperature and everything on top? TIA.
Can't complain about rsync myself; it's kept me on the same install since 2015. Basically I back up the clients' full disks to a server (rsyncd because it's over the network, and with the "chroot" option it stores paths for links correctly). The server then gets backed up weekly to 2 luks/btrfs external drives, and 1 staggered by a week.
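For anyone setting up something similar, the relevant bit of /etc/rsyncd.conf on the server might look like this; the module name and path are examples only:

# /etc/rsyncd.conf on the backup server
use chroot = yes        # paths and symlinks stay relative to the module root
read only = no

[clients]
    path = /srv/backups/clients
    comment = full-disk backups from the client machines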
Why not have a separate machine as a backup server? Similar to how Proxmox has the Proxmox Backup Server. I use Borg and Vorta myself to back up to external drives and to my home lab backup server, and also a TrueNAS-to-TrueNAS clone.
Hears the word ISOs and backup. Checks backup drive. Awwww damn it! 426GB of ISOs. Oldest one Ubuntu 14.04. Bugger!
Update: Found Slackware 11. Ouch!
rsync has a switch to mirror the folders you're backing up, so if you delete a file in your home dir it will be removed from the backup.
OMG, LOL, was setting up a NAS when clicking this video.
I like Borg, but prefer Restic as it is written in Go instead of Python.
About the 3-2-1 strategy, I would back up the PC to the NAS, that makes 2 copies and then use Restic to push a deduplicated and encrypted chunk of data to a cloud provider for the off-site 3rd copy.
Don't most Linux desktop/distro come with a backup utility? Those could be used to backup to a USB drive or NAS.
Oh, and you don't have a backup until you test the restore of that backup. After setting the backup up, test doing a restore.
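Agreed, and with restic or borg that restore test can even be scripted. The repository paths and archive name here are placeholders:

# restic: verify repository integrity, then do a trial restore to a scratch dir
restic -r /mnt/backup/restic-repo check
restic -r /mnt/backup/restic-repo restore latest --target /tmp/restore-test
# borg equivalent
borg check /mnt/backup/borg-repo
borg extract --dry-run /mnt/backup/borg-repo::home-2024-06-01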
Hey bro, what is your daily Linux distro ❤
Timeshift all the way
Agreed
Hey Matt, I'll tell ya, nobody has a good backup strategy. :D Mine is also a mess. But at least a mess is better than nothing.
You guys have a backup???
Can we get your dot files?
I live on the cloud until it betrays me.
Once again, my non-spam comment got deleted... this is the only channel it happens on.
Stick with a stable build, and don't mess with what works.
i back up to telegram 😂
thanks algorithm for reminding me to block this channel
Simplicity is key. Just get some HDDs and save to them periodically with dates and descriptions. Don't make it hard.
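Even that can be a one-liner per run; the mount point and the "before-reinstall" description are just examples:

# dated, described folder per backup run
rsync -a ~/Documents/ "/mnt/backup-hdd/documents-$(date +%F)-before-reinstall/"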
Agree.
As far as your crucial family photos and whatnot....the Samsung T5 solid state external drive has crashed in price. 8TB model goes for well under $600. Solid state reliability and Samsung reliability
I use multiple T7s (1 and 2 TB) that I manually store different data on.