FIKWOT: A name you can trust.
I switched the vowels in my head
@@JoseOcampo-g5mno need for that, since "fick" in German is exactly the word you thought of. "F*k what?" is exactly what came to my mind when I heard of using a cheap Chinese SSD as the boot drive of a machine running stuff that might be important.
Now it could work, who knows, but Proxmox is not ESXi and tends to write a bit more to the boot drive. A large overprovisioning area might help. Still wouldn't do that. I'd use a brand-name SSD for that, and preferably in a mirror.
Good enough for a test rig. Think I'll stick with established brands for anything that I put into production. Especially when you can get Western Digital, Crucial, and Samsung for less or just a tiny bit more money. At least you know the company will be around if and when you need to ever make a warranty claim.
Now that being said the migration info was solid!
The old one was a WD actually, although my Samsung drives have all been doing great in my production Proxmox system.
I'm really not inclined to trust an unknown SSD manufacturer that doesn't even post spec sheets for their products.
Exactly, why would you risk it unless it was a hugely discounted price and for non-important use? The only thing that matters is the NAND quality and the controller, not that it's new or big.
yeah I just use Kingston KC3000s wherever I can
Most of the bad Chinese shit has been found to be label-stripped and replaced with flat-out fraudulence.
Storage has become ridiculously cheap. BUT you still gotta stay vigilant.
Used to be you could trust WD benchmarking. Now... not so much.
And Intel? WTF's goin' on over there. That big green splash Nvidia is about to roll out is a tsunami bigger than the other big green wave of Linux Mint 22 (aka hurricane Wilma), and it's rapidly corroding the stupidity of MS snapshotting your every keystroke. Wow....
Very cool of Farquad to send you an NVMe drive!
Meh, it's advertising.
Always feels like talking to a friend when watching your videos. Thanks for the explanation. Don't need this now but good to have for future reference.
Always impressed by your deep knowledge of such niche topics 😮
the way you added the new drive to the ZFS pool is asking for headaches; add it by GPT UUID, the same way the original was
Recently I bought a couple of older, unused U.2 800GB SK Hynix SSDs for $40 each, along with $15 AliExpress PCIe adapters. Not the fastest SSDs out there, but ~4PBW endurance and power loss protection are nice to have on a server.
I actually just made a similar move from my 500GB M.2 boot drive to a 2TB 980 Evo Pro. I booted into a live CD and used dd to clone my boot drive to the new drive; after the clone was complete, I booted into GParted to expand the local-lvm partition. Once booted back into PVE on the new drive, I expanded the local-lvm filesystem from the CLI.
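For anyone curious, that offline route is roughly the following (device names are examples only; triple-check the if/of direction, since dd will happily overwrite the wrong disk):

  dd if=/dev/sda of=/dev/nvme0n1 bs=1M status=progress conv=fsync   # raw clone, old drive -> new drive
  # then grow the last partition in GParted and extend the LVM volume / filesystem afterwards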
Dude, 'clone the boot drive' to the NVMe
Set BIOS, boot from it... carry on! Of Course!!
Thank you. Sometimes things really are just that easy?
Boy oh boy would this video have helped me a couple of months back when I had my primary drive start to fail in my Proxmox backup server. I tried and tried to use Clonezilla to duplicate the failing drive to my new drive but failed miserably. I ended up backing up some key files from /etc and just doing a complete reinstall of PBS on the new drive.
This is mostly made possible by using zfs, so you can resilver to the new drive while the old drive is still in the system.
CZ flat out refuses to back up drives with Proxmox on them. It's frustrating.
@@blakecasimir It seemed to complete OK when I cloned them, but no matter what I tried, the cloned drive would not boot. It's like the grub stuff didn't transfer over or something.
One of the reasons why I have a pair of 500 gig SSDs in a ZFS mirror for boot in Proxmox. There are even special instructions on how to deal with a failed ZFS boot drive.
@@Chris.Wiley. I didn't get that far, for me it failed with an error when trying to create a drive image.
Nice video, as always
Why don't you just clone the drive with something like Clonezilla, and resize the ZFS partition?
This keeps the system online, and keeps zfs aware of the new disk
Three stupid questions:
1) Do you have a blog post with all of the commands? (specifically - the syntax for the zfs detach command)?
2) I am guessing that this really only works if you are going from a smaller drive to a bigger drive, but not the other way around?
3) You mentioned that if you are using EFI, to leave the grub part out. But I thought that after the EFI loads, it will still go to the Grub menu in Proxmox, no?
Your help is greatly appreciated.
Thank you.
Answers:
1. openzfs.github.io/openzfs-docs/man/master/8/zpool-detach.8.html is the man page. The short syntax is 'zpool detach <pool> <device>', where device is exactly what 'zpool status' shows.
2. It works as long as the amount of space consumed by the zpool will fit on the new drive, since it's done by a zfs resilver and not by copying the block device. Similar to replacing a zfs drive with a smaller one.
3. It depends. For legacy BIOS booting, the grub loader is in the 1M first partition, and then the grub loader loads the kernel / initrd from the EFI partition. For EFI booting without secure boot, grub isn't used at all; systemd-boot is loaded straight out of the EFI partition. For EFI booting with secure boot, grub is stored on the EFI partition. Basically, EFI stores the loader (grub or not) as a file in the FAT partition instead of a dedicated partition. In any case I would copy the 1M partition whether it's empty or not.
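For reference, a minimal sketch of the swap itself as described above (pool and device names are placeholders; use exactly what 'zpool status' prints):

  zpool attach rpool <old-device> <new-device>    # turns the single disk into a 2-way mirror and starts a resilver
  zpool status rpool                              # wait here until the resilver has finished
  zpool detach rpool <old-device>                 # then drop the old drive from the mirror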
@@apalrdsadventures
Thank you.
Your help is greatly appreciated.
Thanks, good to learn other ways to do things. Just wouldn't it be more correct to zpool attach the NVMe disk by the disk's ID (the same way the first one was attached)?
probably yes
Great video! Thanks for making it. Quick question: could you teach us how to power your MegaLab? What parts did you use to MacGyver your way through it? I would love to copy that from you, since having a real power supply is bulky for a lab setup. Thanks!
It's this - www.mini-box.com/picoPSU-150-XT-150W-Adapter-Power-Kit
I don't remember if I got the 120W or 150W but it's one of those. Not very powerful.
I have to say, I really like the way nano handles long lines… is that the default behavior or a plug-in or setting?
Prices on SSDs jumped up again; they should fall again at some point. My request for you would be to make a dual low-power NAS, but with some special qualities: mega RAM, good NVMe caching layers, all-flash arrays, plus a 40G dual-port card. It is a lil pie in the sky, but maybe you could do something with older hardware, even a Z420 board. Mega RAM for a NAS is very important, but the weakest link is probably networking for the SMB sector and homelabbers/prosumers. Having a dual NAS is worth the time and money too; you can do point-to-point with dual-port cards and also sync up the NASes quickly. The 56G cards are like 40 bucks.
Triple-level cell. Not layers.
A Chinese gong rings comedically on every bootup.
If my drive is LVM or ext4 instead of ZFS, can I use only "dd" to copy the data partition?
Not while the partition is in use, but LVM has a mirror-copy feature similar to ZFS's
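A sketch of the LVM route, assuming the stock 'pve' volume group and example device names (pvmove mirrors extents under the hood, which is what makes it safe to do online):

  pvcreate /dev/nvme0n1p3            # initialize the new drive's LVM partition
  vgextend pve /dev/nvme0n1p3        # add it to the existing volume group
  pvmove /dev/sda3 /dev/nvme0n1p3    # migrate all extents off the old PV while the system keeps running
  vgreduce pve /dev/sda3             # remove the old PV from the volume group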
Many thanks for this; it was exactly what I was looking for. Main instructions start at 10 min 17 secs in.
Worked very well, unlike trying to clone via various apps including Clonezilla, which always failed to boot once done.
This method worked well and could be done live, which was a plus. In my instance, I connected the new drive via USB due to no spare space in the tiny PC. It was quick and easy to follow. I didn't worry about the last part, i.e. mirroring the ZFS, as my VMs were all contained on a 2nd disk. Nice work..
7:15 Something disappeared 👀
I needed some snacc time
Those 16GB Optane drives you can get for $5 make great boot drives; they will last forever and are enough to host the system with a dedicated data pool
RIP 3D XPoint memory
IMO they're better for ZFS special and SLOG devices.
man, I am just about to replace my SSD in my Proxmox, will follow your guide, let's see where we are in a few minutes ;-)
EDIT: job done, all successful.
I had a bit more complicated setup because:
1- it was a replacement of a broken SATA SSD that runs in a mirror with an NVMe SSD in my mini PC
2- 3 partitions belonged to Proxmox, but the 4th one was passed through to a VM and used there as TrueNAS storage
3- because of the replacement I had to use: zpool replace pool old_partition new_partition, rather than attach
4- exactly the same later in TrueNAS
5- after resilvering all is OK, and I checked booting from the new disk only - it works as well
one comment on your video: attaching "sda1" or "nvme1" to a zfs pool is not the best way - better to use the disk ID or at least the PARTUUID - your life can get complicated if you just use /dev/sdaX ;-)
perfect video, thank you a million!
Glad it's working well for you! zfs isn't super particular about dev names like some other filesystems on Linux, but using uuid is the best practice still.
@@apalrdsadventures yes, but sdX can be renumbered - my TrueNAS has 6 HDDs and the sdX assignments come up differently on every reboot. But if we use the UUID or disk ID it remains the same.
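For anyone following along, an example of what attaching by ID looks like (the by-id name below is made up; list /dev/disk/by-id/ to find yours, and use the partition suffix that holds the zfs data):

  ls -l /dev/disk/by-id/ | grep nvme                                       # find the stable name of the new drive
  zpool attach rpool sda3 /dev/disk/by-id/nvme-Example_Model_SN123-part3   # attach by the stable name instead of sdX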
My home-lab and I say thanks. I'd be interested in a deep dive in boot loaders if you are looking for video ideas. I use grub (because that's what Debian installs) but I have a feeling I should really be switching to EFI.
I really just go with what the installer does for boot loaders, but GRUB is pretty cool (especially theming)
Rocking the shirt from Veronica Explains! 👍
Most* of the shirts I wear in videos are from other channels that I watch
But...Is MegaLab sitting on top of MegaBox? Also, there was the odd SSHD technology, where a mechanical drive had something like 8 GB of flash storage that worked as cache.
Doesn't Seagate still make those?
MegaLab is on the exact box it came in. Also, Apple sold a 'Fusion Drive' for a while that did that, but for consumer stuff it's cheaper, smaller, and easier to have a flash-only drive now.
@@apalrdsadventures I happen to have a Seagate one which has 1TB + 8GB of SSD cache. You can still buy those new.
How did you reclaim the extra space on the 2TB NVME after detaching from zfs pool and booting with it?
When I created the partition table, the partitions are expanded to fill the whole drive (since sfdisk was instructed to not use the last-lba and partition 3 size from the old drive). So zfs sees the full space.
ZFS will then limit to the space of the smallest mirror in the pool when both are attached, but as soon as I detach the smaller drive the full space is available (even without rebooting). I just rebooted to physically remove the old drive and make sure the new drive is properly bootable (it is).
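Roughly what that looks like, if it helps anyone (device names are examples; adjust to your old and new drives):

  sfdisk --dump /dev/sda > parts.txt     # dump the old drive's GPT layout to a file
  # edit parts.txt: delete the 'last-lba:' line and the 'size=' value of partition 3,
  # so both default to the end of the new, larger drive
  sfdisk /dev/nvme0n1 < parts.txt        # write the layout to the new drive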
Understood.
Really appreciate your Channel. 👍
@@apalrdsadventures
You didn't touch on over-provisioning. I typically leave some NVMe free space, either by NS or partition to give some more spare space, although I mostly use old enterprise drives with high endurance where that isn't as important as they usually have plenty of reserved spare space.
ZFS will properly use discard/trim, so unused space will be free for the wear leveling algorithm to use. In my case, the drive was less than half full before, so now it's less than 1/8 full, and has plenty of empty space for flash endurance.
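If anyone wants to check or enable that on their own pool, it's something like this (pool name is an example):

  zpool get autotrim rpool      # see whether automatic TRIM is enabled
  zpool set autotrim=on rpool   # turn it on
  zpool trim rpool              # or kick off a one-time manual TRIM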
Great video! Just a slightly off-topic question: What PSU are you using?
It's this - www.mini-box.com/picoPSU-150-XT-150W-Adapter-Power-Kit
Question. Is it better (and why) than sgdisk /dev/disk/by-id/<old> -R /dev/disk/by-id/<new>, then sgdisk -G /dev/disk/by-id/<new>, and then extending the last partition?
The wearout is a bit annoying with Proxmox. I wish they would implement a RAM disk for logs like in OpenMediaVault; I assume Proxmox is too professional 😉 Meanwhile I use 2 very cheap small SATA SSDs in a btrfs RAID0. That's quite performant for the OS, and I can replace the SSDs without regret. My VMs reside on my NVMe. I am very happy with the setup.
I'd prefer to log via systemd, but the logs aren't that big and the systemd journal is larger than the Proxmox logs
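If journal writes still bother anyone, journald itself can be capped or kept in RAM with stock systemd options; a sketch (values are examples):

  # /etc/systemd/journald.conf
  [Journal]
  SystemMaxUse=100M     # cap how much the persistent journal can grow
  #Storage=volatile     # or keep the journal in RAM only (lost at reboot)
  # then: systemctl restart systemd-journald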
What's the point of running a zpool with just one drive? Where is the redundancy in that?
To be able to use snapshots maybe?
Send/receive between nodes, and you still get all the create-a-volume-per-VM type stuff.
You still get all the other benefits of ZFS
ZFS does 3 things - redundancy (merging disks into one), volume management (splitting the pool into sub-parts), and a filesystem. You can still use the volume manager and filesystem features and all of the benefits of zfs on a single-disk system.
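For example, even on a single-disk pool you still get datasets, snapshots, and send/receive (the dataset and host names below are just illustrative):

  zfs create rpool/data/projects
  zfs snapshot rpool/data/projects@before-upgrade
  zfs send rpool/data/projects@before-upgrade | ssh backupbox zfs recv tank/projects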
Thank you.
I tried replacing the second disk in TrueNAS today - all was OK, but when I tried to boot the system from the new disk, it failed. So if you could make a video on how to replace a boot-pool disk in TrueNAS, it would be great. Probably something with boot/EFI was not done - apparently "zpool replace..." was not enough to boot from the new disk. In Proxmox there is a command that does the job, but how to do it in TrueNAS?
I believe TrueNAS has a system to add drives to the boot pool through their UI, although I haven't used TrueNAS in a few years.
@@apalrdsadventures this part I am not sure about - TrueNAS deals nicely with replacing disks that are in user-created pools, but the boot-pool is created by the installer and I was not able to find "replace disk" in the menu, though I might be wrong. I will try again, as it is good to try while everything still works, not when s..t has already happened ;-)
But I tried from the terminal and all was OK, except the new disk was not bootable
@@apalrdsadventures but the magic of TrueNAS is: you download the config backup, install from scratch, upload the backup, and everything is back except SSH keys - so it's a 15 min job
well explained - ty
I've used Clonezilla in the past with great success copying NVMe SSDs (dual-boot Win/Linux systems sometimes) to each other.... does the ZFS partition cause problems with Clonezilla?
Proxmox is such an unpolished product. Given how long solid state drives have been around, you'd think it wouldn't be as bad as it is at destroying SSDs. It's like the SSD Terminator
Proxmox really isn't doing a lot to the root disk, it's the VMs that are doing a lot of disk IO.
I found that by far the biggest contributors to SSD wear were the HA services, which you can safely disable if you aren't in a cluster or don't need them - pve-ha-crm & pve-ha-lrm
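If you're on a standalone node and want to try that, it should just be the following (re-enable them before ever joining a cluster):

  systemctl disable --now pve-ha-crm.service pve-ha-lrm.service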
How can I do this without the proxmox-boot-tool, using Ubuntu?
How long did the old SSD last? I've read some comments saying that Proxmox eats consumer-grade SATA/NVMe SSD drives. Any tips for prolonging the life of SSD drives used as a Proxmox boot drive? Any issues storing VMs and ISOs on the boot drive?
I don't think it's any more aggressive with boot drives by itself than other server systems. It has the usual system logging, which is not a massive amount of data, but add in the VM disks on top of that and it can add up to a lot of background writes.
But generally for longer SSD life, using a larger drive and filling it less means each flash cell gets programmed/erased less frequently. The old wisdom was to overprovision (leave empty space in the partition table), but using a modern fs like zfs that supports discard/trim will let the drive know which blocks can be discarded and the free space on the fs is basically the overprovision space. Some zfs tuning can be done (like increasing the block size) as well. Enabling discard support for the VMs also means their free space passes up to the drive as well.
I'm using this to store the VMs/CTs on my test system, so this system does see all of the use of the VMs in addition to the Proxmox system itself. It's not doing a whole lot, but the VMs do get created/destroyed often as I often walk through my tutorials on a new VM each time.
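As an example of the VM discard part (the VM ID and volume name here are made up; copy the current disk line from 'qm config <vmid>' and just add the flags):

  qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1
  # then a periodic fstrim -a inside the guest (or fstrim.timer) passes the freed blocks down to the host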
@@apalrdsadventures from what I heard, Proxmox writes quite a lot to the boot drive, as opposed to say ESXi (RIP), which could be run rather safely from a USB flash drive. Particularly if you're using it with a couple of other Proxmox machines in an HA setup.
I am new to your channel. Wow, thanks for sharing this - it helps so much in making my IT knowledge special ;-)
Quick question! I am trying to migrate from 2 SATA SSDs (set up in RAID 0) to 1 NVMe SSD; would the video work for my case?
If you have no RAIDZ (raidz1/raidz2/raidz3) you can follow a similar process, but it's not identical.
First, add the new NVMe SSD, add the partition table (copy it from either of the other ones), copy grub and copy boot partitions. zpool attach it to one of the sata SSDs. Now you'll have a zfs mirror with one sata ssd + the nvme ssd. Now you can detach the first sata ssd. Make sure to set autoexpand=on to expand the pool with the new space of the NVMe SSD.
After that, you need to run zpool remove on the second ssd. This will remap all of the data from the second drive to the first one.
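A rough outline of those steps, for reference (the device names are placeholders; use what 'zpool status' shows for your two stripe members):

  zpool attach rpool <sata-ssd-1-part3> /dev/disk/by-id/<new-nvme>-part3   # mirror one stripe member onto the NVMe
  zpool status rpool                                                       # wait for the resilver to finish
  zpool detach rpool <sata-ssd-1-part3>                                    # drop the first SATA SSD
  zpool set autoexpand=on rpool                                            # let the pool grow into the NVMe's extra space
  zpool remove rpool <sata-ssd-2-part3>                                    # evacuate the second stripe member onto the NVMe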
Your advice helped out amazingly, but I did it just a little bit differently: I was able to remove one of the SSDs from the striped rpool, the zpool resilvered onto the remaining drive, and then I followed your video afterwards. Thank you very much!
How to build a reliable virtualization host:
1) Desktop board of unknown hardware & driver provenance running Proxmox.
2) FLIGGIDII 2TB SSD without integrated power loss protection.
K.
how can I do this, but with 2 new mirrored drives as my new boot drive?
you can convert single drives to/from mirrors or add more disks to a mirror using zpool attach and zpool detach.
In this example I attach the one new drive then detach the one old drive (so it goes single -> 2-way mirror -> single), but you could just as easily prep the 2 new drives (using the same boot / efi partition process on each drive) then attach both (now in a 3-way mirror). Once resilvering is done with both drives, you can detach the first (now in a 2-way mirror).
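In command form, that would look roughly like this (names are placeholders; each new drive still needs the boot/EFI partitions and the proxmox-boot-tool setup from the video first):

  zpool attach rpool <old-part3> /dev/disk/by-id/<new1>-part3   # single disk -> 2-way mirror
  zpool attach rpool <old-part3> /dev/disk/by-id/<new2>-part3   # 2-way -> 3-way mirror
  zpool status rpool                                            # wait for both resilvers to complete
  zpool detach rpool <old-part3>                                # left with a 2-way mirror on the new drives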
@@apalrdsadventures thanks for the information. I have started to do it now, and when I write the partition file back to the first new disk (not tried the other yet) it completes, but I also get this error: "Partition 1 does not start on physical sector boundary." Is this ok?
ok, also just realized my single boot drive is not in a zpool by itself - am I screwed here?
hrm I wonder if your existing drive is using 512 byte sectors and the new drive is using 4096 byte sectors? Usually we partition everything assuming 4096 byte even if the drive claims 512 byte.
As to the zpool, is the zpool combined with more disks or is it not using zfs at all?
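A quick way to compare the two drives' sector sizes, for anyone hitting the same warning:

  lsblk -o NAME,PHY-SEC,LOG-SEC   # physical vs logical sector size per drive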
@@apalrdsadventures Yes, it's using 512 if I recall. Now, when I installed (just with the Proxmox installer GUI), I remember thinking "why would I use ZFS with only 1 drive", so I didn't. Now that I'm learning, I'm moving over to 2 drives in a ZFS pool - do I have a way around this?
this is a great tutorial
many thanks sir.
Oh no. He's fallen to the dark side.
I'm rather stuck.
I don't know what to host.
I have a NAS running TrueNAS with mismatched drives,
and a single Proxmox node with 16GB of RAM, a 240GB normal SSD, and a 512GB SSD.
I also have a VPS.
The reason why I am stuck is, I can't open ports and I don't know how I can expose things to my domain on the Internet. Some people have said to use a VPN, but I'm not sure.
Why not just clone it with Clonezilla?
That's what I would have done.
Wait, can I just clone the Proxmox boot drive and plug and play?
Clonezilla requires me to keep the system down during the whole process, but also won't expand the partition table unless I do the same process from Clonezilla instead of the booted system.
@@apalrdsadventures Yeap, and Clonezilla failed for me no matter how I tried. It does clone, BUT it fails to boot.
I then stumbled on your method, which works a treat. I even did mine with the new drive attached via USB, as there was no spare room in the mini PC for the new disk. Once done, I swapped it over and it booted without an issue. Nice!
I love how you're a tenth dan wizard in storage tech, but you tape down your SSD and boot by bridging header pins with a twiddler just like scrubs such as I.
Also, I feel I need to raise the pedantry by pointing out you said "cat", but never actually ran /usr/bin/cat.
cat is in /bin, not /usr/bin. Actually, recently the norm is a merged /bin and /usr/bin, so both work, but /bin is the traditional location. IMO if you say "cat" you should show a feline.
@@eDoc2020(pedantry increases)
Wot the Fik??? is how you exclaim when you lose data on an SSD... they could literally multiply their sales overnight by just rebranding it.