This is not a tutorial. It is a masterclass. Just brilliant. Thank you.
lol made by golem 😂😂
Clear explanation. Amazing!
I handle everything with 2 boxes. VMs/LXCs should run fast, so they sit inside an NVMe Ceph pool, while ISOs and backups live on a Synology NAS.
Using the Turnkey file server you suggested in your previous video. It's the best method I have seen so far. I don't need all the other bloatware the others offer. Turnkey file server offers a lean mean NAS machine lol.
I also tried the TurnKey file server, but at some point I missed the filesystem management features of btrfs, on which I store the files. By then I had also read pretty deep into smb.conf and missed some options (which I then put into smb.conf manually). So I now run my NAS on a Debian VM without any GUI.
By far the most comprehensive guide going. Not only did you demonstrate all the various methods, you explained how to use them, AND what to be aware of if you do.
This is easily the BEST guide on RUclips for this particular scenario. If I come across any forum post asking about this, I will be referencing this particular video.
Outstanding.
Fantastic video.
One note to viewers: if you're doing storage, and especially ZFS or btrfs, always make sure you purchase CMR drives, or be prepared to never be able to recover your pool if one of the disks goes bad.
So if I just throw in some random (but same-size) drives, i.e. 2x 5TB SATA drives in a mirrored ZFS pool, one fails, and I replace it with a new one, it will not work / be a waste?
@@Tr4shSpirits mirror should work, but raidz will not rebuild. Mirrors are very space inefficient.
@@monish05m Got it. I know, but for personal media storage I am fine with it. HDDs are very cheap now, so I just run mirroring for this purpose. My compute server has a proper RAID card and parity, though.
You're actually a legend, clear and easy to understand. Keep up the good work!
Another advantage of using VMs or containers for the NAS, in my opinion, is network isolation and the ease of putting the NAS on whichever networks I want.
Thank you! I struggled with this back in August of 2023 setting up my first proxmox PC ever, and it took a few days before I understood it more and figured it out. All I wanted was to slap in some hard drives in the proxmox PC and share them to whatever CT/VM I created. One requires bindmount, the other requires NFS. I had no idea at the time because I'm still fairly new to Linux.
But I appreciate this video because I still haven't set up an SMB share but I think I will now! I'm tired of having to use WinSCP to copy files from Linux to Windows when I can just do an SMB share and immediately have easy access.
Thank you for useful, insightful videos for Linux n00bs like me. :d
If you need help with deploying Samba quickly, I can copy and paste my deployment notes from my OneNote at home.
I think that a lot of guides overcomplicate things, when really there are fewer than 20 lines that you need to get a simple, basic SMB share up and running.
@@ewenchan1239 I think I'm going to use the TurnKey template for sharing, built right into Proxmox. This guy also did a tutorial on it, as he mentioned in this video. But thanks!
@@nirv
No problem.
For the benefit of everybody else, I'll still post my deployment notes for deploying Samba on Debian 11 (Proxmox 7.4-3). (The instructions should still work for Debian 12 (Proxmox 8.x).)
//begin
Installing and configuring Samba on Debian 11:
# apt install -y samba
# systemctl status nmbd
# cd /etc/samba
# cp smb.conf smb.conf.bk
# vi smb.conf
add to the bottom:
[export_myfs]
comment = Samba on Debian
path = /export/myfs
read-only = no
browsable = yes
valid users = @joe
writable = yes
# smbpasswd -a joe
# systemctl restart smbd; systemctl restart nmbd
//end
That's it.
With these deployment notes, now you can have access to a SMB share, directly on your Proxmox host, without the need for a VM nor a CT.
That way, the only way that this SMB share will go down would be if there's a problem with smbd/nmbd itself, and/or if your Proxmox host went down; at which point, you have other issues to deal with.
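One caveat: the smbpasswd -a joe step assumes the Linux user joe already exists on the host (create it first with adduser joe if it doesn't). As a quick, rough way to test the share from a Linux client (the hostname and mount path below are just placeholders):
# smbclient //pve-host/export_myfs -U joe
# mount -t cifs //pve-host/export_myfs /mnt/share -o username=joe
(the second command needs the cifs-utils package installed on the client)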
This video was perfectly timed for me as I’m looking to finally migrate some NTFS Shares from a Windows 2016 VM under Proxmox to something more modern. Time to experiment - Thank you!!
My favorite way to manage SMB/NFS shares is Cockpit in a privileged LXC container :)
That's a good idea. I'm gonna look up Cockpit some more as it's been mentioned in the comments a few times.
I have another video stored in my playlist that explains exactly how to do this. You're not the only person to recommend this solution.
I appreciate this video as it outlines all the other options on a high level whilst still explaining how to go about it.
Thank you for your amazing videos!
My setup is:
Supermicro X10SRH-CF
2x120GB SSD boot drive of Proxmox
For the fileserver part:
2x400GB SSD (partitioned 300+64GB)
6x4TB
Connected via HBA
VM/LXC stay on 300GB SSD (zfs mirror)
"NAS" is located at 6x4TB raidz2 + 64GB (zfs mirror) special device
All is managed by Cockpit plus 45drives plugins for ZFS and Fileshare
It's for home use, not HA 😅
Please make a video about SMB share. I always have to look up a guide to make a basic share.
It'll be nice to know what other options I have
Very educational. Explained very well. Thank you. As a new Proxmox user I learned a lot from this.
Cockpit + Filesharing plugin in LXC for me is the best option.
Thanks
Love your videos, two thumbs up.
I would also consider throwing Xpenology in a VM on Proxmox. Prox would help in testing updates.
I install SAMBA servers in LXC containers. Best mix for performance and flexibility. If it breaks, you can easily revert back if you have a backup and you don't have to worry about polluting the host OS.
Pertinent timing!
I was just trying to figure out a way for my Proxmox to detect where it is sharing NFS, so that it can gracefully close the shares down when I'm powering it off. I can parse journalctl, and I've looked at nfswatch (which is pretty old).
I wonder if there's a better way.
I also prefer to use a VM or a container for my file shares. The nice thing with containers and mount points is that if you increase the size, it automatically increases the filesystem in the container as well. I once saw apalrd's video on using a container and installing 45Drives' cockpit-file-sharing to have a nice GUI. If you create a user, it automatically creates an SMB password as well. It's so easy to use, and I still use this for some backups on my main computer. I never liked TrueNAS, so using the 45Drives tools was a fantastic idea.
I use a vm with truenas myself, 12 drives, works great
Finally someone competent answering once and for all all the Reddit questions on how to create a NAS XD
Very useful video, it made it easy for me to determine the best option for my NAS setup.
I have Unraid running as VM on my Proxmox server and pass through an HBA to it.
Works great. :)
I'm curious. Did you do passthrough of the boot drive, or are you booting from a virtual disk?
Wow! It is an amazing, condensed 18 minutes ^_^ thanks a million ^_^
A dedicated video on Samba sounds great. Still struggling to get the permissions correct for both Windows and Linux at the same time.
Love your in depth videos! SMB video would be awesome!!
Let me add that to my list. Want to make sure I do the video right so it might take some time.
4:16 if I have a parity array or RAID Z array in ZFS terminology I can't just add a 6th drive to a 5 drive array for example
- Raidz expansion has been implemented in OpenZFS. Although I don't know its current status in Proxmox.
RAID-Z expansion seems to be in the 2.3 OpenZFS release, which isn't out yet. Once OpenZFS releases it, it will probably be in the next Proxmox version. I'd guess it's probably less than a year out now, and I'll make a video when it's added.
This was exactly what I was looking for, thank you!
Great, just using Proxmox with Turnkey as container.
One thing I noticed is that backing up LXC mount points is pretty slow vs. VM disks (qcow? I cannot remember). I imagine it's because the content of a mount point might have to be crawled and Proxmox checks for the archive bit, versus using some sort of changed block tracking for VM disks
so any shares I would run on VMs, even if reads are quick, it's wasted processing time and wear on the disks
I'll need to check the backup time comparison, but I think you're right about containers doing a file-level backup and being slower. Also, if using PBS you can back up only the changes to a running VM, making incremental backups much faster.
Following your review of Ugreen's NASync DXP480T, I backed the project. My plan is to install Proxmox on it and virtualise a NAS as well as a VM with an Arch Linux or NixOS desktop which I can access from anywhere. I hope that this desktop will have very little latency, in order to replace my current desktop installation on bare metal. It would be great if you could make a series of videos which show how to achieve this.
Good explanation!
A scenario I would like to see is creating a CephFS filesystem in a Proxmox cluster and exposing it as an SMB file server to client OSes ...
You can do that.
That's actually pretty easy.
I don't know if I still have my erasure-coded CephFS exposed as an SMB share, but you can absolutely do that.
You mount the CephFS to a mount point, and then in your smb.conf file you just point your share to the same location, and now you've shared your CephFS as an SMB share.
You can absolutely do that.
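As a rough sketch (assuming Proxmox already has the CephFS mounted at /mnt/pve/cephfs; the share name and group are just examples), the extra block in smb.conf would look something like:
[cephfs]
path = /mnt/pve/cephfs
read only = no
valid users = @users
then restart smbd:
# systemctl restart smbd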
Keep up the good work!
A bit of a crazy suggestion for a future video: running TrueNAS Scale or UnRAID as.... a container. Theoretically, it should be possible.
You have to create the LXC container yourself, which isn't a super trivial task, but it can be done.
(Actually, in theory, it would be easier to deploy TrueNAS Scale as a container than UnRAID because TrueNAS Scale also runs on top of Debian, so really, all you would need to do is add the repos, and then find the difference in the list of packages that are installed, write that package delta file to a text file, and then install that package diff text file in a Debian LXC container.)
That part isn't super difficult. It might take a little bit of testing to make sure everything works, as it should, but there shouldn't really be any technical reason as to why this method *can't* work.
The ONLY issue I've had with using Proxmox with Samba as a NAS is that it requires me to make my management NIC one of my 10G cards, because the mgmt NIC is also the Samba share NIC (as I understand it), meaning I can't, or rather don't want to, use just the 1G card for it. But if you plan out NICs and networking well, it's no biggie. I have multiple PVE servers: one that is primarily my NAS but with 5 VMs running, one which is mostly for VMs but which also has surveillance disks for a Shinobi VM on it, and a couple more for 'play', and that's worked out pretty well. With Proxmox it's all about how you balance out those resources; that's the KEY thing with PVE - what's the balance you want and need, and do you have the gear to get it
"The ONLY issue I've had with using Proxmox with Samba as a NAS is that it requires me to make my management NIC one of my 10G cards, because the mgmt NIC is also the Samba share NIC (as I understand it), meaning I can't, or rather don't want to, use just the 1G card for it."
I'm not 100% sure what you mean by this.
The protocol itself has no network interface requirements.
You can share it on whatever interface you want, via the IP address that you want your clients to be able to connect to your SMB share with.
So if you have a 1 GbE NIC and a 10 GbE NIC and your class C IPv4 address is something like 192.168.1.x (for the 1 GbE NIC) and 192.168.10.y (for the 10 GbE NIC), then you can have your clients connect to the 192.168.10.y subnet, if they're on the same subnet.
The protocol has no network interface requirements.
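If you do want Samba to listen only on the 10 GbE interface rather than on everything, smb.conf also has global options for that; a minimal sketch, with the subnet as a placeholder:
[global]
interfaces = lo 192.168.10.0/24
bind interfaces only = yes
Then restart smbd and the share is only reachable via the 10 GbE address.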
I have a bunch of odd sized hard drives that I threw into my Proxmox, what software RAID type should I use?
Proxmox doesn't have a great native way to mix drive sizes, as the main included RAID solution, ZFS, doesn't support mixed drives well. BTRFS is an option, but the unstable RAID 5/6 would worry me if it were my main copy of data. You can use SnapRAID + mergerfs on Proxmox if it's for media files, but this wouldn't be suited to something like VMs.
I also made a video earlier about RAID like solution for mixed drive sizes that might help see the Pros and Cons of the different solutions. ruclips.net/video/NQJkTiLXfgs/видео.html
I like this vid. Good insight and good tips.
Another fantastic video, thank you! :)
I completely forgot that turnkey containers exist... 🤦🏻♂️ Definitely going to use that for a small file server I've been meaning to get going.
If there's enough interest / script material for it, could you do a deeper dive into these turnkey VMs/containers? Haven't used them much, personally
Glad I helped you with setting up your nas on Proxmox.
A video on Turnkey containers is a good idea. I'll start playing around with them soon.
So Good! Thank you!
I did more or less the same, but in an LXC with it mapped to the main Proxmox host and using Webmin.
I'm thinking of getting a Minisforum MS-01 and fitting an external HBA card in the 8-lane GPU slot and feeding into a 4-8 drive bay enclosure, but I can't find any which have SATA plugs at the back with a built-in PSU; I can only find internal ones for servers.
Great contribution. Many, many thanks!!!!
Which NAS system would you recommend for the LXC case???
What do you mean by NAS system? I'd look at TurnKey if you want an easy web interface, or a lightweight distro like Debian/Alpine if you want to edit the Samba config manually.
Hey mate, I just want to say thanks to you. I saw your first video when I tried to configure Proxmox and get SFTPGo working; thanks to you I got it. After 2 years, I bought a real HP server. Just wanted to tell you that, thanks a million for your videos!
I'd really like you to not just break it all down but build it all up, in the form of a ws to dual-NAS solution for prosumers, homelabbers, and the SMB sector. Everybody wants and needs dual-NAS redundancy coupled with fast networking - like 40G. It is very possible and pretty cheap to get going, and it would make for great content. Do it with COTS refurb boxes and a few NVMe arrays #jumbo frames #mtu
Excellent overview! Thanks!! If I need both SMB and NFS, what do you think about Debian Container with Cockpit+45Drives(cockpit-file-sharing)? Between this and Turnkey which one do you recommend?
I haven't tried the 45Drives/Cockpit file sharing GUI, and will give it a shot. You do need extra permissions to run NFS servers in Proxmox containers due to how NFS uses a kernel server.
@@ElectronicsWizardry Oh.. I see. I had no idea. Learning so much from your channel. Keep up the great work!!
re: ZFS on root
If you install Proxmox on a mirrored ZFS root and you then want to do things like GPU passthrough, the guides that you will likely find online for how to do this won't always necessarily tell/teach you how to update the kernel/boot parameters for ZFS on root.
As a result, I stayed away from it. I used my Broadcom/Avago/LSI MegaRAID 12 Gbps SAS RAID HBA and created a RAID6 array for my Proxmox OS boot drive, so that the Proxmox installer would install onto a "single drive" when really it was 4x 3 TB HGST HDDs in a RAID6 array.
That way, if one of my OS disks goes down, my RAID HBA can handle the rebuild.
I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think ZFS as boot uses Proxmox boot manager instead of grub, and different config files have to be edited to enable iommu.
@@ElectronicsWizardry
"I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think ZFS as boot uses Proxmox boot manager instead of grub, and different config files have to be edited to enable iommu."
You can, but the process for getting that up and running isn't nearly as well documented in the Proxmox forums. With a non-ZFS root, you can just update /etc/default/grub and then run update-initramfs -u; update-grub; reboot to update the system, whereas with a ZFS root, to update the kernel boot params, you need to do something else entirely.
When I first deployed my consolidated server in January 2023, I originally set it up with a ZFS root, and ran into this issue very quickly, and that's how and why I ended up setting up my 4x 3 TB HGST HDDs in a RAID6 array rather than using raidz2 because with my RAID6 OS array, Proxmox would see it as like a "normal" drive, and so, I was then able to follow the documented steps for GPU passthrough.
If it works, why break it?
@@ewenchan1239 I think Proxmox's reason for using the Proxmox boot tool instead of standard GRUB for ZFS boot is so that they can have redundant boot loaders. I don't think GRUB is made to be on multiple drives, whereas the Proxmox boot tool is made to be on all drives in the ZFS pool and to have them all updated when a new kernel/kernel option is installed.
I agree it would be nice if they just used GRUB, but I think editing kernel options with the Proxmox boot tool should just be editing /etc/kernel/cmdline and then running proxmox-boot-tool refresh.
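For example, on a ZFS-root install, enabling IOMMU would look roughly like this (the intel_iommu=on flag assumes an Intel system; adjust for your hardware):
# vi /etc/kernel/cmdline     (append intel_iommu=on to the single line of kernel options)
# proxmox-boot-tool refresh
# reboot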
@@ElectronicsWizardry
To be honest, since I got my "do-it-all" Proxmox server up and running, I didn't really spend much more time, trying to get ZFS on root to work with PCIe/GPU passthrough.
As a result, I don't have deployment notes in my OneNote that I would then be able to share with others here, with step-by-step instructions so that they can deploy it themselves.
I may revisit that in the future, but currently, I don't have any plans to do so.
Thank you ❤
Good information.
I’ve struggled passing hw resources into lxc. Passing a controller or zfs drives into a proxmox ve hosted trueNAS vm is eons easier.
Good explanation of VMs vs. containers! Thanks. But I miss a bit of coverage of other possible solutions for the storage part. For example, I have been looking into btrfs recently. Also, the ideal solution, in a three-node example, would be to have some storage on each of the nodes. Is there any solution like GlusterFS or similar in Proxmox?
I forgot to mention Ceph, but I don't know if it is the best solution...
But I miss a bit of coverage of other possible solutions for the storage part. For example, I have been looking into btrfs recently.
1) This is a bit off-topic. He rightfully mentions the passthrough, because it involves a hypervisor. Otherwise, the underlying redundancy solution is a separate topic.
2) NAS usually implies Raid5 or 6. BTRFS raid is a chronically "experimental" feature. It will burn your data. Don't do it.
I haven't touched BTRFS much as it's still experimental in Proxmox and has issues with RAID 5/6, which is often used for home NAS units. I love many BTRFS features and hope it gets to a stable state soon.
Ceph can do a single-node, mixed-drive-size, easily expandable setup, but it's pretty complex and not really made for single-node setups.
@@ElectronicsWizardry I've followed BTRFS RAID56 for a decade. It ain't moving anywhere, unfortunately.
Excellent explanation. I tried turnkey and it did not allow my nvme to work at full speed. I tried a Windows 10 vm with samba and it did. Still don't know why
What speeds were you seeing? I have seen that tuning can be needed at times to get the most out of SMB with > 1gbe networks.
Would there be any problems running mergerfs and snapraid on the proxmox node?
Is OMV 7 a good option if, for some personal preferences, I prefer using it instead of Unraid or TrueNAS? Will it still support SMB, NFS, Time Machine backups, and most of the features Unraid & TrueNAS have?
Sure, OMV 7 works fine. It should support all the standard features (and it uses the same Linux tools and utilities under the hood for sharing, so performance and compatibility should be similar).
Thank you so much!! Is it feasible to do SMB in an unprivileged container?
I think it works fine in unprivileged containers, but I haven't tested it myself.
Did I get that right: when I pass through a storage drive (SSD, NVMe, etc.) to a VM, then I can't use PBS to back up those passed-through storage devices, even though I can check the backup box?
Yup, passing through a disk like /dev/sda can't be backed up. Your best way to back it up is to switch to virtual disks or back up from within the VM.
@@ElectronicsWizardry thank you for responding, in that case I need to do it within the VM, which makes sense.
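If you run PBS anyway, one option is proxmox-backup-client inside the VM for file-level backups of the passed-through disk; a rough sketch, assuming a Debian/Ubuntu guest with the Proxmox Backup client repo set up (repository, user, and paths below are placeholders):
# apt install proxmox-backup-client
# proxmox-backup-client backup data.pxar:/mnt/storage --repository backupuser@pbs@192.168.1.50:datastore1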
Man, I wish I knew so much on this topic! Could you help me out with my setup? I run Proxmox on a 128GB SSD. On top of this I have a 2TB NVMe and a 2.5" mechanical drive, also 2TB. I am planning to create a ZFS pool on the NVMe to use for VMs and another ZFS pool on the mechanical drive to use for backups. Would this be fine? I also have an old NAS on the network, so I could use some network mount points if I, let's say, set up a Nextcloud for instance. What do you think? Thanks a lot
Sure that seems like what I'd do here. Having a 'speed' pool for vms that need speed and a 'space' pool for stuff that doesn't need speed makes a lot of sense.
Thanks a lot! I made it today :) so far so good. @ElectronicsWizardry
13:20 With Blank mount point, does it mean mounting the whole device (physical disk)? and What size (GiB) do we have to input?
Setting up a new empty mount point for the container will be limited to the size that you select in the add mount point prompt. You can increase the size of the mount point later on if needed.
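Growing it later can be done from the container's Resources tab (Resize) or on the CLI with pct; a minimal sketch, assuming container 101 and mount point mp0:
# pct resize 101 mp0 +50G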
@@ElectronicsWizardry Thanks for your reply. Btw, with the container's bind mount (TurnKey), can I move the physical disk to another computer (bare-metal Ubuntu) without the need to install Proxmox or anything else? I just want it to be easy to move the disk around in case something happens to the host. Do you have any advice? I'm new to NAS/file servers.
@@andreasrichman5628 If you do the bind mount, you should be able to move the drive to a new system and access all the data. Just move the drive to the Ubuntu system, and it should be able to mount the drive(You may need to install a package on Ubuntu to use the filesystem).
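Roughly what that looks like on the Ubuntu side, with the device, pool, and mount point names as placeholders, and the package depending on the filesystem you used:
# apt install zfsutils-linux     (only needed if the drive holds a ZFS pool)
# zpool import -f tank
or, for a plain ext4/XFS disk:
# mount /dev/sdb1 /mnt/data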
@@ElectronicsWizardry Just to be sure, with VM (disk passthrough) I also can move the physical disk to another machine (bare metal Ubuntu) and access all the data, right?
Mighty ElectronicsWizard, do you also have information on how to achieve something similar with Ceph and CephFS?
i.e. a Proxmox cluster of 3 machines with Ceph, and VMs on those 3 cluster nodes having to access a shared drive that's in Ceph?
Ceph/CephFS is a distributed filesystem and has no relation to network sharing protocols like SMB/CIFS nor NFS.
To that end though, if you create a CephFS and then it is mounted by your Proxmox nodes in your Proxmox cluster, you can, absolutely share that same CephFS mount point either with SMB/CIFS and/or with NFS.
For an NFS export, you would point the export path in /etc/exports to that same path, and/or for SMB, you would edit your smb.conf file and point your share to it.
The two work independently of each other.
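For the NFS side, a minimal sketch (the LAN subnet and mount path are just examples): add a line like
/mnt/pve/cephfs 192.168.1.0/24(rw,sync,no_subtree_check)
to /etc/exports and then run exportfs -ra to publish it.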
@@ewenchan1239 You can absolutely consume CephFS block devices over the network. It just also needs libcephfs on the client. The problem is just that it's tricky to set up
The other hack I've seen, if you want to use Ceph or other clustered filesystems on SMB clients, is to set up a VM/container to mount CephFS, then have that share it over SMB. Then any system can mount the CephFS as a normal SMB share. The Samba sharing VM does become a single point of failure, but this is likely the best way of mounting on a device that doesn't have an easy way to get the CephFS client installed.
@@insu_na
I could be wrong, but it is my understanding that libcephfs isn't available in Windows.
Therefore; this wouldn't work.
Conversely, if you set up CephFS and then mount it on the host (e.g. mount it to /mnt/pve/cephfs), then in your /etc/samba/smb.conf you can point your SMB share to that mount point.
That way, a) your client doesn't need libcephfs (is there REALLY a reason why a client wants native CephFS access (i.e. CephFS, not Ceph RBD)? I can understand that if you want Ceph (RBD) access, you would want and/or need libcephfs on the client, which again, I'm not certain is available on Windows clients, maybe as an alternative to iSCSI; but if you don't need Ceph RBD and you only want/need CephFS, then this method should work for you), and b) you don't need a VM to mount the CephFS only to then share it out over SMB.
You can have the Proxmox host do that natively, on said Proxmox host itself.
@@ElectronicsWizardry
You CAN do that, but you don't NEED to do that.
If you're using Proxmox as a NAS, then you can just mount the CephFS pool directly in Proxmox, and then edit /etc/samba/smb.conf and point your share to that mount point (e.g. /mnt/pve/cephfs).
You don't need to route/pass it through a VM.
Conversely, however, if you DO route it through a VM or a CT, then what you can do is store the VM/CT disk/volume on shared storage, and then if you have a Proxmox cluster (which you'll need for Ceph anyways), you can configure HA for that VM/CT, such that if one of the nodes has an issue, you can have the VM/CT live migrate over to another node within the Proxmox cluster, and that way, you won't lose connectivity to the CephFS SMB share.
That would be ONE option as that would be easier to present to your network than trying to configure it for the three native Proxmox nodes.
But what about the disk IO performance in a VM?
This depends on how the VM is set up and whether you're using passthrough or virtual disks, but generally VMs have good disk performance, and likely more than what would be needed for a VM.
My pve box has 1 256gb nvme where proxmox lives, 1 500gb where vm's live and 4 hd's for storage.
"Battery backed caching for high speed I/O."
Sorry, but that's actually NOT what the battery backup unit (BBU) is for, in regards to RAID HBAs.
Battery backup units (BBUs) are used on RAID HBAs to protect against the write hole issue that may present itself in the event of a power failure.
The idea is that if you are writing data and then lose power, then the system won't know what was the data that was still in flight that was in the process of being committed to stable storage (disk(s)).
A BBU basically keeps the RAID card alive long enough to flush the DRAM that's on said RAID HBA to disk, so that any data that's in volatile memory (DRAM cache of the RAID HBA) won't be lost.
It has nothing to do with I/O performance.
I want to say the DRAM on a RAID card is used for caching disk IO in addition to storing in-flight data to prevent the write hole issue. RAID cards let the onboard DRAM be used as a write-back cache safely, as it won't be lost in a power outage. Also, I have seen much faster short-term write speeds when using RAID cards, making me think the cache is used in this way. This does depend on the RAID card, and there are likely some that only use the cache for preventing write hole issues.
@@ElectronicsWizardry
Um....it depends.
If you're using async writes, what happens is that for POSIX compliance, writing to the DRAM on a RAID HBA will be considered as a write acknowledgement that's sent back to the application that's making said write request.
So, in effect, your system is "lying" to you by saying that data has been written (committed) to disk when really, it hasn't. It's only been written to the DRAM cache on the RAID HBA and then the RAID HBA sets the policy/rule/frequency for how often it will commit the writes that have been cached in DRAM and flush that to disk.
Per the Oracle ZFS Administration guide, the ZFS intent log is, by design, intended to do the same thing.
Async writes are written to the ZIL (and/or if the ZIL is on a special, secondary, or dedicated ZIL device, known as a SLOG device), and then ZFS manages the flushes from ZIL to commit to disk either when the buffer is full or in 5 second intervals, whichever comes first.
If you're using synchronous writes, whereby, a positive commitment to disk is required before the ACK is sent back, then you generally won't see much in the way of a write speed improvement, unless you're using tiered storage.
Async writes CAN be dangerous for a variety of reasons, and some applications (e.g. databases) sometimes (often) require sync writes to make sure that the database table itself, doesn't get corrupted as a result of the write hole due to a power outage.
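On the ZFS side, this behaviour is controlled per dataset with the sync property; a quick sketch, with the dataset name as a placeholder:
# zfs get sync tank/db          (standard honours whatever the application requests)
# zfs set sync=always tank/db   (force every write through the ZIL before it is acknowledged)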
Would it be crazy to run UnRaid as a VM on Proxmox?
Unraid can make a lot of sense in a VM. Their parity setup is one of the best if you want flexible multi drive setups. I have found it to work well to put a USB stick in the server and pass the USB device through so Unraid can use the GUID correctly for licensing.
I still do not understand how to manage a simple lab (Proxmox on an SSD, 2 x 2TB HDDs): how can I handle one HDD failing, or the SSD failing, and recover the whole system? Can someone help me figure it out? I am lost...
If you want RAID for redundancy in Proxmox, your best option in software is probably by using ZFS. Setup a ZFS pool of the drives with a mirror or other pool that has redundancy. Otherwise you can use hardware RAID if your system supports it.
@@ElectronicsWizardry you mean to create zfs pool in proxmox storage?
@dexzoyp Yup, you can make a ZFS pool under the host, then Disks, then ZFS. Then make a new mirrored pool and select "Add as storage". Then you can add virtual disks to the ZFS pool, and they will be mirrored on the selected drives.
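The rough CLI equivalent, with the disk IDs and names as placeholders (using /dev/disk/by-id paths is generally preferred over /dev/sdX):
# zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# pvesm add zfspool tank-storage --pool tank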
@@ElectronicsWizardry
- Proxmox ( running on SSD )
- TrueNAS ( running on HDD )
- NextCloud
- FileShare
- Gitlab Server ( running on HDD )
Does the architecture make sense? Is it safe in your opinion? Can you guide me based on your experience?
you got a patreon?
That hair tho😂