How To Setup NFS Shared Storage In Proxmox

  • Published: 27 Oct 2024

Comments • 119

  • @TechTutorialsDavidMcKone
    @TechTutorialsDavidMcKone  2 years ago +2

    If you want to learn more about Proxmox VE, this series will help you out
    ruclips.net/video/sHWYUt0V-c8/видео.html

    • @DarudeSandworm
      @DarudeSandworm 2 years ago

      "As a prerequisite you do need to make sure the actual server or servers have actually got access to the BAS"
      I think this is the part I have no idea how to do. I've never used NFS before so I'm kind of upside down on how you would do this.

  • @thiggs383
    @thiggs383 2 years ago +4

    Thank you for the very detailed info! Many tutorials skip over lots of information. You've earned my subscription. Cheers!

  • @stevegraham5494
    @stevegraham5494 1 year ago +2

    Fantastic video! There are so many nuances to TrueNAS that it can get confusing very quickly. I followed your video and it worked the first time. And it helped it all make more sense. Thanks!

  • @toolbelt
    @toolbelt 1 year ago +3

    Thank you for this video and for taking the time to explain everything as you went along. Extremely helpful.

  • @TAL74
    @TAL74 1 year ago +1

    Many thanks for the very informative content. My Proxmox landscape is getting more and more fun with every video of you, thanks

  • @robothtml
    @robothtml 3 months ago +1

    Thank you for this video. Not for the impatient because you go into a ton of detail.

  • @BroodBoyChees
    @BroodBoyChees 2 years ago +3

    Thanks! 🙏 I love how you go through and explain everything

  • @kwaapia
    @kwaapia 1 year ago +1

    Very detailed explanation of every option. Exactly what I needed. Pls keep this up. Thank you.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      Good to know the video helped, so thanks for the feedback

    • @kwaapia
      @kwaapia 1 year ago

      @@TechTutorialsDavidMcKone Have you dabbled in building a HashiCorp Nomad cluster as an alternative to Kubernetes? (Nomad / Vault / Consul + TLS). I have trawled the web and can't find any. Something with a detailed explanation like you provide in all your videos would be a gem!

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      @@kwaapia Probably not, as even Kubernetes isn't on my radar at the moment

  • @XX99XXL
    @XX99XXL 1 month ago +1

    Excellent, well explained and relaxed approach. Very good teacher!

  • @Mandolorian84
    @Mandolorian84 1 year ago +1

    I wanted to change my NFS location for the VMs. Thank you so much for the detailed explanation.

  • @dogpile79
    @dogpile79 9 months ago +1

    Extremely helpful and very good to understand. Thank you!

  • @Nice49838
    @Nice49838 9 months ago +1

    Thank you for explaining this root user thing; it solved my problem.

  • @ilducedimas
    @ilducedimas 1 year ago +2

    You sir are a damn good teacher. Thanks

  • @mattsoares608
    @mattsoares608 7 months ago +1

    This video is exactly what I needed. Thank you.

  • @sn5101
    @sn5101 1 month ago +1

    Thank you for sharing, David! Best video on the subject I have seen - you explain very well the reasoning behind the choices.
    One question I have: why is Proxmox connecting via root? Can I not have Proxmox do backups via a `proxmox` user, for example, and not via root?
    For context: I do other things such as DB backups from my VM to my NAS, and there I need to set up the same DB user/group between the NAS and the VM(s), which is a pain, but that's the only way I have found to work so that I don't do each connection via root.
    Any advice on the subject would be greatly appreciated!

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 month ago

      It's running on a Linux system and is run as root because it needs access to everything; basically, the hypervisor will carve up all the hardware into virtual devices and that needs root access
      You can create other users and assign them permissions but I'm not seeing an option when attaching an NFS share to define access using a specific user account, in which case each server connects as the root user
      If you setup a backup job, there isn't an option to define a user there either
      Instead it relies on the authentication used to setup the storage location
      For NFS, we're stuck with the root account but with SMB/CIFS you can assign a different user account, which makes sense as Windows doesn't have a root account
      In which case, maybe SMB/CIFS is a better option for you
      VM network access tends to be independent of the hypervisor
      PVE might establish its own NAS storage connections, typically for central storage of VMs for instance, but these storage allocations aren't then referenced from within a VM
      For example, the servers connect to an NFS share and then the VM hard drives are stored on that share
      All the VM knows about is it has a hard drive and it can't reference that same NFS share via the hypervisor for its own purposes
      If a VM needs to access a share on the NAS, it doesn't connect to it via the hypervisor
      Instead, it makes its own connection to a share on the NAS, even if it's the same one the hypervisor uses
      Having said that, you could create hard drives on different storage for backup redundancy
      You could have a primary hard drive stored on the server's local drive and install the OS and applications on that for the VM
      You could then attach another hard drive to the VM, but one stored on an NFS share for instance
      Any backup run from within the VM is just from one hard drive to another
      Although doing a backup to the same location is frowned upon, you could create both hard drives on the same NAS share as typically you're dealing with database recovery most of the time
      After all, everything on the NAS needs to be backed up to something else locally anyway, as well as somewhere remote in case things go wrong, so there is still redundancy there
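      To illustrate (a minimal sketch, assuming PVE 8.x; the storage names, server address and export path here are made up), the CLI shows the same asymmetry, as the NFS storage type has no username parameter while CIFS does
      # NFS: no credentials option, each node connects as root
      pvesm add nfs nas-nfs --server 172.16.20.5 --export /mnt/tank/pve --content images,backup
      # SMB/CIFS: a dedicated account can be supplied instead
      pvesm add cifs nas-smb --server 172.16.20.5 --share pve --username pveuser --password 'secret' --content images,backup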

  • @rdeckard66
    @rdeckard66 1 year ago +1

    Thanks a lot for this video! Helped me, as I always got permission denied errors in Proxmox when trying to connect/create the (TrueNAS scale) NFS share.
    Many YT videos mention that you should create a dedicated user (e.g. proxmox) in TrueNAS. I also did this, but I did not understand the Maproot User setting. And as you can't specify a user/password in Proxmox for NFS shares (unlike for SMB shares), I did not understand why you should create a proxmox user in TrueNAS at all. Now I understand it with your help. ;-)

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      Glad the video helped
      It's an odd feature I admit, but back in the day everything used root so things like root squashing made sense
      I would really prefer a non-root account to map NFS shares from Proxmox, but it still uses root for everything
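      For anyone doing the same on a plain Linux NFS server rather than through the TrueNAS GUI, the Maproot setting corresponds roughly to root squashing with an explicit anonymous UID/GID in /etc/exports (a sketch; the path, subnet and 1001 IDs are examples)
      # root from clients in this subnet gets mapped to UID/GID 1001 (e.g. a pveuser account)
      /mnt/tank/pve 172.16.20.0/24(rw,sync,root_squash,anonuid=1001,anongid=1001)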

  • @silviumaneahamzau8542
    @silviumaneahamzau8542 6 months ago +1

    Super useful! Thanks!

  • @tonys-vids
    @tonys-vids 9 months ago +1

    Hi David, thanks for the tutorial, it helped me set up my Proxmox NFS share as a TrueNAS newbie. Previously I was doing this manually off a Linux NFS server. The only thing I'd like to point out is that in the NFS permissions, Maproot User/Group did not work for me; Proxmox gave an error when it went to create the directory structure, and I had to revert to Mapall User/Group for the permissions to work correctly. I'm using a PVE cluster on 8.1.3, so not sure if this is a recent change that's needed by Proxmox?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  9 months ago

      Seems odd as I have servers on 8.1.3 and it works fine, although it's been the same process for any Linux computer
      Did you assign the user ownership of, and permissions to, the dataset?
      Then did you map the root account to that user and their group in the NFS share?
      Although it does seem odd that mapping all users works instead of just root

    • @tonys-vids
      @tonys-vids 9 months ago

      Yes, definitely. One notable difference is I created a custom GID/UID of 1010 and assigned them to the pveusr user and pveusr group. In any case it's working, albeit perhaps less securely than you had indicated.

  • @matiastrane7598
    @matiastrane7598 11 months ago +1

    Hello David,
    It seems you're quite experienced with this, so I hope you're able to help. I've tried following almost every video on this topic (mounting NFS shares from a NAS in Proxmox), but my VM can only see the main directory and not any of the subdirectories or any files. As far as I understand, NFS uses IPs to authenticate but uses user/pass for permissions, however I see nothing about how to set that up in any guides or videos. It's like what I'm trying to do is very niche, when I feel it's very common. I checked all settings and permissions on my (Synology) NAS. When I run a Ubuntu VM and mount the same folder, I am able to see the subfolders. Do you have any idea what I've done wrong?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  11 months ago

      The IP address can block or allow access for a device but it doesn't authenticate as such
      I haven't used a Synology NAS for some time but in most cases you first need to decide what folder to share and then set the permissions to allow access to whatever user/group needs access
      You then need to create an NFS share and set the same permissions there, unless you need to remap root for instance, but that's more for servers like PVE than for users logged into a VM
      But once that's done, for VMs you would typically mount the share from within the OS
      If the user account can only access the top level then re-check the folder and sub-folder permissions
      Chances are everyone has read-only access and/or the permissions haven't been propagated to the sub-folders and files
      Now if the VM is running Linux, you can run into problems if the UID (user ID) and GID (group ID) don't match at both ends
      You can usually check on a Linux computer with this command
      cat /etc/passwd
      You should see a list of accounts and the numbers following :x: are the UID then the GID and usually they're the same
      The NAS on the other hand will probably show these when you edit the user account
      If they don't match I find it easier to re-create the user account on the NAS although you probably need to update the folder permissions that account needs access to, so it's best done before any files exist
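      For example, a line in /etc/passwd has this shape (the account name and IDs here are illustrative), with the UID and GID as the third and fourth fields
      pveuser:x:1001:1001:PVE User:/home/pveuser:/bin/bash
      Or query a single account directly with
      id pveuser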

  • @JavierGarcia-wb9ql
    @JavierGarcia-wb9ql 1 month ago +1

    very good job, thanks!!!!

  • @zeal514
    @zeal514 1 year ago +1

    What do you think of virtualizing TrueNAS, making a ZFS pool, and sharing it across a Proxmox cluster (the same cluster that will be virtualizing TrueNAS)?
    I have 2 fairly good machines capable of running Proxmox, but I currently have 1 running Ubuntu Server bare metal, with a RAID array. I was then thinking of getting a DAS, or a couple of backup drives, making them a ZFS pool, and using it as a backup pool for the TrueNAS pool.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      I think the main problem is when you build things with dependencies
      If PVE is using a VM that provides it with shared storage things can get complicated
      Will it be possible to migrate that VM to the other node when it comes time to patching PVE or will it require downtime?
      And if you have enough local storage on both PVE servers anyway would Ceph not be the better option?
      Maybe having a physical NAS with a virtual NAS as a backup might be the better option as IT design doesn't usually consider double failures

    • @zeal514
      @zeal514 1 year ago

      @@TechTutorialsDavidMcKone Hmm yea, I definitely don't have enough space on both. It's a 15TB array currently, which I intend to expand to 30TB. Trying to go for the cheapest solution. Was considering a DAS.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      @@zeal514 You could certainly go with a DAS connected to one server, but patching would be a challenge

  • @MrNejix1
    @MrNejix1 6 months ago +1

    Great video, loved it!

  • @michaelcooper5490
    @michaelcooper5490 1 year ago +1

    David could you do a video on setting up a storage network please (TrueNAS/Proxmox style)? That is something I have been searching for, for a couple of weeks. Thank you. I am very impressed with your teaching ability. You do an awesome job.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago +1

      I've been releasing videos in order so hopefully this will help
      I did one for setting up TrueNAS Core
      ruclips.net/video/JX1ZyRY3h-E/видео.html
      I haven't covered iSCSI as it would need a separate backup solution. I've only ever used it when there's battery-backed disk storage to avoid data loss
      Most servers I've used don't do many disk writes so any performance gains would be lost, although having a SLOG helps for NFS, which I mentioned for a 10Gb NAS I built
      ruclips.net/video/_60qEIRNGLE/видео.html
      The network connectivity could just be a dedicated VLAN so that Proxmox has direct access to the NAS and that VLAN will be configured on the physical switch
      But sometimes there aren't enough NICs, or more bandwidth is needed, so I did videos for creating VLANs for Proxmox, but the rest depends on the switch being used
      ruclips.net/video/ljq6wlzn4qo/видео.html
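      As a rough sketch (the VLAN ID, interface name and addressing are examples, not from the videos), a dedicated storage VLAN on a PVE node can be as simple as a VLAN interface in /etc/network/interfaces, assuming vmbr0 is set up as VLAN aware and the switch carries VLAN 20
      auto vmbr0.20
      iface vmbr0.20 inet static
              address 172.16.20.11/24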

    • @michaelcooper5490
      @michaelcooper5490 1 year ago

      @@TechTutorialsDavidMcKone I saw it, that's why I asked for a Network Storage video?

    • @michaelcooper5490
      @michaelcooper5490 1 year ago

      @@TechTutorialsDavidMcKone Thank you I will give the last two a look. I do appreciate it.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      @@michaelcooper5490 Can you be more specific because I think any other video on Proxmox and TrueNAS would just repeat what was already covered?

    • @michaelcooper5490
      @michaelcooper5490 1 year ago

      @@TechTutorialsDavidMcKone I think I get the gist. I just have to create a vlan that only those hosts have access on and that would make it an isolated storage network so-to-speak.

  • @zyghom
    @zyghom 11 months ago +1

    Super nice - thx. That's what I'm doing now, having set up my TrueNAS on a separate machine. Questions I have:
    1- I have a few NICs on Proxmox and a few on TrueNAS
    2- how do I set up both so that for NFS purposes they use a separate NIC, without impacting the VM NICs?
    3- TrueNAS is connected to 2 segments: 192.168.1.x and 100.x
    4- Proxmox has NICs in both segments (some VMs support everybody in the house, so those are on VLAN100 while other VMs are "internal only" so they are on VLAN1)
    thx

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  11 months ago

      Well before 10Gb networks came along, normally you'd bundle maybe 4 NICs together and rely on VLANs to separate your devices
      Most clients aren't likely to exceed 1Gb/s as they only have one NIC, so the traffic should get shared across the links
      Assuming high transfers like backups are done at night, the user experience should be fine
      But if there are large traffic loads within VLANs, there are options in there to limit traffic flows which can help
      Although if PVE is showing high traffic exchanges between itself and TrueNAS for instance, and it's affecting users, it might be easier to split those NICs into two bundles
      This results in the higher volume traffic being on separate physical links
      Of course you don't have to bundle NICs together
      Instead you can dedicate them to different VLANs
      For me, the preferred method in that situation is to create a bridge for each NIC then assign the relevant VLANs to it along with the VMs
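      As a sketch of that bridge-per-NIC layout in /etc/network/interfaces (the interface names and address are examples), it might look like this
      auto vmbr0
      iface vmbr0 inet static
              address 192.168.1.11/24
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
      auto vmbr1
      iface vmbr1 inet manual
              bridge-ports eno2
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 100
      VMs needing VLAN 100 would then be attached to vmbr1 with that VLAN tag set on their network device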

  • @RufusCubano
    @RufusCubano 10 months ago +1

    I spent like 2 weeks searching for exactly a tutorial like this. Thank you so much! One question though: NFS being a network protocol, would using a specific network card (let's say a 10 gig NIC) just for NFS be a better approach, to leave the actual network card free for Internet stuff?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  10 months ago +1

      I think it would be overkill
      10Gb is still a lot of bandwidth and barely gets used, unless you have a lot, and I mean a lot, of concurrent data transfers
      So even in a lot of companies you tend to have dual 10Gb NICs for redundancy plus a dedicated management NIC

  • @pr0jectSkyneT
    @pr0jectSkyneT 9 months ago +1

    Let's say I want to make a Proxmox container (LXC) on my local disk because my local disk is a small capacity NVMe drive but I want to be able to mount a NFS share onto said LXC for said container to be able to save on my TrueNAS drives for certain files/directories. How would I be able to mount said TrueNAS NFS share to the LXC?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  9 months ago

      I haven't used LXC containers myself because there's a higher risk of being able to access the OS and in the case of PVE, other VMs etc.
      So I don't really know
      I suggest checking the forums as others will have found ways to do this
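      For what it's worth, the approach most often mentioned on the forums (untested here; the container ID and paths are examples) is to mount the NFS share on the PVE host, as shown in this video, and then bind mount that directory into the container, which avoids giving the container NFS mount privileges
      pct set 101 -mp0 /mnt/pve/nas-nfs,mp=/mnt/nas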

    • @pr0jectSkyneT
      @pr0jectSkyneT 9 months ago

      @@TechTutorialsDavidMcKone ok thanks

  • @r2d23kk
    @r2d23kk 9 months ago +1

    This is great! Thank you

  • @neuro112
    @neuro112 1 year ago +1

    How do I go about making a Nextcloud dataset and creating an NFS share for Proxmox? For the user/group, do I set it as www-data or pveuser? Do I need to change the Mapall User/Group to www-data or pveuser?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      I haven't used Nextcloud myself so maybe someone else can suggest a better strategy
      This video is more about setting up NFS for shared storage in Proxmox so that VMs can be stored there to provide redundancy against node failure
      The thing is, Proxmox runs using the root account and the configuration doesn't offer the option to set a user account, so that's why it's been set up as shown
      Assuming Nextcloud can be configured to connect to an NFS share and it supports user accounts in that configuration, it would be better to setup that NFS share with permissions assigned to a user account such as www-data you've mentioned

  • @stephanenadeau5060
    @stephanenadeau5060 6 months ago +1

    Thanks a lot for the video, Maproot did the trick. Also I'm using NFS version 4.2 and it doesn't list the path, so you have to type it manually and also select NFS Version: 4.2 instead of Default; if you don't select that it doesn't work.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  6 months ago

      Thanks for the feedback
      Out of curiosity did you hardcode the version on the NFS server end?
      I prefer the default mode because both sides will then agree what version to use and when a newer version of NFS comes out I won't have to update the mount(s)
      They should opt for the highest version they both support as part of the negotiation
      So I'm using TrueNAS Scale as the server and PVE 8.x for instance as the client and both ends have agreed to use 4.2
      You can find out what version they pick by running this command on a PVE node
      nfsstat -m
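      And if you do need to pin the version on the Proxmox side, the storage definition in /etc/pve/storage.cfg accepts mount options; a sketch, with an illustrative storage name, server and paths
      nfs: nas-nfs
              server 172.16.20.5
              export /mnt/tank/pve
              path /mnt/pve/nas-nfs
              content images
              options vers=4.2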

  • @matiastrane7598
    @matiastrane7598 11 months ago +1

    Hello David,
    I've tried 4 times to comment on your answer to my comment, but it seems I'm not allowed to do that for some reason. Here is my answer:
    @TechTutorialsDavidMcKone Hi again David, thanks for your reply!
    The thing is, all videos and guides I've seen don't mount the share within the OS; they do it in Proxmox, configure NFS permissions on the NAS and then they're able to see it in the VM. My VM is Jellyfin and there aren't any settings in it for NFS shares, user permissions, UID or GID - at all. All I've seen that should be required is: create a shared folder on the NAS, configure NFS permissions (this IP is able to read, write and/or execute), mount the share to the VM in Proxmox, open Jellyfin, add a new library and there you go. It sounds and looks so simple, I have no idea where my setup is fundamentally different from everybody else's or why. I recently started my journey in homelabbing, so it's also brand new, no crazy configurations.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  11 months ago

      I'm not sure what happened to your previous comments as I don't have any held for review
      When it comes to VMs, it's best to think of them as a physical computer and ignore the fact that they're running on a hypervisor
      I'm not aware of a means to mount a share to a VM via Proxmox VE, but even if it were possible I wouldn't want to
      Hypervisors are a high risk device, so connectivity to them should be limited and tightly controlled
      The only time a computer/VM should be allowed to talk directly to the hypervisor is if it's being used to manage the hypervisor and/or cluster
      If you're storing your media files on a NAS then you need to configure the operating system that Jellyfin is using to mount an NFS or SMB share to that shared folder, just as you would if Jellyfin was on a physical computer
      Looking at the website, all of the operating systems that are supported could do that, although I know Windows would need the extra client software installing for NFS
      In other words, this isn't a configuration done within Jellyfin itself and it has nothing to do with Proxmox VE either
      An alternative option may be to map the share to Proxmox VE and then add that as a hard drive to the VM
      The big problem though would be that the media files would then exist within that hard drive image file and they wouldn't be directly accessible to other computers, smart TVs, etc.
      The media files could only be controlled via access to the VM itself or maybe through Jellyfin

  • @arkaditadevosyan5819
    @arkaditadevosyan5819 10 months ago +1

    just awesome

  • @PlaceholderforBjorn
    @PlaceholderforBjorn 1 year ago +1

    Great guide!

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      Thanks for the feedback and good to know the video was helpful

    • @PlaceholderforBjorn
      @PlaceholderforBjorn 1 year ago

      @@TechTutorialsDavidMcKone
      That wasn't that great feedback 😆.
      But here it comes:
      What I really liked with the guide is that you explained every single part that you touched during the video. Especially with the permissions on TrueNAS, as that was one of the hardest things to grasp when I was new to TrueNAS.
      I relearned something I'd forgotten: first, that you need to use Maproot/Mapall on TrueNAS.
      And that with NFS you connect to the share as root, and you can have permission conflicts if you don't use Maproot/Mapall. And with that, I have some shares I need to update/revise to make them more secure.
      Most tubers don't explain things like this in detail, and I really need to understand the basics before I can, or especially want to, use a special feature. Because of security.

  • @jimscomments
    @jimscomments 10 months ago

    Question sir - in another video you had three networks set up. If I remember correctly one was the bridge, one was the storage and one was the cluster. The bridge was 172.16.19.0 and in this video there's a storage network of 172.16.20.0. Your 172.16.20.0 was the separate storage network, not the cluster network? Correct?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  10 months ago +2

      Yes, 172.16.20.0/24 was the storage network for access to a NAS in my lab

    • @jimscomments
      @jimscomments 10 months ago

      Thanks for the fast response, the answer fit a couple of pieces together nicely. Last April I struggled to get an NFS share working until I saw this video. I'm virtualizing my TrueNAS servers in PVE and re-watched this video, which reminded me I hadn't added a storage network. Besides the quality of your videos in general, the time you take to explain things and the detail of information, I also like your additional suggestions on how to make PVE run better. If you celebrate the upcoming holiday please have a nice and safe one.

  • @dimitristsoutsouras2712
    @dimitristsoutsouras2712 2 years ago +1

    At 22:58 why didn't you force the latest NFS version, since both Proxmox and TrueNAS are able to use v4+?
    How can you check which version each system supports? Any CLI command on both Proxmox and TrueNAS?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago +1

      It's mainly to keep the admin simple
      In a lab for instance, servers get built, replaced, etc. so ideally I'd prefer to just set the version on the NAS and leave the server untouched
      This way both sides will negotiate the best version they can support and I'm controlling that choice in one place
      I have one NAS for instance which lets me disable v2/v3 and although it offers nothing more specific than v4, Proxmox has negotiated v4.2 with it
      Unfortunately, TrueNAS only seems to allow you to enable v4 within the NFS service, but Proxmox has negotiated v4.1 with it
      So if I can't disable v3 in TrueNAS, then any client could force it down to v3, so I don't see any security gain in hard coding the server
      As far as I'm aware we're still waiting for support of v4.2 in FreeBSD
      But if/when that becomes available, it means these servers will switch to that without me having to make any changes to them
      Anyway, if you want to check what version your proxmox servers are using, open a shell on one of them and run the following command
      nfsstat -m
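      Its output looks something like this (trimmed, with an illustrative mount point and server), and the negotiated version appears in the vers= flag
      /mnt/pve/nas-nfs from 172.16.20.5:/mnt/tank/pve
       Flags: rw,relatime,vers=4.2,rsize=131072,wsize=131072,hard,proto=tcp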

    • @dimitristsoutsouras2712
      @dimitristsoutsouras2712 2 years ago +1

      @@TechTutorialsDavidMcKone Nice. I am using Scale instead of Core but it also only has an enable NFSv4 option, without specifying which version.
      nfsstat -m (I thought it would be --version; for some reason -v brings a lot of output in a form like running htop) showed NFS v3 :(
      From the same TrueNAS storage, connected to Proxmox with a 10G connection, I attached 2 extra disks to a Windows Server VM, one with the qcow2 disk format and the other with the raw disk type. I then downloaded a 4GB file and tried to copy it to those attached storages. The qcow2-based storage achieved 560-790MB/s while the raw-based one started at 450 and seconds later dropped down to 40-45MB/s for the entire transfer.
      The results seem upside down to me from what they should be.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      I'm surprised it selected v3 as Scale is based on Debian, similar to openmediavault, and the OS supports v4.2
      Did you restart the NFS service after selecting v4?
      I haven't tried Scale myself as I just want a basic NAS and Core seems to win the performance battle

    • @dimitristsoutsouras2712
      @dimitristsoutsouras2712 2 years ago

      @@TechTutorialsDavidMcKone No, possibly the issue is me not selecting/enabling v4 in the first place. I let it decide via auto-negotiation. I'll enable it and check again. By the way, the nfsstat command has no effect in Scale's CLI.
      I know that Core is more performant than Scale but I've chosen Scale for the extra apps, and FreeBSD seems to be left behind by evolution. I prefer a more unified Linux environment
      New edit: well, after selecting v4 on TrueNAS and restarting the service (rebooted Proxmox as well), running nfsstat -m showed 4.2
      Now I am trying to figure out what would be best to choose as that extra storage for the Windows Server VM... raw disk type or qcow2, and with what sync options

  • @andreusfigueiredodesouza7901
    @andreusfigueiredodesouza7901 1 year ago +1

    Thanks, it works for me

  • @DarudeSandworm
    @DarudeSandworm 2 years ago +2

    I think the part I'm missing is the "giving it access to the NAS part" but you kind of skipped it.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      Not quite sure what you mean, can you clarify please?
      In the video I set up an NFS share in TrueNAS as an example and allowed access to it for a dedicated user account I also created
      However, Proxmox will use its root account to connect and doesn't provide an option in the GUI to set user credentials for login
      So on the NAS, the NFS share was configured to map the root user to the user we allowed access for
      Next I showed how to connect to that NFS share from Proxmox so I'm not sure what part is missing

    • @rudypieplenbosch6752
      @rudypieplenbosch6752 2 years ago +1

      That was the essential part, so now we know the Proxmox root account is used. Pretty weird, so I need to change my root account on my TrueNAS in order to let Proxmox in? Probably it can be done another way.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      It's best not to allow root access
      This is because if you do, that user has the potential to try and access other parts of the system using root privileges
      So what I did in the video was create a dedicated account for NFS access only
      I then used the NFS option to squash root account access on the share so that Proxmox ends up with the rights of this user instead, and all that has access to is the data in that folder
      Granted anyone on the network can access the share, so it's best to have this storage network restricted, but isolated storage is a common best practice anyway

    • @rudypieplenbosch6752
      @rudypieplenbosch6752 2 years ago +1

      @@TechTutorialsDavidMcKone Yes I know, but you should have added the part where you create an extra user on Proxmox and with those credentials initialize the access to the NFS share; it would have been clearer. Now this essential info is missing and that is a bit of a shame since you spent quite some effort on this tutorial.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      Proxmox doesn't offer a user account option for NFS in the GUI and instead it tries to connect as root
      So there is no point creating a user account on Proxmox if we can't use it
      Now for most computer systems, the guidance is to not allow remote root user access but Proxmox doesn't care
      In which case we compromise and allow access to the share as long as we see an attempt to connect as root, and it doesn't matter what password is sent
      It's a terrible idea really but it's the lesser of two evils and bear in mind these sort of systems were built when a root account was the only account being used
      Like other storage solutions then, our main security comes from only allowing legitimate computers to have direct physical access to the NAS
      But we now have an extra problem to resolve
      The NAS uses file permissions on its files and folders, so we still need to see some user account being used, it just can't be root
      To deal with this, we configure NFS to ignore the root account from Proxmox and substitute it with the pveuser account we created on TrueNAS and made owner of the folder
      Proxmox doesn't know about this account and still thinks it's connected as root
      TrueNAS, on the other hand, treats the connection as if pveuser has logged in
      Although this would be the same on another NAS using NFS

  • @royrowan4664
    @royrowan4664 1 year ago +1

    Thanks!

  • @TheBabooner
    @TheBabooner 2 years ago +1

    Wouldn't storing the VMs on a NAS (and hence most likely a NAS drive) affect performance of the VM versus storing it locally on an SSD or NVMe? Thank you for the great content and the exceptional level of patience

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago +3

      Thanks for asking
      Yes, you're right. NAS data transfers will be slower than local storage
      It's a trade off for reliability and cost savings and makes more sense for clusters
      If a hypervisor fails you can't easily access the files in its local storage
      With shared storage, a cluster can get those VMs back up and running much quicker because another server can access them immediately or run another instance as a hot standby
      If you need to patch a hypervisor you can migrate the VMs quickly because you only copy things like the VM's RAM and CPU state. With local storage you would also have to copy the VM's hard drive(s) over the network from one local storage to another
      Another appeal for me is I just have to back up the NAS because it has direct access to these VM files
      Granted, when you first build a VM it will take much longer to install an OS like Windows if it's stored on a NAS, but if you're counting the minutes then you could always install it to local storage then migrate it to the NAS afterwards
      Once the VM is up and running though, the differences aren't so noticeable because the operating system and application you run is loaded into RAM anyway
      But it would be a bigger issue if you've got something like a database that needs to perform a lot of disk writes
      Granted the NAS is a single point of failure in all this, but the ones I've had over the years have been very reliable. They only get replaced when it feels necessary
      Just checked and this one has been running 24x7 for nearly 7 years, yikes

    • @TheBabooner
      @TheBabooner 2 years ago

      @@TechTutorialsDavidMcKone you Sir are a scholar and a gentleman. Thank you for taking so much time of your day to respond.

    • @richardcole5826
      @richardcole5826 9 months ago

      @@TechTutorialsDavidMcKone What about setting up Ceph pools in Proxmox, especially for the VM files? It seems to be a great new option, and as far as resilience goes, your NFS is a single point of failure, as opposed to having files synced to multiple cluster nodes. In some cases you can eliminate the expense of an expensive storage array for the VMs by using the nodes' local disks in a Ceph pool, and usually the cluster nodes have plenty of untapped drive bays.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  9 months ago

      @@richardcole5826 I did look into it a while ago, but my concern is when you have a small cluster
      You need a minimum of 3 nodes to run Ceph, but I understand it's better to have more
      For me, that means one more server to run 24x7 rather than 2 PVE servers and a low power qdevice
      Another thing that's nagging me is how many hard drives this would "burn" through versus simple replication to a backup NAS
      You also really need to aim for 10Gb connectivity for this, so the cost keeps going up

  • @andrewmaynard6693
    @andrewmaynard6693 1 year ago +1

    good video!

  • @coolchap22
    @coolchap22 2 years ago +1

    Hi, I was able to mount in Proxmox using the tutorial. I have mounted NFS in my container using a mount point. I am using Transmission to download files directly onto the NFS share. I am getting permission denied. Any pointers to solve it?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      Is it the container that's being denied access?
      If so it's independent of Proxmox and will have its own method of authentication so it depends on how you've set up the mount
      You could try the map all option instead of map root as a quick way to see if it's a user authentication problem
      Does the NFS server restrict access to certain IP addresses as that's a possibility?
      Another reason I've found is if the NFS server has DNS issues and can't resolve the client's hostname. It's part of a security check and results in a connection but no actual access

    • @coolchap22
      @coolchap22 2 years ago

      @@TechTutorialsDavidMcKone Hi, thanks for the quick reply. I can create folders or files from the shell in the container. It's Transmission that's having trouble

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      I'm a bit confused so please help me better understand the situation
      Have you mapped to an NFS share on a NAS from a Linux container in Proxmox?
      In the shell, you can create and edit a file on that share but can't copy a file to it?
      Or is the issue that an application running in the container can't send a file?

    • @coolchap22
      @coolchap22 2 years ago

      @@TechTutorialsDavidMcKone, I have mapped NFS to the Proxmox node and then a mount point in the container.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      Not sure what to suggest really as it sounds like a configuration issue and I'm not familiar enough with this strategy
      I just run a NAS and map everything to that so it's not something I run into
      It could even be an AppArmor issue like I found in a reddit post:
      _"I also had some issues with apparmor, but I solved it as described below.
      Edit these files on Proxmox and add the following line at the end of the file inside the last bracket }
      # nano /etc/apparmor.d/lxc/lxc-default-cgns
      mount fstype=nfs
      # nano /etc/apparmor.d/lxc/lxc-default-with-mounting
      mount fstype=nfs*
      And then restart AppArmor (or the host)."_

  • @ianwilliams7740
    @ianwilliams7740 1 year ago +3

    This doesn't help people who want to share an existing share on a NAS with multiple containers/VMs via NFS. This is about creating virtual disks on a SAN via NFS...

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago +1

      That's true
      Connecting VMs to a shared drive for instance is independent of the hypervisor
      This isn't something you would do within Proxmox VE itself
      Instead you configure the NFS client within the OS of each VM and how you do that depends on what its OS is

  • @jenniferw8963
    @jenniferw8963 1 year ago +1

    Proxmox is really pissing me off. I've spent 6 hours so far trying to get the GUI to create an NFS storage. The NFS share mounts fine in the shell/CLI using mount, but it says it's offline when I use the GUI.

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago +1

      Can't say I've run into problems on the Proxmox side as such
      If it helps, you can check /etc/pve/storage.cfg to compare what you're doing in the CLI versus what Proxmox is doing via the GUI
      I've had permissions problems and firewall restrictions that have stopped NFS working but that's been resolved on the NFS server
      And there's not a lot of options available in the Proxmox GUI anyway so I would suggest checking the NFS server logs
      You can also run tail -f /var/log/syslog on the Proxmox side when you try to create a connection
      Or resort to tcpdump to try and figure out where things are going wrong
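      A few commands can also help narrow down where it fails (a sketch; swap in your NFS server's address)
      # ask the server what it exports
      showmount -e 172.16.20.5
      # check what Proxmox itself can see
      pvesm scan nfs 172.16.20.5
      # list the RPC services NFS depends on
      rpcinfo -p 172.16.20.5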

  • @Ohamdaoui
    @Ohamdaoui 2 years ago +1

    Could you change the title from Proxmox NFS Share to TrueNAS NFS Share?

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      I admit, it could do with a title change as the algorithms have changed
      But the video was more for how to connect Proxmox to NFS shared storage

  • @pelican-p6u
    @pelican-p6u 1 year ago +1

    content of 10 min becomes half an hour with fake ppl like this

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  1 year ago

      Thanks for the feedback, always appreciated
      I'm always looking for ways to improve the videos I put out
      So when you say 10 mins do you mean you watched for 10 mins then gave up because you couldn't find what you were looking for?
      Or did you find what you want but you only wanted to see a demonstration without explanation and so would have preferred a shorter video?

  • @dimitristsoutsouras2712
    @dimitristsoutsouras2712 2 years ago +1

    First of all let me express my gratitude for that quick screen at 19:20 with a little alphanumerical string
    What the above line means is that at laaaast a person has shown the network tab, instead of just clicking add storage and filling out the blanks in order for the NAS server to show up the share!
    Yes it might seem simple to many but it isn't. One picture is worth more than a thousand words. From that pic I realised I was right on the forums, where I literally can't remember the dozens of times I asked:
    - does Proxmox need only vmbrs to work?
    - can't you just put a static address on the NIC/port/interface, have a static IP on the relevant port at the NAS side, and communicate with each other this way?
    Those two simple damn questions have never been answered, not even by gurus on the Proxmox forums. The why eludes me even today.
    From that screen of yours (print-screened it / printed it at wall wallpaper size :)) you can assume the following as well:
    - when a vmbr is created on top of a NIC/port/interface, that NIC/port/interface shouldn't auto start, because then goodbye Proxmox GUI or any network access at all (it happened more than once after updates: that damn thing goes into /etc/network/interfaces and adds a line with auto enps70 above the port(s) of the vmbr, below which follows the correct line iface enps70 inet manual). So the bottom line is that only the bond or vmbr containing the port has to autostart, not the correlated port
    - you proved, and answered for me at least, that you can set up a port/NIC/interface without having to create a bridge on top of it.
    Now my last questions about the network connection between the NAS and Proxmox are:
    - do you use a 10Gb connection / DAC cable to connect them together?
    - if yes, in how many places did you have to set the MTU to 9000 in order to enable jumbo frames? At the Proxmox level there is an MTU value for the port, the bond and the bridge. Do you use it for all of them (in case you use the 10Gb connection on top of a bond on top of 2 10Gb ports, for instance)? A recent answer I had about this sounds reasonable.
    - how come you didn't use an iSCSI connection and preferred NFS, which adds the synced-writes delay?
    - I have the same setup as you and when I go to Proxmox's CLI and ping the NAS's IP, it is inaccessible. I'm pretty sure about / double-checked the cables, NIC cards and settings (OK, those are easy to set up)

    • @TechTutorialsDavidMcKone
      @TechTutorialsDavidMcKone  2 years ago

      I don't use 10Gb at the moment
      Although jumbo frames apply more to large file transfers
      I've only seen a need for this on my physical network so far, even though the computers are on 1Gb, as I make videos and it made a big difference for 20GB+ file transfers
      But I haven't done this for Proxmox as the VMs I run rarely do large data transfers once they're up and running
      So I couldn't really say where it's most efficient to make these changes
      I used to use iSCSI in the past, but you needed a separate backup system to access the share and then back that up somewhere else
      The main reason I switched to NFS is because the NAS can access these files and it's much easier, quicker and cheaper to let the NAS run its own backup system
      Besides, I don't want to have to restore an entire iSCSI share just to restore a file for one VM
      Yes, iSCSI is faster and that's in part because the writes aren't synced by default, but a power loss for iSCSI is more likely to lead to data loss
      NFS seems fine for me as once the VMs are built and running in RAM I don't notice any latency
      As for not being able to ping your NAS, check the interface is active and Proxmox can ping its own IP address. I got caught out by that once as it's not so obvious when an interface isn't active

    • @dimitristsoutsouras2712
      @dimitristsoutsouras2712 2 years ago

      @@TechTutorialsDavidMcKone Nice points you mentioned there. As for the connection, the port was active, and after I managed to get my hands on another 10G NIC it worked at once. So there might have been a hardware failure or something.
      Thank you once more.