Proxmox iSCSI target with Synology NAS shared storage and troubleshooting
- Published: Jul 14, 2024
- If you are using Proxmox in your home lab environment and you have a SAN or NAS running iSCSI, you can attach this shared storage to your Proxmox node. In this video, we take a look at adding Synology iSCSI shared storage to Proxmox and see how this can be done. We also look at troubleshooting iSCSI storage in Proxmox.
Subscribe to the channel: / @virtualizationhowto
My blog: www.virtualizationhowto.com
_____________________________________________________
Social Media: / vspinmaster
LinkedIn: / brandon-lee-vht
Github: github.com/brandonleegit
Introduction to Synology NAS and Proxmox iSCSI LUNs! - 0:00
Beginning the setup of the iSCSI target on the Synology NAS - 1:02
Creating the new iSCSI target - 1:45
Creating the iSCSI LUN - 2:06
Viewing the summary screen of the iSCSI target creation process - 2:51
Enabling the "Allow multiple sessions" option and selecting the network adapter used - 3:13
Verifying the network configuration for the iSCSI connectivity from Proxmox to Synology - 4:35
Additional network adapter for iSCSI traffic - 4:50
Verifying routing and Layer 2 connectivity with iSCSI - 5:45
SSH'ing into your Proxmox node, verifying IP addresses, and running traceroute - 6:17
Proceeding with iSCSI configuration in Proxmox - 7:15
Adding the connection to the Synology NAS from Proxmox - 7:27
Describing LVM over iSCSI - unchecking the "Use LUNs directly" option - 8:25
Adding the LVM device on top of the iSCSI connection - 8:57
Synology LUN listed - 10:00
Creating a test VM to use the new Synology iSCSI LUN storage - 10:30
Troubleshooting iSCSI connections in Proxmox - 10:56
Using the ping command and the traceroute command review - 11:15
Errors when adding and removing LUNs in Proxmox - 11:33
Concluding thoughts on iSCSI LUNs on Proxmox - 12:20
Take a look at the write up of the process here:
- www.virtualizationhowto.com/2...
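The workflow in the video can also be done from the Proxmox shell. The following is a minimal sketch, not the exact commands from the video: the NAS IP, storage names, volume group name, and device path are all assumptions you must adjust for your environment, and the target IQN comes from the discovery step.

```shell
#!/bin/bash
# Hypothetical values for illustration only - adjust for your environment.
NAS_IP=10.10.10.50   # Synology iSCSI interface (assumption)

# 1. Verify connectivity from the Proxmox node to the NAS
ping -c 3 "$NAS_IP"
traceroute "$NAS_IP"   # should show a single hop on a non-routed segment

# 2. Discover the iSCSI targets the Synology exposes
iscsiadm -m discovery -t sendtargets -p "$NAS_IP"

# 3. Add the iSCSI storage in Proxmox (equivalent to the GUI step;
#    use the IQN returned by the discovery above)
pvesm add iscsi syn-iscsi --portal "$NAS_IP" \
    --target iqn.2000-01.com.synology:nas.Target-1.xxxxxxxx

# 4. Layer LVM on top of the LUN instead of using it directly.
#    /dev/sdb is a placeholder; confirm the LUN's device with lsblk first.
pvcreate /dev/sdb
vgcreate vg-syn /dev/sdb
pvesm add lvm syn-lvm --vgname vg-syn --content images,rootdir --shared 1
```

The `--shared 1` flag matters in a cluster: it tells Proxmox every node can reach the same volume group over iSCSI.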
Great information! Thanks for imparting your knowledge on iSCSi while giving my eyes something to SCSi 🤓
Great, helpful video. I'm using NFS with the Synology. Thank you
Great, I've been testing PVE recently, and I happen to have a "black" (DIY, non-official) Synology NAS too 🤣
Thank you, finally I get it! I couldn't connect the dots between getting the iscsi lun onto the Proxmox host, and allowing the host to actually use it for anything. Creating the LVM storage on top of the iscsi storage was the missing link.
Thank you for posting this. I have an iSCSI LUN on a Synology which was attached to my old Intel Mac mini so we could share the old iPhoto libraries. This required licensed software, which I have but macOS no longer supports. I have a Proxmox cluster, so I can now try to get access to the files.
Great video, thanks.
Thanks for the information!
Any time!
Very helpful. Thank you, sir.
Glad it was helpful!
Great video, man!
I was finally able to do this. I did not realize I needed a 2nd NIC connection and was trying to do this with the single NIC on my Lenovo M900 Tiny, but I plugged in a USB NIC adapter, got the 2nd connection, and then the rest of your steps worked great. Is there any way to avoid adding a 2nd NIC? THANK YOU so much. I was trying a couple of things, but I think you got me going in the right direction now; I will know tomorrow when I have more time to really test this.
Great video. 🐾🐾 :)
I'd love to see you do an episode on multipath with iSCSI and 2 Proxmox nodes.
Deadwing, will keep that one in mind!
@@VirtualizationHowto I also would like to see this. I have set up two Proxmox servers and want to utilize my Synology to handle all storage, and would love to see how you could create shared storage on the Synology and then access it from both Proxmox servers at the same time.
I second this. Too little information about MPIO out there. And I would suggest doing both iSCSI and NFS, where NFS is likely the easiest option for most.
good work
Very helpful, thank you. Am I the only one that noticed the doggo in the background? lol
:) @rongabriel good catch!
thnx ser!!!
Thank you 👋
To get it working, I had to log in over SSH and run apt install open-iscsi.
You can verify whether this is your issue with pvesm status.
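The fix described in the comment above would look roughly like this on the Proxmox node; this is a sketch assuming the open-iscsi initiator is simply missing, which is not always the case on every install.

```shell
#!/bin/bash
# Check storage status first - a missing iSCSI initiator typically
# shows the iSCSI storage as inactive or throws an error here
pvesm status

# Install the iSCSI initiator package and make sure the daemon runs at boot
apt update
apt install -y open-iscsi
systemctl enable --now iscsid

# Re-check: the iSCSI storage should now report as active
pvesm status
```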
Adding the iSCSI in Proxmox has the bug you mentioned. Even when nothing is using the volume, you either wait, delete and recreate, or reboot, and then you'll see the iSCSI appear in the GUI.
Instead of messing around with VLANs and such, since I'm not that far yet, would it be okay to just connect the second port of my Synology NAS to a secondary port on my Proxmox blade? As far as I know I'm fully VLAN capable (HP DL360 g9, HP j9980a layer 2 switch) but I just want to avoid complex stuff as much as I can because this network isn't just a homelab, it's also what I use when coming back home from work tired to watch some series and such. Don't want too many abstraction layers hindering peace and tranquillity after work.
Edit: It seems to work. What's important is the "line of sight" and making sure that both devices are on the same network.
what's the reason why we don't use LUNs directly?
Hi there, I have the same setup with a 4-node cluster; 2 nodes can see the LVM and 2 cannot. Can you help troubleshoot this case? Thanks in advance.
Is there a way to scan the target to find newly created LUNs? I had to restart the host to get the new LUN to show up.
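For the question above: you can usually pick up newly created LUNs without a reboot by rescanning the existing iSCSI session. A sketch of the usual approaches (session IDs and device names vary per host):

```shell
#!/bin/bash
# Rescan every logged-in iSCSI session for new LUNs
iscsiadm -m session --rescan

# Or list sessions and rescan a specific one by its ID
iscsiadm -m session
iscsiadm -m session -r 1 --rescan   # "1" is an example session ID

# Fallback: ask the kernel to rescan all SCSI hosts directly
for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done

# The new LUN should now appear as a block device
lsblk
```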
You didn't mention you have separate layer 2 networks, one for iscsi and one for other network stuff. I just happen to notice your iscsi interface was set to jumbo frames and your other network interface was at 1500. I was just thinking that was worth a mention. Also pictures say a lot when you are explaining networking, but it's your video and not mine. In proxmox maybe they are separate bridges connected to separate switches or directly attached to the SAN. Either way you must be using separate physical interfaces as you leave the proxmox box.
@ipstacks11, thank you for the comment! Please share some of your expertise and experiences over on the forums here: www.virtualizationhowto.com/community. Thank you again.
Hello. Is it possible to expand Synology storage by mounting an NFS or iSCSI target on Proxmox/TrueNAS?
@Piotr_T thank you for the comment! Can you join up on the forums and we can discuss further? www.virtualizationhowto.com/community
hi can you show how to do pci passthrough on a hba card?
Does Proxmox support SAN FC interfaces and storage mapping? If so, kindly provide links.
Yes it does, I have been running an old NetApp DS14-MK2-FC (loud as hell) connected directly to a Proxmox server with an FC adapter.
Just make sure to fix the disks to use a 512-byte blocksize instead of NetApp's 520.
@@landychev Kindly provide links I can refer to, if possible?
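The 520-to-512 blocksize fix mentioned above is commonly done with sg_format from the sg3_utils package. This is a sketch only: /dev/sdX is a placeholder for the actual disk, and the low-level format destroys all data and can take hours.

```shell
#!/bin/bash
# sg3_utils provides sg_format and sg_readcap
apt install -y sg3-utils

# Confirm the current logical block size (expect 520 on NetApp shelf disks)
sg_readcap --long /dev/sdX   # /dev/sdX is hypothetical - identify your disk first

# Low-level format to 512-byte sectors; WIPES THE DISK, may run for hours
sg_format --format --size=512 /dev/sdX
```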
why iSCSI over NFSv4 ???
I tend to prefer block storage over file-level storage for virtualized workloads
@@VirtualizationHowto I did not know about multipath-tools; I figured it would just work or auto-enable with two or more NICs.
@@VirtualizationHowto Why is that? In my mind, block devices running over the network are much more susceptible to data corruption if there is a networking issue. NFSv4.1 now also supports multipathing, which allows using fairly basic networking equipment and still getting redundancy.
"ensuring the connection is a layer 2 connection"
As someone with a CCNA and like 7 certs, this is brain-breaking, because it's the wrong way to talk about that stuff.
Anything involving an IP address is layer 3, and simply having link lights means you have layer 2 connectivity.
Just pinging an address means layers 1, 2, 3, and 4 are operational, because ICMP is layer 4.
What you mean to say is non-routed, 1 hop, or point-to-point.
@christopherhall7216 ...thank you for the comment...VLANs exist at layer 2 and this is what was meant by layer 2 since you don't "route" vlans.
@@VirtualizationHowto I see what you mean, but for there to ever be layer 3 connectivity, layer 2 has to be solid, whether it be several links of layer 2 or one. It wouldn't matter if it was routed 9 hops; layer 2 would be connected.
"Broadcast domain" seems like the term you were after, but that can even be stretched across a WAN with L2TP, so ¯\_(ツ)_/¯
I'm just not used to people using those networking terms like that.
Just my $0.10: I came across the same "already used by volume" error. I simply used "wipefs --all --backup /dev/sda" and then was able to create the LVM.
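Expanding slightly on the fix above: wiping the wrong disk is destructive, so it is worth identifying the iSCSI-backed device first. A hedged sketch, where /dev/sdX stands in for whatever device your LUN appears as:

```shell
#!/bin/bash
# Identify the iSCSI-backed disk first; TRAN shows "iscsi" for the LUN
lsblk -o NAME,SIZE,TYPE,TRAN,MOUNTPOINT

# Remove stale filesystem/LVM signatures. --backup saves the wiped
# sectors to ~/wipefs-*.bak so the operation can be reverted if needed.
wipefs --all --backup /dev/sdX   # /dev/sdX is a placeholder for your LUN

# Creating the LVM storage in the Proxmox GUI (or via pvcreate/vgcreate)
# should now succeed, since the "already used by volume" check passes.
```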