Proxmox VE Dedicated Migration Interface
- Published: 6 Feb 2025
- In this video we show you how to configure a dedicated migration interface for Proxmox VE
By default, this traffic is sent over the interface Proxmox VE was configured with when it was installed
That can cause remote management and user connectivity issues, because even if a VM's hard drive is on shared storage, a live migration still has to transfer the VM's RAM
Provided the hypervisors have multiple physical or partitioned interfaces, you can assign a specific interface to carry this migration traffic and avoid oversubscribing other interfaces
NOTE: If you are using the firewall in Proxmox VE, you will need to allow SSH traffic between the hypervisors on this interface
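For reference, a rough sketch of what that looks like on disk once it's configured (the 10.10.10.0/24 subnet is just an example, substitute your own): the migration network picked under Datacenter > Options > Migration Settings ends up in /etc/pve/datacenter.cfg, and the SSH rule can be added to the cluster firewall
```
# /etc/pve/datacenter.cfg - migration settings written by the GUI
# "secure" tunnels the migration over SSH; the network is an example
migration: type=secure,network=10.10.10.0/24

# /etc/pve/firewall/cluster.fw - allow SSH between hypervisors on that subnet
# (only needed if the Proxmox VE firewall is enabled)
[RULES]
IN SSH(ACCEPT) -source 10.10.10.0/24 -log nolog
```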
=============================
SUPPORT THE CHANNEL
Donate through Paypal:
paypal.me/Davi...
Donate through Buy Me A Coffee:
buymeacoffee.c...
Become a monthly contributor on Patreon:
/ dmckone
Become a monthly contributor on RUclips:
/ techtutorialsdavidmckone
==============================
=============================
MY RECORDING HARDWARE:
Blue Yeti USB Microphone
amzn.to/3IfL3qm
Blue Radius III Custom Shockmount for Yeti and Yeti Pro USB Microphones
amzn.to/3G3f89P
RØDE PSA1 Professional Studio Arm
amzn.to/3Z3lPBF
Aokeo Professional Microphone Pop Filter
amzn.to/3VuZl9H
Logitech StreamCam
amzn.to/3WyZTwl
Elgato Key Light Air - Professional 1400 lumens Desk Light
amzn.to/3G81OB9
Neewer 2 Packs Tabletop LED Video Light Kit
amzn.to/3CcuN5O
Elgato Green Screen
amzn.to/3CoJBOL
=============================
==============================
MEDIA LINKS:
Website - www.techtutori...
Twitter - / dsmckone1
Facebook - / dsmckone
Linkedin - / dmckone
Instagram - / david.mckone
==============================
If you want to learn more about Proxmox VE, this series will help you out
ruclips.net/video/sHWYUt0V-c8/видео.html
This was exactly what I was looking for. Thanks for all your proxmox videos David. They've been so useful in expanding my proxmox knowledge beyond the initial basic configuration.
Thanks for the feedback
Good to know these videos have been helpful
Thank you!
You're welcome
Thanks for this, it's confirmed that the problem I have is actually that my interfaces are not set up correctly to start with!
I must admit PVE isn't as obvious as some other hypervisors I've set up when it comes to interfaces
But I do still like it a lot
Great info! Just what I needed. I switched to a 10gbe interface from a 1gbe interface and my migration times got cut in half. I'm using ceph, so just the RAM contents needed to be moved (4GB). I'm still scratching my head as to why the speed up was not greater given the 10x bandwidth increase. Using iperf3 I've confirmed the interface is 10 G (Transfer 10.9 Gbytes, Bitrate 9.39 Gbits/sec).
The problem is that benchmarks don't reflect reality
Applications tend to be a lot slower when transferring files and you have to go down a rabbit hole to try and find where the bottleneck is
I would suggest though making sure Jumbo frames are enabled on the switch and on network cards in the same network/VLAN
What that setting should be depends on your hardware though, and you'll have to experiment
I've maxed out the switch to 9216 bytes but because I have some computers with Intel NICs, all the computers had to be limited to 8996 bytes as anything higher can be a problem for some Intel NICs
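As a hedged example of where that setting lives on a PVE host (the interface names and the 8996 value are from my own setup, adjust for your NICs and switch), the MTU goes in /etc/network/interfaces on both the physical port and its bridge
```
# /etc/network/interfaces - jumbo frames on the physical port and its bridge
auto eno2
iface eno2 inet manual
        mtu 8996

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        mtu 8996

# Verify end to end with a do-not-fragment ping
# (8996 MTU minus 28 bytes of IP/ICMP headers = 8968 byte payload)
# ping -M do -s 8968 10.10.10.12
```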
Increasing the transmit and receive buffers on the network cards can help a bit as hardware buffers tend to be small
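For the buffers, a quick sketch with ethtool (the interface name and sizes are examples; check the maximums your NIC actually reports first)
```
# Show current and maximum ring buffer sizes for the NIC
ethtool -g eno2

# Raise the RX/TX rings towards the reported maximums (example values)
ethtool -G eno2 rx 4096 tx 4096
```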
After that you have to factor in things like disk and disk controller speeds
When I was doing my own testing, uploading a file to a mechanical disk was hardly better than when I was on 1Gb
But when I uploaded the same file to an SSD it was much much faster
Monitor the switch interfaces as well as I once had a port max out during big file transfers. Replacing a DAC with a fibre cable and SFP+ ports resolved that for me
At the end of the day though, 10Gb+ networks are better suited to lots of concurrent traffic flows
When I uploaded several files at once and they weren't too big, the computer receiving them must have been able to cache them as the throughput was very high for that short duration window
But when I transferred just one very large file it usually maxed out at about 2.5Gb/s, with the rate dropping and rising no doubt due to congestion algorithms kicking in because it was too much for the computer to cope with
Transfers like that would go faster, mind, if I was using NFS instead of SMB, which brings me back to how applications can be the problem...
Thank you David, I was wondering how to do this. You are awesome, sir.
Good to know the video was useful, so thanks for the feedback
@@TechTutorialsDavidMcKone Sorry to bother you again, I have a situation where I need to move VM hard drives off the current NAS they're on so I can rebuild it, and then move them back onto it. I have a 2.5Gb switch which I'm using for the migration network under Options; will the move go over the same network or will it use the management network?
@@michaelcooper5490 The migration interface is more for hypervisor to hypervisor transfers e.g. when the hd files are stored in local storage
But when the hd files are put on a NAS, they stay where they are when the VM is migrated and the migration interface will be used for syncing the RAM contents between the two hypervisors
In this case, if the VM hd files need to move from the NAS to another computer the hypervisor will pull the files over the NIC that connects it to the NAS
And then send them over the NIC that connects it to where the files need to be sent
It could be the same NIC, it could be more than one, it really depends on your situation
Whether this transfer involves the migration or management interface depends on whether they provide connectivity to the source or destination
@@TechTutorialsDavidMcKone Got ya thank you very much I appreciate it.
Good and to the point, thanks!
Thanks for the feedback
Good to know the video was useful
Hi David. Thanks for this. Couple of questions, one of my three nodes doesn't have a spare NIC - I assume all three would need a dedicated NIC for this to work (so they are on a separate network)? Also, do you know if this would route all replication traffic over the same interface (or is it only for live migrations)? I set up replication as per one of your other videos and that's really the traffic I would like to separate from my main network. Cheers.
According to the documentation, you select a network for migration traffic so each server needs an interface in the same network
"... the network must be specified so that each node has exactly one IP in the respective network"
It doesn't need to be a physical interface mind, even a virtual one will do
There's a Bandwidth Limit setting which allows you to cap migration traffic, so you could carve up a 10Gb NIC for instance, putting the interfaces into different VLANs and setting an upper bandwidth limit for the different types of traffic
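As a sketch of that carve-up idea (the VLAN ID, addresses and limit are made-up examples): give the NIC a VLAN sub-interface for migration, point the migration network at it and cap the rate
```
# /etc/network/interfaces - VLAN 20 sub-interface on the 10Gb NIC for migration
auto eno2.20
iface eno2.20 inet static
        address 10.10.20.11/24

# /etc/pve/datacenter.cfg - use that subnet for migration and cap it
# (bwlimit values are in KiB/s, so 262144 is roughly 2Gbit/s)
migration: type=secure,network=10.10.20.0/24
bwlimit: migration=262144
```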
I'm not seeing anything about what interface the replication traffic uses or how to set a different one
I'm not seeing a separate bandwidth setting for it either
I did find this forum post though and looking at the feedback I suspect replication and migration traffic use the same interface
forum.proxmox.com/threads/force-zfs-replication-traffic-over-separate-nic.56081/
@@TechTutorialsDavidMcKone Thank you, using a virtual interface on my node without a third NIC is a good idea. I can use a spare physical interface on my other two nodes. Unfortunately my home network is limited to 1GbE for now. I currently have only one physical interface being used for Proxmox on each node (they also have a separate physical NIC for WAN because I've recently virtualised pfSense, combined with moving all of my VMs/LXCs from a single node to a cluster (followed your other video!)). The replications happen pretty fast but I have noticed occasional (very) minor performance issues and I assume it's because replication is saturating the link. Thanks again, your videos are excellent.
@@georgec2932 After the initial replication you get deltas so there should be less traffic
Like most file transfers though, I wouldn't be surprised if it tried to grab as much bandwidth as possible
If you go to Datacenter | Options there's a Bandwidth Limits setting that should let you restrict the traffic
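For reference, a hedged sketch of the two knobs I'd look at (the job ID and numbers are just examples): the datacenter-wide bandwidth limits, and the per-job rate limit on a replication job
```
# /etc/pve/datacenter.cfg - Datacenter > Options > Bandwidth Limits (KiB/s)
bwlimit: migration=102400,default=102400

# Or limit a single replication job in MB/s ("100-0" is an example job ID)
pvesr update 100-0 --rate 50
```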
I know this is an older video, but I’m hoping you can offer some wisdom.
My setup has 3 interfaces per server. I’m renaming the physical ports from what they really are just for simplicity.
eno1 is my management 1GbE with vmbr0 assigned by default
eno2 is my VM network 10GbE with my own created vmbr1
eno3 is another 1GbE that is on its own switch with the other node(s) for cluster traffic. Bridge named cluster1
The issue I am having is that if I assign vmbr1 an IP in a different subnet from vmbr0, in an attempt to allow migration traffic over it, the GUI immediately becomes inaccessible on both vmbr0 and vmbr1. vmbr0 is 192.168.1.2/24 and vmbr1 is 192.168.10.2/24. They are both being routed by the same router, so I suspect that may be part of my issue. How would you get around this?
You have to be really careful with routing
Typically a server should have only one interface that's assigned with a default gateway
It depends, but it might be the management interface for instance, which is then also used as the one to access the Internet for server updates
The other interfaces should have just an IP address as they are meant to be isolated subnets
So migration traffic for instance stays on one interface and so all servers in the cluster need an interface in that subnet for direct connectivity
None of those subnets should be reachable via a router or firewall as it results in asymmetric traffic and things can fall apart, more so when it's a firewall, but even a router can sometimes cause problems
For remote access, the servers should only be targeted by the IP address on the interface with a default gateway
In corner cases, you have to create static routes on the server
Let's say a computer in a remote subnet 10.1.1.0/24 is trying to reach the server on 10.2.2.20/24
The router has links in both subnets and so it can route between them
In which case, the server needs a static route for 10.1.1.0/24 pointing to the router 10.2.2.1
For other traffic, the server will still use the default gateway and the interface that's configured with one
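Putting that together as a sketch for the setup above (addresses copied from the comments; the bridge ports and the third bridge are assumptions for illustration), only vmbr0 carries a default gateway, and the corner case gets a static route on whichever interface owns 10.2.2.20
```
# /etc/network/interfaces - only vmbr0 has a default gateway
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# vmbr1 gets an address only, no gateway, so its subnet stays isolated
auto vmbr1
iface vmbr1 inet static
        address 192.168.10.2/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

# Corner case from above: the interface holding 10.2.2.20/24 gets a static
# route so traffic to the remote 10.1.1.0/24 subnet goes via the router 10.2.2.1
auto vmbr2
iface vmbr2 inet static
        address 10.2.2.20/24
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        post-up ip route add 10.1.1.0/24 via 10.2.2.1
```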
Does it have fallback to main network if migration network is not ready?
I haven't seen anything in the documentation to suggest that it would
@ Neither have I :) that is why I am asking. If you have this kind of installation (with a dedicated migration network) can you please test it?
@@hpsfresh Well a server should really have two NICs bonded together for this so it would have its redundancy that way
A more typical solution I've seen is to bond two high speed NICs for all traffic and use VLANs
PVE has a setting for migration bandwidth to go with that, to avoid overloading the NIC. Typically though, migrations tend to be restricted to out-of-hours anyway
Mind you some servers I've seen "break" a NIC into virtual NICs so that the OS thinks it has multiple NICs
Again, bandwidth limits are then imposed to avoid network overload
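As a rough sketch of that bonded approach on a PVE host (the NIC names, addresses and VLAN ID are examples, and LACP also needs matching switch configuration):
```
# /etc/network/interfaces - two NICs bonded, one VLAN-aware bridge on top
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Migration traffic rides the same bond in its own VLAN (VLAN 20 is an example)
auto vmbr0.20
iface vmbr0.20 inet static
        address 10.10.20.11/24
```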
Hello David,
thanks for your great videos!
In my case this does not work.
Depending on the node, the network address shown in the Migration Settings differs.
Trying to migrate i get the error message: "could not get migration ip: multiple different, IP addresses configured for network '10.XX.YY.ZZ/16' "?
Greetings Micha
Normally computers don't allow multiple interfaces in the same subnet, but that error suggests you might have exactly that
It's unusual to assign IP addresses belonging to a /16 network as it's too large. Typically it would be broken down into /24 subnets for instance
I'm wondering if a server has a NIC with an IP address and /16 mask in error. If so that would overlap with a lot of other subnets and lead to confusion
I suggest you check to make sure all of the servers in the cluster have a network interface in the same subnet and that these are unique before you try to assign a migration network
You won't want a mix or overlap of subnets, for instance one server with an IP of 10.1.1.127/24 and another with an IP of 10.1.1.130/25
From the first server's perspective, the second server is in the same subnet, but the second server will try and connect using its default gateway as the subnets are different
And what you'll want are all servers with a network address in the same subnet
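A quick way to check that on each node before setting the migration network (the grep pattern is just an example subnet):
```
# List every interface and its subnet at a glance
ip -br addr show

# The error above appears when a node has more than one IP inside the
# network chosen for migration, so grep for that subnet on each node
ip -4 addr show | grep '10\.20\.'
```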
@@TechTutorialsDavidMcKone Hello David, you are right - I found my mistake - two devices in one subnet... Because of some errors I had to change my firewall. On that occasion I reinstalled the Proxmox cluster and changed from 192.168.x.x addresses and /24 subnets to 10.x.x.x addresses and /16 subnets and VLANs for clearer organisation. I used different addresses in the same subnet for different LAN ports. A Ceph installation error message I did understand... ;) As you suggested, I changed this device back to /24 subnets and now it works. I'm not sure, but it seems that VLANs don't work everywhere and I'm searching for a way to implement a trunk interface in SDN... Thank you very much. Sincerely Micha