Proxmox VE Dedicated Migration Interface
- Published 8 Sep 2024
- In this video we show you how to configure a dedicated migration interface for Proxmox VE.
By default, migration traffic is sent over the interface Proxmox VE was configured with when it was installed, and that can cause remote management and user connectivity issues. Even if a VM's hard drive is on shared storage, a live migration still requires transferring the VM's RAM.
Provided the hypervisors have multiple physical or partitioned interfaces, you can assign a specific interface to carry this migration traffic and avoid oversubscribing other interfaces.
NOTE: If you are using the firewall in Proxmox VE, you will need to allow SSH traffic between the hypervisors on this interface.
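For reference, the migration network chosen in the GUI is stored cluster-wide in /etc/pve/datacenter.cfg. A minimal sketch, assuming a dedicated 10.10.10.0/24 network carries the migration traffic (the subnet is an example, not from the video):

```
# /etc/pve/datacenter.cfg
# "secure" tunnels the migration over SSH (the default);
# "network" pins migration traffic to the given CIDR.
migration: secure,network=10.10.10.0/24
```

Every node in the cluster needs an interface with an address inside that CIDR, or migrations will fall back with an error.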
If you want to learn more about Proxmox VE, this series will help you out
th-cam.com/video/sHWYUt0V-c8/w-d-xo.html
This was exactly what I was looking for. Thanks for all your Proxmox videos, David. They've been so useful in expanding my Proxmox knowledge beyond the initial basic configuration.
Thanks for the feedback
Good to know these videos have been helpful
Great info! Just what I needed. I switched to a 10GbE interface from a 1GbE interface and my migration times got cut in half. I'm using Ceph, so just the RAM contents needed to be moved (4GB). I'm still scratching my head as to why the speed-up was not greater given the 10x bandwidth increase. Using iperf3 I've confirmed the interface is 10G (Transfer 10.9 Gbytes, Bitrate 9.39 Gbits/sec).
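To put numbers on that question: a back-of-the-envelope sketch of the raw wire time for a 4GB RAM transfer. The 1GbE goodput figure (~0.94 Gbit/s) is an assumption; the 10GbE figure is the iperf3 result quoted above.

```python
# Rough wire-time arithmetic for moving 4 GB of RAM at two line rates.
ram_bytes = 4 * 1024**3

def transfer_seconds(size_bytes, gbit_per_s):
    """Time to move size_bytes at the given rate, ignoring all overheads."""
    return size_bytes / (gbit_per_s * 1e9 / 8)

t_1g = transfer_seconds(ram_bytes, 0.94)   # assumed usable 1GbE goodput
t_10g = transfer_seconds(ram_bytes, 9.39)  # iperf3-measured 10GbE rate
print(f"1GbE: ~{t_1g:.0f}s, 10GbE: ~{t_10g:.0f}s")

# Wire speed alone predicts a ~10x speed-up, so if the observed migration
# time only halved, the remainder is fixed overhead (setup, re-copying
# dirtied RAM pages, disk/CPU limits) rather than network bandwidth.
```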
The problem is that benchmarks don't reflect reality
Applications tend to be a lot slower when transferring files and you have to go down a rabbit hole to try and find where the bottleneck is
I would suggest, though, making sure jumbo frames are enabled on the switch and on the network cards in the same network/VLAN
What that setting should be depends on your hardware, though, so you'll have to experiment
I've maxed out the switch at 9216 bytes, but because I have some computers with Intel NICs, all the computers had to be limited to 8996 bytes, as anything higher can be a problem for some Intel NICs
Increasing the transmit and receive buffers on the network cards can help a bit as hardware buffers tend to be small
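As a sketch of those two tweaks on a Linux host (the interface name enp5s0 is an assumption, as are the buffer sizes; the 8996-byte MTU is the value from the comment above, and your NIC's supported maximums will differ):

```shell
# Raise the MTU - keep it at or below what the switch
# and every NIC in the same network/VLAN can handle.
ip link set dev enp5s0 mtu 8996

# Show the NIC's current and maximum ring buffer sizes...
ethtool -g enp5s0
# ...then enlarge the receive/transmit buffers (example values).
ethtool -G enp5s0 rx 4096 tx 4096
```

Note these settings don't persist across reboots on their own; they need to go into your network configuration (e.g. /etc/network/interfaces on Proxmox VE).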
After that you have to factor in things like disk and disk controller speeds
When I was doing my own testing, uploading a file to a mechanical disk was hardly better than when I was on 1Gb
But when I uploaded the same file to an SSD it was much, much faster
Monitor the switch interfaces as well as I once had a port max out during big file transfers. Replacing a DAC with a fibre cable and SFP+ ports resolved that for me
At the end of the day though, 10Gb+ networks are better suited to lots of concurrent traffic flows
When I uploaded several files at once and they weren't too big, the computer receiving them must have been able to cache them as the throughput was very high for that short duration window
But when I transferred just one very large file it usually maxed out at about 2.5Gb/s, with the rate dropping and rising, no doubt due to congestion algorithms kicking in because it was too much for the computer to cope with
Transfers like that would go faster, mind you, if I was using NFS instead of SMB, which brings me back to how applications can be the problem...
Thanks for this, it's confirmed that the problems I have are actually because my interfaces were not set up correctly to start with!
I must admit PVE isn't as obvious as some other hypervisors I've set up when it comes to interfaces
But I do still like it a lot
Thank you David, I was wondering how to do this. You are awesome, sir.
Good to know the video was useful, so thanks for the feedback
@@TechTutorialsDavidMcKone Sorry to bother you again. I have a situation where I need to move VM hard drives off the NAS they're currently on so I can rebuild it, and then move them back onto it. I have a 2.5Gb switch which I am using for the migration under Options. Will it move on the same network, or will it use the management network?
@@michaelcooper5490 The migration interface is more for hypervisor-to-hypervisor transfers, e.g. when the hard drive files are stored on local storage
But when the hard drive files are put on a NAS, they stay where they are when the VM is migrated, and the migration interface will be used for syncing the RAM contents between the two hypervisors
In this case, if the VM hd files need to move from the NAS to another computer the hypervisor will pull the files over the NIC that connects it to the NAS
And then send them over the NIC that connects it to where the files need to be sent
It could be the same NIC, it could be more than one, it really depends on your situation
Whether this transfer involves the migration or management interface depends on whether they provide connectivity to the source or destination
@@TechTutorialsDavidMcKone Got ya thank you very much I appreciate it.
Thank you!
You're welcome
Hello David,
thanks for your great videos!
In my case this does not work.
Depending on the node, the network address in the Migration settings differs.
Trying to migrate i get the error message: "could not get migration ip: multiple different, IP addresses configured for network '10.XX.YY.ZZ/16' "?
Greetings, Micha
Normally computers don't allow multiple interfaces in the same subnet, but that error suggests you might have
It's unusual to assign IP addresses belonging to a /16 network as it's too large; typically it would be broken down into /24 subnets, for instance
I'm wondering if a server has a NIC with an IP address and /16 mask in error. If so that would overlap with a lot of other subnets and lead to confusion
I suggest you check to make sure all of the servers in the cluster have a network interface in the same subnet and that these are unique before you try to assign a migration network
You won't want a mix or overlap of subnets: for instance, one server with an IP of 10.1.1.127/24 and another with an IP of 10.1.1.130/25
From the first server's perspective, the second server is in the same subnet, but the second server will try to connect using its default gateway as the subnets are different
And what you'll want are all servers with a network address in the same subnet
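That 10.1.1.127/24 vs 10.1.1.130/25 example can be checked with Python's ipaddress module. A quick sketch, not Proxmox-specific:

```python
import ipaddress

# The mismatched pair from the example above.
a = ipaddress.ip_interface("10.1.1.127/24")  # first server
b = ipaddress.ip_interface("10.1.1.130/25")  # second server

# From the first server's view, the second is on-link...
print(b.ip in a.network)   # True: 10.1.1.130 is inside 10.1.1.0/24

# ...but the second server sees the first as off-link and would
# route to it via its default gateway instead.
print(a.ip in b.network)   # False: 10.1.1.127 is outside 10.1.1.128/25

# The /25 also sits entirely inside the /24, so the subnets overlap.
print(b.network.subnet_of(a.network))  # True
```

Running a check like this against every node's migration-network address is a quick way to confirm they all share one unique subnet before setting the migration network.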
@@TechTutorialsDavidMcKone Hello David, you are right - I found my mistake - two devices in one subnet... Because of some errors I had to change my firewall, and on that occasion I reinstalled the Proxmox cluster, changing from 192.168.x.x addresses with /24 subnets to 10.x.x.x addresses with /16 subnets and VLANs for clearer organisation. I had used different addresses in the same subnet on different LAN ports. A Ceph installation error message made me understand... ;) As you suggested, I changed this device back to /24 subnets and now it works. I'm not sure, but it seems VLANs don't work everywhere, and I'm searching for a way to implement a trunk interface in SDN... Thank you very much. Sincerely, Micha