@tjkreidl I think the issue is that he's got no 10G switch, hence the direct connection
But you live and you learn; the best option would be to pick up a cheap 10G switch and make it right!

Posts
-
RE: 10gb backup only managing about 80Mb
-
RE: 10gb backup only managing about 80Mb
@utopianfish I see, that explains a lot.
-
RE: 10gb backup only managing about 80Mb
@acebmxer said in 10gb backup only managing about 80Mb:
I could be wrong, but in the VMware world the management interface didn't transfer much data, if at all. It was only used to communicate with vSphere and/or the host. So no need to waste a 10Gb port on something that only sees KBs worth of data.
Our previous server had 2x 1Gb NICs for management, 1x 10Gb NIC for network, 2x 10Gb NICs for storage and 1x 10Gb NIC for vMotion.
Tbh I do the same on our VMware hosts: 2x10G or 2x25G, with the management as a VLAN interface on that vSwitch, as well as the VLANs used for storage, VM traffic and so on.
I find it much easier to keep the racks clean if we only have 2 connections from each host, rather than 4; it adds up really fast and makes the rack impossible to keep nice and clean when you have 15-20 machines in it + storage + switches + firewalls and all the inter-connections with other racks, IP transit and so on.
Edit:
Except for vSAN hosts, where the vSAN traffic needs at least 1 dedicated interface, but those are the only exception. -
RE: 10gb backup only managing about 80Mb
@utopianfish said in 10gb backup only managing about 80Mb:
@nikade I think the problem is it's using the mgmt interface to do the backup... it's not touching the 10Gb NICs. When I set it under Pools/Advanced/Backup to use the 10Gb NIC as default the job fails... setting it back to none the job is successful with a speed of 80 MiB/s, so it's using the 1Gb mgmt NIC... how do I get the backups to use the dedicated 10Gb link then?
May I ask why your management interface is not on the 10G NIC? There is absolutely no downside to that kind of setup.
We used this setup for 7 years on our Dell R630's without any issues at all. We had 2x10G NICs in our hosts and put the management interface on top of bond0 as a native VLAN.
Then we just added our VLANs on top of bond0 and voila, all your interfaces benefit from the 10G NICs.
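On XCP-ng the rough CLI steps look something like this (just a sketch, the uuids are placeholders you'd look up first, and the same can be done from XO's network view):
xe pif-list params=uuid,device,VLAN,network-name-label
xe network-create name-label=bond0-network
xe bond-create network-uuid=<bond-network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid>
xe host-management-reconfigure pif-uuid=<bond0-pif-uuid>   # move the management interface onto the bond
xe pool-vlan-create pif-uuid=<bond0-pif-uuid> network-uuid=<vlan-network-uuid> vlan=20   # tagged VLAN networks on top of the bond
-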
RE: 10gb backup only managing about 80Mb
@olivierlambert said in 10gb backup only managing about 80Mb:
I would have asked the same question
Great minds and all that, you know
@utopianfish Check if you have any power options in the BIOS regarding "power saving" vs "performance" modes that you can change. That could make a big difference as well.
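You can also check from dom0 which governor the CPUs are currently running with, something like this (assuming Xen is in control of frequency scaling; xenpm ships with XCP-ng):
xenpm get-cpufreq-para | grep -i governor   # current scaling governor per CPU
xenpm get-cpuidle-states | head             # C-state info, deep C-states can also add latency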
-
RE: 10gb backup only managing about 80Mb
@utopianfish said in 10gb backup only managing about 80Mb:
@olivierlambert ok here's a bit from the log:
Start: 2025-09-03 12:00
End: 2025-09-03 12:00
Duration: a few seconds
Size: 624 MiB
Speed: 61.63 MiB/s
Start: 2025-09-03 12:00
End: 2025-09-03 12:00
so other jobs are showing anywhere between 25 to about 80 MiB/s
What CPU are you using? We saw about the same speeds on our older Intel Xeons at 2.4 GHz, and when we switched to newer Intel Xeon Gold at 3 GHz the speeds increased quite a bit; we're now seeing around 110-160 MiB/s after migrating the XO VM.
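If you want to compare, the CPU model and clocks are easy to check from dom0, e.g.:
grep "model name" /proc/cpuinfo | sort -u
lscpu | grep -i mhz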
-
RE: Pre-Setup for Migration of 75+ VM's from Proxmox VE to XCP-ng
Welcome to the community @cichy!
Just out of curiosity, why are you migrating from Proxmox to XCP-ng? Are you ex-VMware?
We used both VMware and XCP-ng for a long time, and XCP-ng was the obvious alternative for us for workloads that we didn't want in our VMware environment, mostly because of the shared storage support and the general similarities. -
RE: Windows Server not listening to radius port after vmware migration
@acebmxer said in Windows Server not listening to radius port after vmware migration:
After migrating our Windows server that hosts our Duo Proxy manager, we're having an issue.
[info] Testing section 'radius_client' with configuration:
[info] {'host': '192.168.20.16', 'pass_through_all': 'true', 'secret': '*****'}
[error] Host 192.168.20.16 is not listening for RADIUS traffic on port 1812
[debug] Exception: [WinError 10054] An existing connection was forcibly closed by the remote host
After the migration I did have to reset the IP address and I did install the Xen tools via Windows Update.
Any suggestions? I am thinking I may have the same issue if I spin up the old VM, as the VMware tools were removed, which I think affected that NIC as well...
On your VM that runs the Duo Auth Proxy service, check if the service is actually listening on the external IP or if it's just listening on 127.0.0.1.
If it's just listening on 127.0.0.1 you can try to repair the Duo Auth Proxy service; take a snapshot before doing so.
Also, if you're using encrypted passwords in your Duo Auth Proxy configuration you probably need to re-encrypt them, just a heads up, since I just had to do so after migrating one of ours.
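A quick way to see what it's actually bound to on the Windows side (1812/udp is the standard RADIUS auth port):
netstat -ano -p UDP | findstr 1812
If that only shows 127.0.0.1:1812 and not 0.0.0.0:1812 (or the server's own IP), the proxy isn't reachable from the network.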
Edit:
Do you have the "interface" option specified in your Duo Auth Proxy configuration? -
RE: High availability - host failure number
Think of it like this:
If you have 4 hosts, each host is 25% of the total capacity - how much of that do you want to reserve in case of a failed host?
Personally, I'd set the number to 1 host (25%), because that means I'm able to use 3 hosts and the 4th host's resources are reserved in case of a failure.
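As a made-up example: 4 hosts with 256 GB RAM each gives the pool 1 TB; reserving 1 host (25%) means sizing your VMs to fit in roughly 768 GB, so any single host can fail and HA can still restart everything on the remaining three.
-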
RE: Sdn controller and physical network
@blackliner said in Sdn controller and physical network:
@nikade How do you "pair" the XCP-ng SDN with your routing setup?
You can't/don't, you'll have to set up each private network on the VyOS router and then have the VM private network routed through it manually.
For example, if you have private network 1 with subnet 192.168.1.0/24 you'd have to add this network to the VyOS router and assign 192.168.1.1/24 on the router.
Then set 192.168.1.1 as the default gateway in the VMs which use this network.
Then you'll set up OSPF or BGP on the VyOS router manually with your upstream border/core-router or firewall. If the subnet is a private subnet you'll also need to set up NAT somewhere before it reaches the internet, to NAT traffic from 192.168.1.0/24.
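As a rough sketch of the VyOS side (1.3-style syntax; eth1 facing the private network and eth0 as the uplink are just placeholders):
set interfaces ethernet eth1 address '192.168.1.1/24'
set protocols ospf area 0 network '192.168.1.0/24'
set nat source rule 100 source address '192.168.1.0/24'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 translation address 'masquerade'
Plus whatever OSPF/BGP peering you already run towards the upstream router or firewall.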
-
RE: Sdn controller and physical network
You would need a router within that private VLAN which also has a leg in an external network. Something needs to route between the private network and the external network, with OSPF or BGP.
We do about the same, with VyOS, and it works pretty well.
-
RE: Issue with SR and coalesce
@tjkreidl said in Issue with SR and coalesce:
@nikade I'm still wondering if one of the hosts isn't connected to that SR properly. Re-creating the SR from scratch would do the trick, but it's a lot of work shuffling all the VMs to different SR storage. Might be worth it, of course, if it fixes the issue.
Yeah maybe, but I think there would be some kind of indication in XO if the SR wasn't properly mounted on one of the hosts.
Let's see what happens, it's weird indeed that it's not shown.
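One quick check from the CLI (with your SR uuid) is whether the PBD is plugged on every host:
xe pbd-list sr-uuid=<sr-uuid> params=host-name-label,currently-attached
If any host shows currently-attached: false, that would be the one to look at.
-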
RE: Issue with SR and coalesce
@Byte_Smarter said in Issue with SR and coalesce:
Maybe I am reading this wrong, but the SR is not there in a mount? But it is also viewable and working in XO and lists usage and all that?
It should definitely be listed here, I'd start over and see if it shows afterwards.
Remember to fully destroy and remove the SR (which will obviously remove all data) and then re-create it to make sure there's no weird stuff left.
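Roughly, from the CLI, that would be something like this (a sketch; again, sr-destroy wipes all data on the SR, so only once the VMs are moved off):
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-name-label   # find the PBDs
xe pbd-unplug uuid=<pbd-uuid>                               # unplug on every host
xe sr-destroy uuid=<sr-uuid>                                # destroys the SR and all VDIs on it
Then re-create it from XO as usual.
-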
RE: Issue with SR and coalesce
Something feels wrong, the log says there are 23 VDIs to be deleted on the SR with uuid 6eb76845-35be-e755-4d7a-5419049aca87, but you say there are no snapshots.
It doesn't add up, or am I missing something? How many VDIs do you have on the SR?
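You can also list what's actually on it from the CLI, e.g.:
xe vdi-list sr-uuid=6eb76845-35be-e755-4d7a-5419049aca87 params=uuid,name-label,is-a-snapshot
That should show whether those 23 are snapshot leftovers or something else.
-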
RE: Issue with SR and coalesce
That's weird indeed, I wonder what those 23 VDIs are then.
What is the uuid of this new SR? 6eb76845-35be-e755-4d7a-5419049aca87?
I think you can show them from the CLI with the following command:
xe sr-list params=uuid,name-label
-
RE: Issue with SR and coalesce
@Byte_Smarter said in Issue with SR and coalesce:
Mar 15 04:06:41 ops-xen2 SMGC: [19269] Found 23 VDIs for deletion:
This is what makes me unsure, are you sure there are no snapshots on any VMs?
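An easy way to double-check from the pool master:
xe snapshot-list params=uuid,name-label,snapshot-of
If that comes back empty, the 23 VDIs are something else (orphans or old base copies) rather than snapshot leftovers.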
-
RE: Issue with SR and coalesce
@Byte_Smarter said in Issue with SR and coalesce:
@tjkreidl I am not sure if you saw my earlier post, I have several TB of space
Are you sure multipathing is disabled on all hosts in the pool?
Also, could you share a larger portion of /var/log/SMlog?
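You can verify the multipathing part quickly from dom0 on each host, e.g.:
multipath -ll
If multipathing is really disabled, that should come back empty on every host.
-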
RE: Issue with SR and coalesce
As @dthenot replied, you need to check /var/log/SMlog on all of your hosts to see which one it is failing on and why.
If the storage filled up before this started to happen, my guess is that something is corrupted, and if that's the case you might have to clean up manually.
I've had this situation once and got help from XOA support; they had to manually clean up some old snapshots, and after doing so we triggered a new coalesce (a rescan of the storage) which was able to clean up the queue.
Until that's finished I wouldn't run any backups, since that might cause more problems and also slow down the coalesce process.
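Once the manual cleanup is done, the rescan/coalesce part is basically (with your SR uuid):
xe sr-scan uuid=<sr-uuid>
and then you can follow the garbage collector in the log, e.g.:
grep SMGC /var/log/SMlog | tail -50
-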
RE: Rolling pool update failed to migrate VMs back
We've also experienced trouble almost every time we've updated our pools, ever since the old XenServer days, and Citrix kind of recommended "manual intervention" because there was no mechanism to check which hosts were suitable before a VM was migrated.
I think there has been a lot of work done in XOA to handle this, though, but I might be mistaken; we just ended up re-installing our hosts and setting up a new pool which we then live migrate our VMs over to, and scrap the old ones.
VMware has some logic which will try to balance the load between the hosts, and if you have DRS it will even balance your hosts automatically during runtime.
I'm pretty sure XOA has this logic as well, but XCP-ng Center definitely doesn't, so avoid it as much as possible. -
RE: Onboarding + Support timezones
I don't think that's going to be an issue with Vates, they have people in different parts of the world and they're very helpful.
Reach out to them and link this forum thread, I'm sure they will be able to get you going even though you're in Australia.