RE: Potential bug with Windows VM backup: "Body Timeout Error"
Happy to hear that there's a potential lead. I'm also happy I found this thread, so I can kick back and wait for Vates to fix it.
-
RE: Potential bug with Windows VM backup: "Body Timeout Error"
@MajorP93 I'm seeing this as well, I think the issue is related to communication between XO and XCP-ng.
I noticed that it doesn't seem to depend on the VDI size in our case, but rather on the latency between XO and XCP-ng, which are on different sites connected via an IPsec VPN.
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow Sounds like a solid plan, let me know when you're ready. I can spawn some VMs, use the web interface and "try" to break it.

-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow Sure, I can help you test some of the automations. What would you need done? Just "act" as a customer and set up some VMs and such?
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow Yeah I know, we're a VMware customer as well and the pricing increase was pretty rough.
I can imagine this is a big upgrade for your company, and since you already have the customers this will probably be a very nice upgrade for them too.
I can also guess that you were able to completely rebuild everything exactly how you wanted it, which is another big plus.
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow That's very impressive, you've done some really great work here.
I like the "private cloud" approach where you can have resellers under their own umbrella, but on your infrastructure. Do you already have customers on another platform, or why did you decide to make this big investment in time, hardware, colocation, fiber and everything else needed for this kind of project?
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow Sounds good, I'll follow this thread!
-
RE: VMs on OVH with Additional IP unable to be agile
The problem with those providers is that the additional IP is usually L3-routed to the physical host's IP.
A better solution would be to get a /29 subnet or similar, where the gateway is handled by the provider, and then configure that subnet on a specific VLAN which is available on all hosts where you want the VMs to be able to run. It's more work for the provider, since they have to set up the gateway IP within the /29 subnet on their router and then configure the VLAN on the switch ports towards your hosts, but it's the best way to make the VMs agile.
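To illustrate the idea, here is a rough sketch of how such a VLAN-backed network could be created on the XCP-ng side with the xe CLI (the UUIDs and the VLAN tag are placeholders, adjust them to whatever the provider actually assigns):

```
# Create a pool-wide network for the provider-routed /29 subnet
xe network-create name-label="provider-public-29"

# Attach it as a tagged VLAN on the matching physical PIF of every host in the pool
xe pool-vlan-create network-uuid=<network-uuid> pif-uuid=<eth-pif-uuid> vlan=100
```

The VMs then get a VIF on that network and are configured with an address from the /29 and the provider's gateway, so they can move freely between hosts without any rerouting on the provider's side.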
-
RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
@jmannik said in Unable to enable High Availability - INTERNAL_ERROR(Not_found):
@tjkreidl This hasn't been my experience so far, enabling HA has just enabled HA, no reboot needed.
@psafont I am patching all my hosts now, will do the above test packages on Sunday Night (it is Friday afternoon at the time of this post)
Correct, no reboot needed to enable/disable HA.
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow well you seem to have a nice setup, I am looking forward to more pictures!
I wish I could share ours, but I'm not allowed to.
-
RE: Racked today, entire hosting solution based on Vates stack
@olivierlambert said in Racked today, entire hosting solution based on Vates stack:
Obviously, you need some fun otherwise it would be boring

Haha yeah "fun"

-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow Cool, you have now taught me something new, again

I find it very interesting that there is a hosting business on these islands, I kind of expected everyone to use the cloud since it would be expensive to establish a datacenter presence there. Prices do seem expensive; I'm in Sweden and we have a lot of fiber and IP transit here.
We pay about €300 per month for 10G CWDM between our datacenters, €550 for redundant (2 paths). Distance is about 10-20 km. IP transit depends on the provider, we have 3 different ones and different "deals" from each of them. We mostly do 1G with a 100-200 Mbit/s commit, but from our main provider we have 10G with a 1G traffic commit for about €375 per month. That price is mainly because I know one of the guys who works there + we're a big customer of theirs.
For comparison, we pay about €300 for the other 1G with a 100 Mbit/s traffic commit...
-
RE: Racked today, entire hosting solution based on Vates stack
@Pilow That's a very nice setup, cool to see some real enterprise hardware for once in this thread

Also, thanks for informing me about Reunion Island, I had never heard of the place before and had to look it up on Google. How many datacenters can you choose from on the island? What's the connectivity like, is it very expensive with fiber and IP transit?
-
RE: Veeam backup with XCP NG
@acebmxer said in Veeam backup with XCP NG:
and when not accidentally routing the backups... save 6 min..

host

Amazing, that's really impressive. Backups have always taken a long time with XOA once you start to back up a lot of VMs, so I hope this will improve the overall backup performance.

-
RE: 10gb backup only managing about 80Mb
@tjkreidl I think the issue is that he's got no 10G switch, hence the direct connection

But you live and you learn, best would be to pick up a cheap 10G switch and make it right!
-
RE: 10gb backup only managing about 80Mb
@utopianfish I see, that explains a lot.
-
RE: 10gb backup only managing about 80Mb
@acebmxer said in 10gb backup only managing about 80Mb:
I could be wrong, but in the VMware world the management interface didn't transfer much data, if at all. It was only used to communicate with vSphere and/or the host. So no need to waste a 10gb port on something that only sees KBs worth of data.
Our previous server had 2x 1gb nics for management, 1x 10gb nic for network, 2x 10gb nics for storage and 1x 10gb nic for vMotion.
Tbh I do the same on our VMware hosts, 2x10G or 2x25G and then the management as a VLAN interface on that vSwitch, as well as the VLANs used for storage, VM traffic and so on.
I find it much easier to keep the racks clean if we only have 2 connections from each host rather than 4, since it adds up really fast and makes the rack impossible to keep nice and clean when you have 15-20 machines in it + storage + switches + firewalls and all the inter-connections with other racks, IP transit and so on.
Edit:
Except for vSAN hosts, where the vSAN traffic needs at least 1 dedicated interface, but those are the only exception.
-
RE: 10gb backup only managing about 80Mb
@utopianfish said in 10gb backup only managing about 80Mb:
@nikade i think the problem is it's using the mgmt interface to do the backup.. it's not touching the 10GB nics.. when i set it under Pools/Advanced/Backup to use the 10gb nic as default the job fails... setting it back to none the job is successful with a speed of 80 MiB/s.. so it's using the 1GB mgmt nic... how do i get the backups to use the dedicated 10gb link then?
May I ask why your management interface is not on the 10G NIC? There is absolutely no downside to that kind of setup.
We used this setup for 7 years on our Dell R630s without any issues at all. We had 2x10G NICs in our hosts and put the management interface on top of bond0 as a native VLAN.
Then we just added our VLANs on top of bond0 and voilà, all your interfaces benefit from the 10G NICs.
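For reference, a rough sketch of what that could look like with the xe CLI (UUIDs, addressing and VLAN tags are placeholders, and the exact steps depend on your pool):

```
# Create a network for the bond and bond the two 10G PIFs together
xe network-create name-label="bond0"
xe bond-create network-uuid=<bond-network-uuid> pif-uuids=<eth0-pif-uuid>,<eth1-pif-uuid>

# Give the bond PIF an IP and move the management interface onto it
xe pif-reconfigure-ip uuid=<bond-pif-uuid> mode=static IP=192.0.2.10 netmask=255.255.255.0 gateway=192.0.2.1
xe host-management-reconfigure pif-uuid=<bond-pif-uuid>

# Tagged VLANs (storage, VM traffic, ...) are then stacked on top of the bond
xe network-create name-label="storage-vlan20"
xe pool-vlan-create network-uuid=<storage-network-uuid> pif-uuid=<bond-pif-uuid> vlan=20
```

Untagged traffic on the bond carries management, and every tagged VLAN shows up as its own network you can attach VIFs to, so everything benefits from the 10G links.
-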
RE: 10gb backup only managing about 80Mb
@olivierlambert said in 10gb backup only managing about 80Mb:
I would have asked the same question

Great minds and all that, you know

@utopianfish Check if you have any power options in the BIOS for "power saving" vs "performance" modes. That could make a big difference as well.
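If you want to double-check from dom0 whether the CPUs are actually being held back by a power-saving governor, something like this might help (assuming the xenpm tool that ships with XCP-ng; what governors are available depends on your BIOS settings):

```
# Show the current frequency scaling governor and min/max frequencies per CPU
xenpm get-cpufreq-para | less

# Switch to the performance governor (not persistent across reboots)
xenpm set-scaling-governor performance
```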
-
RE: 10gb backup only managing about 80Mb
@utopianfish said in 10gb backup only managing about 80Mb:
@olivierlambert ok here's a bit from the log..
Start: 2025-09-03 12:00
End: 2025-09-03 12:00
Duration: a few seconds
Size: 624 MiB
Speed: 61.63 MiB/s
So other jobs are showing anywhere between 25 and about 80 MiB/s
What CPU are you using? We saw about the same speeds on our older Intel Xeons at 2.4 GHz, and when we switched to newer Intel Xeon Golds at 3 GHz the speeds increased quite a bit; we're now seeing around 110-160 MiB/s after migrating the XO VM.