just to #brag

Deploying the worker went fine (one big Rocky Linux VM with default settings: 6 vCPUs, 6 GB RAM, 100 GB VDI).
First backup went fine too, with decent speed! (to an XCP-ng hosted MinIO S3 target)
will keep testing

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via APIs (from switch VLANs to firewall public IPs, plus automatic VM deployment).
This needs a "Vates Inside" sticker.
#vent
@sluflyer06 and I wish HPE would be added too 
I stuck with version: 1 in my working configuration.

Had to rename my "Ethernet 2" NIC to "Ethernet2", without the space.
You have to use the exact NIC name from the template for this to work.
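For reference, a minimal sketch of the kind of version: 1 network config I mean; the NIC name must match the template's exactly, and the addresses here are placeholders:

```yaml
version: 1
config:
  - type: physical
    name: Ethernet2            # must match the template's NIC name exactly (no space)
    subnets:
      - type: static
        address: 192.168.1.50/24
        gateway: 192.168.1.1
        dns_nameservers:
          - 192.168.1.1
```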
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But guys, that was it: in TCP mode it's working. I quickly set up a TCP input, and voilà.
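For anyone else hitting this: in classic rsyslog syntax, a single @ forwards over UDP and a double @@ over TCP, so switching the hosts is a one-character change (hostname and port below are placeholders):

```
# /etc/rsyslog.conf — forward all logs to Graylog
# *.* @graylog.example.com:514     # UDP (single @)
*.* @@graylog.example.com:514      # TCP (double @)
```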

@MK.ultra I don't think so;
it's working without it for me.
@tmk hi!
Many thanks, your modified Python file did the trick; my static IP address is now working as intended.
I can confirm this is working on Windows 2025 Server as well.
@Bastien-Nollet said in Full backup - new long-retention options:
Yes, a backup is kept if it matches one of the retention criteria, either the schedule's retention or the LTR. (the backup is not duplicated, we just check for both criteria to know if we should keep the backup or not)
Could we have an option to choose which LTR day of the month to keep?
And even for weekly, which weekday?
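If I understand that answer right, the decision is just an OR over the criteria; here is a minimal runnable sketch of that logic (the criteria and names are my own illustration, not XO's actual code):

```python
from datetime import datetime

def should_keep(backup_time: datetime, criteria) -> bool:
    """Keep a backup if it matches ANY retention criterion
    (schedule retention OR LTR); the backup itself is never duplicated."""
    return any(rule(backup_time) for rule in criteria)

# Hypothetical criteria: schedule keeps the last 7 days, LTR keeps the
# 1st of each month plus Mondays for weeklies.
now = datetime(2024, 11, 15)
criteria = [
    lambda t: (now - t).days < 7,   # schedule retention
    lambda t: t.day == 1,           # monthly LTR
    lambda t: t.weekday() == 0,     # weekly LTR (Monday)
]
print(should_keep(datetime(2024, 11, 4), criteria))  # True: it's a Monday
```

Being able to pick which day of the month/weekday those LTR rules use is exactly the knob we're asking for.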
Finally understood the network problem...
If you have a MAC address match in netplan,

Veeam restores the VM with a NEW MAC address,

so either you delete the match, or set the matching MAC on the VIF...
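For illustration, the kind of netplan match that breaks after a restore (interface name and MAC are placeholders):

```yaml
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "aa:bb:cc:dd:ee:ff"   # stale once Veeam assigns a new MAC
      dhcp4: true
# Fix: delete the match block, or put this MAC back on the restored VIF.
```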
OK, installed PowerShell 7,

and it's working.

@florent okaaaay, I was taking it the wrong way.
I managed to get one proxy to see both remotes as intended, thanks to your advice.
Now I need to set up some routing (DC1 and DC2 are two distant datacenters on separate subnets) so that each proxy has the right route to each distant remote.
Thank you!
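Concretely, that routing is just a static route on each proxy; a sketch with hypothetical subnets (10.1.0.0/24 in DC1, 10.2.0.0/24 in DC2) and an inter-DC gateway:

```
# On the DC1 proxy: reach the DC2 remote via the inter-DC link
ip route add 10.2.0.0/24 via 10.1.0.254
# (persist it in the distro's network config so it survives a reboot)
```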
@Davidj-0 I'll ping back here whenever the app is ready to be tested.
Access will be via MS O365 auth.
@nikade it's still early dev, but here is what is actually working:

4 clicks!
Work in progress: DHCP server/OpenVPN server per tenant, plus a dedicated outbound NAT IP from an available pool per tenant.

And an XO-like interface for resellers to manage their clients (this is a global admin view; all internal, client, and reseller tenants are visible):
pushing VMs into their reserved VLAN
starting/stopping VMs
view-only access to their backup logs (not possible with XOA ACLs/self-service resources without being an admin)
A reseller can manage its own tenant and its clients' tenants; firewall rules are set up so that the reseller can access all of its client tenants (if it wants to run its own monitoring, for example, or mutualised services for its clients).
Work in progress: replicating XOA self-service-like options, but with custom granularity. VM deployment with Pulumi is mostly done; we still need to better manage which templates are available to each client/reseller.
There will be a global admin view for us, a reseller view covering the reseller's tenant plus its clients' tenants, and a client view of its own tenant.
Spinning up a tenant from zero to ping in less than 5 minutes is the goal! A rough sketch of that flow is below.
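Roughly, the zero-to-ping flow chains one API call per layer; here is a runnable toy sketch where every helper is a stub standing in for a real API (XO, switches, pfSense, Pulumi, NetBox). The names and VLAN scheme are illustrative, not any vendor's:

```python
# Toy zero-to-ping flow: every helper is a stub for one real API call.

def allocate_vlan(client: str) -> int:
    return 100 + hash(client) % 900              # stub: reserve a VLAN ID

def create_network(vlan: int) -> None:
    print(f"XCP-ng: pool-wide network on VLAN {vlan}")

def push_vlan_to_switches(vlan: int) -> None:
    print(f"switches: trunk VLAN {vlan} to the hosts")

def configure_firewall(client: str, vlan: int) -> None:
    print(f"pfSense: interface, rules, limiters, NAT IP, OpenVPN for {client}")

def deploy_vm(client: str, vlan: int) -> str:
    print(f"Pulumi: deploy VM for {client} on VLAN {vlan}")
    return f"vm-{client}"

def document_in_netbox(client: str, vlan: int, vm: str) -> None:
    print(f"NetBox: record {vm} and VLAN {vlan}")

def provision_tenant(client: str) -> str:
    vlan = allocate_vlan(client)
    create_network(vlan)
    push_vlan_to_switches(vlan)
    configure_firewall(client, vlan)
    vm = deploy_vm(client, vlan)
    document_in_netbox(client, vlan, vm)
    return vm

provision_tenant("acme")
```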
@nikade I'll share some automation screenshots of our current developments as soon as they are polished.
We're building everything on top of the APIs, with all-custom settings.
@olivierlambert here is the replica node, in a datacenter 10 km away (with two separate 10 Gb paths between the two nodes).

Smaller setup here, just hosting backup copies from the production node plus replica/DR VMs.
The whole setup consists of 7 VMS Enterprise hosts.

Netgate 8300 MAX firewalls at the top of the rack.
We are an MSP (and a Vates partner, Tier2 soon to be Tier4!) providing full hosting or hybrid on-prem/cloud to our clients.
Some other services: web hosting on a Plesk platform, Veeam Cloud Connect, security/firewalling services, centralised monitoring with Centreon, and, as soon as we manage to connect on-prem XCP-ng hosts to our cloud xoproxies, a full replica solution for on-prem XCP-ng servers.
On the bare metal, XCP-ng 8.3, with value added through full automation of tenant creation/administration/documentation via diverse APIs (check the CRM for clients tagged as admins of their tenant, get the VLANs from there, create them in the XCP-ng pools/switches/pfSense firewalls, create the firewall rules and limiters, create the OpenVPN server, spin up VMs in the client tenant, automatic NetBox documentation on top of the XOA plugin, enjoy!). The NetBox step is sketched below.
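The NetBox documentation step, for instance, is tiny once the rest is in place; a sketch with pynetbox, using a placeholder URL/token and our own naming convention:

```python
import pynetbox

# Placeholder URL and token; the "tenant-<client>" naming is just our convention.
nb = pynetbox.api("https://netbox.example.com", token="REDACTED")

def document_tenant_vlan(client: str, vid: int):
    """Record the tenant VLAN in NetBox so the documentation never drifts."""
    return nb.ipam.vlans.create(vid=vid, name=f"tenant-{client}", status="active")

document_tenant_vlan("acme", 123)
```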
We are in pre-production and should be on the market in November; we are currently migrating OVH VMs to these servers.
The Vates stack is the best solution to integrate fully into our vision of providing VMs and services to clients efficiently.
We left VMware: we had ESXi hosts in OVH datacenters in France, but they were 10,000 km and 250 ms away from our end users.
For those who wonder, we are located on Reunion Island, in the Indian Ocean, a French overseas territory.
@olivierlambert when we were planning our infrastructure, a Vates engineer told us to avoid Broadcom NICs and to go with Intel or NVIDIA/Mellanox.
We bought Mellanox.
This is the way.
@MajorP93 the stress of VM migration XD
Good luck with your V2Vs!