just to #brag

Deploy of the worker is okay (one big Rocky Linux with default settings: 6 vCPU, 6Gb RAM, 100Gb VDI)
first backup is fine, with decent speed! (to an XCP-hosted S3 MinIO)
will keep testing

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of clients by API (from switch VLANs to firewall public IPs, and automatic VM deployment).
This needs a sticker "Vates Inside"
#vent
I did stick to version: 1 in my working configuration

Had to rename my "Ethernet 2" NIC to Ethernet2, without the space.
You have to use the exact NIC name from the template for this to work.
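In case it helps someone, here is a minimal sketch of what such a version: 1 network config can look like, assuming this refers to a cloud-init/Cloudbase-Init style config where the adapter name has to match the Windows NIC name exactly; addresses are placeholders, and I'm just generating the YAML with Python here:

```python
# Minimal sketch of a "version: 1" network config, assuming a cloud-init /
# Cloudbase-Init style setup where "name" must match the Windows adapter name
# exactly (here "Ethernet2", no space). Addresses below are placeholders.
import yaml  # PyYAML

network_config = {
    "version": 1,
    "config": [
        {
            "type": "physical",
            "name": "Ethernet2",  # must match the template's NIC name exactly
            "subnets": [
                {
                    "type": "static",
                    "address": "192.168.10.50/24",
                    "gateway": "192.168.10.1",
                    "dns_nameservers": ["192.168.10.1"],
                }
            ],
        }
    ],
}

print(yaml.safe_dump(network_config, sort_keys=False))
```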
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But that was it: in TCP mode, it's working. I quickly set up a TCP input, and voilà.
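If someone wants to sanity-check a Graylog TCP input before pointing hosts at it, here's a tiny sketch that just opens a TCP socket and sends one syslog-style line; host and port are placeholders for your own input:

```python
# Quick reachability test for a Graylog TCP input: open a TCP socket and send
# one newline-terminated syslog-style message. Host and port are placeholders.
import socket
import time

GRAYLOG_HOST = "graylog.example.lan"  # placeholder
GRAYLOG_PORT = 1514                   # placeholder, whatever your TCP input listens on

timestamp = time.strftime("%b %d %H:%M:%S")
message = f"<134>{timestamp} test-host xcp-test: hello from the TCP input test\n"

with socket.create_connection((GRAYLOG_HOST, GRAYLOG_PORT), timeout=5) as sock:
    sock.sendall(message.encode("utf-8"))
print("sent, check the input in Graylog")
```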

@MK.ultra I don't think so,
it's working without it for me.
@tmk hi !
Many thanks, your modified Python file did the trick, my static IP address is now working as intended.
I can confirm this is working on Windows 2025 Server as well.
finally understood the network problem...
If you have a MAC address match in netplan,

Veeam restores the VM with a NEW MAC address,

so either you delete the match, or set the right MAC on the VIF...
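For the "delete the match" option, here's a rough sketch (assuming PyYAML and the usual /etc/netplan/*.yaml layout, review before running) that strips the macaddress match so netplan binds by interface name again:

```python
# Rough sketch: strip "match: macaddress:" stanzas from netplan configs so the
# interface is matched by name again after a restore changed the MAC.
# Assumes PyYAML and the standard /etc/netplan/*.yaml layout; review before use.
import glob
import yaml

for path in glob.glob("/etc/netplan/*.yaml"):
    with open(path) as f:
        conf = yaml.safe_load(f) or {}
    ethernets = (conf.get("network") or {}).get("ethernets") or {}
    changed = False
    for name, iface in ethernets.items():
        match = iface.get("match") if isinstance(iface, dict) else None
        if match and "macaddress" in match:
            del match["macaddress"]
            if not match:
                del iface["match"]
            changed = True
            print(f"{path}: removed macaddress match on {name}")
    if changed:
        with open(path, "w") as f:
            yaml.safe_dump(conf, f, default_flow_style=False)
# then: sudo netplan apply
```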
ok, installed pwsh 7

and it's working 

@florent okaaaay, I was taking it the wrong way.
I managed to have one proxy see two remotes as intended, thanks to your advice.
I need to do some routing (DC1 and DC2 are in two distant datacenters, on separate subnets) so that each proxy sees the right route to each distant remote.
Thank you !
@nikade will share some automation screenshots of our current developments as soon as they are proofed
we're building on top of APIs, all custom settings
@nikade we had a similar setup on a VMware solution, OVH bare metal hosted in France.
but you know... Broadcom.
our main company is Toolbox; we decided to migrate onprem and cloud clients to a full Vates stack hosted locally on the island this time, and split the hosting into Cloudbox, a sister company of Toolbox.
many clients do not want to be hosted off the island because of the latency; 250ms down to 10ms is quite an upgrade for some situations.
and disaster recovery of 10Tb of VM infrastructure from OVH to Reunion gives you a high RTO; many clients had their external backups on our OVH servers. From days to hours now if needed.
@nikade still early dev but here is what is actually working

4 clicks ! 
work in progress: DHCP server/OpenVPN server per tenant, dedicated outbound NAT IP from an available pool per tenant

and an XO-like interface for resellers to manage their clients (this is a global admin view, with all internal, client, and reseller tenants available)
pushing VMs into their reserved VLAN
starting/stopping VMs
view-only access to their backup logs (not possible with XOA ACLs/self-service resources without being an admin)
a reseller can manage its own tenant and its clients' tenants; firewall rules are made so that the reseller can access all its client tenants (if they want to set up their own monitoring, for example, or mutualised services for their clients)
work in progress: replicate XOA self-service-like options, but with custom granularity. VM deployment with Pulumi is mostly finished; we need to better manage which templates are available to each client/reseller
there will be a global admin view for us, a reseller view for the reseller tenant + its clients' tenants, and a client view limited to its own tenant
spinning up a tenant with zero-to-ping in less than 5 minutes is the goal !
@bdenv-r are you trying XOSTOR on 8.2 or 8.3?
I had many performance problems, and tap-drive locking my VDIs, on 8.2 XOSTOR,
would like to know if XOSTOR on 8.3 still has issues
@olivierlambert and hurricane season from November to March 
ha, we have an active volcano on the island too 
@nikade so our 10G WDM is ten times your price (but redundancy included :')
check here for a cool map
https://www.submarinecablemap.com/
@nikade there are many local datacenter operators (ZEOP/OMEGA1/SFR/IDOM/CANAL+/FREE)
I chose SFR because they also have connectivity up to Mayotte Island (look it up too), where we have clients that will benefit from our hosting solution on Reunion Island.
Many submarine cables reach us (the oldest one is SAFE: South Africa - Far East, going to Asia) and some new submarine cables to Africa.
Fiber connectivity exists, but it's not cheap 
for the 2x 10Gb paths between the nodes, count on 3K€/month (no internet, just data)
100Mb symmetric internet connectivity from the datacenter, with good SLAs: 500€/month
Real challenge to be in the middle of an ocean.
@Henrik 1Tb RAM per host on production and replication; storage is 25Gb fiber, fully multipathed iSCSI (so yeah... thick-provisioned lvmoiscsi...
the SANs are a thin storage backend; it's reliable and redundant, but lvmoiscsi is storage hungry, so you need good monitoring, see the quick check sketched below!).
one host on each node has local RAID5 SSD storage, where we put our own management & automation VMs; clients are on shared storage
S3 MinIOs on iSCSI as remotes, cross-backed-up between the two nodes with xoproxies on each end.
designed to be fully resilient, with as few SPOFs as possible
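On the monitoring point, here is a rough sketch of the kind of check we mean, shelling out to the standard xe CLI on a host and flagging SRs above a usage threshold (the 80% threshold is just an example):

```python
# Sketch: warn when an SR's physical utilisation crosses a threshold, using the
# standard xe CLI on an XCP-ng host. The 80% threshold is an arbitrary example.
import subprocess

THRESHOLD = 0.80  # warn above 80% physical utilisation

def xe(*args):
    return subprocess.check_output(["xe", *args]).decode().strip()

sr_uuids = [u for u in xe("sr-list", "params=uuid", "--minimal").split(",") if u]
for uuid in sr_uuids:
    name = xe("sr-param-get", f"uuid={uuid}", "param-name=name-label")
    size = int(xe("sr-param-get", f"uuid={uuid}", "param-name=physical-size"))
    used = int(xe("sr-param-get", f"uuid={uuid}", "param-name=physical-utilisation"))
    if size <= 0:
        continue  # e.g. udev/ISO SRs can report 0 or -1
    ratio = used / size
    flag = "WARN" if ratio >= THRESHOLD else "ok"
    print(f"[{flag}] {name}: {ratio:.0%} used ({used}/{size} bytes)")
```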
@umbradark said in XCP-ng 8.3 and Dell R660 - crash during boot, halts remainder of installer process (bnxt_en?):
Blacklisting the driver during install does not bypass the issue — the installer still fails to proceed without the NIC disabled in BIOS.
any idea if there is a problem with BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller ?
I have two clusters of 3 hosts to upgrade to 8.3 soon
@olivierlambert here is the replica node, in a datacenter 10km away (two 10Gb paths between the two nodes)

smaller setup here, just to host backup copies of the production node and replica/DR VMs.
the whole setup consists of 7 VMS Enterprise hosts

Netgate 8300 max firewalls top of rack.
we are an MSP (and Vates partner, Tier2 soon to be Tier4!) providing full hosting or hybrid onprem/cloud to our clients.
some other services: web hosting with the Plesk platform, Veeam Cloud Connect, security/firewalling services, centralised monitoring with Centreon, and, as soon as we manage to connect onprem XCP to our cloud xoproxies, a full replica solution for onprem XCP servers.
on the bare metal, XCP-ng 8.3, with value added through full automation of tenant creation/administration/documentation using various APIs (check the CRM for clients tagged to be admins of their tenant, get the VLANs from there, create them in XCP pools/switches/pfSense firewalls, create firewall rules and limiters, create an OpenVPN server, spin up VMs in the client tenant, automatic NetBox documentation on top of the XOA plugin, enjoy!)
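To give an idea of the flow, a very rough sketch of that pipeline; every function here is a hypothetical stub standing in for the real CRM, switch, XO, pfSense and NetBox API calls in our tooling:

```python
# Very rough sketch of the tenant provisioning flow described above. Every
# function is a hypothetical stub standing in for the real CRM, switch,
# Xen Orchestra, pfSense and NetBox API calls used by the actual tooling.

def crm_get_tenant(client_id):
    # stub: in reality, query the CRM for a client tagged as tenant admin
    return {"name": client_id, "vlan": 1234, "nat_ip": "203.0.113.10",
            "vms": [{"name": f"{client_id}-vm1", "template": "debian-12"}]}

def create_vlan_on_switches(vlan):
    print(f"[switches] create VLAN {vlan}")                 # stub: switch API

def create_pool_network(vlan):
    print(f"[xcp-ng]   create network for VLAN {vlan}")     # stub: XO / pool API

def configure_pfsense(vlan, public_ip):
    print(f"[pfsense]  rules, limiters, NAT {public_ip}, OpenVPN for VLAN {vlan}")  # stub

def deploy_vm(spec, vlan):
    print(f"[deploy]   {spec['name']} from {spec['template']} on VLAN {vlan}")      # stub: Pulumi in our case

def document_in_netbox(tenant):
    print(f"[netbox]   document tenant {tenant['name']}")   # stub: NetBox API

def provision_tenant(client_id):
    tenant = crm_get_tenant(client_id)
    create_vlan_on_switches(tenant["vlan"])
    create_pool_network(tenant["vlan"])
    configure_pfsense(tenant["vlan"], tenant["nat_ip"])
    for vm in tenant["vms"]:
        deploy_vm(vm, tenant["vlan"])
    document_in_netbox(tenant)
    print(f"tenant {tenant['name']}: zero-to-ping done")

provision_tenant("acme")
```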
we are in pre-production and should be on the market in November, currently migrating OVH VMs to these servers.
the VATES stack is the best solution to integrate fully into our vision of providing VMs and services to clients in an efficient way.
we left VMware; we had ESXi hosts in OVH datacenters in France, but they were 10,000km and 250ms away from our end users.
for those who wonder, we are located on Reunion Island, in the Indian Ocean, a French overseas territory.