just to #brag

Deploy of the worker is okay (one big Rocky Linux VM with default settings: 6 vCPUs, 6 GB RAM, 100 GB VDI)
First backup is fine, with decent speed! (to an XCP-ng-hosted S3 MinIO)
will keep testing

Hey all,
We are proud of our new setup: a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full client automation via API (from switch VLANs to firewall public IPs, plus automatic VM deployment).
This needs a sticker "Vates Inside"
#vent
@sluflyer06 and I wish HPE would be added too 
I did stick to version: 1 in my working configuration.

Had to rename my "Ethernet 2" NIC to Ethernet2, without the space.
You have to use the exact template NIC name for this to work.
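For context, here is a minimal sketch of what such a `version: 1` network config can look like; the address and gateway below are illustrative placeholders, not values from this thread:

```yaml
# cloud-init network config, version 1 schema (illustrative values)
version: 1
config:
  - type: physical
    # Must match the template's NIC name exactly ("Ethernet2", no space)
    name: Ethernet2
    subnets:
      - type: static
        address: 192.0.2.10/24   # example address (RFC 5737 documentation range)
        gateway: 192.0.2.1
```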
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly), or Y (yearly) in the UI to signal this.

That's pure cosmetics indeed, but practical.
@Forza I didn't try, as my default Graylog Input was UDP and worked with the hosts...
But guys, that was it: in TCP mode, it's working. Rapidly set up a TCP input, and voilà.

@MK.ultra I don't think so,
it's working without it for me.
@tmk hi !
Many thanks, your modified Python file did the trick, my static IP address is now working as intended.
I can confirm this is working on Windows Server 2025 as well.
@Bastien-Nollet where is this "offline backups" check option ?
I'm aware of snapshot mode/offline, but not offline backups ?
EDIT: my bad, I found it, it's only available for FULL BACKUPS, not DELTA BACKUPS

@Bastien-Nollet Hi, here is the feedback: having two sequences for two schedules keeps chain continuity, and behaves the same as two enabled schedules on one backup job.

Today's backup was indeed a delta, and the health check ran as intended.
Question is answered !
@Bub haha great 
Yup, in the process you probably had a successful mount test in the CLI, and it then blocked in the XOA remote config because the mount already existed.
In your first post it was: "stderr": "mount.nfs: failed to prepare mount: Operation not permitted
and later on you had "mount.nfs: mount(2): Device or resource busy"
so keep this unmount command handy, just in case
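A quick sketch of the check/cleanup sequence discussed above; the mount path is the one from this thread, adjust it to your setup:

```shell
#!/bin/sh
# Path XOA uses for this remote (from the thread; adjust to yours)
MNT=/run/xo-server/mounts/LAB-NFS

# List current NFS mounts to see whether the path is already mounted
mount -t nfs

# If it is still mounted from a manual CLI test, unmount it first;
# otherwise XOA's own mount attempt fails with "Device or resource busy"
if mountpoint -q "$MNT"; then
    umount "$MNT"
fi
```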
new VM wizard 
1- Name it

2- Prep it

You choose the available pool/host/SR/network, and then the template. All choices are driven top to bottom.
All filtering here is based on tags in XOA; I hope XO 6 will not break it.
But it was also the easiest way to differentiate "templates" that are Hub templates from "templates" that are VM config models and need an ISO.
BIOS mode is also automagically selected by tags...
3- Customize it

Dynamic cloud-config file creation: you fill in the form, and it updates the config.
The config is manually editable if you want.
4- Deploy it

Final check before Pulumi does its magic.
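To illustrate the "Customize it" step, here is the kind of cloud-config a form like that could generate; every value below is made up for illustration, not taken from the actual wizard:

```yaml
#cloud-config
# Hypothetical output of the "Customize it" form (all values illustrative)
hostname: client-vm-01
users:
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```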
To be done:
The beta opens in a couple of weeks max, I think!
@Bub why 192.168.45.20 in the REMOTE configuration?!
In the CLI you point to 192.168.221.20.
Get out of /run/xo-server/mounts/LAB-NFS,
then could you run: mount -t nfs
just to see if there isn't ALREADY a mount point?
If there is, just do: umount /run/xo-server/mounts/LAB-NFS
@Bub I agree, all checks are green 
could you run: ip a
on the XOA?
@Bub perhaps it's permission-related, or NFS version/mapping related?
Could you screenshot the NFS server-side configuration, if possible?
@Bub mmm put -v before the -t !
@Bub -v, --verbose says what is being done
just try -v?
@Mathieu +1
We have the same problem. Changing a bit of the MAC address of the replicated/DR'd VM seems to get the plugin to accept syncing again, but that's obviously not a good solution.
@Chico008 I think it's not a bug, it's a feature.