just to #brag

Deploy of the worker is okay (one big Rocky Linux with default settings: 6 vCPU, 6 GB RAM, 100 GB VDI)
First backup is fine, with decent speed! (to an XCP-ng-hosted S3 MinIO)
Will keep testing

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via API (from switch VLANs to firewall public IPs, plus automatic VM deployment).
This needs a sticker "Vates Inside"
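For the curious, here is a rough sketch of the direction we are taking for the automation, not a finished implementation. It assumes the XO REST API at /rest/v0 with token-cookie authentication; the xoa.example.net address, the token, and the VLAN/firewall steps are placeholders for our own tooling.

```python
# Rough sketch of the automation client we are preparing (Python + requests).
# Assumptions: XO REST API reachable at https://xoa.example.net/rest/v0 and
# authenticated with a token passed in the "authenticationToken" cookie.
# The VLAN/firewall steps are placeholders for our own network tooling.
import requests

XO_URL = "https://xoa.example.net"   # placeholder XOA address
TOKEN = "xo-api-token"               # API token created in XO

session = requests.Session()
session.cookies.set("authenticationToken", TOKEN)

def check_xo_connection():
    """Sanity check: list VM names to confirm the API token works."""
    resp = session.get(f"{XO_URL}/rest/v0/vms",
                       params={"fields": "name_label,power_state"},
                       timeout=30)
    resp.raise_for_status()
    return [vm["name_label"] for vm in resp.json()]

def provision_client(client_name: str, vlan_id: int):
    """End-to-end flow we are automating; the network steps are outside XO."""
    # 1. create the client VLAN on the switches      (our switch API, not shown)
    # 2. allocate a public IP on the firewall        (our firewall API, not shown)
    # 3. deploy the client VM from a template via XO (exact route to confirm in the XO docs)
    print(f"would provision '{client_name}' on VLAN {vlan_id}")

if __name__ == "__main__":
    print(check_xo_connection())
```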
#vent
@sluflyer06 and I wish HPE would be added too 
I stuck with version: 1 in my working configuration.

Had to rename my "Ethernet 2" NIC to Ethernet2, without the space.
You have to use the exact NIC name from the template for this to work.
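For reference, here is roughly what my version: 1 network config looks like; the addresses are placeholders, and depending on how your tooling passes it you may or may not need a top-level network: wrapper. The key part is that name: matches the adapter name in the template exactly, Ethernet2 without the space.

```yaml
# Rough sketch of a version: 1 network config; addresses are placeholders.
# "name" must match the adapter name inside the template exactly.
version: 1
config:
  - type: physical
    name: Ethernet2
    subnets:
      - type: static
        address: 192.0.2.10/24
        gateway: 192.0.2.1
        dns_nameservers:
          - 192.0.2.53
```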
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly) or Y (yearly) to signal this in the UI.

That's purely cosmetic indeed, but practical.
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But guys, that was it. In TCP mode, it's working. Quickly set up a TCP input, and voilà.

@MK.ultra don't think so
it's working without it for me.
@tmk hi !
Many thanks, your modified Python file did the trick; my static IP address is now working as intended.
I can confirm this is working on Windows 2025 Server as well.
Hi,
Smart backups are wonderful for managing smart selection of the VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the TAGs back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular TAG" type of filter, hope I'm clear.

Perhaps something to add to XO 6?
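In the meantime, a rough workaround sketch, untested and assuming the XO REST API's filter and fields query parameters on the /vms collection (the xoa.example.net address and token are placeholders): pull the UUIDs of the VMs carrying a given tag, then cross-check them against the restore list by hand.

```python
# Workaround sketch (assumptions: XO REST API at /rest/v0, token-cookie auth,
# and "filter"/"fields" query parameters on the /vms collection).
import requests

XO_URL = "https://xoa.example.net"   # placeholder XOA address
TOKEN = "xo-api-token"

session = requests.Session()
session.cookies.set("authenticationToken", TOKEN)

def vms_with_tag(tag: str):
    """Return (uuid, name_label) pairs of VMs carrying the given tag."""
    resp = session.get(
        f"{XO_URL}/rest/v0/vms",
        params={"filter": f"tags:{tag}", "fields": "uuid,name_label,tags"},
        timeout=30,
    )
    resp.raise_for_status()
    return [(vm["uuid"], vm["name_label"]) for vm in resp.json()]

# Print the candidates to look up in the Backup > Restore view
for uuid, name in vms_with_tag("production"):
    print(uuid, name)
```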
@Bastien-Nollet where is this "offline backups" check option?
I'm aware of snapshot mode/offline, but not offline backups?
EDIT: my bad, I found it; it's only available for FULL BACKUPS, not DELTA BACKUPS.

@acebmxer the XO config goes to Vates, but not the pool metadata.
It doesn't cost anything to put a job on it, I guess.
So we wait for the Vates team to tell us why this verbose audit log on metadata,
or perhaps it IS the metadata, which would explain the fuss.
@acebmxer you need to go to Backup/Restore/Metadata;
it will try to list your metadata backups via an API call.
Then go back to Audit and you will see the audit log.
Mine is gigantic.
@acebmxer do you have AUDIT LOGS activated?
If so, could you check on your XOA whether you have the same from-earth-to-moon audit log on the metadata backup list?

Why do I have practically a full dump of all my infrastructure information in this log?
This one is a bug, I guess.
Perhaps it is linked to the infinite refresh I had?
EDIT: saved the log to the desktop in Notepad... 1.528 MB!

@acebmxer yeah I know... sometimes I have to REFRESH BACKUP LIST for normal VM backups, but this button doesn't even exist for metadata.
I'll let tomorrow's sequence execute, and report back if the metadata restore points are unavailable again.
As this job was done with an XOA proxy, I tested it WITHOUT the proxy... and the restore points are back...
But when I put the proxy back in the job configuration, they stay there, as usual... all is OK now...
I don't know what happened...
To note: I manually launched the backup to test with and without proxies; I don't know if the behavior differs from the usual run executed by a sequence.
Hi, we're on the latest XOA, 5.112.1.
When browsing to Backup/Restore/Metadata, it loads indefinitely...

never showing the restorable points. And they do exist.

@bazzacad great, glad it worked for you 
@bazzacad you do not seem to have the same number of PIFs on each host.
I wish you didn't have to, but you can swap names if needed; read the doc here: https://docs.xcp-ng.org/networking/#renaming-nics