just to #brag

Deployment of the worker is okay (one big Rocky Linux VM with default settings: 6 vCPU, 6 GB RAM, 100 GB VDI).
First backup is fine, with decent speed! (to an XCP-ng-hosted S3 MinIO)
Will keep testing.

Hey all,
We are proud of our new setup, a full XCP-ng hosting solution we racked in a datacenter today.
This is the production node; tomorrow I'll post the replica node!
XCP-ng 8.3, HPE hardware obviously, and we are preparing full automation of client provisioning via the API (from switch VLANs to firewall public IPs, and automatic VM deployment); a rough sketch of the API side is below.
This needs a sticker "Vates Inside"
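For the automation part, here is a minimal sketch (assuming the Xen Orchestra REST API and the Python requests library) of the kind of call our tooling would start from; the URL and token are placeholders, and exact endpoints should be checked against the XO REST API docs for your version:

# Hedged sketch: list VMs through the XO REST API as a first building block
# for client automation. XO_URL and TOKEN are placeholders; the token is
# created beforehand in XO.
import requests

XO_URL = "https://xo.example.org"
TOKEN = "xo-authentication-token"

resp = requests.get(
    f"{XO_URL}/rest/v0/vms",          # VM collection of the REST API
    cookies={"authenticationToken": TOKEN},
    timeout=30,
)
resp.raise_for_status()
for vm in resp.json():                 # by default this returns a list of hrefs
    print(vm)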
#vent
@sluflyer06 and I wish HPE would be added too 
I stuck with version: 1 in my working configuration.

Had to rename my "Ethernet 2" NIC to Ethernet2, without the space.
You have to use the exact NIC name from the template for this to work.
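For context, a minimal sketch of what that looks like, assuming cloud-init's version: 1 network-config format; the name: value must match the template's NIC name exactly (Ethernet2, no space), and the addresses are placeholders:

version: 1
config:
  - type: physical
    name: Ethernet2          # must match the NIC name in the template exactly
    subnets:
      - type: static
        address: 192.0.2.10
        netmask: 255.255.255.0
        gateway: 192.0.2.1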
@MajorP93 throw in multiple garbage collections during the snap/desnap phases of backups on an XOSTOR SR, and these SR scans really get in the way.
Hi,
Smart backups are wonderful for managing the smart selection of VMs to be backed up.
When we browse the RESTORE section, it would be cool to get the TAGs back visually, with the possibility to filter on them.
I'd like an "all VMs restorable with this particular TAG" kind of filter, hope I'm clear.

Perhaps something to add to XO6?
Could we have a way to know which backups are part of LTR?
In Veeam B&R, when doing LTR/GFS, there is a letter like W (weekly), M (monthly) or Y (yearly) in the UI to signal it.

That's pure cosmetics indeed, but practical.
@Forza I didn't try, as my default Graylog input was UDP and worked with the hosts...
But guys, that was it: in TCP mode, it's working. I quickly set up a TCP input, and voilà.

Is there anywhere we can check the backlog / work in progress / to-do list for XO6?
@MK.ultra I don't think so,
it's working without it for me.
@ph7 thank you for your tests
Some Vates devs are lurking in these forums; they will probably stumble upon this post any time soon.
@ph7 that's it: I can't, see the failed task logs I provided earlier.
I can restore a full VM, but not its files. Whether Windows or various flavours of Linux (Debian, Ubuntu, Alma, ...), same problem.
I think something is wrong somewhere, but I don't know where...
Annnnnnnd it was as simple as converting the date to milliseconds.
start:>1765209598000 end:<1765231198000
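For anyone landing here later, a minimal Python sketch of that conversion (the helper name is mine): it turns a UTC date into the epoch-millisecond values used in the start:> / end:< filter above.

# Convert a UTC datetime to the epoch milliseconds expected by the log filter
from datetime import datetime, timezone

def to_epoch_ms(dt: datetime) -> int:
    return int(dt.timestamp() * 1000)

# e.g. a window on 2025-12-08 UTC (close to the values above)
print(to_epoch_ms(datetime(2025, 12, 8, 16, 0, tzinfo=timezone.utc)))  # start
print(to_epoch_ms(datetime(2025, 12, 8, 22, 0, tzinfo=timezone.utc)))  # end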
Hi,

What would be the syntax to filter logs by start date / end date?
Epoch timestamps?

Any idea or how-to?
@Forza you will have to switch to LATEST to benefit from the end-of-month release.
STABLE is one version behind LATEST;
both are production-ready.
@paco it seems to be the 10 MB cloud-config drive left over after template deployment.
You could delete it if it is not in use anymore (maybe you forgot about it?).
Beware: do not delete anything before being sure of what you are deleting.
On another, simpler install (one host, one XOA, no proxy, SMB remote on the same LAN rather than an S3 remote), XOA 5.112.1:
same problem!
I think something has been broken along the way @bastien-nollet @florent.
Granular file restore is important for us; otherwise we will have to use Veeam Agent backups instead of XO Backup.
Another log, from listPartitions:
{
  "id": "0miuq9mt5",
  "properties": {
    "method": "backupNg.listPartitions",
    "params": {
      "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
      "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
    },
    "name": "API call: backupNg.listPartitions",
    "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
    "type": "api.call"
  },
  "start": 1765051796921,
  "status": "failure",
  "updatedAt": 1765051856924,
  "end": 1765051856924,
  "result": {
    "url": "https://10.xxx.xxx.61/api/v1",
    "originalUrl": "https://10.xxx.xxx.61/api/v1",
    "message": "HTTP connection has timed out",
    "name": "Error",
    "stack": "Error: HTTP connection has timed out\n at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n at ClientRequest.emit (node:events:518:28)\n at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n at Object.onceWrapper (node:events:632:28)\n at TLSSocket.emit (node:events:530:35)\n at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.Socket._onTimeout (node:net:595:8)\n at listOnTimeout (node:internal/timers:581:17)\n at processTimers (node:internal/timers:519:7)"
  }
}
{
  "id": "0miunp2s1",
  "properties": {
    "method": "backupNg.listPartitions",
    "params": {
      "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
      "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
    },
    "name": "API call: backupNg.listPartitions",
    "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
    "type": "api.call"
  },
  "start": 1765047478609,
  "status": "failure",
  "updatedAt": 1765047530203,
  "end": 1765047530203,
  "result": {
    "code": -32000,
    "data": {
      "code": 5,
      "killed": false,
      "signal": null,
      "cmd": "vgchange -an cl",
      "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n at genericNodeError (node:internal/errors:984:15)\n at wrappedFn (node:internal/errors:538:14)\n at ChildProcess.exithandler (node:child_process:422:12)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n at maybeClose (node:internal/child_process:1104:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
    },
    "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
  }
}