About these KEY backups, I think perhaps LTR got in the way @florent @bastien-nollet
Is there still no way of knowing WHEN a weekly/monthly backup is happening?
@flakpyro indeed, that seems related.
I also have this bug:

On some VMs, all jobs show KEY points, but in the backup logs they are indeed DELTA.

You can see from the mere megabytes transferred that it is a delta backup... but the point is presented as KEY.
Here is the log:

Hi,
Latest XOA, with fully patched XCP-ng 8.3 here.
I'm fiddling around again with NBD+CBT in backup jobs (I was avoiding CBT for a while, to keep reliable control over my backups and avoid unnecessary KEY points), in the context of THICK SRs, to save some space.
I know that CBT is reset when migrating from one SR to another.
But here is what I encounter:
Does this mean that if I do a rolling pool update or host maintenance that moves all the VMs around, all CBT will be disabled and I should expect a FALL BACK TO FULL on all my NBD+CBT-enabled backup jobs??
Why disable CBT on a change of HOST when there is no move of SR?
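For reference, here is how I check the CBT state after a VM move (a quick sketch; the UUIDs are placeholders and I'm assuming the standard xe CBT fields and commands):
xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid             # list the VDIs attached to the VM
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled   # true/false per VDI
xe vdi-enable-cbt uuid=<vdi-uuid>                         # re-enable CBT; the next run will likely still be a full/key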
@denis.grilli really big news, I need to have XOSTOR working.
Thanks for your support and for correcting these problems.
@olivierlambert so XO5 will have quite a long lifespan, since everything must be included in XO6?
@ph7 thank you for your tests
Some Vates devs are lurking in these forums; they will probably stumble upon this post soon.
@ph7 that's it. I can't, and you can see the failed task logs I provided earlier.
I can restore a full VM, but not its files. Whether Windows or various flavors of Linux (Debian, Ubuntu, Alma, ...), same problem.
I think something is wrong somewhere, but I don't know where...
Annnnnnnd it was as simple as converting the date to milliseconds.
start:>1765209598000 end:<1765231198000
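For reference, one way to get those millisecond values, assuming GNU date on a Linux box (the dates below are just examples):
date -d "2025-12-08 15:59:58 UTC" +%s%3N   # -> 1765209598000
date -d "2025-12-08 21:59:58 UTC" +%s%3N   # -> 1765231198000
# then paste them into the filter: start:>1765209598000 end:<1765231198000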
Hi,

What would be the syntax to filter logs by start date / end date?
An epoch timestamp?

Any idea or how-to?
@Forza you will have to switch to LATEST to benefit from the end-of-month release.
STABLE is one version behind LATEST;
both are production-ready.
@paco it seems to be the 10 MB cloud-config drive left over after template deployment.
You could delete it if it is not in use anymore (did you forget about it?).
Beware: do not delete anything before being sure of what you are deleting.
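If you prefer to double-check from the CLI first, a small sketch (the UUID is a placeholder):
xe vbd-list vdi-uuid=<vdi-uuid>   # no output means the disk is not attached to any VM
xe vdi-destroy uuid=<vdi-uuid>    # destructive: permanently removes the VDI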
On another, simpler install (one host, one XOA, no proxy, an SMB remote on the same LAN rather than an S3 remote), XOA 5.112.1:
same problem!
I think something has been broken along the way @bastien-nollet @florent
Granular file restore is important for us; otherwise we will have to use Veeam Agent backups instead of XO Backup.
Another log from listPartitions:
{
"id": "0miuq9mt5",
"properties": {
"method": "backupNg.listPartitions",
"params": {
"remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
"disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
},
"name": "API call: backupNg.listPartitions",
"userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
"type": "api.call"
},
"start": 1765051796921,
"status": "failure",
"updatedAt": 1765051856924,
"end": 1765051856924,
"result": {
"url": "https://10.xxx.xxx.61/api/v1",
"originalUrl": "https://10.xxx.xxx.61/api/v1",
"message": "HTTP connection has timed out",
"name": "Error",
"stack": "Error: HTTP connection has timed out\n at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n at ClientRequest.emit (node:events:518:28)\n at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n at Object.onceWrapper (node:events:632:28)\n at TLSSocket.emit (node:events:530:35)\n at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n at TLSSocket.Socket._onTimeout (node:net:595:8)\n at listOnTimeout (node:internal/timers:581:17)\n at processTimers (node:internal/timers:519:7)"
}
}
{
"id": "0miunp2s1",
"properties": {
"method": "backupNg.listPartitions",
"params": {
"remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
"disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
},
"name": "API call: backupNg.listPartitions",
"userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
"type": "api.call"
},
"start": 1765047478609,
"status": "failure",
"updatedAt": 1765047530203,
"end": 1765047530203,
"result": {
"code": -32000,
"data": {
"code": 5,
"killed": false,
"signal": null,
"cmd": "vgchange -an cl",
"stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n at genericNodeError (node:internal/errors:984:15)\n at wrappedFn (node:internal/errors:538:14)\n at ChildProcess.exithandler (node:child_process:422:12)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n at maybeClose (node:internal/child_process:1104:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
},
"message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n Logical volume cl/root in use.\n Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
}
}
Hi, on the latest channel XOA, we get this error:
{
"id": "0miuqao5o",
"properties": {
"method": "backupNg.listFiles",
"params": {
"remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
"disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
"path": "/",
"partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
},
"name": "API call: backupNg.listFiles",
"userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
"type": "api.call"
},
"start": 1765051845324,
"status": "failure",
"updatedAt": 1765051845346,
"end": 1765051845346,
"result": {
"code": -32000,
"data": {
"code": 32,
"killed": false,
"signal": null,
"cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
"stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n at genericNodeError (node:internal/errors:984:15)\n at wrappedFn (node:internal/errors:538:14)\n at ChildProcess.exithandler (node:child_process:422:12)\n at ChildProcess.emit (node:events:518:28)\n at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n at maybeClose (node:internal/child_process:1104:16)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
},
"message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
}
}
Sometimes we can get through the volume/partition selection, but then the restoration never ends...
The remote is working, tested OK.
The remote is accessed through an XO PROXY that has been rebooted.
Backups TO this remote are OK.
Restoration of a FULL VM, for the same VM, from the same remote, is also OK.
Only the granular file restore is not working...
Any idea?
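In case it helps, here is what I plan to check next on the XO proxy, based on the "overlapping loop device" message (a sketch; loopX is a placeholder, and I'm not sure this is the right lead):
losetup -a                # list loop devices and the files backing them
losetup -d /dev/loopX     # detach a stale loop device left over from a previous attempt
mount | grep /tmp/        # check nothing is still mounted under the temporary restore dirs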
@ideal perhaps you could take advantage of dynamic memory
https://docs.xcp-ng.org/vms/#dynamic-memory
to oversubscribe memory and have all 4 VMs up at once... or reduce the allocated memory of your VMs; you seem to have one pretty big VM, memory-wise, compared to the 2 others in your screenshot.
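If you prefer the CLI to the XO UI, something like this should do it (a sketch; the UUID and sizes are placeholders, and min/max must stay within the VM's static memory limits):
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2GiB max=4GiB   # example values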
@isdpcman-0 said in Install mono on XCP-ng:
Our RMM tools will run on CentOS but fail to install because they are looking for mono to be on the system. How can I install Mono on an XCP-ng host so we can install our monitoring/management tools?
I think it is advised to treat hosts as appliances and not to install any external packages (repos are disabled for that reason, which is probably why you can't install anything).
Even in the case of a pool with many hosts, you should deploy the same packages on all hosts to keep them consistent with each other...
Better to use SNMP to monitor your hosts? Or the packages installed by default?
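For SNMP, something along these lines should be enough on the host, assuming the net-snmp packages are already present in dom0 (worth checking first):
systemctl status snmpd            # check whether the daemon is installed
systemctl enable --now snmpd      # start it and enable it at boot
# and remember to open UDP port 161 in the dom0 firewall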
@ideal you should, yes.
Beware of dom0 memory: the host consumes memory too.

@olivierlambert said in Test results for Dell Poweredge R770 with NVMe drives:
Hang on!
No pun intended?