some timeout or race condition between the end of the job and the mail generation?
perhaps adding a 10-second delay before sending the mail?
we have a strange behavior in the mail reports of XOA Backup.
the backup is done, we can see the delta point on the remote, everything is green in XOA with no sign of INTERRUPTED, but the mail report tells otherwise:

the "INTERRUPTION" seems to happen on the remote

the point in the remote :
in XOA logs :


other backups are okay, this same one will be okay too tonight...
what is happening ?
false alarm ? @florent @bastien-nollet
{
  "data": {
    "mode": "delta",
    "reportWhen": "always"
  },
  "id": "1766680469800",
  "jobId": "87966399-d428-431d-a067-bb99a8fdd67a",
  "jobName": "BCK_C_xxxx",
  "message": "backup",
  "proxyId": "5359db6e-841b-4a6d-b5e6-a5d19f43b6c0",
  "scheduleId": "56872f53-4c20-47fc-8542-2cd9aed2fdde",
  "start": 1766680469800,
  "status": "success",
  "infos": [
    {
      "data": {
        "vms": [
          "b1eef06b-52c1-e02a-4f59-1692194e2376"
        ]
      },
      "message": "vms"
    }
  ],
  "tasks": [
    {
      "data": {
        "type": "VM",
        "id": "b1eef06b-52c1-e02a-4f59-1692194e2376",
        "name_label": "xxxx"
      },
      "id": "1766680472044",
      "message": "backup VM",
      "start": 1766680472044,
      "status": "success",
      "tasks": [
        {
          "id": "1766680472050",
          "message": "clean-vm",
          "start": 1766680472050,
          "status": "success",
          "end": 1766680473396,
          "result": {
            "merge": false
          }
        },
        {
          "id": "1766680474042",
          "message": "snapshot",
          "start": 1766680474042,
          "status": "success",
          "end": 1766680504544,
          "result": "c4b42a79-532e-c376-833b-22707ddad571"
        },
        {
          "data": {
            "id": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "isFull": false,
            "type": "remote"
          },
          "id": "1766680504544:0",
          "message": "export",
          "start": 1766680504544,
          "status": "success",
          "tasks": [
            {
              "id": "1766680511990",
              "message": "transfer",
              "start": 1766680511990,
              "status": "success",
              "end": 1766680515706,
              "result": {
                "size": 423624704
              }
            },
            {
              "id": "1766680521053",
              "message": "clean-vm",
              "start": 1766680521053,
              "status": "success",
              "tasks": [
                {
                  "id": "1766680521895",
                  "message": "merge",
                  "start": 1766680521895,
                  "status": "success",
                  "end": 1766680530887
                }
              ],
              "end": 1766680531173,
              "result": {
                "merge": true
              }
            }
          ],
          "end": 1766680531192
        }
      ],
      "infos": [
        {
          "message": "Transfer data using NBD"
        },
        {
          "message": "will delete snapshot data"
        },
        {
          "data": {
            "vdiRef": "OpaqueRef:d8aef4c9-5514-6623-1cda-f5e879c4990f"
          },
          "message": "Snapshot data has been deleted"
        }
      ],
      "end": 1766680531211
    }
  ],
  "end": 1766680531267
}
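for what it's worth, a quick way to double-check an exported log like this for any non-success status is to run it through jq (assuming the JSON above is saved as backup-log.json, a placeholder name):

# collect every "status" field anywhere in the log and deduplicate
jq '[.. | .status? // empty] | unique' backup-log.json
# if the job really finished cleanly, this should print only ["success"]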
not better with production upgraded to 6.0.1 (XOA and XO Proxies)
we will open a support ticket
PS: if we log out of XOA and log back in as another user, we have a better chance of getting the file restore to work... not 100%, very unstable
is there a link?!
@olivierlambert I had the time to test connecting a REMOTE from production on latest XO CE on replica datacenter
and file restore is working flawlessly... ultra fast, and working.
either we have a problem in production, or the latest XO 6 update fixed the bug?
we are still on 5.113.2 in production
@fluxtor is your XOA accessible over HTTP 80 ?
@fluxtor said in License no longer registered after upgrade:
Looks to me like the XOA5 webUI is out of sync with the underlying updater status as everything still seems to work.
CTRL+F5?
or private browsing, to check whether it's a cache issue?
on the source VM :
and booom, DR is okay ! that was not a XAPI problem

so, there was a problem with this VM's VIF... and the error message in XO6 made it possible to pinpoint it
thanks for the advice @florent !
here is the error in XO6
this network UUID is NOT the one on the source VM...
how is it possible ?
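if it helps, one way to double-check which network each VIF of the source VM actually references is from the CLI (the VM and network UUIDs below are placeholders):

# one line per VIF, with the network it points to
xe vif-list vm-uuid=<source-vm-uuid> params=device,network-uuid,network-name-label
# resolve a network UUID seen in the error to its name/bridge
xe network-list uuid=<network-uuid> params=name-label,bridge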
@florent yes, I can try with XO from sources updated to the latest commit
let me restore the VM and launch a DR with XO from sources, and I'll report back
source host :
xe host-param-list uuid=161be695-e1f9-4271-b581-27b716fde9a5 |grep xapi
software-version (MRO): product_version: 8.3.0; product_version_text: 8.3; product_version_text_short: 8.3; platform_name: XCP; platform_version: 3.4.0; product_brand: XCP-ng; build_number: cloud; git_id: 0; hostname: localhost; date: 20250909T12:59:54Z; dbv: 0.0.1; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15; linux: 4.19.0+1; xencenter_min: 2.21; xencenter_max: 2.21; network_backend: openvswitch; db_schema: 5.786
DR target host :
xe host-param-list uuid=e604c3bf-373c-489b-b191-edecbabec43f |grep xapi
software-version (MRO): product_version: 8.3.0; product_version_text: 8.3; product_version_text_short: 8.3; platform_name: XCP; platform_version: 3.4.0; product_brand: XCP-ng; build_number: cloud; git_id: 0; hostname: localhost; date: 20250909T12:59:54Z; dbv: 0.0.1; xapi: 25.6; xapi_build: 25.6.0; xen: 4.17.5-15; linux: 4.19.0+1; xencenter_min: 2.21; xencenter_max: 2.21; network_backend: openvswitch; db_schema: 5.786
tried to delete the VM, restore it, same result
tried to get the job done by an XO PROXY instead of XOA, same result
3 VMs have been deployed from the same Hub Template Ubuntu 24.04 at the same time.
weird.
@florent this is where it's strange, all my 7 hosts were installed the same day, with the same patches...
all 3 VMs were deployed the same day/way too
why are 2 OK and not the third ?
indeed CR is working, but DR is hiding something from me
Hi,
XCP 8.3, XOA 5.113.2 here
a DR job with 3 VMs: 2 are OK, one will not go through... and I don't understand why; it's the first time I've seen this error
"message": "(intermediate value) is not iterable",
"name": "TypeError",
"stack": "TypeError: (intermediate value) is not iterable\n at Xapi.import (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/vm.mjs:610:21)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_writers/FullXapiWriter.mjs:56:21"

any idea anyone ?
or @bastien-nollet @florent
there is free space on the destination SR (iSCSI SR)
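for reference, the free space figure can be double-checked from the CLI (the SR UUID below is a placeholder):

# remaining space is physical-size minus physical-utilisation
xe sr-list uuid=<destination-sr-uuid> params=name-label,physical-size,physical-utilisation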
@olivierlambert nice, better than megathreads !
added my first TAG request on it
about these KEY backups, I think perhaps LTR got in the way @florent @bastien-nollet
still no way of knowing WHEN a weekly/monthly backup is happening ?
@flakpyro indeed seems related.
I also have this bug :

on some VMs, all jobs create KEY points, but in the backup logs they are indeed DELTA

you can see from the mere megabytes transferred that it's a delta backup... but the point is presented as KEY
here is the log :

Hi,
Latest XOA, with fully patched XCP 8.3 here.
I'm fiddling around again with NBD+CBT in backup jobs (I was avoiding CBT for a while, to reliably control my backups and avoid unnecessary KEY points) in the context of THICK SRs, to save some space.
I know that CBT is reset when migrating from one SR to another.
But here is what I encounter :
does this mean that if I do a rolling pool update or host maintenance that moves all the VMs around, all CBT will be disabled and I should expect a FALL BACK TO FULL on all my NBD+CBT-enabled backup jobs??!
why is CBT disabled on a change of HOST, with no SR move?
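in the meantime, a way to see whether CBT actually got dropped after such a move is to check the cbt-enabled flag on the VM's VDIs (the UUIDs below are placeholders):

# find the VDIs attached to the VM
xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid
# check whether changed block tracking is still enabled on each one
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled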
@denis.grilli really big news, I need to have XO STOR working 
Thanks for the support and for correcting these problems
@olivierlambert so XO5 will have quite a long lifespan, since everything must be included in XO6?
@ph7 thank you for your tests
some Vates devs are lurking in these forums; they will probably stumble upon this post anytime soon