Backup jobs run twice after upgrade to 5.76.2
-
In this case, it would be better to open a ticket on xen-orchestra.com
-
@olivierlambert Done
-
Thanks a lot for this great ticket with all the context, this is really helpful!
-
Hi,
I can give a little update on our issues, but our 'workaround' is really time consuming.
There is the problem of randomly stuck CR jobs, which we could mostly work around by stopping and starting xo-server (sometimes the whole XO VM had to be restarted) and then restarting the interrupted jobs again and again until all VMs were backed up successfully; the restart sequence is sketched below.
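A sketch of that restart loop, assuming xo-server runs as a systemd unit named `xo-server` (service and VM names differ between setups):

```sh
# Restart the xo-server service (assumed systemd unit name "xo-server").
sudo systemctl restart xo-server

# When that is not enough, reboot the VM hosting Xen Orchestra entirely.
sudo reboot

# Follow the service log to see whether rescheduled jobs fire once or twice.
sudo journalctl -u xo-server -f
```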
Besides that, without us changing anything, the jobs are now started twice again most of the time, but not always.
I cannot find any correlation.
For a while it looked as if this could be fixed by disabling and re-enabling the backup jobs, but as of now that doesn't make any difference.
-Toni
-
@tts Are you on the latest XOA version?
-
It wasn't XOA, we used xo-server built from the sources.
I planned to test with XOA, but yesterday Olivier wrote that it's probably fixed upstream.
So I updated our installation to xo-server 5.57.3 / xo-web 5.57.1 and will see if it's fixed; the update steps are sketched below.
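For completeness, the update was the usual from-sources procedure, roughly like this (the checkout path matches the one in the stack trace further down; the branch is an assumption):

```sh
# Inside the xen-orchestra checkout.
cd /opt/xen-orchestra
git checkout master
git pull --ff-only

# Reinstall dependencies and rebuild all packages.
yarn
yarn build

# Restart the service so the new build is picked up (unit name may differ).
sudo systemctl restart xo-server
```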
fingers crossed
-
Maybe there are additional problems. I just noticed that one of our rolling snapshot jobs started twice again.
This time it's even weirder: both jobs are listed as successful.
The first job displays nothing at all in the detail window:
json:{ "data": { "mode": "full", "reportWhen": "failure" }, "id": "1583917200011", "jobId": "22050542-f6ec-4437-a031-0600260197d7", "jobName": "odoo_hourly", "message": "backup", "scheduleId": "ab9468b4-9b8a-44a0-83a8-fc6b84fe8a95", "start": 1583917200011, "status": "success", "end": 1583917207066 }
The second job shows 4(?) VMs but actually consists of only one VM, which failed, yet it is displayed as successful:
json:{ "data": { "mode": "full", "reportWhen": "failure" }, "id": "1583917200014", "jobId": "22050542-f6ec-4437-a031-0600260197d7", "jobName": "odoo_hourly", "message": "backup", "scheduleId": "ab9468b4-9b8a-44a0-83a8-fc6b84fe8a95", "start": 1583917200014, "status": "success", "tasks": [ { "id": "1583917200019", "message": "snapshot", "start": 1583917200019, "status": "success", "end": 1583917201773, "result": "32ba0201-26bd-6f29-c507-0416129058b4" }, { "id": "1583917201780", "message": "add metadata to snapshot", "start": 1583917201780, "status": "success", "end": 1583917201791 }, { "id": "1583917206856", "message": "waiting for uptodate snapshot record", "start": 1583917206856, "status": "success", "end": 1583917207064 }, { "data": { "type": "VM", "id": "b6b63fab-bd5f-8640-305d-3565361b213f" }, "id": "1583917200022", "message": "Starting backup of HALOffice01. (22050542-f6ec-4437-a031-0600260197d7)", "start": 1583917200022, "status": "failure", "tasks": [ { "id": "1583917200028", "message": "snapshot", "start": 1583917200028, "status": "success", "end": 1583917220144, "result": "62e06abc-6fca-9564-aad7-7933dec90ba3" }, { "id": "1583917220148", "message": "add metadata to snapshot", "start": 1583917220148, "status": "success", "end": 1583917220167 } ], "end": 1583917220405, "result": { "message": "no object with UUID or opaque ref: 1536b84b-412d-07ad-2dcd-9db3162296e6", "name": "Error", "stack": "Error: no object with UUID or opaque ref: 1536b84b-412d-07ad-2dcd-9db3162296e6\n at Xapi.apply (/opt/xen-orchestra/packages/xen-api/src/index.js:573:11)\n at Xapi.getObject (/opt/xen-orchestra/packages/xo-server/src/xapi/index.js:128:24)\n at Xapi.deleteVm (/opt/xen-orchestra/packages/xo-server/src/xapi/index.js:692:12)\n at iteratee (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/backups-ng/index.js:1268:23)\n at /opt/xen-orchestra/@xen-orchestra/async-map/src/index.js:32:17\n at Promise._execute (/opt/xen-orchestra/node_modules/bluebird/js/release/debuggability.js:384:9)\n at Promise._resolveFromExecutor (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:518:18)\n at new Promise (/opt/xen-orchestra/node_modules/bluebird/js/release/promise.js:103:10)\n at /opt/xen-orchestra/@xen-orchestra/async-map/src/index.js:31:7\n at arrayMap (/opt/xen-orchestra/node_modules/lodash/_arrayMap.js:16:21)\n at map (/opt/xen-orchestra/node_modules/lodash/map.js:50:10)\n at asyncMap (/opt/xen-orchestra/@xen-orchestra/async-map/src/index.js:30:5)\n at BackupNg._backupVm (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/backups-ng/index.js:1265:13)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:97:5)\n at handleVm (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/backups-ng/index.js:734:13)" } } ], "end": 1583917207065 }
So the results are clearly wrong: the job ran twice, this time at exactly the same point in time.
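A rough way to confirm such duplicates across all schedules, assuming the run records above are exported as newline-delimited JSON, one object per run (the file name is made up):

```sh
# Group runs by jobId and by start time rounded to the second, then keep
# only groups with more than one entry, i.e. jobs that fired twice at once.
jq -s 'group_by([.jobId, (.start / 1000 | floor)])
       | map(select(length > 1))' backup-runs.ndjson
```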
Would it help to delete and recreate all of the backup jobs?
What extra information could I provide to help get this issue fixed? I'm already thinking about setting up another new xo-server from scratch now.
Please help and advise,
thanks, Toni
-
Please try with XOA and report if you have the same problem.
-
I'm currently on it; we set up the XOA appliance recently. It just needs the trial activated, I think I can do that today in a few hours.
Can we safely import the settings from our current 'from sources' version, or do we need to set up everything from scratch?
-
Yes, you can import the settings into XOA: export the configuration from your current installation and import it on the XOA side (Settings → Config).
-
Ok thanks, will do that and report back later.
-
Looks like XOA on the stable channel only updates to v5.43.2.
I really don't get why 'rolling back' to a much older version could help solve the backup problems of the more recent 'from sources' version (5.56+)?
Is this a completely different codebase?
Quite lost now!
-
XOA has its own version numbering, unrelated to `xo-server` or `xo-web`: it's all a matter of a coherent "whole", including dependencies etc., something we can't provide when you build from the sources. Please test and report. You can switch to `latest` too, it doesn't matter.