backup failed
-
So it's a question for @julien-f or @Darkbeldin
-
Hi,
Just another 'me too' - my backups are failing with the same message as well.
I just rebuilt xen-orchestra, running on a Debian 10 VM.
-
@ci4ic4 Replying to myself - disregard this. I found out why. I used to run Xen-Orchestra as root on my previous Debian 9 instance; I had to switch to Debian 10 (nodejs incompatibility at the time, I don't know if it has been resolved) and I didn't bother setting it up to run as root, I left it to start as my normal user.
I had to add my normal user account to the 'root' group and chmod 775 the /run directory - after that the NFS 'remote' mounts OK and the backup succeeds.
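In command form, that workaround is roughly the following sketch ('youruser' is a placeholder for whatever account xo-server runs as, and the group/permission choice is simply what worked for this poster, not an official recommendation; the xo-server unit name is assumed from the systemd setup used later in this thread):
# add the account running xo-server to the root group ('youruser' is a placeholder)
sudo usermod -aG root youruser
# let members of the root group write under /run so xo-server can create its mount points
sudo chmod 775 /run
# restart xo-server so the new group membership is picked up (unit name assumed)
sudo systemctl restart xo-server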
-
I'm also on Debian 10, but I'm running as root. Still have the problem; it happened again this morning. Running the backup again succeeds...
-
{ "data": { "mode": "full", "reportWhen": "failure" }, "id": "1623351600009", "jobId": "05d242c4-b048-41bd-8ed7-3c32b388ab55", "jobName": "Finance (Every 3 Days at 22:00)", "message": "backup", "scheduleId": "e493f947-a723-4ba1-95c2-5df1bdd762f5", "start": 1623351600009, "status": "failure", "infos": [ { "data": { "vms": [ "5402c8be-e33f-d762-0fea-c6052d831aa1", "010d5f6c-b6cc-94da-9853-3a61d542ba60", "f81f70bf-b6a7-7fa8-dfcd-6e0ff296d077", "ee5b6776-bbe1-7de2-3d8c-815d44599253", "9536ea02-0ea6-d69e-0ba6-b02672ff4d18", "ebb1bdec-e2d3-4b7e-52f7-a5e880c0644a", "21a40b8c-3c7e-16dd-cda9-bf9a4cf4a212", "4872fef3-1e2b-6169-439c-5e831859175e", "ddf2b801-f18a-89ec-b82f-5183bab519e9" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "5402c8be-e33f-d762-0fea-c6052d831aa1" }, "id": "1623351600765:0", "message": "backup VM", "start": 1623351600765, "status": "failure", "end": 1623351606719, "result": { "message": "all targets have failed, step: writer.beforeBackup()", "name": "Error", "stack": "Error: all targets have failed, step: writer.beforeBackup()\n at VmBackup._callWriters (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:118:13)\n at async VmBackup.run (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:343:5)" } }, { "data": { "type": "VM", "id": "010d5f6c-b6cc-94da-9853-3a61d542ba60" }, "id": "1623351600780", "message": "backup VM", "start": 1623351600780, "status": "failure", "end": 1623351606719, "result": { "message": "all targets have failed, step: writer.beforeBackup()", "name": "Error", "stack": "Error: all targets have failed, step: writer.beforeBackup()\n at VmBackup._callWriters (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:118:13)\n at async VmBackup.run (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:343:5)" } }, { "data": { "type": "VM", "id": "f81f70bf-b6a7-7fa8-dfcd-6e0ff296d077" }, "id": "1623351600780:0", "message": "backup VM", "start": 1623351600780, "status": "success", "tasks": [ { "id": "1623351607155", "message": "snapshot", "start": 1623351607155, "status": "success", "end": 1623351615220, "result": "51934b74-e52c-4cf7-6301-258f05a6096c" }, { "data": { "id": "6c2bf055-1260-401f-ac8e-d0e69b03b970", "type": "remote", "isFull": true }, "id": "1623351615245", "message": "export", "start": 1623351615245, "status": "success", "tasks": [ { "id": "1623351615308", "message": "transfer", "start": 1623351615308, "status": "success", "end": 1623351893293, "result": { "size": 2739784704 } } ], "end": 1623351894456 } ], "end": 1623351898277 }, { "data": { "type": "VM", "id": "ee5b6776-bbe1-7de2-3d8c-815d44599253" }, "id": "1623351606719:0", "message": "backup VM", "start": 1623351606719, "status": "success", "tasks": [ { "id": "1623351607170", "message": "snapshot", "start": 1623351607170, "status": "success", "end": 1623351660079, "result": "1618c328-f5d0-ba0a-12f6-873b8add4ba1" }, { "data": { "id": "6c2bf055-1260-401f-ac8e-d0e69b03b970", "type": "remote", "isFull": true }, "id": "1623351660103", "message": "export", "start": 1623351660103, "status": "success", "tasks": [ { "id": "1623351660736", "message": "transfer", "start": 1623351660736, "status": "success", "end": 1623352583590, "result": { "size": 6931830272 } } ], "end": 1623352584082 } ], "end": 1623352587494 }, { "data": { "type": "VM", "id": "9536ea02-0ea6-d69e-0ba6-b02672ff4d18" }, "id": "1623351606719:2", "message": "backup VM", "start": 1623351606719, "status": "success", "tasks": [ { "id": "1623351607172", "message": "snapshot", "start": 1623351607172, "status": 
"success", "end": 1623351637850, "result": "55148313-ac49-073a-480d-076e7adc8215" }, { "data": { "id": "6c2bf055-1260-401f-ac8e-d0e69b03b970", "type": "remote", "isFull": true }, "id": "1623351637875", "message": "export", "start": 1623351637875, "status": "success", "tasks": [ { "id": "1623351638535", "message": "transfer", "start": 1623351638535, "status": "success", "end": 1623363752942, "result": { "size": 256111958016 } } ], "end": 1623363753843 } ], "end": 1623363770261 }, { "data": { "type": "VM", "id": "ebb1bdec-e2d3-4b7e-52f7-a5e880c0644a" }, "id": "1623351898278", "message": "backup VM", "start": 1623351898278, "status": "failure", "end": 1623351904526, "result": { "message": "all targets have failed, step: writer.beforeBackup()", "name": "Error", "stack": "Error: all targets have failed, step: writer.beforeBackup()\n at VmBackup._callWriters (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:118:13)\n at async VmBackup.run (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:343:5)" } }, { "data": { "type": "VM", "id": "21a40b8c-3c7e-16dd-cda9-bf9a4cf4a212" }, "id": "1623351904527", "message": "backup VM", "start": 1623351904527, "status": "success", "tasks": [ { "id": "1623351905474", "message": "snapshot", "start": 1623351905474, "status": "success", "end": 1623351912328, "result": "5d45e921-a934-01db-32ac-abc5a8e23322" }, { "data": { "id": "6c2bf055-1260-401f-ac8e-d0e69b03b970", "type": "remote", "isFull": true }, "id": "1623351912370", "message": "export", "start": 1623351912370, "status": "success", "tasks": [ { "id": "1623351913333", "message": "transfer", "start": 1623351913333, "status": "success", "end": 1623352708494, "result": { "size": 40308589056 } } ], "end": 1623352709017 } ], "end": 1623352710173 }, { "data": { "type": "VM", "id": "4872fef3-1e2b-6169-439c-5e831859175e" }, "id": "1623352587495", "message": "backup VM", "start": 1623352587495, "status": "failure", "end": 1623352588661, "result": { "message": "all targets have failed, step: writer.beforeBackup()", "name": "Error", "stack": "Error: all targets have failed, step: writer.beforeBackup()\n at VmBackup._callWriters (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:118:13)\n at async VmBackup.run (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:343:5)" } }, { "data": { "type": "VM", "id": "ddf2b801-f18a-89ec-b82f-5183bab519e9" }, "id": "1623352588661:0", "message": "backup VM", "start": 1623352588661, "status": "success", "tasks": [ { "id": "1623352588716", "message": "snapshot", "start": 1623352588716, "status": "success", "end": 1623352670691, "result": "201861e7-142a-ebe4-61ba-b86577dfcc41" }, { "data": { "id": "6c2bf055-1260-401f-ac8e-d0e69b03b970", "type": "remote", "isFull": true }, "id": "1623352670744", "message": "export", "start": 1623352670744, "status": "success", "tasks": [ { "id": "1623352671666", "message": "transfer", "start": 1623352671666, "status": "success", "end": 1623358080755, "result": { "size": 9415387136 } } ], "end": 1623358080960 } ], "end": 1623358083272 } ], "end": 1623363770261 }
-
@joearnon The original error is unfortunately not present in this log, please take a look at xo-server logs (written on its stderr).
-
@julien-f Can you tell me how to do that? As I said, I'm a Windows guy, not Linux, so you'll have to walk me through the process...
-
@danp
Using:
journalctl -u xo-server -f -n 50
gives nothing about the backup.
-
@joearnon Did that command show any output? If so, try increasing the numeric value (500, 1000, etc.) until you get the desired output.
You could also issue the command and then kick off the backup, which should show you the resulting output in real time.
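As a concrete illustration of both suggestions (nothing here is specific to this setup; -n and -f are standard journalctl options, and xo-server is the unit name already used above):
# show the last 1000 lines of the xo-server journal
journalctl -u xo-server -n 1000
# or follow the log live while re-running the backup job from the XO web UI
journalctl -u xo-server -f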
-
@danp
The log shows only the start and stop of xo-server, from 4 days ago. I tried increasing to 1000 and got the same result: nothing about the backup.
-
@joearnon Not what I was expecting. Maybe it's different on Debian. <shrug> Did you use a script to install XO? If so, which one?
-
Hello everyone,
I ran into the same problem. I rebuilt xo-server using a force rebuild, without effect.
Information about Xen Orchestra:
- xo-server 5.79.5
- xo-web 5.82.0
- nodejs v14.17.1
Xen Orchestra says:
Error: all targets have failed, step: writer.beforeBackup()
You can see the output of "journalctl -u xo-server -f -n 50" below:
Jun 17 11:51:52 xoce xo-server[2210]: 2021-06-17T09:51:52.735Z xo:backups:VmBackup WARN writer.beforeBackup() {
Jun 17 11:51:52 xoce xo-server[2210]:   error: Error: Lock file is already being held
Jun 17 11:51:52 xoce xo-server[2210]:       at /opt/xen-orchestra/node_modules/proper-lockfile/lib/lockfile.js:68:47
Jun 17 11:51:52 xoce xo-server[2210]:       at callback (/opt/xen-orchestra/node_modules/graceful-fs/polyfills.js:299:20)
Jun 17 11:51:52 xoce xo-server[2210]:       at FSReqCallback.oncomplete (fs.js:193:5)
Jun 17 11:51:52 xoce xo-server[2210]:       at FSReqCallback.callbackTrampoline (internal/async_hooks.js:131:17) {
Jun 17 11:51:52 xoce xo-server[2210]:     code: 'ELOCKED',
Jun 17 11:51:52 xoce xo-server[2210]:     file: '/run/xo-server/mounts/e916984e-b326-4c2a-a8b1-d94c28a22953/xo-vm-backups/03500917-c7e5-0bd4-e684-ec3ffa33a455'
Jun 17 11:51:52 xoce xo-server[2210]:   },
For all VMs it is the same output, so I copied it only once.
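For what it's worth, the ELOCKED error above comes from the proper-lockfile module, which, to my understanding, takes a lock by creating a '<path>.lock' entry next to the path it protects; a leftover lock from a crashed or overlapping run produces exactly this message. A read-only way to check for such leftovers (the glob pattern is an assumption derived from the path shown in the log above):
# list any lock entries left under the mounted backup remotes (read-only check)
ls -ld /run/xo-server/mounts/*/xo-vm-backups/*.lock 2>/dev/null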
-
A commit fixed that very recently. Please rebuild on the latest master commit, as you should do anytime you have a problem.
-
I'm on:
Updated commit 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed 2021-06-17 11:56:04 +0200
I think this is the master commit!?
And the problem is still there...
-
It's indeed the last one. Are you sure it's properly rebuilt?
Anyway, that might be something else. @julien-f any idea?
-
I'm sure I did a complete rebuild.
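For reference, a typical from-the-sources rebuild looks roughly like the sketch below (assuming the /opt/xen-orchestra checkout visible in the stack traces and the xo-server systemd unit used earlier in this thread; adjust the path and service name to your install):
cd /opt/xen-orchestra
# update to the latest master commit
git checkout master && git pull --ff-only
# reinstall dependencies and rebuild every package
yarn && yarn build
# restart the service so the new build is actually loaded
sudo systemctl restart xo-server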
-
FWIW, I have also started having this same issue after updating to the latest sources.
Here are some details --
- Commit d44509b2cd394e3a38dc4ba392cc54dd2f50e89f: working backup
- Commit 56e4847b6bb85da8ae2dc09e8e9fb7a0db36070a: missing writer issue
- Commit 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed: all targets have failed, step: writer.beforeBackup()
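If someone wants to narrow the regression down further, here is a git bisect sketch between the known-good commit and the latest failing commit listed above (run in the source checkout; the endpoints are taken directly from that list):
cd /opt/xen-orchestra
git bisect start
git bisect bad 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed
git bisect good d44509b2cd394e3a38dc4ba392cc54dd2f50e89f
# at each step: rebuild (yarn && yarn build), re-run a backup, then mark the
# commit with 'git bisect good' or 'git bisect bad'; finish with:
git bisect reset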
-
Thank you all, I'm investigating.
-
Should be fixed, sorry for this
Thanks all for your feedback!