backup failed
-
@joearnon Did that command show any output? If so, try increasing the numeric value (500, 1000, etc.) until you get the desired output.
You could also issue the command and then kick off the backup, which should show you the resulting output in realtime.
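Roughly the two variants I have in mind (assuming the service unit is named xo-server):

journalctl -u xo-server -n 1000    # show the last 1000 lines
journalctl -u xo-server -f         # follow new entries in realtime while the backup runs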
-
@danp
The log only shows the start and stop of xo-server from 4 days ago. I tried increasing to 1000 and got the same result: nothing about the backup.
-
@joearnon Not what I was expecting. Maybe it's different on Debian. <shrug> Did you use a script to install XO? If so, which one?
-
Hello everyone,
I ran into the same problem. I rebuilt xo-server using a force rebuild, without effect.
Information about Xen Orchestra:
- xo-server 5.79.5
- xo-web 5.82.0
- nodejs v14.17.1
Xen Orchestra says:
Error: all targets have failed, step: writer.beforeBackup()
The output of "journalctl -u xo-server -f -n 50" can be seen below:
Jun 17 11:51:52 xoce xo-server[2210]: 2021-06-17T09:51:52.735Z xo:backups:VmBackup WARN writer.beforeBackup() {
Jun 17 11:51:52 xoce xo-server[2210]: error: Error: Lock file is already being held
Jun 17 11:51:52 xoce xo-server[2210]: at /opt/xen-orchestra/node_modules/proper-lockfile/lib/lockfile.js:68:47
Jun 17 11:51:52 xoce xo-server[2210]: at callback (/opt/xen-orchestra/node_modules/graceful-fs/polyfills.js:299:20)
Jun 17 11:51:52 xoce xo-server[2210]: at FSReqCallback.oncomplete (fs.js:193:5)
Jun 17 11:51:52 xoce xo-server[2210]: at FSReqCallback.callbackTrampoline (internal/async_hooks.js:131:17) {
Jun 17 11:51:52 xoce xo-server[2210]: code: 'ELOCKED',
Jun 17 11:51:52 xoce xo-server[2210]: file: '/run/xo-server/mounts/e916984e-b326-4c2a-a8b1-d94c28a22953/xo-vm-backups/03500917-c7e5-0bd4-e684-ec3ffa33a455'
Jun 17 11:51:52 xoce xo-server[2210]: },
The output is the same for all VMs, so I copied it only once.
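As a side note, if proper-lockfile works the way I understand it (the lock is materialised as a ".lock" directory created next to the locked path), a stale lock left over from a crashed run should be visible with something like:

ls -d /run/xo-server/mounts/e916984e-b326-4c2a-a8b1-d94c28a22953/xo-vm-backups/*.lock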
-
A commit fixed that very recently. Please rebuild on the latest master commit, as you should do anytime you have a problem.
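For reference, on a from-the-sources install the rebuild boils down to something like this (assuming the checkout lives in /opt/xen-orchestra and xo-server runs as a systemd unit):

cd /opt/xen-orchestra
git checkout master
git pull --ff-only
yarn
yarn build
systemctl restart xo-server
-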
I'm on:
Updated commit 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed 2021-06-17 11:56:04 +0200
I think this is the latest master commit!?
...and the problem is still there.
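If it helps, the exact commit that is actually checked out can be confirmed with something like:

cd /opt/xen-orchestra && git log -1 --format='%H %ci'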
-
It's indeed the latest one. Are you sure it was properly rebuilt?
Anyway, that might be something else. @julien-f any idea?
-
I'm sure I did a complete rebuild.
-
FWIW, I have also started having this same issue after updating to the latest sources.
Here are some details --
Commit d44509b2cd394e3a38dc4ba392cc54dd2f50e89f: working backup
Commit 56e4847b6bb85da8ae2dc09e8e9fb7a0db36070a: missing writer issue
Commit 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed: all targets have failed, step: writer.beforeBackup()
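In case someone wants to narrow this down further, git bisect can automate that kind of search between a known-good and a known-bad commit (just a sketch, assuming a from-the-sources checkout in /opt/xen-orchestra):

cd /opt/xen-orchestra
git bisect start
git bisect bad 9a8138d07bc1a5f457ebcb4bbff83cd07cda80ed
git bisect good d44509b2cd394e3a38dc4ba392cc54dd2f50e89f
# at each step: rebuild (yarn && yarn build), re-run the backup, then mark it
git bisect good    # or: git bisect bad
# repeat until git reports the first bad commit, then: git bisect reset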
-
Thank you all, I'm investigating.
-
Should be fixed, sorry for this
Thanks all for your feedback!
-
A full backup of my VMs is in progress... if any issues arise I will post them here. Thank you for your commitment! That is awesome!
-
@julien-f
Thanks a lot, Julien!
-
The full backups are finished. 3 of 4 VMs were backed up; the last one failed.
The backup of my Nextcloud server failed with the following error:
{ "data": { "mode": "full", "reportWhen": "failure" }, "id": "1623934497160", "jobId": "f67f6ed7-f013-4ff5-812e-520af137dc37", "jobName": "Full Backup", "message": "backup", "scheduleId": "e393ec6f-e3f6-4e8f-a6aa-49de1adc02ac", "start": 1623934497160, "status": "failure", "infos": [ { "data": { "vms": [ "03500917-c7e5-0bd4-e684-ec3ffa33a455", "43e2f8f3-b11c-b6c3-a405-c810aecf42c7", "750e2ee5-5d3d-d57c-2d82-f1fbfa2caa95", "95d56ff8-9262-8281-91d3-261795c4d75c" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "03500917-c7e5-0bd4-e684-ec3ffa33a455" }, "id": "1623934501609", "message": "backup VM", "start": 1623934501609, "status": "failure", "tasks": [ { "id": "1623934502218", "message": "snapshot", "start": 1623934502218, "status": "success", "end": 1623934507910, "result": "87149237-162e-3e72-7f86-7e3463a18cdf" }, { "data": { "id": "e916984e-b326-4c2a-a8b1-d94c28a22953", "type": "remote", "isFull": true }, "id": "1623934507940", "message": "export", "start": 1623934507940, "status": "failure", "tasks": [ { "id": "1623934507952", "message": "transfer", "start": 1623934507952, "status": "failure", "end": 1623935709862, "result": { "canceled": true, "method": "GET", "url": "https://192.168.32.2/export/?ref=OpaqueRef%3Aeaeffed6-2256-48fd-aa29-12808bc5ef82&use_compression=false&session_id=OpaqueRef%3Ac12804b3-36be-49c3-9578-051055335fbc&task_id=OpaqueRef%3A2ee9dfc0-7776-4ea2-adb6-3d86eaa4fc88", "message": "HTTP request has been canceled", "name": "Error", "stack": "Error: HTTP request has been canceled\n at IncomingMessage.emitAbortedError (/opt/xen-orchestra/node_modules/http-request-plus/index.js:79:19)\n at Object.onceWrapper (events.js:481:28)\n at IncomingMessage.emit (events.js:375:28)\n at IncomingMessage.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)\n at IncomingMessage.emit (domain.js:470:12)\n at TLSSocket.socketCloseListener (_http_client.js:432:11)\n at TLSSocket.emit (events.js:387:35)\n at TLSSocket.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)\n at TLSSocket.emit (domain.js:470:12)\n at net.js:675:12" } } ], "end": 1623935709863, "result": { "canceled": true, "method": "GET", "url": "https://192.168.32.2/export/?ref=OpaqueRef%3Aeaeffed6-2256-48fd-aa29-12808bc5ef82&use_compression=false&session_id=OpaqueRef%3Ac12804b3-36be-49c3-9578-051055335fbc&task_id=OpaqueRef%3A2ee9dfc0-7776-4ea2-adb6-3d86eaa4fc88", "message": "HTTP request has been canceled", "name": "Error", "stack": "Error: HTTP request has been canceled\n at IncomingMessage.emitAbortedError (/opt/xen-orchestra/node_modules/http-request-plus/index.js:79:19)\n at Object.onceWrapper (events.js:481:28)\n at IncomingMessage.emit (events.js:375:28)\n at IncomingMessage.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)\n at IncomingMessage.emit (domain.js:470:12)\n at TLSSocket.socketCloseListener (_http_client.js:432:11)\n at TLSSocket.emit (events.js:387:35)\n at TLSSocket.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)\n at TLSSocket.emit (domain.js:470:12)\n at net.js:675:12" } } ], "end": 1623936374410, "result": { "message": "this.delete is not a function", "name": "TypeError", "stack": "TypeError: this.delete is not a function\n at VmBackup._callWriters (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:115:20)\n at async VmBackup._copyFull (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:248:5)\n at async VmBackup.run (/opt/xen-orchestra/@xen-orchestra/backups/_VmBackup.js:383:9)" } }, { "data": { 
"type": "VM", "id": "43e2f8f3-b11c-b6c3-a405-c810aecf42c7" }, "id": "1623934501635", "message": "backup VM", "start": 1623934501635, "status": "success", "tasks": [ { "id": "1623934502223", "message": "snapshot", "start": 1623934502223, "status": "success", "end": 1623934511607, "result": "63b42581-fc41-2f6f-a65a-4e6ce2d1f751" }, { "data": { "id": "e916984e-b326-4c2a-a8b1-d94c28a22953", "type": "remote", "isFull": true }, "id": "1623934511673", "message": "export", "start": 1623934511673, "status": "success", "tasks": [ { "id": "1623934511686", "message": "transfer", "start": 1623934511686, "status": "success", "end": 1623934939556, "result": { "size": 12547841024 } } ], "end": 1623934939577 } ], "end": 1623934940295 }, { "data": { "type": "VM", "id": "750e2ee5-5d3d-d57c-2d82-f1fbfa2caa95" }, "id": "1623934940296", "message": "backup VM", "start": 1623934940296, "status": "success", "tasks": [ { "id": "1623934940332", "message": "snapshot", "start": 1623934940332, "status": "success", "end": 1623934942195, "result": "6ea682be-d8ee-d7af-e4db-26749bbd1d9b" }, { "data": { "id": "e916984e-b326-4c2a-a8b1-d94c28a22953", "type": "remote", "isFull": true }, "id": "1623934942234", "message": "export", "start": 1623934942234, "status": "success", "tasks": [ { "id": "1623934942295", "message": "transfer", "start": 1623934942295, "status": "success", "end": 1623935204057, "result": { "size": 8491254272 } } ], "end": 1623935204069 } ], "end": 1623935204770 }, { "data": { "type": "VM", "id": "95d56ff8-9262-8281-91d3-261795c4d75c" }, "id": "1623935204771", "message": "backup VM", "start": 1623935204771, "status": "success", "tasks": [ { "id": "1623935204826", "message": "snapshot", "start": 1623935204826, "status": "success", "end": 1623935206750, "result": "59f973c4-432c-7575-4934-1f868ecbb736" }, { "data": { "id": "e916984e-b326-4c2a-a8b1-d94c28a22953", "type": "remote", "isFull": true }, "id": "1623935206781", "message": "export", "start": 1623935206781, "status": "success", "tasks": [ { "id": "1623935206851", "message": "transfer", "start": 1623935206851, "status": "success", "end": 1623935536915, "result": { "size": 11896770560 } } ], "end": 1623935536933 } ], "end": 1623935537657 } ], "end": 1623936374413 }
The output of "journalctl -u xo-server -f -n 50" can be seen below:
Jun 17 15:15:09 xoce xo-server[14386]: 2021-06-17T13:15:09.679Z xo:backups:worker WARN possibly unhandled rejection {
Jun 17 15:15:09 xoce xo-server[14386]: error: Error: HTTP request has been canceled
Jun 17 15:15:09 xoce xo-server[14386]: at IncomingMessage.emitAbortedError (/opt/xen-orchestra/node_modules/http-request-plus/index.js:79:19)
Jun 17 15:15:09 xoce xo-server[14386]: at Object.onceWrapper (events.js:481:28)
Jun 17 15:15:09 xoce xo-server[14386]: at IncomingMessage.emit (events.js:375:28)
Jun 17 15:15:09 xoce xo-server[14386]: at IncomingMessage.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)
Jun 17 15:15:09 xoce xo-server[14386]: at IncomingMessage.emit (domain.js:470:12)
Jun 17 15:15:09 xoce xo-server[14386]: at TLSSocket.socketCloseListener (_http_client.js:432:11)
Jun 17 15:15:09 xoce xo-server[14386]: at TLSSocket.emit (events.js:387:35)
Jun 17 15:15:09 xoce xo-server[14386]: at TLSSocket.patchedEmit (/opt/xen-orchestra/@xen-orchestra/log/configure.js:93:17)
Jun 17 15:15:09 xoce xo-server[14386]: at TLSSocket.emit (domain.js:470:12)
Jun 17 15:15:09 xoce xo-server[14386]: at net.js:675:12 {
Jun 17 15:15:09 xoce xo-server[14386]: canceled: true,
Jun 17 15:15:09 xoce xo-server[14386]: method: 'GET',
Jun 17 15:15:09 xoce xo-server[14386]: url: 'https://192.168.32.2/export/?ref=OpaqueRef%3Aeaeffed6-2256-48fd-aa29-12808bc5ef82&use_compression=false&session_id=OpaqueRef%3Ac12804b3-36be-49c3-9578-051055335fbc&task_id=OpaqueRef%3A2ee9dfc0-7776-4ea2-adb6-3d86eaa4fc88'
Jun 17 15:15:09 xoce xo-server[14386]: }
Jun 17 15:15:09 xoce xo-server[14386]: }
The backup breaks off at approx. 44-45% with the error mentioned above...
If this should be a new thread, just say so and I'll open one.
-
@black_sam Please make sure that you are using the latest xen-api lib. Also, this might be an XCP-ng/XenServer issue, so please check your host's logs. Out of curiosity, are you using compression in your backup job?
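Something along these lines on the host should show whether XAPI itself aborted the export (xensource.log is the standard XAPI log on XCP-ng/XenServer):

tail -f /var/log/xensource.log | grep -iE 'export|error'
-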
XCP-ng 8.2.0 is fully patched, so it should be up to date.
I didn't set up any compression options. I only configured the backup jobs through Xen Orchestra.
Edit:
Hmmm... I think it was a problem with my network. After shifting the host into another subnet without a VLAN, the backup finished without any issue. Thank you for your support!
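In case it helps anyone hitting a similar network issue, a quick sanity check from the XO VM towards the host (host address taken from the error above) would be something like:

ping -c 5 192.168.32.2
curl -k -o /dev/null -w '%{http_code}\n' https://192.168.32.2/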
-
@black_sam Thanks for your feedback
-