Mirror Incremental Backup - Error: Maximum call stack size exceeded
-
bkp_log.txt
Here is the log. My mirror incremental backup always gets stuck on one specific VM.
"result": { "message": "Maximum call stack size exceeded", "name": "RangeError", "stack": "RangeError: Maximum call stack size exceeded\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:418:7)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)\n at getUsedChildChainOrDelete (file:///opt/xo/xo-builds/xen-orchestra-202401192117/@xen-orchestra/backups/_cleanVm.mjs:433:23)" =============
-
Hi,
Can you provide a bit more information on your setup, the XO version you use, and so on?
-
Xen Orchestra commit ec166
XCP-NG 8.2.1 (only one host)
Let me know what specific information you need if I'm missing some.
-
Commit ec1669a32 is almost 3 months old. You can't report an issue without being on the latest commit in our repo, as per our doc: https://xen-orchestra.com/docs/community.html#current-version
Please upgrade/rebuild and if you reproduce, we'll take a look
-
@olivierlambert Perfect, I will work on this and come back to you.
-
@olivierlambert I did the update and I'm getting the same error.
Here is the log: 2024-04-11T11_47_11.366Z - backup NG.txt
Since my mirror incremental backup takes a while to do the copy because of my internet speed, I had to update to another commit this morning. I will retest the backup job, but I was still getting the error on the latest commit from last week.
-
What Node version are you using?
-
@olivierlambert Node v18.20.2
-
@olivierlambert any news on this? Thank you.
-
Sorry, I don't know what the issue is. Can anyone else reproduce it?
-
Do you know what would cause this issue with a specific VM backup?
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1714518000004", "jobId": "3c8de440-0a66-4a95-a201-968d1993bd8b", "jobName": "Main - Delta - BRO-NAS-001 - VM1SQLSRV2012_PROD", "message": "backup", "scheduleId": "41094424-a841-4a75-bb74-edd0c5d72423", "start": 1714518000004, "status": "failure", "infos": [ { "data": { "vms": [ "a23bacfb-543d-0ed7-cf90-902288f59ed6" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "a23bacfb-543d-0ed7-cf90-902288f59ed6", "name_label": "VM1SQLSRV2012_PROD" }, "id": "1714518001609", "message": "backup VM", "start": 1714518001609, "status": "failure", "tasks": [ { "id": "1714518001617", "message": "clean-vm", "start": 1714518001617, "status": "failure", "tasks": [ { "id": "1714518005435", "message": "merge", "start": 1714518005435, "status": "failure", "end": 1714518010534, "result": { "errno": -5, "code": "Z_BUF_ERROR", "message": "unexpected end of file", "name": "Error", "stack": "Error: unexpected end of file\n at BrotliDecoder.zlibOnError [as onerror] (node:zlib:189:17)\n at BrotliDecoder.callbackTrampoline (node:internal/async_hooks:128:17)" } } ], "end": 1714518010534, "result": { "errno": -5, "code": "Z_BUF_ERROR", "message": "unexpected end of file", "name": "Error", "stack": "Error: unexpected end of file\n at BrotliDecoder.zlibOnError [as onerror] (node:zlib:189:17)\n at BrotliDecoder.callbackTrampoline (node:internal/async_hooks:128:17)" } }, { "id": "1714518010549", "message": "snapshot", "start": 1714518010549, "status": "success", "end": 1714518014388, "result": "bfce73a6-ea5b-d498-f3d8-d139b9fe29d5" }, { "data": { "id": "b99d8af6-1fc0-4c49-bbe8-d7c718754070", "isFull": true, "type": "remote" }, "id": "1714518014392", "message": "export", "start": 1714518014392, "status": "success", "tasks": [ { "id": "1714518015575", "message": "transfer", "start": 1714518015575, "status": "success", "end": 1714530612896, "result": { "size": 1668042162176 } }, { "id": "1714530613148", "message": "clean-vm", "start": 1714530613148, "status": "success", "end": 1714530617953, "result": { "merge": true } } ], "end": 1714530618039 } ], "end": 1714530618039 } ], "end": 1714530618040 }
-
@mguimond Could be corruption in the disk. What commit are you on for XO? Is your host fully patched?
-
Yeah, it sounds like the merge on the backup side can't succeed since the chain seems corrupted.