Best posts made by robyt
-
RE: mirror backup to S3
@florent hi, I've adjusted the retention parameters and I'm now waiting for a few days of backup/mirror runs to check.
-
Huge number of "sr.getAllUnhealthyVdiChainsLength" API calls in tasks
Hi, I have this situation in Tasks.
What is this task?
Right now I have ~50 of these calls..
-
Latest posts made by robyt
-
RE: mirror backup to S3
@florent I have a little problem with backup to S3/Wasabi..
For delta everything seems OK:
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1751914964818", "jobId": "e4adc26c-8723-4388-a5df-c2a1663ed0f7", "jobName": "Mirror wasabi delta", "message": "backup", "scheduleId": "62a5edce-88b8-4db9-982e-ad2f525c4eb9", "start": 1751914964818, "status": "success", "infos": [ { "data": { "vms": [ "2771e7a0-2572-ca87-97cf-e174a1d35e6f", "b89670f6-b785-7df0-3791-e5e41ec8ee08", "cac6afed-5df8-0817-604c-a047a162093f" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "b89670f6-b785-7df0-3791-e5e41ec8ee08" }, "id": "1751914968373", "message": "backup VM", "start": 1751914968373, "status": "success", "tasks": [ { "id": "1751914968742", "message": "clean-vm", "start": 1751914968742, "status": "success", "end": 1751914979708, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751914984503", "message": "export", "start": 1751914984503, "status": "success", "tasks": [ { "id": "1751914984667", "message": "transfer", "start": 1751914984667, "status": "success", "end": 1751914992365, "result": { "size": 125829120 } }, { "id": "1751914995521", "message": "clean-vm", "start": 1751914995521, "status": "success", "tasks": [ { "id": "1751915004208", "message": "merge", "start": 1751915004208, "status": "success", "end": 1751915018911 } ], "end": 1751915020075, "result": { "merge": true } } ], "end": 1751915020077 } ], "end": 1751915020077 }, { "data": { "type": "VM", "id": "2771e7a0-2572-ca87-97cf-e174a1d35e6f" }, "id": "1751914968380", "message": "backup VM", "start": 1751914968380, "status": "success", "tasks": [ { "id": "1751914968903", "message": "clean-vm", "start": 1751914968903, "status": "success", "end": 1751914979840, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751914986808", "message": "export", "start": 1751914986808, "status": "success", "tasks": [ { "id": "1751914987416", "message": "transfer", "start": 1751914987416, "status": "success", "end": 1751914993152, "result": { "size": 119537664 } }, { "id": "1751914996024", "message": "clean-vm", "start": 1751914996024, "status": "success", "tasks": [ { "id": "1751915005023", "message": "merge", "start": 1751915005023, "status": "success", "end": 1751915035567 } ], "end": 1751915039414, "result": { "merge": true } } ], "end": 1751915039414 } ], "end": 1751915039415 }, { "data": { "type": "VM", "id": "cac6afed-5df8-0817-604c-a047a162093f" }, "id": "1751915020089", "message": "backup VM", "start": 1751915020089, "status": "success", "tasks": [ { "id": "1751915020443", "message": "clean-vm", "start": 1751915020443, "status": "success", "end": 1751915030194, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751915034962", "message": "export", "start": 1751915034962, "status": "success", "tasks": [ { "id": "1751915035142", "message": "transfer", "start": 1751915035142, "status": "success", "end": 1751915052723, "result": { "size": 719323136 } }, { "id": "1751915056146", "message": "clean-vm", "start": 1751915056146, "status": "success", "tasks": [ { "id": "1751915064681", "message": "merge", "start": 1751915064681, "status": "success", "end": 1751915116508 } ], "end": 1751915117838, "result": { "merge": true } } ], "end": 1751915117839 } ], "end": 1751915117839 } ], "end": 1751915117839 }
For full I'm not sure:
{ "data": { "mode": "full", "reportWhen": "always" }, "id": "1751757492933", "jobId": "35c78a31-67c5-47ba-9988-9c4cb404ed8e", "jobName": "Mirror wasabi full", "message": "backup", "scheduleId": "476b863d-a651-42e5-9bb3-db830dbdac7c", "start": 1751757492933, "status": "success", "infos": [ { "data": { "vms": [ "2771e7a0-2572-ca87-97cf-e174a1d35e6f", "b89670f6-b785-7df0-3791-e5e41ec8ee08", "cac6afed-5df8-0817-604c-a047a162093f" ] }, "message": "vms" } ], "end": 1751757496499 }
XOA sent me an email with this report:
Job ID: 35c78a31-67c5-47ba-9988-9c4cb404ed8e
Run ID: 1751757492933
Mode: full
Start time: Sunday, July 6th 2025, 1:18:12 am
End time: Sunday, July 6th 2025, 1:18:16 am
Duration: a few seconds
Four seconds for 203 GB?
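For scale, a back-of-the-envelope check of what that email implies, using only the two numbers quoted above:

# Implied throughput if ~203 GB had really been copied in ~4 seconds.
size_gb = 203      # total data the full mirror was expected to move
duration_s = 4     # duration from the email report
print(f"{size_gb / duration_s:.1f} GB/s")  # ~50.8 GB/s

Roughly 50 GB/s is far beyond any S3/WAN upload, and the full-mode log above contains no transfer tasks at all, which would be consistent with that run not actually copying any data.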
-
RE: mirror backup to S3
Hi @florent, I've cleaned the backup data, added the correct retention, and now it's fine.
I'm lowering the NBD connections (from 4 to 1); the speed of the "test backup con mirror" job is too low.
-
RE: mirror backup to S3
@florent hi, I've adjusted the retention parameters and I'm now waiting for a few days of backup/mirror runs to check.
-
RE: mirror backup to S3
@acebmxer of course, this is only a test.
The problem is not the scheduling but why the incremental sends all the data every time.
-
RE: mirror backup to S3
@acebmxer [excuse my poor English!]
I now have this situation:
1 backup job to a NAS with two disabled schedules, one full and one delta
1 mirror full backup to Wasabi (S3)
1 mirror incremental backup
I've created two sequences:
one starting on Sunday for the full backup (the sequence is the full backup and then the full mirror)
one every 3 hours with the delta backup and then the mirror incremental
The jobs start at the correct hour, but the mirror incremental sends the same data size every time..
backup to NAS:
dns_interno1 (ctx1.tosnet.it)
  Transfer data using NBD
  Clean VM directory (cleanVm: incorrect backup size in metadata)
    Start: 2025-06-24 16:00  End: 2025-06-24 16:00
  Snapshot
    Start: 2025-06-24 16:00  End: 2025-06-24 16:00
  Backup XEN OLD
    transfer
      Start: 2025-06-24 16:00  End: 2025-06-24 16:01  Duration: a few seconds
      Size: 132 MiB  Speed: 11.86 MiB/s
    Start: 2025-06-24 16:00  End: 2025-06-24 16:01  Duration: a minute
  Start: 2025-06-24 16:00  End: 2025-06-24 16:01  Duration: a minute
  Type: delta
dns_interno1 (ctx1.tosnet.it)
  Wasabi
    transfer
      Start: 2025-06-24 16:02  End: 2025-06-24 16:15  Duration: 13 minutes
      Size: 25.03 GiB  Speed: 34.14 MiB/s
    transfer
      Start: 2025-06-24 16:15  End: 2025-06-24 16:15  Duration: a few seconds
      Size: 394 MiB  Speed: 22.49 MiB/s
    Start: 2025-06-24 16:02  End: 2025-06-24 16:17  Duration: 15 minutes
  Wasabi
    Start: 2025-06-24 16:15  End: 2025-06-24 16:17  Duration: 2 minutes
  Start: 2025-06-24 16:02  End: 2025-06-24 16:17  Duration: 15 minutes
The job sends 25 GB to Wasabi every time, not just the incremental data.
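Putting the two transfer sizes from that report side by side (a sketch using only the figures above):

# Size written to the NAS by the delta backup vs. size re-sent by the mirror.
local_delta_mib = 132               # "transfer" size in the NAS report
mirror_wasabi_mib = 25.03 * 1024    # first "transfer" size in the Wasabi report
print(f"mirror sent ~{mirror_wasabi_mib / local_delta_mib:.0f}x the delta size")
# ~194x: the mirror re-sends far more than the new incremental data.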
-
Error in XO task with sequence?
Good morning, the sequence works fine but I have a long list of tasks that are closed but shown at 50% (?)
The raw log is correct:
{ "id": "0mca491c8", "properties": { "name": "Schedule sequence", "userId": "c5ce5e50-29d9-4c00-84e8-402e1063a5c7", "type": "xo:schedule:sequence", "progress": 50 }, "start": 1750744800007, "status": "success", "updatedAt": 1750746259107, "end": 1750746259107 }
Is it only a UI problem?
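To check the whole list rather than a single entry, a minimal sketch (assuming the task logs are exported as a JSON array to a file named tasks.json, a hypothetical name) that prints every task which finished with status "success" but still reports a progress below 100:

import json

# Hypothetical export of the XO task list as a JSON array.
with open("tasks.json") as f:
    tasks = json.load(f)

for t in tasks:
    props = t.get("properties", {})
    if t.get("status") == "success" and props.get("progress", 100) < 100:
        print(t["id"], props.get("name"), f"progress={props['progress']}")

If every flagged entry looks like the raw log above (finished, status "success", progress 50), the underlying data is fine and the 50% is only what the UI displays.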
-
RE: mirror backup to S3
@acebmxer OK, so for full + delta I shouldn't use the old scheduling (one job with two schedules, one full and one delta), but do I have to separate the two jobs?
-
mirror backup to S3
Good morning, I have some VMs (~30) in four logical groups.
For every group I create a backup (one weekly full and 40 incrementals) and I want to mirror it to Wasabi S3 storage.
How can I start the mirror when one of the full/incremental backups ends?
I don't want the mirror to start while a backup is still running!
Thank you
-
RE: Short VM freeze when migrating to another host
@olivierlambert oops.. why is that the best topology?