• no alias references VHD

    0 Votes
    2 Posts
    132 Views
    olivierlambertO
    Ping @florent
  • mirror backup: temporarily disabled

    0 Votes
    3 Posts
    91 Views
    olivierlambertO
    Ping @florent
  • Backup from replicas possible?

    0 Votes
    21 Posts
    1k Views
    florentF
@flakpyro For now there is no tag selector, but you can now select the list of VMs to be replicated.
  • Backup or snap shot for XCP-NG with USB passthrough

    0 Votes
    2 Posts
    105 Views
    olivierlambertO
Hi, for reference, it's in our "Product management" backlog under ref PRODU-69. Now we need to find the time to prioritize this and investigate it first (how much effort is needed).
  • Feature Request - Next scheduled run

    feature request
    0 Votes
    1 Post
    54 Views
    No one has replied
  • Moving management network to another adapter and backups now fail

    0 Votes
    12 Posts
    383 Views
    nikadeN
@syscon-chrisl OK, that's another story, totally weird. It should totally work if the networking is correct; we've done this at work many times (migrating from a 1G to a 10G NIC, for example).
  • XO backup, is this mess made with intention?

    -1 Votes
    9 Posts
    309 Views
    K
    @Prilly Looks like @lawrencesystems has a good explainer video on YouTube at https://youtu.be/weVoKm8kDb4?si=QH93rOKaXLglrIbS Check it out when you get a chance, thanks.
  • Schedule backup for 1st Sunday of the month

    0 Votes
    2 Posts
    105 Views
    M
    Found this... https://xcp-ng.org/forum/topic/3553/creating-schedules-to-run-on-specific-days
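The trick discussed in the linked thread boils down to a simple rule: the first Sunday of any month always falls on day 1 through 7. A minimal sketch of that logic (a hypothetical helper, not XO's scheduler):

```javascript
// Hypothetical helper, not part of XO: a date is the first Sunday of its
// month exactly when it is a Sunday (getDay() === 0) and its day-of-month
// is at most 7.
function isFirstSunday(date) {
  return date.getDay() === 0 && date.getDate() <= 7;
}

// A daily schedule could run this check and skip non-matching days:
console.log(isFirstSunday(new Date(2024, 10, 3)));  // Sun 3 Nov 2024 -> true
console.log(isFirstSunday(new Date(2024, 10, 10))); // second Sunday -> false
```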
  • Conflicting backup schedules

    0 Votes
    1 Post
    54 Views
    No one has replied
  • How can I duplicate backup settings to a different XO instance? Should I?

    0 Votes
    4 Posts
    146 Views
    A
@CodeMercenary I use the paid-for XOA and it backs up its config to the cloud, so it wouldn't be a problem for me to restore. You can easily set something like that up by backing up to a storage location and then having that back up to a cloud storage location. As far as having multiple XO instances running, I'm not sure how that would work. I know that with NBD storage you should be able to connect to the drive with multiple XOs (I think), but the problem I ran into quickly was that when I made a backup using my XOA, the source XO would see it as a backup without an associated job. If you had three XOs running with three different remote storage locations, I don't think you'd have a problem with that; instead you might run into the XOs seeing the three different backup jobs running, two of which they don't know about. I use XOA to back up to a different storage array, then I have that storage back up to the cloud. My backups aren't really up to "standard", but that's all I can do for now. My plan is to make backups to a different storage pool on the same central storage I run my VMs on, back that whole unit up to an off-site unit with ZFS replication, then back that backup up to the cloud with StorJ. Right now I've just got the one backup on the same storage server and the StorJ copy.
  • Immutable S3 Backups (Backblaze) and Merging; A Little Confused

    0 Votes
    5 Posts
    293 Views
    planedropP
@florent Finally getting back to this post; I know it's been months, I just haven't had a lot of time, sorry! I think I am still a bit confused, but I will do some additional testing to see if I can confirm my suspicions. My confusion is this: you can't merge the deltas into the key if the key is locked behind Object Lock, since the file isn't writable, so you can't do the merge operation, right? That being said, it sounds to me like the retention of the object lock/immutability needs to be set shorter than the retention period in XOA, right? That way the original key is no longer immutable and can be written to when the merge happens. Or does XOA just "wait" until the key isn't locked and then do the merge operation?
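The constraint the poster is circling can be stated as a simple invariant. A hedged sketch of that assumption (not XOA code, just the logic being asked about): a merge rewrites the oldest key, so it can only proceed once that key's Object Lock retention has expired, which is why the lock period would need to be shorter than the backup retention window.

```javascript
// Assumed semantics, not XOA code: the oldest delta can only be merged
// into the base key once that key's Object Lock retention has expired.
function canMergeOldestBackup(lockDays, retentionDays) {
  // A backup becomes eligible for merging after `retentionDays` days;
  // the key becomes writable again after `lockDays` days. The merge can
  // run on time iff the lock expires no later than the retention window.
  return lockDays <= retentionDays;
}

console.log(canMergeOldestBackup(14, 30)); // lock expires before the merge is due
console.log(canMergeOldestBackup(60, 30)); // key would still be immutable at merge time
```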
  • New GFS strategy

    0 Votes
    6 Posts
    253 Views
    M
@florent Looking forward to the docs. Just this one thing (I guess most would be interested), using my example of keeping 7 daily backups: I presume it keeps the last backup from each day (as there are 2 schedules, I presume it would keep the backup from 22:00).
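The retention behaviour the poster presumes (two schedules per day, with the daily bucket keeping only the later run) can be sketched as follows. This illustrates that assumption only, not XO's actual GFS implementation:

```javascript
// Illustration of the presumed behaviour: group ISO timestamps by
// calendar day and keep only the latest backup of each day.
function keepLastPerDay(timestamps) {
  const byDay = new Map();
  for (const t of timestamps) {
    const day = t.slice(0, 10); // "YYYY-MM-DD" prefix
    if (!byDay.has(day) || t > byDay.get(day)) byDay.set(day, t);
  }
  return [...byDay.values()].sort();
}

// Two schedules (10:00 and 22:00): only the 22:00 run survives per day.
console.log(keepLastPerDay([
  "2024-11-01T10:00:00Z", "2024-11-01T22:00:00Z",
  "2024-11-02T10:00:00Z", "2024-11-02T22:00:00Z",
]));
// -> [ '2024-11-01T22:00:00Z', '2024-11-02T22:00:00Z' ]
```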
  • Designing a backup strategy

    0 Votes
    4 Posts
    183 Views
    M
@florent What are your thoughts on my last three questions, please?
  • Advice on backup for windows fileservers

    0 Votes
    7 Posts
    234 Views
    R
@DustinB Thanks all, the agent-based backups seem to work the way we need. We removed the older Alike backup appliances, so we're fully on XOA backup now. I saw Alike is going open source, but there's almost no activity on the GitHub account; I expect the product will be discontinued as it is already pretty outdated. A nice chance for XOA to fill the gap.
  • The "paths[1]" argument must be of type string. Received undefined

    0 Votes
    7 Posts
    316 Views
    Tristis OrisT
    @stephane-m-dev that happens again for 1 vm. { "data": { "type": "VM", "id": "316e7303-c9c9-9bb6-04ef-83948ee1b19e", "name_label": "name" }, "id": "1732299284886", "message": "backup VM", "start": 1732299284886, "status": "failure", "tasks": [ { "id": "1732299284997", "message": "clean-vm", "start": 1732299284997, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299341663, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 1)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299285125", "message": "clean-vm", "start": 1732299285125, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299343111, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 3)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299343953", "message": "snapshot", "start": 1732299343953, "status": "success", "end": 1732299346495, "result": "ee646d05-83b2-31d8-e54b-0d3b0cf7df1d" }, { "data": { "id": "4b6d24a3-0b1e-48d5-aac2-a06e3a8ee485", "isFull": false, "type": "remote" }, "id": "1732299346495:0", "message": "export", "start": 1732299346495, "status": "success", "tasks": [ { "id": "1732299353253", "message": "transfer", "start": 1732299353253, "status": "success", "end": 1732299450434, "result": { "size": 9674571776 } }, { "id": "1732299501828:0", "message": "clean-vm", "start": 1732299501828, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", 
"child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup 
are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299518747, "result": { "merge": false } } ], "end": 1732299518760 }, { "data": { "id": "8da40b08-636f-450d-af15-3264b9692e1f", "isFull": false, "type": "remote" }, "id": "1732299346496", "message": "export", "start": 1732299346496, "status": "success", "tasks": [ { "id": "1732299353244", "message": "transfer", "start": 1732299353244, "status": "success", "end": 1732299450546, "result": { "size": 9674571776 } }, { "id": "1732299451765", "message": "clean-vm", "start": 1732299451765, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": 
"/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299501791, "result": { "merge": 
false } } ], "end": 1732299501828 } ], "infos": [ { "message": "Transfer data using NBD" } ], "end": 1732299518760 } ], "end": 1732299518761 }
  • MAP_DUPLICATE_KEY error in XOA backup - VM's wont START now!

    Solved
    0 Votes
    28 Posts
    6k Views
    J
I have what is hopefully a final update to this issue. We upgraded to XOA version .99 a few weeks ago and the problem has now gone away. We suspect that some changes were made to timeouts in XOA that have resolved this, and a few other related problems.
  • Replication retention & max chain size

    0 Votes
    9 Posts
    390 Views
    M
@Andrew Got it. As we only have a single production server there is no shared storage, so I guess the idea of a pool is moot.
  • Manually delete S3 Backblaze backups?

    0 Votes
    3 Posts
    158 Views
    M
@olivierlambert Thanks! It seems obvious now that I've clicked on it, but I didn't know I could get to them through there.
  • Mirror Backup Failing On Excluded VM

    0 Votes
    2 Posts
    51 Views
    olivierlambertO
    Ping @florent
  • backup deleting manual snapshots

    0 Votes
    6 Posts
    215 Views
    M
@olivierlambert If the issue turns up again... It was a weird one! As I deleted the previous backup jobs I can't reproduce it; I just had to move on. Perhaps someone else will stumble upon this and find this thread. I will report back if it happens again (I'll try it with the new backup).