• Designing a backup strategy

    0 Votes · 4 Posts · 262 Views
    @florent What are your thoughts on my last three questions, please?
  • Advice on backup for windows fileservers

    0 Votes · 7 Posts · 427 Views
    @DustinB Thanks all. The agent-based backups seem to work the way we need; we removed the older Alike backup appliances, so we're fully on XOA backup now. I saw Alike is going open source, but there's almost no activity on the GitHub account, so I expect the product will be discontinued as it is already pretty outdated. A nice chance for XOA to fill the gap.
  • The "paths[1]" argument must be of type string. Received undefined

    0 Votes · 7 Posts · 609 Views · Tristis Oris
    @stephane-m-dev that happens again for 1 vm. { "data": { "type": "VM", "id": "316e7303-c9c9-9bb6-04ef-83948ee1b19e", "name_label": "name" }, "id": "1732299284886", "message": "backup VM", "start": 1732299284886, "status": "failure", "tasks": [ { "id": "1732299284997", "message": "clean-vm", "start": 1732299284997, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299341663, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 1)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299285125", "message": "clean-vm", "start": 1732299285125, "status": "failure", "warnings": [ { "data": { "path": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "error": { "generatedMessage": true, "code": "ERR_ASSERTION", "actual": false, "expected": true, "operator": "==" } }, "message": "VHD check error" }, { "data": { "alias": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd" }, "message": "missing target of alias" } ], "end": 1732299343111, "result": { "code": "ERR_INVALID_ARG_TYPE", "message": "The \"paths[1]\" argument must be of type string. Received undefined", "name": "TypeError", "stack": "TypeError [ERR_INVALID_ARG_TYPE]: The \"paths[1]\" argument must be of type string. 
Received undefined\n at resolve (node:path:1169:7)\n at normalize (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/path.js:21:27)\n at NfsHandler.__unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/fs/dist/abstract.js:412:32)\n at NfsHandler.unlink (/opt/xo/xo-builds/xen-orchestra-202411191133/node_modules/limit-concurrency-decorator/index.js:97:24)\n at checkAliases (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:132:25)\n at async Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:284:5)\n at async Promise.all (index 3)\n at async RemoteAdapter.cleanVm (file:///opt/xo/xo-builds/xen-orchestra-202411191133/@xen-orchestra/backups/_cleanVm.mjs:283:3)" } }, { "id": "1732299343953", "message": "snapshot", "start": 1732299343953, "status": "success", "end": 1732299346495, "result": "ee646d05-83b2-31d8-e54b-0d3b0cf7df1d" }, { "data": { "id": "4b6d24a3-0b1e-48d5-aac2-a06e3a8ee485", "isFull": false, "type": "remote" }, "id": "1732299346495:0", "message": "export", "start": 1732299346495, "status": "success", "tasks": [ { "id": "1732299353253", "message": "transfer", "start": 1732299353253, "status": "success", "end": 1732299450434, "result": { "size": 9674571776 } }, { "id": "1732299501828:0", "message": "clean-vm", "start": 1732299501828, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", 
"child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup 
are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299518747, "result": { "merge": false } } ], "end": 1732299518760 }, { "data": { "id": "8da40b08-636f-450d-af15-3264b9692e1f", "isFull": false, "type": "remote" }, "id": "1732299346496", "message": "export", "start": 1732299346496, "status": "success", "tasks": [ { "id": "1732299353244", "message": "transfer", "start": 1732299353244, "status": "success", "end": 1732299450546, "result": { "size": 9674571776 } }, { "id": "1732299451765", "message": "clean-vm", "start": 1732299451765, "status": "success", "warnings": [ { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241101T181520Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd", "child": 
"/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "parent": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd", "child": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" }, "message": "parent VHD is missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241103T180648Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241103T180648Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241104T180802Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241104T180802Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241102T180758Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241102T180758Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" }, { "data": { "backup": "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/20241105T181019Z.json", "missingVhds": [ "/xo-vm-backups/316e7303-c9c9-9bb6-04ef-83948ee1b19e/vdis/90d0b5ca-9364-4011-adc4-b8c74a534da9/53843891-126f-4f0c-b645-8e8aa0a41b36/20241105T181019Z.alias.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1732299501791, "result": { "merge": 
false } } ], "end": 1732299501828 } ], "infos": [ { "message": "Transfer data using NBD" } ], "end": 1732299518760 } ], "end": 1732299518761 }
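    For reference, the TypeError in the log above comes from Node's path.resolve() validating its arguments: when the target of an alias cannot be read, an undefined value ends up being passed as the second path segment. A minimal sketch of the failure mode (the helper name is hypothetical, not XO's actual code):

    ```javascript
    // Node's path.resolve() throws ERR_INVALID_ARG_TYPE when any
    // argument is not a string — "paths[1]" refers to the second argument.
    const path = require('node:path');

    // Hypothetical helper: joins a backup directory with an alias target.
    function normalizeBackupPath(dir, target) {
      // If `target` is undefined (the "missing target of alias" case),
      // this throws: The "paths[1]" argument must be of type string.
      return path.resolve(dir, target);
    }

    try {
      normalizeBackupPath('/xo-vm-backups', undefined);
    } catch (err) {
      console.log(err.code); // "ERR_INVALID_ARG_TYPE"
    }
    ```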
  • Replication retention & max chain size

    0 Votes · 9 Posts · 726 Views
    @Andrew Got it. As we only have a single production server, there is no shared storage, so I guess the idea of a pool is moot.
  • Manually delete S3 Backblaze backups?

    0 Votes · 3 Posts · 277 Views
    @olivierlambert Thanks! It seems obvious now that I've clicked on it, but I didn't know I could get to them through there.
  • Mirror Backup Failing On Excluded VM

    0 Votes · 2 Posts · 122 Views · olivierlambert
    Ping @florent
  • backup deleting manual snapshots

    0 Votes · 6 Posts · 384 Views
    @olivierlambert If the issue turns up again... It was a weird one! As I deleted the previous backup jobs, I can't reproduce it; I just had to move on. Perhaps someone else stumbles upon this and finds this thread. I will report if it happens again (will try it with the new backup).
  • VM RAM & CPU allocation for health check on DR host

    0 Votes · 5 Posts · 403 Views
    @McHenry Look at this post: VM limits - CPU Limits
  • Backup VM with active USB Dongle

    0 Votes · 2 Posts · 190 Views · Forza
    @msupport We use AnywhereUSB for this reason. We haven't tested the performance with USB disks, but it works well for all our license keys.
  • xcp-ng host RAM

    0 Votes · 5 Posts · 468 Views
    @Andrew Got it. So the DR host may have fewer resources than the production host, which would work for a failure of one or two VMs but not for a total failure of the production host, whereby all VMs need to run on the DR host. Is RAM the only consideration between hosts? Meaning, if my production host has 16 vCPUs and my DR host only has 8 vCPUs, can I still set the VM to use 16 vCPUs and have it work on both hosts?
  • 0 Votes · 12 Posts · 1k Views
    @CodeMercenary I have seen this as well. One of our backup repos was almost out of space, which decreased the backup speed; we suspected issues on the XOA side, but it was just the repo itself. So it is good to check both. The 1 GB part makes sense.
  • MESSAGE_METHOD_UNKNOWN(VDI.get_cbt_enabled)

    Solved · 0 Votes · 11 Posts · 748 Views · olivierlambert
    Excellent! Happy to see our fix solved your problem and improved our compatibility with older XenServer versions. Don't forget to upgrade; the path to XCP-ng is relatively straightforward.
  • Health Check - No Hosts Available

    0 Votes · 11 Posts · 907 Views · stephane-m-dev
    Is it a PV, HVM, or PVHVM VM? Health Check can't work on HVM because there are no guest tools.
  • 0 Votes · 5 Posts · 418 Views · julien-f
    @cirosantos0 Indeed, our detection is wrong in this case. I will put this in our backlog, but don't expect it to be fixed soon. PRs are welcome, though.
  • Backups (Config & VMs) Fail Following Updates

    0 Votes · 7 Posts · 1k Views · DustyArmstrong
    An update, if anyone ever comes across this via a search engine: it turned out to be my container's timezone. The image was set to pure UTC, no timezone, by default, so I believe that when it wrote files to my network storage it introduced a discrepancy. My network share recorded the file metadata accurately in real time, so I assume that when it came time to do another backup, the file time XO expected was different, making it think the backup was "stale" or still being "held". I have now run both scheduled metadata and VM backups without any errors. In summary: make sure your time, date, and timezone are set correctly!
  • 0 Votes · 3 Posts · 250 Views · julien-f
    @florent It was working before; this likely broke when the handling of snapshots was moved from VMs to VDIs. We need to fix this.
  • 0 Votes · 3 Posts · 253 Views · julien-f
    @probain Noted in backlog
  • 0 Votes · 3 Posts · 268 Views · olivierlambert
    Hi, that's because our doc isn't fully up to date with the new naming (even XO in the app itself). See this recap: [image: Schema-new-wording-backup-2.png] More details here: https://xen-orchestra.com/blog/xen-orchestra-5-83/#backup-nomenclature
  • Introduce old version / snapshot of remote to XOA

    0 Votes · 5 Posts · 331 Views · olivierlambert
    I don't think it's a good approach; BRs weren't meant for that. If you want to send them offsite, mirror backup is the best option. As for immutability, we have an external program doing it if you want.
  • NFS Remote encryption problem

    0 Votes · 32 Posts · 7k Views
    @stephane-m-dev Understood. Regarding the different behavior on Unraid (does not work with a disk share, does work with a user share), my suggestion would be to put this in a "tip" in the documentation section about setting up remotes.