XCP-ng

    VM must be a snapshot

    • andyh

      We are using XO from sources (latest) against a pool of two XCP-ng 8.2.1 hosts (latest).

      We have a VM that appears to be failing its delta backup to an offsite S3 target with the error 'VM must be a snapshot'. We checked both the backup and main pool health; no warnings are showing for snapshots or VDIs.

      We did try removing the existing snapshot and migrating the VM between hosts, but the error reappears.

      Log file:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1724150430187",
        "jobId": "5e0a3dca-5c7f-4459-9b57-6af74a01d812",
        "jobName": "Production every other day",
        "message": "backup",
        "scheduleId": "1f6ca79c-0e6e-460e-bad5-5e984ec28ef5",
        "start": 1724150430187,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "45df57b9-e2d9-ed2f-e467-345bd6f10296"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "45df57b9-e2d9-ed2f-e467-345bd6f10296",
              "name_label": "server"
            },
            "id": "1724150431672",
            "message": "backup VM",
            "start": 1724150431672,
            "status": "failure",
            "tasks": [
              {
                "id": "1724150431700",
                "message": "clean-vm",
                "start": 1724150431700,
                "status": "success",
                "end": 1724150433318,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1724150434097",
                "message": "clean-vm",
                "start": 1724150434097,
                "status": "success",
                "end": 1724150435469,
                "result": {
                  "merge": false
                }
              }
            ],
            "end": 1724150435471,
            "result": {
              "generatedMessage": false,
              "code": "ERR_ASSERTION",
              "actual": false,
              "expected": true,
              "operator": "strictEqual",
              "message": "VM must be a snapshot",
              "name": "AssertionError",
              "stack": "AssertionError [ERR_ASSERTION]: VM must be a snapshot\n    at Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:245:20)\n    at Function.from (<anonymous>)\n    at asyncMap (/opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/async-map/index.js:23:28)\n    at Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:233:13)\n    at Function.from (<anonymous>)\n    at asyncMap (/opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/async-map/index.js:23:28)\n    at IncrementalXapiVmBackupRunner._removeUnusedSnapshots (file:///opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:219:11)\n    at IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:354:16)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202408191446/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
            }
          }
        ],
        "end": 1724150435472
      }
      
      • olivierlambert (Vates 🪐 Co-Founder CEO)

        Thanks for the report, are you on the latest commit on master?

        • andyh @olivierlambert

          @olivierlambert Just checked: we are one commit behind on master.

          I will look to update now.

          • olivierlambert (Vates 🪐 Co-Founder CEO)

            @julien-f could it be a recent bug on master?

            • andyh

              With the latest commit on master (72592), here is the resulting complete backup log:

              {
                "data": {
                  "mode": "delta",
                  "reportWhen": "failure"
                },
                "id": "1724152335898",
                "jobId": "5e0a3dca-5c7f-4459-9b57-6af74a01d812",
                "jobName": "Production every other day",
                "message": "backup",
                "scheduleId": "1f6ca79c-0e6e-460e-bad5-5e984ec28ef5",
                "start": 1724152335898,
                "status": "failure",
                "infos": [
                  {
                    "data": {
                      "vms": [
                        "45df57b9-e2d9-ed2f-e467-345bd6f10296"
                      ]
                    },
                    "message": "vms"
                  }
                ],
                "tasks": [
                  {
                    "data": {
                      "type": "VM",
                      "id": "45df57b9-e2d9-ed2f-e467-345bd6f10296",
                      "name_label": "sv-uts"
                    },
                    "id": "1724152337383",
                    "message": "backup VM",
                    "start": 1724152337383,
                    "status": "failure",
                    "tasks": [
                      {
                        "id": "1724152337409",
                        "message": "clean-vm",
                        "start": 1724152337409,
                        "status": "success",
                        "end": 1724152339005,
                        "result": {
                          "merge": false
                        }
                      },
                      {
                        "id": "1724152339972",
                        "message": "snapshot",
                        "start": 1724152339972,
                        "status": "success",
                        "end": 1724152344659,
                        "result": "3d4f7f45-753f-6f58-464a-53dc1d63c054"
                      },
                      {
                        "data": {
                          "id": "179129c4-10c2-40f8-ba64-534ff4dc7da4",
                          "isFull": true,
                          "type": "remote"
                        },
                        "id": "1724152344660",
                        "message": "export",
                        "start": 1724152344660,
                        "status": "success",
                        "tasks": [
                          {
                            "id": "1724152350897",
                            "message": "transfer",
                            "start": 1724152350897,
                            "status": "success",
                            "end": 1724158256392,
                            "result": {
                              "size": 80103683584
                            }
                          },
                          {
                            "id": "1724158257369",
                            "message": "clean-vm",
                            "start": 1724158257369,
                            "status": "success",
                            "end": 1724158259162,
                            "result": {
                              "merge": false
                            }
                          }
                        ],
                        "end": 1724158259163
                      }
                    ],
                    "end": 1724158259163,
                    "result": {
                      "generatedMessage": false,
                      "code": "ERR_ASSERTION",
                      "actual": false,
                      "expected": true,
                      "operator": "strictEqual",
                      "message": "VM must be a snapshot",
                      "name": "AssertionError",
                      "stack": "AssertionError [ERR_ASSERTION]: VM must be a snapshot\n    at Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:245:20)\n    at Function.from (<anonymous>)\n    at asyncMap (/opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/async-map/index.js:23:28)\n    at Array.<anonymous> (file:///opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:233:13)\n    at Function.from (<anonymous>)\n    at asyncMap (/opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/async-map/index.js:23:28)\n    at IncrementalXapiVmBackupRunner._removeUnusedSnapshots (file:///opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:219:11)\n    at IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:384:18)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202408201207/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                    }
                  }
                ],
                "end": 1724158259164
              }
              
              • andyh @andyh

                The problem still existed on the latest commit as of earlier today.

                I removed the VM from the original (Smart Mode) backup job and cleaned up any VDIs and detached backups. As a test, I created a separate backup job for the VM in question pointing to the same target; this backup was successful.

                Hopefully I've worked around the issue. I will try adding the VM back into a Smart Mode job in the coming days.
