XCP-ng

    CBT: the thread to centralize your feedback

    Backup · 439 Posts · 37 Posters · 386.6k Views
    • manilx @manilx

      @manilx P.S.: These mods do not survive a host update, right?

      • rtjdamen @manilx

        @manilx Nope, but I have talked with a dev about it and they are looking to make it a setting somewhere; I don't know the status of that. Good to see this works for you!

        • Andrew @manilx

          @manilx I have not tested that, but I would say that's correct. Upgrades are rather destructive for custom changes to system scripts and custom settings. This is to ensure that scripts and settings are set to standard known good values on install or upgrade.

          I keep notes on my custom settings/scripts/configs so I can check them after an upgrade or a new install.
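One lightweight way to keep such notes machine-checkable (a sketch, not an official XCP-ng mechanism; the tracked file paths are examples only) is a checksum manifest of the files you customized, verified after each upgrade:

```shell
#!/bin/sh
# Sketch: record checksums of hand-edited files so an upgrade that resets
# them is noticed immediately. File list and manifest path are examples.

snapshot_checksums() {  # $1 = manifest file, remaining args = files to track
    manifest="$1"; shift
    md5sum "$@" > "$manifest"
}

verify_checksums() {    # $1 = manifest file; non-zero exit means a file changed
    md5sum -c --quiet "$1"
}

# Typical use on the host:
#   snapshot_checksums /root/custom.md5 /etc/multipath.conf /etc/iscsi/iscsid.conf
#   ... run the upgrade ...
#   verify_checksums /root/custom.md5 || echo "re-apply your customizations"
```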

          • manilx @Andrew

            CR failed on all VMs with:
            ScreenShot 2024-07-28 at 10.23.29.png

            Next one was ok.

            This happens sometimes, it's not consistent.

            {
              "data": {
                "mode": "delta",
                "reportWhen": "failure"
              },
              "id": "1722153616340",
              "jobId": "4c084697-6efd-4e35-a4ff-74ae50824c8b",
              "jobName": "CR",
              "message": "backup",
              "scheduleId": "b1cef1e3-e313-409b-ad40-017076f115ce",
              "start": 1722153616340,
              "status": "failure",
              "infos": [
                {
                  "data": {
                    "vms": [
                      "52e64134-62e3-9682-4e3f-296a1198db4d",
                      "43a4d905-7d13-85b8-bed3-f6b805ff26ac",
                      "b5d74e0b-388c-019a-6994-e174c9ca7a51",
                      "d6a5d420-72e6-5c87-a3af-b5eb5c4a44dd",
                      "131ee7f6-4d58-31d9-39a8-53727cc3dc68"
                    ]
                  },
                  "message": "vms"
                }
              ],
              "tasks": [
                {
                  "data": {
                    "type": "VM",
                    "id": "52e64134-62e3-9682-4e3f-296a1198db4d",
                    "name_label": "XO"
                  },
                  "id": "1722153619552",
                  "message": "backup VM",
                  "start": 1722153619552,
                  "status": "failure",
                  "tasks": [
                    {
                      "id": "1722153619599",
                      "message": "snapshot",
                      "start": 1722153619599,
                      "status": "success",
                      "end": 1722153622486,
                      "result": "6b6036ae-708e-4cb0-2681-12165ba19919"
                    },
                    {
                      "data": {
                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                        "isFull": false,
                        "name_label": "TBS-h574TX",
                        "type": "SR"
                      },
                      "id": "1722153622486:0",
                      "message": "export",
                      "start": 1722153622486,
                      "status": "interrupted"
                    }
                  ],
                  "infos": [
                    {
                      "message": "will delete snapshot data"
                    },
                    {
                      "data": {
                        "vdiRef": "OpaqueRef:f35bea93-45b3-f4bd-2752-3853850ff73a"
                      },
                      "message": "Snapshot data has been deleted"
                    }
                  ],
                  "end": 1722153637891,
                  "result": {
                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                    "name": "Error",
                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                  }
                },
                {
                  "data": {
                    "type": "VM",
                    "id": "43a4d905-7d13-85b8-bed3-f6b805ff26ac",
                    "name_label": "Bitwarden"
                  },
                  "id": "1722153619556",
                  "message": "backup VM",
                  "start": 1722153619556,
                  "status": "failure",
                  "tasks": [
                    {
                      "id": "1722153619603",
                      "message": "snapshot",
                      "start": 1722153619603,
                      "status": "success",
                      "end": 1722153624616,
                      "result": "58f1ac5b-7de0-8276-3872-b2a7d5a26ec2"
                    },
                    {
                      "data": {
                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                        "isFull": false,
                        "name_label": "TBS-h574TX",
                        "type": "SR"
                      },
                      "id": "1722153624616:0",
                      "message": "export",
                      "start": 1722153624616,
                      "status": "interrupted"
                    }
                  ],
                  "infos": [
                    {
                      "message": "will delete snapshot data"
                    },
                    {
                      "data": {
                        "vdiRef": "OpaqueRef:81a61f30-99a0-25bc-35ec-25cadb323a09"
                      },
                      "message": "Snapshot data has been deleted"
                    }
                  ],
                  "end": 1722153655152,
                  "result": {
                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                    "name": "Error",
                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                  }
                },
                {
                  "data": {
                    "type": "VM",
                    "id": "b5d74e0b-388c-019a-6994-e174c9ca7a51",
                    "name_label": "Docker Server"
                  },
                  "id": "1722153637896",
                  "message": "backup VM",
                  "start": 1722153637896,
                  "status": "failure",
                  "tasks": [
                    {
                      "id": "1722153637925",
                      "message": "snapshot",
                      "start": 1722153637925,
                      "status": "success",
                      "end": 1722153639557,
                      "result": "d820e4ad-462f-7043-4f1f-ee21ed986e8d"
                    },
                    {
                      "data": {
                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                        "isFull": false,
                        "name_label": "TBS-h574TX",
                        "type": "SR"
                      },
                      "id": "1722153639558",
                      "message": "export",
                      "start": 1722153639558,
                      "status": "interrupted"
                    }
                  ],
                  "infos": [
                    {
                      "message": "will delete snapshot data"
                    },
                    {
                      "data": {
                        "vdiRef": "OpaqueRef:5f21b5e1-8423-3bbc-7361-6319bb25e97d"
                      },
                      "message": "Snapshot data has been deleted"
                    }
                  ],
                  "end": 1722153675901,
                  "result": {
                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                    "name": "Error",
                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                  }
                },
                {
                  "data": {
                    "type": "VM",
                    "id": "d6a5d420-72e6-5c87-a3af-b5eb5c4a44dd",
                    "name_label": "Media Server"
                  },
                  "id": "1722153655156",
                  "message": "backup VM",
                  "start": 1722153655156,
                  "status": "failure",
                  "tasks": [
                    {
                      "id": "1722153655188",
                      "message": "snapshot",
                      "start": 1722153655188,
                      "status": "success",
                      "end": 1722153656817,
                      "result": "6c572cec-ed70-d994-a88b-bc6066c06b0b"
                    },
                    {
                      "data": {
                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                        "isFull": false,
                        "name_label": "TBS-h574TX",
                        "type": "SR"
                      },
                      "id": "1722153656818",
                      "message": "export",
                      "start": 1722153656818,
                      "status": "interrupted"
                    }
                  ],
                  "infos": [
                    {
                      "message": "will delete snapshot data"
                    },
                    {
                      "data": {
                        "vdiRef": "OpaqueRef:c755b6ed-5d00-397c-62a8-db643c3fbdcd"
                      },
                      "message": "Snapshot data has been deleted"
                    }
                  ],
                  "end": 1722153660309,
                  "result": {
                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                    "name": "Error",
                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                  }
                },
                {
                  "data": {
                    "type": "VM",
                    "id": "131ee7f6-4d58-31d9-39a8-53727cc3dc68",
                    "name_label": "Unifi"
                  },
                  "id": "1722153660312",
                  "message": "backup VM",
                  "start": 1722153660312,
                  "status": "failure",
                  "tasks": [
                    {
                      "id": "1722153660341",
                      "message": "snapshot",
                      "start": 1722153660341,
                      "status": "success",
                      "end": 1722153662203,
                      "result": "b0dac528-8914-141b-5d37-8b68bdeb7fe0"
                    },
                    {
                      "data": {
                        "id": "0d9ee24c-ea59-e0e6-8c04-a9a65c22f110",
                        "isFull": false,
                        "name_label": "TBS-h574TX",
                        "type": "SR"
                      },
                      "id": "1722153662204",
                      "message": "export",
                      "start": 1722153662204,
                      "status": "interrupted"
                    }
                  ],
                  "infos": [
                    {
                      "message": "will delete snapshot data"
                    },
                    {
                      "data": {
                        "vdiRef": "OpaqueRef:595f2f1f-ec64-1d43-b2de-574fcd621576"
                      },
                      "message": "Snapshot data has been deleted"
                    }
                  ],
                  "end": 1722153669757,
                  "result": {
                    "message": "can't create a stream from a metadata VDI, fall back to a base ",
                    "name": "Error",
                    "stack": "Error: can't create a stream from a metadata VDI, fall back to a base \n    at Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/xapi/vdi.mjs:202:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:57:32\n    at async Promise.all (index 0)\n    at async cancelableMap (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_cancelableMap.mjs:11:12)\n    at async exportIncrementalVm (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_incrementalVm.mjs:26:3)\n    at async IncrementalXapiVmBackupRunner._copy (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:44:25)\n    at async IncrementalXapiVmBackupRunner.run (file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:369:9)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202407261701/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38"
                  }
                }
              ],
              "end": 1722153675901
            }
            
            • rtjdamen @manilx

              @manilx To me it sounds like a problem @florent showed me and is working on: the snapshot data is deleted before the backup is started or finished. Not sure if this is resolved in this week's update.

              • olivierlambert (Vates, Co-Founder & CEO)

                Yes, a fix is coming in XOA latest tomorrow 🙂

                • rtjdamen @olivierlambert

                  @olivierlambert Unfortunately not!
                  Got error: "can't create a stream from a metadata VDI, fall back to a base"

                  • rtjdamen @olivierlambert

                    @olivierlambert I updated yesterday to the latest version; during the night our backups ran, but still with some errors.

                    I did not see the stream error; however, the same behavior seems to be occurring as we saw with the stream error, but now with the error "can't create a stream from a metadata VDI".
                    Some of these also have a hanging export job in XOA:
                    4eb8368a-57f8-4ba3-9060-9327b5a5ffa6-image.png

                    fe0a22bf-c898-4f7e-90c0-dec47e934c07-image.png

                    • DG @olivierlambert

                      @olivierlambert I updated today to commit cb6cf and also got this error, but only once in multiple backups.

                      Both servers run version 8.2.1 with the latest updates.

                      6be093a9-14ef-4344-98a3-fc2dcb3fad3d-image.png

                      • Vinylrider

                        We upgraded from commit f2188 to cb6cf and have 3 hosts. After the upgrade, backups no longer worked on any of these hosts; when reverting back to f2188 everything works again.
                        We also deleted orphaned VDIs and let the garbage collector do its job, but it did not help.

                        Hosts/errors:
                        1.) XCP-ng 8.2.1: "VDI must be free or attached to exactly one VM"
                        2.) XCP-ng 8.2.1 (with latest updates): "VDI must be free or attached to exactly one VM"
                        3.) XenServer 7.1.0: "MESSAGE_METHOD_UNKNOWN(VDI.get_cbt_enabled)"

                        On the XenServer 7.1.0 host, CBT is not enabled (and cannot be enabled).
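The MESSAGE_METHOD_UNKNOWN error fits: VDI.get_cbt_enabled simply does not exist in xapi versions that predate CBT (it arrived around XenServer 7.3, as far as I know), so older hosts reject the call outright. A hedged sketch of probing for support before pointing a CBT-based backup job at a pool (the VDI UUID is a placeholder):

```shell
#!/bin/sh
# Sketch: probe whether the pool's xapi knows about CBT at all. On hosts whose
# xapi predates CBT, VDI.get_cbt_enabled does not exist and the call fails
# with MESSAGE_METHOD_UNKNOWN. Pass any VDI UUID from the pool.

cbt_supported() {  # $1 = a VDI uuid
    if xe vdi-param-get uuid="$1" param-name=cbt-enabled >/dev/null 2>&1; then
        echo "CBT supported"
    else
        echo "CBT not supported (or call failed)"
    fi
}
```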

                        • DG @Vinylrider

                          @Vinylrider @olivierlambert I found that only backups on the host with the latest updates eventually run into problems.

                          The following patches (with their respective versions) were not applied to the other 2 hosts:

                          xapi-core 1.249.36
                          xapi-tests 1.249.36
                          xapi-xe 1.249.36
                          xen-dom0-libs 4.13.5
                          xen-dom0-tools 4.13.5
                          xen-hypervisor 4.13.5
                          xen-libs 4.13.5
                          xen-tools 4.13.5
                          xsconsole 10.1.13

                          • Andrew @olivierlambert

                            @olivierlambert Running XO from source, master (commit d0bd6): a Delta backup to S3 is looking for an offline host in the pool, so the backup fails. That host had been evacuated, was in maintenance mode, and was being rebooted by XO. The VM being backed up was running on a different host, and the pool master was not the offline host. There are several other running hosts in the pool.

                            There's no reason XO/XCP should be doing anything with this host...

                            "error": "HOST_OFFLINE(OpaqueRef:65b7a047-094b-4c7a-a503-2823e92b9fe4)"

                            • flakpyro @Andrew

                              With the latest XO update released this week, I see a new behavior when running a backup after a VM has moved from Host A to Host B (while staying on the same shared NFS SR).

                              The new error is "Error: can't create a stream from a metadata VDI, fall back to a base"; it then retries and runs a full backup.

                              • manilx @flakpyro

                                On my CR job I got this error again, on all VMs:
                                IMG_1594.jpeg
                                Next run was OK.
                                Running commit cb6cf.

                                • rtjdamen @manilx

                                  @manilx We see this error on some backups as well, though not as often as prior to this version, so it seems to have improved a bit.
                                  As a fix I tried setting retries on backups, which resolves it in most situations, but sometimes I still get this error:

                                  5cf0d961-3cdc-446a-9c06-887c919fe987-image.png

                                  Also, we still have the "VDI in use" errors now and then; vdi-data-destroy is not done in that situation, leaving a normal snapshot with CBT. Not such a big deal, as it only affects a very small number of VMs, but they show up as orphaned VDIs on the XOA Health page, which makes it a bit weird; I think they should not be visible there.
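To spot these leftovers from the CLI rather than the Health page, one approach (a sketch; it assumes xe list filtering works on the cbt-enabled field as it does on other fields) is:

```shell
#!/bin/sh
# Sketch: list snapshot VDIs that still have CBT enabled -- where leftover
# metadata-only snapshots like the ones described above would appear.

list_cbt_snapshots() {
    xe vdi-list is-a-snapshot=true cbt-enabled=true params=uuid,name-label
}
```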

                                  • jimmymiller

                                    Has anyone seen issues migrating VDIs once CBT is enabled? We're seeing VDI_CBT_ENABLED errors when we try to live-migrate disks between SRs. Disabling CBT on the disk obviously allows the migration to move forward. Users with limited access don't see the specifics of the error, but as admins we get a VDI_CBT_ENABLED error. Ideally we'd still want to be able to migrate VDIs with CBT enabled, or maybe, as part of the VDI migration process, CBT could be disabled temporarily, the disk migrated, and CBT then re-enabled?

                                    User errors:
                                    Screenshot 2024-08-07 at 17.42.07.png

                                    Admins see:

                                    {
                                      "id": "7847a7c3-24a3-4338-ab3a-0c1cdbb3a12a",
                                      "resourceSet": "q0iE-x7MpAg",
                                      "sr_id": "5d671185-66f6-a292-e344-78e5106c3987"
                                    }
                                    {
                                      "code": "VDI_CBT_ENABLED",
                                      "params": [
                                        "OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515"
                                      ],
                                      "task": {
                                        "uuid": "9860d266-d91a-9d0e-ec2a-a7752fa01a6d",
                                        "name_label": "Async.VDI.pool_migrate",
                                        "name_description": "",
                                        "allowed_operations": [],
                                        "current_operations": {},
                                        "created": "20240807T21:33:29Z",
                                        "finished": "20240807T21:33:29Z",
                                        "status": "failure",
                                        "resident_on": "OpaqueRef:8d372a96-f37c-4596-9610-1beaf26af9db",
                                        "progress": 1,
                                        "type": "<none/>",
                                        "result": "",
                                        "error_info": [
                                          "VDI_CBT_ENABLED",
                                          "OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515"
                                        ],
                                        "other_config": {},
                                        "subtask_of": "OpaqueRef:NULL",
                                        "subtasks": [],
                                        "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vdi.ml)(line 470))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4696))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 199))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 203))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 42))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 51))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4708))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4711))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1503))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4705))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
                                      },
                                      "message": "VDI_CBT_ENABLED(OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515)",
                                      "name": "XapiError",
                                      "stack": "XapiError: VDI_CBT_ENABLED(OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515)
                                        at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
                                        at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
                                        at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1033:24)
                                        at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1067:14
                                        at Array.forEach (<anonymous>)
                                        at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1057:12)
                                        at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1230:14)"
                                    }
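As a stopgap, the disable → migrate → re-enable sequence can be scripted with the xe CLI. A sketch (UUIDs are placeholders; I believe vdi-pool-migrate prints the UUID of the migrated VDI, which can differ from the original when the disk moves to another SR, but verify that on your version; note also that disabling CBT resets the change log, so the next delta backup will fall back to a full):

```shell
#!/bin/sh
# Sketch of the manual workaround: disable CBT on the VDI, live-migrate it
# to the destination SR, then re-enable CBT. Run on a pool host.

migrate_vdi_with_cbt() {  # $1 = VDI uuid, $2 = destination SR uuid
    xe vdi-disable-cbt uuid="$1" || return 1
    # The migrated VDI may get a new UUID on the destination SR; assume the
    # command prints it, and fall back to the old UUID if output is empty.
    new_uuid=$(xe vdi-pool-migrate uuid="$1" sr-uuid="$2") || return 1
    xe vdi-enable-cbt uuid="${new_uuid:-$1}"
}
```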
                                    • rtjdamen @jimmymiller

                                      @jimmymiller As part of the migration, CBT should already be disabled; that feature was created as part of the first release. However, it seems there is a bug that only disables CBT but leaves the metadata-only snapshots on the VM; I believe this is causing the issue.

                                      I think this pull request was created to solve this in the next release:
                                      https://github.com/vatesfr/xen-orchestra/pull/7903

                                      PR #7903 (open, by MelissaFrncJrg): feat(xo-web/disks): allow user to delete snapshots before migrating VDI

                                      • flakpyro @rtjdamen

                                        Testing CBT with migrations in our test environment, this is what I have observed:

                                        Host 1 and Host 2 are in a pool together with a shared NFS SR. If TestVM-01 is on Host 1 using the NFS SR with CBT backups enabled, all is fine. Clicking on the VM and then Disks shows that CBT is enabled on the drives. If I migrate the VM over to Host 2, CBT is disabled and the VM is migrated successfully. On the next backup job run, however, the job initially fails with the error "can't create a stream from a metadata VDI, fall back to a base"; after a retry the job runs.

                                        If multiple jobs exist for a VM, say a backup job and a replication job, will that result in 2 CBT snapshots? Even so, that is a ton of space saved vs. keeping 2 regular snapshots with the old backup method, and it cuts down on GC time and storage IO by quite a bit!

                                        • rtjdamen @flakpyro

                                          @flakpyro That's exactly the reason we were asking for the CBT option to become available. It's a huge difference in storage usage and in the amount of writes done to the storage. Huge improvement!

                                          • olivierlambert (Vates, Co-Founder & CEO)

                                            In theory, migrating a VM to another host (but keeping the same shared SR) shouldn't re-trigger a full. It only happens when the VDI is migrated to another SR (the VDI UUID will change and the metadata will be lost).

                                            At least, we can try to reproduce this internally (I couldn't on my prod).
