XCP-ng

    Moving VMs from one Pool to another

      phil

      Hi,

      I recently set up a new pool with XCP-ng 8 and a connected iSCSI storage. When moving VMs from my old pool (XCP-ng 7.6) to the new one, some (most) VDIs seem to get duplicated.
      On the new storage I have one VDI connected to the VM, displayed in green, and another one of the same size that is not connected to any VM and displayed in yellow or orange.

      [screenshot: storage view showing the duplicated VDIs]
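
      A query along these lines should list all VDIs on the new SR so the duplicates can be spotted (the SR UUID is a placeholder):

          # List every VDI on the SR with its name and snapshot flag
          xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,is-a-snapshot,managed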

      From XO I cannot delete the orange copy; I only get a "no such VDI" error:
      [screenshot: "no such VDI" error]

      From the command line I can forget and then delete it, but that also destroys the linked VDI (the green one 🙂), as it is then missing its "parent VDI".
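
      Roughly the sequence on the command line (assuming the standard xe CLI; UUIDs are placeholders, so use with care):

          # Make XAPI forget the VDI record (the data itself is untouched)
          xe vdi-forget uuid=<orphan-vdi-uuid>
          # Rescan the SR so the VHD reappears as an unattached VDI
          xe sr-scan uuid=<sr-uuid>
          # Destroy the VDI -- this is the step that breaks the child's parent link
          xe vdi-destroy uuid=<orphan-vdi-uuid>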

      Do you know why this happens and how I can get rid of the VDIs that are not connected? And is there a way to still use the VDIs that are now missing a parent?

      Best regards
      Philipp

        olivierlambert Vates 🪐 Co-Founder CEO

        You can't remove them manually; they'll be garbage collected automatically. You can also check for orphaned disks in the "Disks" tab of this view.
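
        If you want to see whether the garbage collector / coalesce is actually doing anything, the storage manager log on the host should show it:

            # Follow storage-manager activity (GC and coalesce runs show up here)
            tail -f /var/log/SMlog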

          phil

          Hi Olivier,

          thank you for answering.
          When should this garbage collection happen? Does it take a couple of days? When I check for orphaned disks in the Disks section, it only shows two of those VDIs.

            olivierlambert Vates 🪐 Co-Founder CEO

            You can remove orphaned VDIs yourself 🙂 Then rescan the SR and wait a bit (20 minutes should generally be enough).
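
            The rescan can also be done from the host CLI, something along these lines (names and UUIDs are placeholders):

                # Look up the SR UUID by its name
                xe sr-list name-label=<sr-name> params=uuid
                # Trigger a rescan so XAPI picks up the change
                xe sr-scan uuid=<sr-uuid>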

              phil

              That's pretty much what I tried in the first place. I can "forget" the VDI, rescan, and then delete/destroy it. The problem is that it also kills the VDI I still need.
              In the Disks tab it looks like this, for example:
              [screenshot]
              UUID of the first (attached) one: a91d9216-39ee-4c1b-a66d-df5a88f35185
              UUID of the second one: f111463f-7449-4d60-bf12-2634b5ce2b00

              With vhd-util I can see that the second one is a parent of the first, and that the second one is read-only:

              [screenshot: vhd-util output]
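
              For reference, this is roughly the kind of vhd-util call that shows the parent relationship (the path is a placeholder; on an LVM/iSCSI SR the VHD lives in a logical volume named after the VDI UUID, which may need to be activated first):

                  # Print the parent of the VHD, if it has one
                  vhd-util query -n /dev/VG_XenStorage-<sr-uuid>/VHD-<vdi-uuid> -p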

                phil

                The next thing I tried was exporting the affected VM, deleting it, and importing it again. This fails: after deleting the VM, both VDIs/VHDs are still there, and I cannot delete them. The error says:

                [screenshot of the error]
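
                For completeness, an export/re-import like that can also be done from the CLI (filenames and UUIDs are placeholders):

                    # Export the VM to an XVA file
                    xe vm-export vm=<vm-uuid> filename=/path/to/backup.xva
                    # After deleting the VM, import it again (optionally onto a specific SR)
                    xe vm-import filename=/path/to/backup.xva sr-uuid=<target-sr-uuid>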
