XCP-ng

    jimmymiller

    @jimmymiller


    Latest posts made by jimmymiller

    • RE: CBT: the thread to centralize your feedback

      Has anyone seen issues migrating VDIs once CBT is enabled? We're seeing VDI_CBT_ENABLED errors when we try to live migrate disks between SRs. Disabling CBT on the disk allows the migration to move forward. Users with limited access don't see the specifics of the error, but as admins we get a VDI_CBT_ENABLED error. Ideally we'd want to be able to migrate VDIs with CBT still enabled, or maybe as part of the VDI migration process CBT could be disabled temporarily, the disk migrated, then CBT re-enabled?

      Users see:
      Screenshot 2024-08-07 at 17.42.07.png

      Admins see:

      ```
      {
        "id": "7847a7c3-24a3-4338-ab3a-0c1cdbb3a12a",
        "resourceSet": "q0iE-x7MpAg",
        "sr_id": "5d671185-66f6-a292-e344-78e5106c3987"
      }
      {
        "code": "VDI_CBT_ENABLED",
        "params": [
          "OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515"
        ],
        "task": {
          "uuid": "9860d266-d91a-9d0e-ec2a-a7752fa01a6d",
          "name_label": "Async.VDI.pool_migrate",
          "name_description": "",
          "allowed_operations": [],
          "current_operations": {},
          "created": "20240807T21:33:29Z",
          "finished": "20240807T21:33:29Z",
          "status": "failure",
          "resident_on": "OpaqueRef:8d372a96-f37c-4596-9610-1beaf26af9db",
          "progress": 1,
          "type": "<none/>",
          "result": "",
          "error_info": [
            "VDI_CBT_ENABLED",
            "OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515"
          ],
          "other_config": {},
          "subtask_of": "OpaqueRef:NULL",
          "subtasks": [],
          "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vdi.ml)(line 470))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4696))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 199))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 203))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 42))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 51))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4708))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4711))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1503))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 4705))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))"
        },
        "message": "VDI_CBT_ENABLED(OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515)",
        "name": "XapiError",
        "stack": "XapiError: VDI_CBT_ENABLED(OpaqueRef:aeaa21fc-344d-45f1-9409-8e1e1cf3f515)
          at Function.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
          at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
          at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1033:24)
          at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1067:14
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1057:12)
          at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1230:14)"
      }
      ```
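
      Since disabling CBT unblocks the migration, the workaround can be wrapped in a small script. A rough sketch using the standard xe CLI; the UUIDs are placeholders (not real values), and a DRY_RUN guard just prints the commands instead of running them:

      ```shell
      #!/bin/sh
      # Sketch: temporarily disable CBT around a VDI pool-migrate.
      # VDI/DEST_SR are placeholder UUIDs; set DRY_RUN=0 to actually execute.
      DRY_RUN="${DRY_RUN:-1}"
      run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

      VDI="${VDI:-<vdi-uuid>}"
      DEST_SR="${DEST_SR:-<destination-sr-uuid>}"

      # Disabling CBT discards the changed-block metadata, so the next
      # delta backup of this disk will fall back to a full backup.
      run xe vdi-disable-cbt uuid="$VDI"
      run xe vdi-pool-migrate uuid="$VDI" sr-uuid="$DEST_SR"
      run xe vdi-enable-cbt uuid="$VDI"
      ```

      One caveat worth noting: dropping the CBT metadata means the first backup after the move can't be an incremental one.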
      posted in Backup
    • RE: Shared SR (two pools)

      @olivierlambert Okay. I'll give it a shot.

      posted in Xen Orchestra
    • RE: Shared SR (two pools)

      @HolgiB For this use, it's actually a virtual TrueNAS instance sitting on a LUN mapped to the source XCP-ng pool. I know there are in-OS options using zfs send|receive, but the point is to get an understanding of what we would do without that convenience.

      I know Xen and VMware do things differently, but having VMFS in the mix allowed us to unmount a datastore, move the mapping to a new host, mount the datastore there, then point that host at the existing LUN and quickly import the VMX (for a full VM config) or VMDK (by configuring a new VM to use the existing disks). This completely eliminated the need to actually copy the data; we were just changing which host had access to it. We didn't use it very often because VMware handled moving VMs with big disks pretty well, but it was our ace in the hole when storage vMotion wasn't an option.

      posted in Xen Orchestra
    • RE: Shared SR (two pools)

      @olivierlambert Well even a cold migration seemed to fail. Bah!

      The LUN/SR is dedicated to just the one VM: 1 x 16G disk for the OS, 20 x 1.5T disks for data. Inside the VM, I'm using ZFS to stripe them all together into a single zpool. Because this is ZFS I could theoretically run a ZFS replication job to another VM, but I'm also using this as a test to figure out how I'm going to move our larger VMs that don't have the convenience of an in-OS replication option. For our larger VMs we almost always dedicate LUNs specifically to them, and we have block-based replication options on our arrays, so in theory we should be able to fool any device into thinking the replica is a legit pre-existing SR.

      No snaps -- the data on this VM is purely an offsite backup target, so we didn't feel the need to back up the backup of the backup.

      Let me try testing the forget SR + connect to a different pool approach. I swear I tried this before, but when I went to create the SR it flagged the LUN as having a preexisting SR and forced me to reprovision a new SR rather than letting me map the existing one.
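
      Since the stripe layout came up: a minimal illustration of what that zpool creation looks like. Device names and the pool name "tank" are placeholders (the real VM has 20 data disks); with no mirror/raidz keyword, zpool builds a plain stripe, which has no redundancy but fits a re-creatable backup target:

      ```shell
      #!/bin/sh
      # Illustration: build a striped-pool command from a disk list.
      # Device names and pool name are placeholders; echoed rather than run.
      DISKS="/dev/xvdb /dev/xvdc /dev/xvdd"   # ...continue through all 20 data disks
      CMD="zpool create tank $DISKS"          # no mirror/raidz keyword = plain stripe
      echo "$CMD"
      ```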

      posted in Xen Orchestra
    • Shared SR (two pools)

      Re: Shared SR between two pools?

      I have a need to move a sizable (~30T) VM between two pools of compute nodes. I can do this move cold, but I'd rather not have the VM offline for several days, which is what it's going to look like with a standard cold VM migration.

      As I understand it, SRs are essentially locked to a specific pool (particularly the pool master). Is it possible to basically unmount (i.e. forget) the SR on one pool, remount it on the target pool, then just import the VM while the data continues to reside on the same LUN?

      VMware made this pretty easy with VMFS/VMX/VMDKs, but it seems like Xen may not be as flexible.
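
      For reference, the forget-and-reintroduce idea roughly maps to the xe steps below. This is a hedged sketch, not a tested procedure: all UUIDs, the SR type, and the device-config values are placeholder assumptions. Unlike VMFS, the VM metadata does not live on the SR, so the VMs themselves have to be recreated (or restored from a pool metadata backup) and attached to the rescanned VDIs:

      ```shell
      #!/bin/sh
      # Sketch: move an SR between pools by forget + introduce.
      # All UUIDs and device-config values are placeholders; DRY_RUN=1 previews.
      DRY_RUN="${DRY_RUN:-1}"
      run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

      SR_UUID="${SR_UUID:-<sr-uuid>}"
      PBD_UUID="${PBD_UUID:-<source-pbd-uuid>}"
      HOST_UUID="${HOST_UUID:-<target-host-uuid>}"
      SCSI_ID="${SCSI_ID:-<lun-scsi-id>}"
      NEW_PBD_UUID="${NEW_PBD_UUID:-<new-pbd-uuid>}"

      # Source pool: detach the SR, then forget it (data stays on the LUN).
      run xe pbd-unplug uuid="$PBD_UUID"
      run xe sr-forget uuid="$SR_UUID"

      # Target pool: reintroduce the SR under its original UUID, recreate a
      # PBD pointing at the same LUN, plug it, and rescan for the VDIs.
      # (type=lvmohba assumes an FC/HBA LUN; use lvmoiscsi for iSCSI.)
      run xe sr-introduce uuid="$SR_UUID" type=lvmohba shared=true \
          name-label="moved-sr" content-type=user
      run xe pbd-create host-uuid="$HOST_UUID" sr-uuid="$SR_UUID" \
          device-config:SCSIid="$SCSI_ID"
      run xe pbd-plug uuid="$NEW_PBD_UUID"
      run xe sr-scan uuid="$SR_UUID"
      ```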

      posted in Xen Orchestra
    • RE: Migrating VM Status

      Well, I guess that was the coalesce process, because now that one has stopped too. Any ideas on how to find out why it ran fine for essentially 2 days and then stopped?

      I get the impression a live migration of a 30T VM may not be the best way to move something of this size between pools. Maybe a warm migration will work better, but I'm also curious how much capacity we're going to need on the source in order to complete this move.

      posted in Management
    • RE: Migrating VM Status

      @Danp

      Hrm. xe task-list is showing nothing, but there is clearly something still happening based on the stats.

      Screenshot 2024-06-19 at 11.47.39.png

      posted in Management
    • Migrating VM Status

      I'm in the process of live migrating a large VM (~30T) from one host to another. The process had been going smoothly for the last 2 days, but now the task no longer appears in XO. The VM status shows "Busy (migrate_send)", but the task isn't visible in the XO task list. Is there a timeout in XO for long-running tasks? Is there a way to actually verify the status of the task and whether the VM is still moving? According to the stats, there is still IO on the SR, so it appears to still be in progress.
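
      A few CLI checks that can help when XO loses track of a long task; a sketch with a placeholder VM UUID and a DRY_RUN guard that just prints the commands (xe task-list with params=all and vm-param-get are standard xe usage):

      ```shell
      #!/bin/sh
      # Sketch: check on a long-running migration from the CLI.
      # VM_UUID is a placeholder; set DRY_RUN=0 to actually run the commands.
      DRY_RUN="${DRY_RUN:-1}"
      run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

      VM_UUID="${VM_UUID:-<vm-uuid>}"

      # Pending xapi tasks, with progress and status fields included.
      run xe task-list params=all

      # The VM's in-flight operations; a live migration shows migrate_send here.
      run xe vm-param-get uuid="$VM_UUID" param-name=current-operations
      ```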

      posted in Management
    • OIDC with EntraID

      Has anyone out there gotten the XO OIDC plugin to work with EntraID? My security folks are looking for any documentation that could help them configure things on their end to work with XO. Tech support is looking into this as well, but I figured I'd put something out there to see if the broader community has made it work.

      posted in Management
    • RE: Installing Guest Tools on AlamaLinux 9 Issue

      @stevewest15 Not to call this a "fix" because I know they aren't always the latest 'n greatest, but have you tried just installing from the EPEL repo? Seems to work for us.

      ```
      dnf install epel-release-9-5.el9.noarch -y
      yum install xe-guest-utilities-latest.x86_64 -y
      systemctl enable xe-linux-distribution.service
      systemctl start xe-linux-distribution.service
      ```
      
      posted in Management