Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
@gduperrey Installed on my home lab via rolling pool update: both hosts updated with no issues, and the VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I still have an open support ticket from the last round of updates for the work servers, and I'm waiting for a response before installing the patches.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
@planedrop Pretty sure the official and recommended way is to use the V2V (VMware to Vates) tool that is baked into Xen Orchestra. I used that tool when I migrated from our old VMware cluster to the new XCP-ng environment, and it worked without any issues. The best approach IMO is to disconnect the unneeded virtual disks on the source VM on the VMware side and then use the V2V tool to migrate the VM over. I did it this way for a VM with a disk bigger than 2 TB as well: after migrating the VM, I created an LVM volume of the required size on the XCP-ng VM and rsync-ed the remaining data from the >2 TB VMDK to the target XCP-ng LVM volume. Documentation: https://docs.xen-orchestra.com/v2v-migration-guide
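The post-migration copy step described above can be sketched as follows. The device, volume-group, and path names are assumptions for illustration only, and a pair of local directories stands in for the source VM and the new LVM mount so the rsync flags can be tried safely:

```shell
# LVM setup on the target XCP-ng VM (hypothetical names; run as root):
#   pvcreate /dev/xvdb
#   vgcreate vg_data /dev/xvdb
#   lvcreate -n lv_data -l 100%FREE vg_data
#   mkfs.ext4 /dev/vg_data/lv_data && mount /dev/vg_data/lv_data /mnt/data

# rsync of the remaining data; /tmp/v2v-demo/src and /tmp/v2v-demo/dst
# stand in for source-vm:/data/ and the mounted LVM volume.
mkdir -p /tmp/v2v-demo/src /tmp/v2v-demo/dst
echo "payload" > /tmp/v2v-demo/src/file.txt
# -a preserves permissions/ownership/times; --partial lets an
# interrupted transfer resume instead of restarting from zero.
rsync -a --partial /tmp/v2v-demo/src/ /tmp/v2v-demo/dst/
cat /tmp/v2v-demo/dst/file.txt
```

Against the real source VM you would replace the local source directory with `user@source-vm:/data/` (a hypothetical path) and the destination with the LVM mount point.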
  • 3k Topics
    28k Posts
@florent Since both jobs were test jobs that I have been running manually, I do have an unnamed, disabled schedule on both that looks identical, so I unintentionally did have multiple jobs on the same schedule. I have since named the schedule within my test job so that each is unique.

Updating to "f445b" shows improvement: I was able to replicate from pool A to pool B, then run a backup job, which was incremental. I then ran the replication job again, which was incremental and did not create a new VM! Unfortunately, after this I ran the backup job again, which resulted in a full backup from the replica rather than a delta; I'm not sure why. The snapshot from the first backup job run was also not removed, leaving 2 snapshots behind, one from each backup run. [image: 1774618641909-e38c23b0-4259-4d16-9ed3-fa44d8490541-image-resized.jpeg]

I then tried the process again: I ran the CR job, which was a delta (this part seemed fixed!), then ran the backup job afterward. Same behavior: a full ran instead of a delta, and the previous backup snapshot was left behind, leaving the VM looking like: [image: 1774618909029-7edf1dba-e729-4b52-abcc-ffde76d8a0bb-image-resized.jpeg]

So it seems one problem is solved but another remains.
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
@alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but hit the same issue.
  • 33 Topics
    98 Posts
@yann This is not PROD. If my understanding is correct, after investigation, the storage path to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

[10:05 xcp-ng-poc-1 ~]# xe vm-list
The server could not join the liveset because the HA daemon could not access the heartbeat disk.
[10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
[10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
[10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
Executing xe-toolstack-restart
done.
[10:07 xcp-ng-poc-1 ~]#

On the storage side:

[10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
uuid ( RO)              : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
          host-uuid ( RO): 0219cb2e-46b8-4657-bfa4-c924b59e373a
            sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
      device-config (MRO): SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
 currently-attached ( RO): false

uuid ( RO)              : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
          host-uuid ( RO): cb9a2dc3-cc1d-4467-99eb-6896503b4e11
            sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
      device-config (MRO): multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
 currently-attached ( RO): false

[10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
Error code: SR_BACKEND_FAILURE_47
Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
[10:10 xcp-ng-poc-1 ~]#

After that, XO Lite restarted correctly.
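The recovery sequence in the transcript above can be condensed into a small script. This is only a sketch of the commands already shown (the PBD UUID is the one from the post), with a dry-run mode added, since `xe` exists only on an XCP-ng host and forcing HA off can lose data:

```shell
#!/bin/sh
# Dry-run by default: prints each xe command instead of executing it.
# Set DRY_RUN=0 on an actual XCP-ng host, at your own risk.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Force the host out of the broken HA liveset.
run xe host-emergency-ha-disable --force
# 2. Restart the toolstack so XAPI picks up the new HA state.
run xe-toolstack-restart
# 3. Once the iSCSI path to the SR is back, re-plug the PBD
#    (UUID taken from `xe pbd-list sr-uuid=...` in the transcript).
run xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
```

Note that in the transcript the `pbd-plug` still failed (`SR_BACKEND_FAILURE_47`) because the volume group was not yet visible again, so step 3 only succeeds once the storage path has actually recovered.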