Categories

  • All news regarding the Xen and XCP-ng ecosystem

    137 Topics
    4k Posts
    @gduperrey Nothing really to add; my 3-host Intel production pool updated just fine. The load balancer is always a little weird, but I'm sure it balances based on the CPU and RAM assigned to each VM, whereas I split things up based on workload. It's a small system, and the real workload is handled by 3 Windows VMs, so I tend to split them up, one onto each of the three hosts. I may get to my lab in the next couple of days, but it isn't doing any work, so testing is kind of pointless right now. The only thing "doing work" is a VM with XO from sources.
  • Everything related to the virtualization platform

    1k Topics
    13k Posts
    florent
    @idar21 said in Xen Orchestra 5.110 V2V not working: I don't intend to bump in, but the new migration tool isn't working as per the release notes. I had similar issues; there is no warm migration. My testing against ESXi v7 resulted in: abrupt power off of the source VM on ESXi; the VM disks start copying, and I can see the disk copy progress in tasks; the migration task fails, but multiple disks of the source VM keep on copying; when all the disks are copied, there is no VM with that name available in XCP-ng; and all disks are labeled orphaned under Health in XO. Where is the pause/resume function stated in the release notes? I don't think the tool has been tested properly. The only difference between the older migration tool and this one is the disk copy progress; otherwise nothing is new. The old tool could only do cold migrations and had issues with VMs with multiple disks; the new one can also only do cold migrations and still has issues with multi-disk migrations.
    First, I would like to say again that latest can be fresh, and we know we ask our users to be a bit more adventurous on latest in exchange for faster features, even more so for users running from sources. The documentation is still in the works and will be ready before this reaches XOA stable. The resume part doesn't have a dedicated interface: you do a first migration without enabling "stop source", then later you launch the same migration with "stop source" enabled (or with the VM stopped) and it will reuse the already transferred data if the prerequisites are validated. Debugging a migration issue is quite complex, since it involves multiple systems, and we have no access to, nor control over, the VMware side; it's even harder without a tunnel. I will need you to look at your journalctl and check for errors during the migration (see the log-checking sketch after this list). Also, are the failing disks sharing some specific configuration? What storage do they use? Is there anything relevant on the XCP-ng side?
  • 3k Topics
    25k Posts
    olivierlambert
    Feedback for you @florent
  • Our hyperconverged storage solution

    34 Topics
    674 Posts
    henri9813
    Hello @DustinB, the https://vates.tech/xostor/ page says: "The maximum size of any single Virtual Disk Image (VDI) will always be limited by the smallest disk in your cluster." But in this case, maybe it can be stored on the "2TB disks"? Maybe others can answer; I didn't test it. (See the sketch after this list for a toy illustration of that rule.)
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using a Debian for my test ^^
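
A note on the journalctl check asked for in the V2V thread above: the snippet below is a minimal sketch, assuming XO from sources runs as a systemd unit (the unit name "xo-server", the error keywords, and the example time window are all assumptions to adapt to your own setup). It is not part of Xen Orchestra's tooling; it only pulls error-looking journal lines from the migration window so they can be shared in the thread.

```python
# Minimal sketch, not part of Xen Orchestra: collect journal lines that look
# like errors around a V2V migration attempt.
# ASSUMPTIONS: the systemd unit name "xo-server", the "error"/"fail" keywords,
# and the example time window below are placeholders to adjust.
import subprocess

def migration_errors(since: str, until: str, unit: str = "xo-server") -> list[str]:
    """Return journal lines from `unit` in the given window that mention errors or failures."""
    result = subprocess.run(
        ["journalctl", "-u", unit, "--since", since, "--until", until,
         "--no-pager", "-o", "cat"],
        capture_output=True, text=True,
    )
    return [line for line in result.stdout.splitlines()
            if "error" in line.lower() or "fail" in line.lower()]

if __name__ == "__main__":
    # Example window only; use the actual start/end times of your migration task.
    for line in migration_errors("2024-01-01 10:00", "2024-01-01 12:00"):
        print(line)
```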
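
On the XOSTOR sizing question above, here is a toy illustration of the rule quoted from the product page (the largest single VDI is bounded by the smallest disk in the cluster). The hosts and disk sizes are made-up example values, not a real cluster.

```python
# Made-up example disk layout (GiB per host); replace with your own cluster.
disks_gib = {
    "host1": [2048, 2048],
    "host2": [2048, 1024],  # one smaller disk in the cluster
    "host3": [2048],
}

# Per the rule quoted from https://vates.tech/xostor/, the largest possible
# single VDI is bounded by the smallest disk across the whole cluster.
smallest = min(size for sizes in disks_gib.values() for size in sizes)
print(f"Max single VDI size under this rule: {smallest} GiB")  # -> 1024 GiB
```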