Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    I updated my test environment and performed a few tests:
    - migrating VMs back and forth between a VHD-based NFS SR and a QCOW2-based iSCSI SR --> VMs were converted between VHD and QCOW2 just fine, and live migration worked
    - creating rather big QCOW2-based VMs (2.5+ TB)
    - NBD-enabled delta backups of a mixed set of VMs (small, big, QCOW2, VHD)
    All tests worked fine so far. The only thing I noticed: when converting VHD-based VMs to QCOW2, I was not able to storage-migrate more than 2 VMs at a time; XO reported something about "not enough memory". That might be because the dom0 in my test environment only has 4 GB of RAM, and may not be related to the VHD-to-QCOW2 migration path at all. I never saw this error in my live environment, where every node's dom0 has 8 GB of RAM. The update candidate looks good so far from my point of view.
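    If the "not enough memory" message really is dom0 memory pressure, it is easy to verify and, if needed, raise the dom0 allocation. A minimal sketch, assuming a stock XCP-ng host with shell access to dom0 (the 8192M value mirrors the live environment above):

        # In dom0: how much memory dom0 currently has and how much is free
        free -h
        xl info | grep -E 'total_memory|free_memory'

        # Raise dom0 memory to 8 GiB; takes effect after the host reboots
        /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M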
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    @CyaVMware said: Have a simple question for someone. Looking at migrating a client to XCP. Currently they have a data disk that is exactly 2 TB in VMware. My question is: is the limitation anything OVER 2 TB, or is it 2 TB AND above? I'd prefer not to use the VMware converter tool to shrink the disk to 1.99 TB if I don't have to.
    The VHD format has a strict limit of roughly 2 TB (around 2040 GB), so if your client's VM disks are larger than that, you're going to have trouble with VHD. However, QCOW2 support is almost here (about to reach RC2 state); once it reaches stable, you'll be able to go safely up to a maximum of around 16 TB.
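    As a reference point for the VMware side of such a migration, a disk can also be converted offline with qemu-img before import. A minimal sketch, assuming qemu-img is available and the file names are placeholders:

        # Inspect the source disk first (virtual size and format)
        qemu-img info client-data.vmdk

        # Convert the VMDK to QCOW2; -p prints conversion progress
        qemu-img convert -p -f vmdk -O qcow2 client-data.vmdk client-data.qcow2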
  • 3k Topics
    28k Posts
    OK, thanks for the answer. So if I understand correctly, 100% is a per-core limit. Given that I have an AMD Ryzen 9 5950X 16-core processor (32 threads) on the host, the host's maximum in the pool is 3200%, right?
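    That reading checks out arithmetically: each logical CPU contributes 100%. A quick way to confirm the logical CPU count from dom0 (a sketch; xl info is standard on XCP-ng, but treat the exact field name as an assumption):

        # Number of logical CPUs the hypervisor sees (16 cores x SMT2 = 32)
        xl info | grep -w nr_cpus

        # Each logical CPU counts as 100%, so for this host:
        #   32 x 100% = 3200% across all vCPUs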
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELAD
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR.
    Environment:
    - 3-node XCP-ng cluster
    - XOSTOR distributed storage (2x 2 TB NVMe on each host)
    - XOA for management
    - 1 Gb/s management network, 10 Gb/s storage network
    - MTU 1500 everywhere (no jumbo frames)
    During VM migrations, creations, and destroys, XOA loses its connection to my host pool. The VMs keep running normally, the hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after 30 s to 1 min.
    Observations:
    - no significant CPU or RAM saturation
    - no obvious disk latency issues (iostat looks normal)
    - no errors reported on the NICs
    - the xapi process remains active (no crash or freeze)
    The problem is intermittent and seems random. I've monitored the NICs with iftop: I see no bandwidth bottleneck, and I can see that XOSTOR is using the 10 Gb network only. Has anyone experienced similar behavior with XOSTOR, and how can I fix it? Thanks in advance for your help.
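    For anyone triaging a similar setup, a useful first step is to look at the LINSTOR/DRBD layer underneath XOSTOR and at xapi's own log while the disconnect is happening. A minimal sketch, assuming the standard linstor client is present in dom0 and <controller-ip> is a placeholder for the XOSTOR controller address:

        # DRBD resource state on the local host (Connected/UpToDate is healthy)
        drbdadm status

        # LINSTOR's view of nodes and resources, queried against the controller
        linstor --controllers <controller-ip> node list
        linstor --controllers <controller-ip> resource list

        # Did xapi log a restart or timeout around the disconnect window?
        grep -i -E 'timeout|restart' /var/log/xensource.log | tail -n 20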
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.