Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    rzrR
    New update candidates for you to test! We are continuing to refine the next batch of updates, with planned fixes. This release batch contains fixes for the major storage feature previously announced; read the RC2 announcement for QCOW2 image format support for 2 TiB+ images.

    What changed

    Storage: QCOW2 image format support is the major feature of this release batch; see the related announcement in the forum. Some fixes have been applied for issues found during the testing phase.

    sm: 3.2.12-17.6
    - Limit the QCOW2 VDI maximum size to 16 TiB including metadata, for compatibility with EXTSR (EXTSR is limited to a 16 TiB unique file size). Without this limit, XCP-ng would be unable to migrate a fully allocated QCOW2 VDI to an EXTSR. In the future, while EXTSR will remain limited to this maximum size, other SR types will evolve towards higher limits. For this, we'll have to rework the existing assumption that all SRs supporting the QCOW2 image format share the same maximum VDI size, and catch migration attempts towards SRs which cannot receive disks bigger than their own limit.

    blktap: 3.55.5-6.6
    - Update the package's license.

    Versions:
    - blktap: 3.55.5-6.5.xcpng8.3 -> 3.55.5-6.6.xcpng8.3
    - sm: 3.2.12-17.5.xcpng8.3 -> 3.2.12-17.6.xcpng8.3

    Test on XCP-ng 8.3

    If you are using XOSTOR, please refer to our documentation for the update method.

    yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
    yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates
    reboot

    The usual update rules apply: pool coordinator first, etc.

    What to test

    The most important change is related to storage: adding QCOW2 support also affects the codebase managing VHD disks. What matters here is, above all, detecting any regression in VHD support (we tested it deeply, but on this matter there's no such thing as too much testing). Of course, you are also welcome to test the QCOW2 image format support; see the dedicated thread for more information. And, as usual, normal use and anything else you want to test.

    Test window before the official release of the updates: ~3 days.

    We would like to thank the users who reported feedback since our last call for testing, in less than 24 hours: @acebmxer, @Andrew, @MajorP93.
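    The new EXTSR cap described above can be sketched as a simple size check; the function name and byte figure below are illustrative (on a real host the size would come from `xe vdi-param-get ... param-name=virtual-size`), a minimal sketch rather than the actual sm implementation:

    ```shell
    # Sketch of the limit: a QCOW2 VDI only fits on an EXTSR if its
    # virtual size stays within the 16 TiB unique-file-size cap.
    EXTSR_MAX_BYTES=$((16 * 1024 * 1024 * 1024 * 1024))  # 16 TiB

    fits_on_extsr() {
        # $1 = VDI virtual size in bytes; exit status 0 means it fits
        [ "$1" -le "$EXTSR_MAX_BYTES" ]
    }

    fits_on_extsr $((2 * 1024 * 1024 * 1024 * 1024)) && echo "2 TiB VDI: OK for EXTSR"
    ```

    A check like this is what would have to become per-SR-type once other SR types move past the 16 TiB limit.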
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Thank you!!! I used your tip, booted a CD, and got it running on XCP-ng.

    Note: chroot + dracut is not an option here, as dracut is not present in the OXE image.

    The main issue was the disk device naming (sda → xvda). The fix was:
    - Boot via the Rocky rescue image
    - Mount the root partition manually
    - Update root=/dev/sdaX → root=/dev/xvdaX in /boot/grub2/grub.cfg
    - Replace all /dev/sdaX with /dev/xvdaX in /etc/fstab

    After that, the system boots fine. Installing the guest tools from the local guest tools ISO went through without any issues. Network configuration is managed by OXE itself, so changes must be made via the OXE CLI (not persistently via Linux/nmcli), but that will be done by the OXE admin, and that's not me. So far it runs. We will do further testing and update this post if issues occur, in case someone else is searching for this. Fingers crossed there will be none. Again, many thanks for pointing me in the right direction and for the very quick response.
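    The rename steps above boil down to a couple of sed substitutions on the mounted root. A minimal sketch, demonstrated on throwaway copies so it is safe to run anywhere; in the rescue environment you would point ROOT at the mounted guest root instead (paths and partition numbers are assumptions):

    ```shell
    # Create a fake guest root with sda-style device names
    ROOT="$(mktemp -d)"
    mkdir -p "$ROOT/boot/grub2" "$ROOT/etc"
    printf 'linux /vmlinuz root=/dev/sda2 ro\n' > "$ROOT/boot/grub2/grub.cfg"
    printf '/dev/sda2 / xfs defaults 0 0\n'     > "$ROOT/etc/fstab"

    # The actual fix: rewrite sda device names to xvda in both files
    sed -i 's|/dev/sda|/dev/xvda|g' "$ROOT/boot/grub2/grub.cfg" "$ROOT/etc/fstab"

    grep -q xvda "$ROOT/boot/grub2/grub.cfg" && echo "grub.cfg rewritten"
    ```

    On a distro that regenerates grub.cfg from templates, the same change would also be needed in /etc/default/grub, but that did not apply here.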
  • 3k Topics
    28k Posts
    OK, thanks for the answer. So if I understand correctly: 100% is a per-core limit. Given that I have an AMD Ryzen 9 5950X 16-Core Processor (32 threads) on the host, the pool host's maximum limit is 3200%, right?
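    If that reading is right (100% corresponds to one fully busy thread), the ceiling works out as follows; variable names are illustrative:

    ```shell
    # A 5950X exposes 16 cores / 32 threads, so with 100% per thread
    # the host-wide maximum is 32 * 100%.
    THREADS=32
    PER_THREAD_LIMIT=100
    echo "host maximum: $((THREADS * PER_THREAD_LIMIT))%"
    ```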
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELAD
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR.

    Environment:
    - 3-node XCP-ng cluster
    - XOSTOR distributed storage (2×2 TB NVMe on each host)
    - XOA for management
    - Management network: 1 Gb/s
    - Storage network: 10 Gb/s
    - MTU 1500 everywhere (no jumbo frames)

    During VM migration, creation, or destruction, XOA loses the connection to my host pool. VMs keep running normally, hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after some time, 30 s to 1 min.

    Observations:
    - No significant CPU or RAM saturation
    - No obvious disk latency issues (iostat looks normal)
    - No errors reported on the NICs
    - The xapi process remains active (no crash or freeze)

    The problem is intermittent and seems random. I've monitored the NICs with iftop: I see no bandwidth bottleneck, and I can see that XOSTOR is using the 10 Gb network only. Has anyone experienced similar behavior with XOSTOR? And how to fix it? Thanks in advance for your help.
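    Since the disconnects last 30–60 s while the hosts themselves stay reachable, a cheap first step is to timestamp exactly when the XAPI HTTPS endpoint stops answering while reproducing a migration, so the outages can be correlated with storage operations. A minimal sketch, run from the XOA machine; the host name, iteration count, and intervals are all assumptions to adjust:

    ```shell
    HOST="${HOST:-xcp-host-1}"   # assumed pool coordinator address

    probe_xapi() {
        # exit status 0 if the XAPI HTTPS endpoint answers within 5 s
        curl -sk --max-time 5 "https://$1/" >/dev/null
    }

    for i in $(seq 1 3); do      # lengthen (or loop forever) in real use
        probe_xapi "$HOST" || date +"%F %T  xapi on $HOST not answering"
        sleep 1
    done
    ```

    Matching the logged timestamps against the XOA task log should show whether the disconnects line up with specific XOSTOR/DRBD operations or with load on the 1 Gb/s management network.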
  • 34 Topics
    102 Posts
    The remark has been integrated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.