Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @rzr One thing I am noticing after these updates is a lot more traffic on my TrueNAS. I don't see this load anywhere in XO (from sources) or XOA. I have looked at the VMs themselves and at each host, but even when they are idle I see this in TrueNAS. The gaps are me shutting down the VMs and starting each one to find the problem VM, but it seems to be any VM. I didn't notice any performance issues; I only noticed the graph in TrueNAS, which is usually a flat line with the occasional spike here and there, not the big mess on the left in the first screenshot.
    5 VMs running. My XOA is on local storage on the master host and was used for these tests. All VMs are powered off except XOA and XO.
    [image: 1777136679245-screenshot_20260425_130107.png]
    Here I booted the XO VM and left it idle. The spike afterwards is me live-migrating back to the VHD SR, then leaving it idle.
    [image: 1777139689147-screenshot_20260425_135258-1.png]
    The gap in the middle is XO idle on the VHD-only SR.
    [image: 1777139783667-screenshot_20260425_135604.png]
    Live migration of XO back to the qcow2-only SR.
    [image: 1777140649536-screenshot_20260425_140908.png]
    Migration back to qcow2 completed.
    [image: 1777142234850-screenshot_20260425_143122.png]
    XO left idle after the migration to the qcow2 SR.
    [image: 1777142635855-screenshot_20260425_144314.png]
    Again, all VMs booted and idle.
    [image: 1777143315095-screenshot_20260425_145451.png]
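    One way to narrow down which VMs are behind the extra NFS traffic is to list, per SR, the VDIs and the VMs attached to them with the xe CLI. A minimal sketch; the SR name label is a placeholder for your qcow2 SR:

      #!/bin/bash
      # List every VM with a disk on a given SR, to correlate TrueNAS
      # traffic with specific VMs. SR_NAME is a placeholder.
      SR_NAME="qcow2-nfs-sr"
      SR_UUID=$(xe sr-list name-label="$SR_NAME" --minimal)
      # Walk all VDIs on the SR and print the VMs their VBDs belong to.
      for vdi in $(xe vdi-list sr-uuid="$SR_UUID" --minimal | tr ',' ' '); do
        for vbd in $(xe vbd-list vdi-uuid="$vdi" --minimal | tr ',' ' '); do
          vm=$(xe vbd-param-get uuid="$vbd" param-name=vm-name-label)
          echo "VDI $vdi -> VM $vm"
        done
      done

    Running iftop or nethogs on the TrueNAS side at the same time, and powering VMs off one by one as described above, should let you match each traffic burst to a VDI.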
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Thank you @olivierlambert !!! I used your tip, booted a CD, and got it running on XCP-ng.
    Note: chroot + dracut is not an option here, as dracut is not present in the OXE image. The main issue was the disk device naming (sda → xvda). The fix was (see the sketch below):
      1. Boot via Rocky rescue
      2. Mount the root partition manually
      3. Update root=/dev/sdaX → root=/dev/xvdaX in /boot/grub2/grub.cfg
      4. Replace all /dev/sdaX with /dev/xvdaX in /etc/fstab
    After that the system boots fine. Installation of the guest tools via the local guest tools ISO went through without any issues. The network config is managed by OXE itself, so changes must be done via the OXE CLI (not persistent via Linux/nmcli), but that will be done by the admin of OXE, and that's not me.
    So far it runs. We will do further testing and update this post if issues occur, in case someone else is searching for this. Fingers crossed there will be none. Again, many thanks for pointing me in the right direction and for the very quick response.
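    For anyone hitting the same thing, the rescue-mode fix above boils down to something like this. A sketch only; the partition number and mount point are assumptions, adjust them to your layout:

      # From the Rocky rescue environment; /dev/xvda2 is an assumed
      # root partition, adjust to your disk layout.
      mount /dev/xvda2 /mnt
      # Point GRUB and fstab at the Xen PV device names (sdaX -> xvdaX).
      sed -i 's|/dev/sda|/dev/xvda|g' /mnt/boot/grub2/grub.cfg
      sed -i 's|/dev/sda|/dev/xvda|g' /mnt/etc/fstab
      umount /mnt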
  • 3k Topics
    28k Posts
    @delacosta456 said: the pool host maximum limit is 3200%, right? Right.
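    For context (an assumption based on how XCP-ng reports host CPU usage per core): each physical CPU counts as 100%, so a host with 32 pCPUs tops out at 32 × 100% = 3200%.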
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELAD
    Hello, I’m experiencing an issue on an XCP-ng cluster using XOSTOR.
    Environment:
      • 3-node XCP-ng cluster
      • XOSTOR distributed storage (2 × 2 TB NVMe on each host)
      • XOA for management
      • Management network 1 Gb/s, storage network 10 Gb/s
      • MTU 1500 everywhere (no jumbo frames)
    During VM migration, creation, and destruction, XOA loses its connection to my host pool. The VMs keep running normally, the hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after some time, 30 s to 1 min.
    Observations:
      • No significant CPU or RAM saturation
      • No obvious disk latency issues (iostat looks normal)
      • No errors reported on the NICs
      • The xapi process remains active (no crash or freeze)
    The problem is intermittent and seems random. I've monitored the NICs with iftop: I see no bandwidth bottleneck, and I can see that XOSTOR is using the 10 Gb/s network only. Has anyone experienced similar behavior with XOSTOR? And how can it be fixed? Thanks in advance for your help.
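    Since XOSTOR is built on LINSTOR/DRBD, one thing worth capturing while a disconnect is happening is the replication state on the hosts. A diagnostic sketch, not a fix; the linstor client talks to the controller, which runs on only one node:

      # On an XCP-ng host during one of the disconnects.
      # If this node is not the LINSTOR controller, add:
      #   linstor --controllers=<controller-ip> ...
      linstor node list          # are all nodes ONLINE?
      linstor resource list      # any resource not UpToDate?
      drbdadm status             # per-resource DRBD connection state
      # And check whether xapi itself is slow to answer:
      time xe host-list --minimal

    If xapi answers slowly only while DRBD is resyncing, that would point at the storage traffic interfering with the management path rather than at XOA itself.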
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.