Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    A
    @stormi The left side of the chart is all VMs running, at about 1.5 GB/s. Each VM's VDI ranges from 128 GB to 256 GB allocated (actual disk space used, I'm not sure). [image: 1777315109407-screenshot_20260425_130107.png] The 200-300 MB/s on the far right is just XO-CE running idle. [image: 1777315216499-screenshot_20260425_144314.png] So if each VM is consuming roughly 300 MB/s, times 4-5 VMs, that would get close to the 1.5 GB/s.
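The back-of-the-envelope arithmetic in the post above can be sketched as a quick check; the per-VM and peak figures are the approximate numbers from the post, not measurements:

```shell
#!/bin/sh
# Sanity-check the observation: if each VM sustains roughly 300 MB/s,
# how many VMs does it take to account for the ~1.5 GB/s peak?
PER_VM_MBS=300      # approximate per-VM rate read off the chart (assumption)
PEAK_MBS=1500       # 1.5 GB/s expressed in MB/s

VMS=$((PEAK_MBS / PER_VM_MBS))
echo "~${VMS} VMs at ${PER_VM_MBS} MB/s each account for ${PEAK_MBS} MB/s"
# → ~5 VMs at 300 MB/s each account for 1500 MB/s
```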
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    H
    @stormi It's an internal Slack channel where people share all kinds of security-related things; I can ask the poster where he saw it.
  • 3k Topics
    28k Posts
    M
    I noticed the same: for VMs that use UEFI, boot time is greatly increased compared to BIOS mode. VMs that have UEFI enabled show "installing Xen timer" and "spinlock" events in the kernel log:

      Apr 27 11:35:06 xo01 kernel: Xen: using vcpuop timer interface
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 0
      Apr 27 11:35:06 xo01 kernel: smpboot: CPU0: AMD EPYC 7702P 64-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
      Apr 27 11:35:06 xo01 kernel: cpu 0 spinlock event irq 52
      Apr 27 11:35:06 xo01 kernel: Performance Events: PMU not available due to virtualization, using software events only.
      Apr 27 11:35:06 xo01 kernel: signal: max sigframe size: 1776
      Apr 27 11:35:06 xo01 kernel: rcu: Hierarchical SRCU implementation.
      Apr 27 11:35:06 xo01 kernel: rcu: Max phase no-delay instances is 1000.
      Apr 27 11:35:06 xo01 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
      Apr 27 11:35:06 xo01 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
      Apr 27 11:35:06 xo01 kernel: smp: Bringing up secondary CPUs ...
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 1
      Apr 27 11:35:06 xo01 kernel: smpboot: x86: Booting SMP configuration:
      Apr 27 11:35:06 xo01 kernel: .... node #0, CPUs: #1
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 2
      Apr 27 11:35:06 xo01 kernel: #2
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 3
      Apr 27 11:35:06 xo01 kernel: #3
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 4
      Apr 27 11:35:06 xo01 kernel: #4
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 5
      Apr 27 11:35:06 xo01 kernel: #5
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 6
      Apr 27 11:35:06 xo01 kernel: #6
      Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 7
      Apr 27 11:35:06 xo01 kernel: #7
      Apr 27 11:35:06 xo01 kernel: cpu 1 spinlock event irq 81
      Apr 27 11:35:06 xo01 kernel: cpu 2 spinlock event irq 82
      Apr 27 11:35:06 xo01 kernel: cpu 3 spinlock event irq 83
      Apr 27 11:35:06 xo01 kernel: cpu 4 spinlock event irq 84
      Apr 27 11:35:06 xo01 kernel: cpu 5 spinlock event irq 85
      Apr 27 11:35:06 xo01 kernel: cpu 6 spinlock event irq 86
      Apr 27 11:35:06 xo01 kernel: cpu 7 spinlock event irq 87

    Those events seem to be what is slowing down the boot. I have observed this behavior ever since I started using XCP-ng. Once the VM is fully booted, performance seems normal. Maybe worth investigating at some point. Best regards
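To compare UEFI and BIOS boots, one quick triage step is counting the per-CPU "installing Xen timer" events mentioned above. A minimal sketch, using a hard-coded sample of the log lines from the post (on a live VM you would pipe in `dmesg` instead):

```shell
#!/bin/sh
# Count per-CPU Xen timer installation events in a boot log excerpt.
# The heredoc below is a sample taken from the post, not live output.
sample_log=$(cat <<'EOF'
Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 0
Apr 27 11:35:06 xo01 kernel: smp: Bringing up secondary CPUs ...
Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 1
Apr 27 11:35:06 xo01 kernel: installing Xen timer for CPU 2
Apr 27 11:35:06 xo01 kernel: cpu 1 spinlock event irq 81
EOF
)

timers=$(printf '%s\n' "$sample_log" | grep -c 'installing Xen timer')
echo "Xen timers installed for $timers CPUs"
# → Xen timers installed for 3 CPUs
```

On a real guest, `dmesg | grep -c 'installing Xen timer'` would give one event per vCPU; correlating their timestamps with the total boot time would show whether these events are actually where the delay accumulates.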
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELAD
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR.
    Environment:
    - 3-node XCP-ng cluster
    - XOSTOR distributed storage (2x 2 TB NVMe on each host)
    - XOA for management
    - Management network 1 Gb/s, storage network 10 Gb/s
    - MTU 1500 everywhere (no jumbo frames)
    During VM migration, creation, and destruction, XOA loses connection to my host pool. VMs keep running normally, hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after 30 s to 1 min.
    Observations:
    - No significant CPU or RAM saturation
    - No obvious disk latency issues (iostat looks normal)
    - No errors reported on the NICs
    - xapi process remains active (no crash or freeze)
    - The problem is intermittent and seems random
    I've monitored the NICs with iftop: there is no bandwidth bottleneck, and XOSTOR is using the 10 Gb network only. Has anyone experienced similar behavior with XOSTOR, and how can I fix it? Thanks in advance for your help.
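One way to measure the outage windows described above is to timestamp every failed probe of the pool master's XAPI HTTPS endpoint during a migration. A minimal sketch, assuming a hypothetical host address and a short demo probe count (both placeholders, not from the thread):

```shell
#!/bin/sh
# Log each moment XAPI stops answering, so the 30 s - 1 min gaps can be
# correlated with migration events. HOST is a placeholder: 127.0.0.1:9 is
# a closed local port used here so every demo probe fails deterministically.
HOST="${HOST:-127.0.0.1:9}"   # replace with your pool master
N="${N:-3}"                   # number of probes in this demo run

probe_xapi() {
    # -k: XAPI typically uses a self-signed cert; -m 2: 2 s timeout per probe
    curl -k -s -m 2 "https://$1/" >/dev/null 2>&1
}

failures=0
i=0
while [ "$i" -lt "$N" ]; do
    if ! probe_xapi "$HOST"; then
        echo "$(date) XAPI unreachable on $HOST"
        failures=$((failures + 1))
    fi
    i=$((i + 1))
done
echo "failed probes: $failures/$N"
```

Run continuously (larger N, with a `sleep 1` between probes) against the real master while triggering a migration; the cluster of timestamped failures marks the outage window to match against `/var/log/xensource.log` on the hosts.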
  • 34 Topics
    102 Posts
    B
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.