Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @stormi The left side of the chart is all VMs running, at about 1.5 GB/s total. Each VM's VDI ranges from 128 GB to 256 GB allocated (actual disk space used, I'm not sure). [image: 1777315109407-screenshot_20260425_130107.png] The 200–300 MB/s on the far right is just XO-CE running idle. [image: 1777315216499-screenshot_20260425_144314.png] So if each VM is consuming roughly 300 MB/s, times 4–5 VMs, that gets close to the 1.5 GB/s.
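The back-of-the-envelope arithmetic above can be checked quickly (a sketch; the 300 MB/s per-VM figure and the 4–5 VM count are the poster's own estimates):

```shell
# Rough throughput estimate: per-VM rate times VM count, compared
# against the ~1.5 GB/s observed on the left side of the chart.
per_vm_mb_s=300   # estimated per-VM throughput (MB/s), from the post
for n in 4 5; do
  total_mb_s=$(( per_vm_mb_s * n ))
  echo "$n VMs x ${per_vm_mb_s} MB/s = ${total_mb_s} MB/s"
done
# 4 VMs give 1200 MB/s (1.2 GB/s) and 5 VMs give 1500 MB/s (1.5 GB/s),
# which is consistent with the observed aggregate figure.
```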
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    stormi
    Hi. We are aware of this publication and have reviewed each of its claims over the last few days. A few of the reported issues do represent real privilege escalation paths. However, they rely on XAPI's advanced RBAC roles feature, which is not enabled or exposed by default in Xen Orchestra, XO Lite, or any of our standard documentation.

    In practice, the escalation path requires a specific setup: an XCP-ng pool connected to Active Directory for its user management, where a user is given access to the management network and is explicitly granted VM configuration rights (the vm-admin XAPI role) via XAPI roles. Such a user could gain elevated host-level privileges beyond what was intended. As we don't actively promote or recommend this configuration, we believe very few users rely on it. For the small group that might, patched packages are in the testing phase, and we will release them shortly. CVEs are being assigned by the Xen Project (the parent project of the XAPI Project) to the vulnerabilities, all of which require this vm-admin XAPI role.

    Most of the other claims stem from misunderstandings of how XAPI roles are designed to work (~65 of the 89 claims), or describe bugs that don't translate into actual security impact (~15 of them).

    On the disclosure process: we always appreciate coordinated security research, but responsible disclosure typically involves a reasonable grace period (often two weeks or more) to allow time for review, patching, and a coordinated release. In this case, we received an email just 24 hours before public publication, and the initial contact came with strange conditions. That doesn't align with standard responsible disclosure practices.

    Note: this is not intended as an official statement. I have a clear view of the security impact, but since this is an informal, unfiltered write-up, please pardon any minor mistakes in how I've reported it.
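For readers unfamiliar with the setup the post describes, granting a directory user the vm-admin role looks roughly like this on a pool host (a hypothetical illustration only — the domain name, credentials, and UUID are placeholders, and this is precisely the configuration the advisory's escalation paths depend on):

```shell
# Enable Active Directory external authentication on the pool
# (placeholder domain and credentials).
xe pool-enable-external-auth auth-type=AD service-name=example.org \
    config:user=Administrator config:pass=secret

# Register an AD user as a subject known to XAPI.
xe subject-add subject-name='EXAMPLE\someuser'

# Grant that subject the vm-admin XAPI role -- the role all the
# assigned CVEs require. <subject-uuid> is the UUID printed above.
xe subject-role-add uuid=<subject-uuid> role-name=vm-admin
```

Pools that have never run `xe subject-role-add` with `role-name=vm-admin` are not in the affected configuration.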
  • 3k Topics
    28k Posts
    @escape222 Maybe try again as a BIOS-boot VM, but the days of BIOS boot are nearing an end; so many things have moved to UEFI-only these days that a problem like this is going to be hard to ignore.
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELA
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR. Environment: 3-node XCP-ng cluster, XOSTOR distributed storage (2 × 2 TB NVMe on each host), XOA for management, 1 Gb/s management network, 10 Gb/s storage network, MTU 1500 everywhere (no jumbo frames). During VM migrations, creations, and destructions, XOA loses its connection to my host pool; VMs keep running normally, hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after 30 s to 1 min. Observations: no significant CPU or RAM saturation, no obvious disk latency issues (iostat looks normal), no errors reported on the NICs, and the xapi process remains active (no crash or freeze). The problem is intermittent and seems random. I've monitored the NICs with iftop, see no bandwidth bottleneck, and can see that XOSTOR is using the 10 Gb network only. Has anyone experienced similar behavior with XOSTOR, and how can I fix it? Thanks in advance for your help.
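Since the hosts stay reachable over SSH during the disconnect, a few diagnostics are worth capturing while it is happening (a sketch; XOSTOR is built on LINSTOR and DRBD, so their CLIs apply alongside the XCP-ng toolstack):

```shell
# LINSTOR view: are all three nodes online and all resources healthy?
linstor node list
linstor resource list

# DRBD replication state as seen from this host.
drbdadm status

# Is xapi logging errors or restarting around the moment XOA drops?
tail -n 200 /var/log/xensource.log

# Quick check that the toolstack still answers locally even while
# XOA reports the pool as disconnected.
xe host-list --minimal
```

Comparing timestamps in `xensource.log` with the XOA disconnect window should show whether xapi is briefly stalling (e.g. during storage operations) or whether the problem is purely on the network path between XOA and the pool master.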
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.