Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    stormi
    @pkgw Our initial theory is that you might have applied updates at some point that replaced the sm package with one that didn't support qcow2. A subsequent update would then have brought it back, but the metadata was lost.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    olivierlambert
    Note that we don't have any XO or XCP-ng devs based in the US either.
  • 3k Topics
    28k Posts
    [17:00:30] xoa@xoa:~$ sudo journalctl -u xo-server -f
    Apr 30 16:23:13 xoa xo-server[569]: 2026-04-30T14:23:13.264Z xo:main INFO Setting up / → /usr/local/lib/node_modules/@xen-orchestra/web/dist
    Apr 30 16:26:27 xoa xo-server[569]: 2026-04-30T14:26:27.942Z xo:rest-api:error-handler INFO [GET] /users/370331c9-fe77-49db-b2a0-e38f776607bd (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.549Z xo:rest-api:error-handler INFO [GET] /pools (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.555Z xo:rest-api:error-handler INFO [GET] /hosts (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.563Z xo:rest-api:error-handler INFO [GET] /vms (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.627Z xo:rest-api:error-handler INFO [GET] /tasks (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.812Z xo:rest-api:error-handler INFO [GET] /alarms (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.812Z xo:rest-api:error-handler INFO [GET] /vm-controllers (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.813Z xo:rest-api:error-handler INFO [GET] /vdis (403)
    Apr 30 16:26:28 xoa xo-server[569]: 2026-04-30T14:26:28.813Z xo:rest-api:error-handler INFO [GET] /srs (403)
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELA
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR.

    Environment:
    - 3-node XCP-ng cluster
    - XOSTOR distributed storage (2x2 TB NVMe on each host)
    - XOA for management
    - Management network 1 Gb/s, storage network 10 Gb/s
    - MTU 1500 everywhere (no jumbo frames)

    During VM migrations, creations, and destructions, XOA loses its connection to my host pool. The VMs keep running normally, the hosts remain reachable (SSH / HTTPS / ping OK), and the connection comes back after some time, 30 s to 1 min.

    Observations:
    - No significant CPU or RAM saturation
    - No obvious disk latency issues (iostat looks normal)
    - No errors reported on the NICs
    - The xapi process remains active (no crash or freeze)

    The problem is intermittent and seems random. I've monitored the NICs with iftop: I see no bandwidth bottleneck, and I can see that XOSTOR is using the 10 Gb network only. Has anyone experienced similar behavior with XOSTOR, and how did you fix it? Thanks in advance for your help.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.