Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    Any news about the development of XCP-ng Center? In particular, is there a way to customize which alert levels are shown? I currently see level 4 alerts that are more or less "info", and I would prefer not to see them as "alerts".
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    olivierlambert
    Technically speaking, this could be an improvement of the RPU, yeah, for non-agile VMs (to shut them down). I don't know if it's already in the backlog somewhere or not; pinging @gregoire
  • 3k Topics
    28k Posts
    @florent said:

      @acebmxer said: connectNbdClientIfPossible

      Yes, connectNbdClientIfPossible is the nearest error message to the real cause. Now, why does it fail? NBD should be enabled on the network used for backups. Do you have a default backup network defined? XO must have at least one VIF on this network. See post: https://xcp-ng.org/forum/post/104549

    I do have a backup network defined. I believe that before the last round I had NBD enabled at the pool level on the PIFs, with the backup network defined. During troubleshooting, NBD was mentioned as the issue, so I disabled NBD but left the backup pool network defined. I sent a PM of the chat leading to that conclusion. I also re-added a VHD SR first and migrated the VM "Docker of Things" to it, since that was one of the VMs I was having constant issues with.

    The last backup logs:
    2026-04-20T04_19_10.531Z - backup NG.txt
    2026-04-20T17_04_48.232Z - backup NG.txt
    2026-04-20T21_24_14.738Z - backup NG.txt
    2026-04-21T02_22_08.642Z - backup NG.txt
    2026-04-21T04_16_26.094Z - backup NG.txt

    First successful run with NBD disabled: 2026-04-21T04_24_12.036Z - backup NG.txt

    I ran it again this morning and expected it to pass, but 2 VMs failed: 2026-04-21T10_10_14.175Z - backup NG.txt

    With NBD disabled, it looks like the host does the backup, not XO itself.
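    For reference, on XCP-ng/XenServer a network is marked as NBD-capable through its purpose field. A minimal sketch using the xe CLI (the UUID is a placeholder for your backup network's UUID; this is an illustration, not the exact commands used in the thread above):

    ```shell
    # Find the UUID of the network used for backups
    xe network-list

    # Allow (TLS-secured) NBD connections on that network
    xe network-param-add param-name=purpose param-key=nbd uuid=<backup-network-uuid>

    # To disable NBD on that network again
    xe network-param-remove param-name=purpose param-key=nbd uuid=<backup-network-uuid>
    ```

    XO can only use NBD for a backup if it has a VIF on an NBD-enabled network, which matches the requirement @florent describes.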
  • Our hyperconverged storage solution

    44 Topics
    731 Posts
    olivierlambert
    Different use cases: Ceph is better with more hosts (at least 6 or 7), while XOSTOR is better with 3 to 7/8 hosts. We might add better Ceph support for large clusters in the future.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.