Subcategories

  • All Xen related stuff

    565 Topics
    5k Posts
    AlexanderK
    @amosgiture XCP-ng 8.2 and 8.2.1 are working without any issue. Have you checked the logs? It is recognized as XenServer 8.2.1.
  • The integrated web UI to manage XCP-ng

    18 Topics
    261 Posts
    G
    Confirmed by trying to install Windows Server 2025 with UEFI: it did not boot the CD from the ISO SR (SMB share). Started over to be able to grab screenshots of the process for documentation; Debian 12 from the latest ISO worked just fine in BIOS mode. Overall, pretty pleased with where XO Lite is going. It's complete enough to get started, easier if you deploy XOA (as it has always been), but you can now do everything in a semi GUI/text-based workflow, which opens this up to more users. And once some form of XO is running, it's all back to the same as it has been, which is certainly one of the easiest systems to get up and running.
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    90 Topics
    1k Posts
    R
    On VMware you would also need vCenter for this kind of feature. And as you can easily deploy an empty XOA, why would this be an issue?
  • Hardware related section

    114 Topics
    1k Posts
    olivierlambert
    I was just thinking about the potential reasons why it doesn't work, and it wasn't a correct guess
  • The place to discuss new additions into XCP-ng

    239 Topics
    3k Posts
    TeddyAstie
    Hello! Xen supports 3 virtualization modes: PV (deprecated), HVM (used in XCP-ng) and PVH. While HVM is supported in XCP-ng (and used), PVH hasn't been integrated yet, but today in XCP-ng 8.3 we have some early support for it.

    PVH mode was officially introduced in Xen 4.10 as a leaner, simpler variant of HVM (it was initially named HVM-lite) with little to no emulation, only PV devices, and less overall complexity. It aims to be a great and simpler alternative to traditional HVM for modern guests.

    A quick comparison of all modes:

    PV mode:
    - needs specific guest support
    - only PV devices (no legacy hardware)
    - relies on the PV MMU (less efficient than VT-x EPT/AMD-V NPT overall, but works without virtualization technologies)
    - unsafe against Spectre-style attacks
    - supports: direct kernel boot, pygrub
    - deprecated

    HVM mode:
    - emulates a real-behaving machine (using QEMU), including legacy platform hardware (IOAPIC, HPET, PIT, PIC, ...) and (maybe legacy) I/O hardware (network card, storage, ...); some of it can be disabled by the guest (PVHVM), but it exists at the start of the guest
    - relies on VT-x/AMD-V
    - traditional PC boot flow (BIOS/UEFI)
    - optional PV devices (opt-in by the guest; PVHVM)
    - performs better than PV mode on most machines
    - compatible with pretty much all guests (including Windows and legacy OSes)

    PVH mode:
    - relies on VT-x/AMD-V (regarding that, on the Xen side, it uses the same code as HVM)
    - minimal emulation (e.g. no QEMU), way simpler overall, lower overhead
    - only PV devices
    - supports: direct kernel boot (like PV), PVH-GRUB, or UEFI boot (PVH-OVMF)
    - needs guest support (but much less intrusive than PV)
    - works with most Linux distros and most BSDs; doesn't work with Windows (yet)

    Installation

    Keep in mind that this is very experimental and not officially supported.

    PVH vncterm patches (optional)

    While XCP-ng 8.3 actually has support for PVH, due to a XAPI bug you will not be able to access the guest console. I provide a patched XAPI with a patched console.

        # Download repo file for XCP-ng 8.3
        wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
        # You may need to update to testing repositories.
        yum update --enablerepo=xcp-ng-testing
        # Installing the patched XAPI packages (you should see `.pvh` XAPI packages)
        yum update --enablerepo=xcp-ng-tae2

    This is optional, but you probably want it to see what's going on in your guest without having to rely on SSH or xl console.

    Making/converting into a PVH guest

    You can convert any guest into a PVH guest by modifying its domain-type parameter:

        xe vm-param-set uuid={UUID} domain-type=pvh

    And revert this change by setting it back to HVM:

        xe vm-param-set uuid={UUID} domain-type=hvm

    PVH OVMF (boot using UEFI)

    You also need a PVH-specific OVMF build that can be used to boot the guest in UEFI mode. Currently there is no package available for it, but I provide a custom-built OVMF with PVH support: https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ

    You need to place this file on the host as /var/lib/xcp/guest/pvh-ovmf.elf (create all missing parents). Then set it as the PV-kernel:

        xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf

    Once done, you can boot your guest as usual.

    Tested guests

    On many Linux distros, you need to add console=hvc0 to the cmdline, otherwise you may not have access to a PV console.

    - Alpine Linux
    - Debian

    Known limitations

    - Some stats show "no stats" (XAPI bug?)
    - No support for booting from ISO; you can work around this by importing your ISO as a disk and using it as a read-only disk
    - No live migration support (or at least, don't expect it to work properly)
    - No PCI passthrough support
    - No actual display (only PV console)
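    To tie the steps above together, here is a minimal helper sketch. The xe calls are the ones quoted in the post; the script itself, its argument handling, and the assumption that pvh-ovmf.elf is already in place are illustrative only.

        #!/bin/sh
        # Sketch: switch an existing XCP-ng 8.3 VM to PVH and boot it via PVH-OVMF.
        # Assumes the custom OVMF build from the post has already been copied to
        # /var/lib/xcp/guest/pvh-ovmf.elf on the host. Run this on the pool master.
        set -eu

        VM_UUID="$1"   # pass the VM UUID as the first argument

        # Switch the virtualization mode of the guest to PVH.
        xe vm-param-set uuid="$VM_UUID" domain-type=pvh

        # Point the PV-kernel at the PVH-capable OVMF build so the guest boots via UEFI.
        xe vm-param-set uuid="$VM_UUID" PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf

        # Start the guest; revert with domain-type=hvm if anything goes wrong.
        xe vm-start uuid="$VM_UUID"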
  • Multiple PCI-E passthrough problem

    3
    0 Votes
    3 Posts
    325 Views
    kasn_code
    So I managed to get the VM to see both drives. In dmesg I can see something like this:

        [ 1.160149] ahci 0000:00:08.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x20 impl SATA mode
        [ 1.161043] ahci 0000:00:08.0: flags: 64bit ncq sntf ilck led clo only pmp fbs pio slum part
        [ 1.187888] ahci 0000:00:09.0: SSS flag set, parallel bus scan disabled
        [ 2.186211] ahci 0000:00:09.0: controller reset failed (0xffffffff)
        [ 2.187307] ahci: probe of 0000:00:09.0 failed with error -5

    So I removed the PCI device and did a rescan with:

        root@mars-test-uefi:~# echo "1" > /sys/bus/pci/devices/0000\:00\:09.0/remove
        root@mars-test-uefi:~# echo "1" > /sys/bus/pci/rescan

    Now when I check dmesg it shows up properly:

        [ 795.538214] ahci 0000:00:09.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x4 impl SATA mode
        [ 795.538219] ahci 0000:00:09.0: flags: 64bit ncq sntf ilck led clo only pmp fbs pio slum part

    fdisk -l also shows both drives now:

        root@mars-test-uefi:~# fdisk -l
        Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
        Disk model: ST8000DM004-2U91
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes

        Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
        Disk model: WDC WD80EFZZ-68B
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes

    It seems like on boot it tries to find the device but fails to, yet after a remove/rescan it's fine. The question now becomes: how do I make this work ON boot?
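    One possible way to automate that workaround is to run the same remove/rescan early at boot inside the guest. This is only a sketch: the PCI address 0000:00:09.0 is the one from this VM, the sleep value is a guess, and hooking the script into a systemd oneshot unit or rc.local is left to your setup; it works around the failed probe rather than fixing it.

        #!/bin/sh
        # Re-probe the AHCI controller that fails its initial probe at boot,
        # reproducing the manual remove/rescan steps shown above.
        set -eu

        DEV=0000:00:09.0   # adjust to the passed-through device's address

        # Give the controller a moment to settle after the failed probe.
        sleep 5

        # Detach the mis-probed device, then ask the kernel to rediscover it.
        if [ -e "/sys/bus/pci/devices/$DEV/remove" ]; then
            echo 1 > "/sys/bus/pci/devices/$DEV/remove"
        fi
        echo 1 > /sys/bus/pci/rescan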
  • Upgrade from 8.1 to 8.2

    Solved
    12
    1
    0 Votes
    12 Posts
    576 Views
    olivierlambert
    Thanks a lot for your feedback!
  • Ubuntu desktop issue

    1
    0 Votes
    1 Posts
    162 Views
    No one has replied
  • Host dbsync failed and XAPI restarts every time after patch installation

    4
    0 Votes
    4 Posts
    193 Views
    olivierlambert
    Really, you should, because it would have prevented this situation in the first place, as well as some other future questions or problems you might have. That's one of the few capital rules in the XCP-ng world: your master must ALWAYS be more recent than the slaves (or at the same level, obviously), otherwise slaves won't be able to connect. I think you really need some assistance on best practices and such; please contact us so we can assist more generally. If this basic requirement is not understood, you might have other/deeper issues.
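    As a quick sanity check before patching, you can compare the versions of the hosts in a pool from the CLI. This is a generic sketch using standard xe commands, not something from the post itself:

        # List every host with its version details so you can confirm the master
        # is at least as recent as every slave before applying updates.
        xe host-list params=name-label,address,software-version

        # Show which host is currently the pool master.
        xe pool-list params=master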
  • 0 Votes
    8 Posts
    2k Views
    olivierlambert
    Any of the hosts: if they are in the same pool, that's logical. Only the master needs to be reachable. XAPI will probably stay in a "starting" state as long as all SRs aren't plugged. If you have the SR on a VM on another host than the master, reboot the master only; you should be able to connect sooner. Alternatively, check https://docs.xcp-ng.org/troubleshooting/
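    If you want to see which SRs are still holding XAPI up while it starts, a quick check from the master's console looks like this; it's a generic sketch using standard xe commands rather than anything specific to this thread:

        # Show every SR/host pairing and whether its PBD is currently attached;
        # unplugged PBDs are what keep XAPI busy during startup.
        xe pbd-list params=sr-name-label,host-name-label,currently-attached

        # Once the missing storage is reachable again, plug a PBD by hand if needed,
        # using the UUID reported by the command above.
        xe pbd-plug uuid=<pbd-uuid>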
  • One VM forcibly restarts

    5
    0 Votes
    5 Posts
    399 Views
    richiebyte
    On this one VM, yeah; all other VMs have been fine with the Citrix tools. I am not sure if the tools change anything at the VM level once installed, as the problem continued even after attaching a fresh blank virtual disk to the problem VM.
  • Ubuntu VM stuck at boot

    2
    1
    0 Votes
    2 Posts
    287 Views
    planedrop
    How long have you waited for it? Like is it truly stuck or maybe just really slow to boot?