Subcategories

  • All Xen related stuff

    564 Topics
    5k Posts
    olivierlambertO
    Our most promising lead is that it's due to AMD CPUs lacking a feature Intel has, called iPAT. In short (and probably too short to be entirely correct): the grant tables in the guest (used to communicate securely between, in this case, the VM and the Dom0) are not cached on AMD CPUs, and on AMD there's no way to force a cache attribute on a guest memory access, unlike on Intel. So grant table requests are uncached on AMD versus Intel, which explains at least part of the performance difference. What's next? Roger from the Xen project pointed us in that direction, and he wrote a very crude patch demonstrating it, which we tested internally: a promising lead (5x performance VM->Dom0 and nearly 2x between VMs). Right now, we have multiple people working internally on a "real" patch, or at least something to work around the issue if possible. It's been a few weeks since then, and we (at Vates, again) are trying to figure out the best approach for AMD CPUs, to make a patch that could land upstream.
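    For anyone who wants to see this on their own hardware, here is a minimal sketch of the kind of VM->Dom0 throughput measurement being discussed, assuming iperf3 is installed on both ends (the IP address is a placeholder):

    ```
    # In Dom0 (the receiving end): start an iperf3 server
    iperf3 -s

    # In the guest: measure throughput to the receiver, 30 s over 4 parallel streams
    iperf3 -c 192.0.2.10 -t 30 -P 4
    ```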
  • The integrated web UI to manage XCP-ng

    18 Topics
    261 Posts
    G
    Confirmed by trying to install Windows Server 2025 with UEFI: it did not boot the CD from the ISO SR (SMB share). I started over to be able to grab screenshots of the process for documentation; Debian 12 from the latest ISO worked just fine in BIOS mode. Overall, I'm pretty pleased with where XO Lite is going. It's complete enough to get started (easier if you deploy XOA, as it has always been), but you can now do everything in a semi GUI/text-based workflow, which opens this up to more users. And once some form of XO is running, it's all back to the same as it has been, which is certainly one of the easiest systems to get up and running.
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    90 Topics
    1k Posts
    R
    On VMware you would need vCenter as well for this kind of feature. And as you can easily deploy an empty XOA, why would this be an issue?
  • Hardware related section

    114 Topics
    1k Posts
    olivierlambertO
    I was just thinking about the potential reasons why it doesn't work, and it wasn't a correct guess
  • The place to discuss new additions into XCP-ng

    239 Topics
    3k Posts
    TeddyAstieT
    Hello! Xen supports 3 virtualization modes: PV (deprecated), HVM (used in XCP-ng) and PVH. While HVM is supported (and used) in XCP-ng, PVH hasn't been integrated yet, but XCP-ng 8.3 now ships some early support for it. The PVH mode was officially introduced in Xen 4.10 as a leaner, simpler variant of HVM (it was initially named HVM-lite) with little to no emulation, only PV devices, and less overall complexity. It aims to be a great, simpler alternative to traditional HVM for modern guests.

    A quick comparison of all modes:

    PV mode:
    - needs specific guest support
    - only PV devices (no legacy hardware)
    - relies on the PV MMU (less efficient than VT-x EPT / AMD-V NPT overall, but works without virtualization technologies)
    - unsafe against Spectre-style attacks
    - supports direct kernel boot and pygrub
    - deprecated

    HVM mode:
    - emulates a real-behaving machine (using QEMU), including legacy platform hardware (IOAPIC, HPET, PIT, PIC, ...) and possibly legacy I/O hardware (network card, storage, ...); some of it can be disabled by the guest (PVHVM), but it exists at guest start
    - relies on VT-x/AMD-V
    - traditional PC boot flow (BIOS/UEFI)
    - optional PV devices (opt-in by the guest; PVHVM)
    - performs better than PV mode on most machines
    - compatible with pretty much all guests (including Windows and legacy OSes)

    PVH mode:
    - relies on VT-x/AMD-V (on the Xen side, it uses the same code as HVM)
    - minimal emulation (e.g. no QEMU): way simpler overall, lower overhead
    - only PV devices
    - supports direct kernel boot (like PV), PVH-GRUB, or UEFI boot (PVH-OVMF)
    - needs guest support (but much less intrusive than PV)
    - works with most Linux distros and most BSDs; doesn't work with Windows (yet)

    Installation

    Keep in mind that this is very experimental and not officially supported.

    PVH vncterm patches (optional): while XCP-ng 8.3 does have support for PVH, a XAPI bug prevents access to the guest console. I provide a patched XAPI with a patched console.

    ```
    # Download the repo file for XCP-ng 8.3
    wget https://koji.xcp-ng.org/repos/user/8/8.3/xcpng-users.repo -O /etc/yum.repos.d/xcpng-users.repo
    # You may need to update to testing repositories.
    yum update --enablerepo=xcp-ng-testing
    # Install the patched XAPI packages (you should see `.pvh` XAPI packages)
    yum update --enablerepo=xcp-ng-tae2
    ```

    This is optional, but you probably want it so you can see what's going on in your guest without having to rely on SSH or xl console.

    Making/converting a PVH guest: you can convert any guest into a PVH guest by modifying its domain-type parameter.

    ```
    xe vm-param-set uuid={UUID} domain-type=pvh
    ```

    Revert this change by setting it back to HVM:

    ```
    xe vm-param-set uuid={UUID} domain-type=hvm
    ```

    PVH OVMF (boot using UEFI): you also need a PVH-specific OVMF build to boot the guest in UEFI mode. Currently there is no package available for it, but I provide a custom-built OVMF with PVH support: https://nextcloud.vates.tech/index.php/s/L8a4meCLp8aZnGZ
    Place this file on the host as /var/lib/xcp/guest/pvh-ovmf.elf (creating all missing parent directories), then set it as the PV-kernel:

    ```
    xe vm-param-set uuid={UUID} PV-kernel=/var/lib/xcp/guest/pvh-ovmf.elf
    ```

    Once done, you can boot your guest as usual.

    Tested guests: Alpine Linux, Debian. On many Linux distros you need to add console=hvc0 to the kernel cmdline, otherwise you may not have access to a PV console.

    Known limitations:
    - Some stats show "no stats" (XAPI bug?)
    - No support for booting from ISO; you can work around this by importing your ISO as a disk and using it as a read-only disk (see the sketch just below)
    - No live migration support (or at least, don't expect it to work properly)
    - No PCI passthrough support
    - No actual display (only a PV console)
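    A rough illustration of that ISO workaround, as a sketch only: the UUIDs, size, and device number are placeholders, and xe vdi-import with format=raw is assumed to be available in your XAPI version.

    ```
    # Create a VDI large enough to hold the ISO content (size here is an example)
    xe vdi-create sr-uuid={SR_UUID} name-label="installer-iso-as-disk" virtual-size=2GiB type=user

    # Import the raw ISO content into the new VDI
    xe vdi-import uuid={VDI_UUID} filename=installer.iso format=raw

    # Attach it to the PVH guest as a read-only disk
    xe vbd-create vm-uuid={VM_UUID} vdi-uuid={VDI_UUID} device=1 mode=RO type=Disk
    ```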
  • What to do about Realtek RTL8125 RTL8126 RTL8127 drivers

    1
    0 Votes
    1 Post
    63 Views
    No one has replied
  • XCP-NG Kubernetes micro8k

    2
    7
    0 Votes
    2 Posts
    63 Views
    olivierlambertO
    Hi! Can you explain what you are trying to achieve exactly? And if you have an issue, what is it? Edit: also, please edit your post to use Markdown syntax for code blocks.
  • 8.3 Cannot boot from CD Rom

    19
    1
    0 Votes
    19 Posts
    511 Views
    olivierlambertO
    Reping @stormi
  • sr iso disconnect and crashed my hosts

    11
    0 Votes
    11 Posts
    115 Views
    olivierlambertO
    I already suggested the solution; now it's up to you to either live with those processes or decide to reboot (ideally after applying updates, because it's very dangerous NOT to be up to date).
  • Install XCP-ng in old HP ProLiant DL160 G6 (gen 6)

    9
    0 Votes
    9 Posts
    228 Views
    S
    @john.c Yeah, like I said, it is a good step in the right direction. It just doesn't solve my particular storage-related problems.
  • Citrix tools after version 9.0 removed quiesced snapshot

    2
    0 Votes
    2 Posts
    88 Views
    TeddyAstieT
    @vkeven The XCP-ng 8.1 release notes say that VSS and quiesced snapshot support was removed, because it never worked correctly and caused more harm than good. Note that Windows guest tools version 9 (the default for recent versions of Windows if you install the Citrix drivers) already removed VSS support, even for older versions of CH / XCP-ng. I am not sure if this VSS feature is bound to the PV drivers, or if it also needs hypervisor support. Either way, it is not recommended to stay on an old version of the guest agent.
  • Diagnosing frequent crashes on host

    15
    0 Votes
    15 Posts
    361 Views
    T
    @olivierlambert said in Diagnosing frequent crashes on host: "Maybe there's a usage that's slightly different from when it was 'more solid', and now it's triggered more easily. Is your XCP-ng fully up to date?" No; as I said originally, I'm still on 8.2.1. I have been concerned about moving to 8.3 because it's a new installation and I don't want to screw it up, but I'm willing to accept that it's the right thing to do.
  • Script to auto mount USBs on Boot/Reboot. Monitoring Multiple UPS

    7
    0 Votes
    7 Posts
    472 Views
    olivierlambertO
    Ping @stormi so we track this somewhere internally
  • Grub looking for /dev/vda instead of /dev/xvda

    1
    0 Votes
    1 Post
    61 Views
    No one has replied
  • Storage migration logs

    2
    0 Votes
    2 Posts
    50 Views
    olivierlambertO
    Hi! Check the task view: you'll see the duration of the process there.
  • reboot of host does it stop or kill running VM's?

    14
    0 Votes
    14 Posts
    1k Views
    N
    Could someone elaborate on the procedure to have all VMs on a host shut down properly upon XCP-ng host shutdown, please?

    I tried, from the host prompt:
    xe host-disable
    xe host-shutdown
    and from XOA, Host: shutdown, with the warning ("This will shutdown your host without evacuating its VMs. Do you want to continue?"), and rightly so the host seemingly became unavailable (ping to its IP stopped). But then what happens is very odd: first the VM on it keeps pinging for a couple of minutes (yes, after the host stops answering pings), then the VM stops pinging, but as far as I can see XCP-ng is not OFF. Awkwardly, I have access to the iDRAC8 (Enterprise license) of the machine XCP-ng is running on, and I can't see the proper status of XCP-ng from it. AFAIK it's not pinging, but it doesn't seem OFF either; at least the iDRAC shows it ON, and upon power cycling and reconnecting to the VM, the logs show it wasn't cleanly shut down.

    NB: the VM has xen-guest-agent running within a container, but from what I gathered, the agent in Linux guests has no role in VM shutdown. See https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16

    Also, I double-checked Proxmox: it does cleanly shut down VMs, either via a "shutdown -h now" command or when triggered from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice for XCP-ng/XOA to be able to do the same.
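    For reference, here is a sketch of one way to do this manually from Dom0 with the standard xe CLI; the host UUID is a placeholder, and this is only an illustration, not the official shutdown path:

    ```
    # Prevent new VMs from starting on this host
    xe host-disable uuid=$HOST_UUID

    # Cleanly shut down every running guest still resident on the host
    for vm in $(xe vm-list resident-on=$HOST_UUID is-control-domain=false \
                power-state=running params=uuid --minimal | tr ',' ' '); do
        xe vm-shutdown uuid="$vm"
    done

    # Only then shut the host itself down
    xe host-shutdown uuid=$HOST_UUID
    ```

    Note that xe vm-shutdown attempts a clean in-guest shutdown (add --force for a hard stop), which is presumably what one would want the host shutdown path to do automatically.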
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    5
    0 Votes
    5 Posts
    146 Views
    ForzaF
    @dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it is strange that it started after the latest set of updates.
  • Migrate windows from Xeon Silver to older Xeon or AMD?

    Solved
    3
    0 Votes
    3 Posts
    133 Views
    G
    @olivierlambert I was looking for a way to mark this solved but can't find it. I haven't moved things yet, but after migrating my big lab to my mini-lab, I'm confident that warm migration is the way to go. It was fast and seamless, as long as you have the right network adapters set up. I had to fiddle with one of my networks to make a VM function, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go smoothly if using the old servers is the option for this project.
  • Veeam and XCP-ng

    Solved
    34
    0 Votes
    34 Posts
    12k Views
    planedropP
    @MAnon This is a valid point, actually: without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    2k Views
    D
    @mickwilli Yes, sorry, I posted a new subject. The February update fixed a lot of bugs, but not the freeze. I have found the solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs; it works fine. Thanks to all. All the bugs were fixed in the previous version, 23H2. It's Microsoft: the past is better than the future.
  • Unable to attach empty optical drive to VM.

    2
    1
    0 Votes
    2 Posts
    103 Views
    A
    I've managed to at least solve part of my issue. Using this article, I pulled together the information I needed to remove the optical drive from the VM. It referenced xe vbd-list. I found the manpage for that command and noted that I could get the information I needed to remove the drive. For future me to reference, because I know I'll somehow do this again:

    List all Virtual Block Devices (VBDs) associated with the VM (you can do this by vm-uuid or vm-label):

    ```
    [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"
    ```

    The output should show the following:

    ```
    uuid ( RO)                    : 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
    vm-uuid ( RO)                 : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
    vm-name-label ( RO)           : veeam01
    vdi-uuid ( RO)                : 7821ef6d-4778-4478-8cf4-e950577eaf4f
    vdi-name-label ( RO)          : SCSI 2:0:0:0
    allowed-operations (SRO)      : attach; eject
    current-operations (SRO)      :
    empty ( RO)                   : false
    device ( RO)                  :
    userdevice ( RW)              : 3
    bootable ( RW)                : false
    mode ( RW)                    : RO
    type ( RW)                    : CD
    unpluggable ( RW)             : false
    currently-attached ( RO)      : false
    attachable ( RO)              : <expensive field>
    storage-lock ( RO)            : false
    status-code ( RO)             : 0
    status-detail ( RO)           :
    qos_algorithm_type ( RW)      :
    qos_algorithm_params (MRW)    :
    qos_supported_algorithms (SRO):
    other-config (MRW)            :
    io_read_kbs ( RO)             : <expensive field>
    io_write_kbs ( RO)            : <expensive field>

    uuid ( RO)                    : 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
    vm-uuid ( RO)                 : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
    vm-name-label ( RO)           : veeam01
    vdi-uuid ( RO)                : 3f89c727-f471-4ec3-8a7c-f7b7fc478148
    vdi-name-label ( RO)          : [ESXI]veeam01-flat.vmdk
    allowed-operations (SRO)      : attach
    current-operations (SRO)      :
    empty ( RO)                   : false
    device ( RO)                  : xvda
    userdevice ( RW)              : 0
    bootable ( RW)                : false
    mode ( RW)                    : RW
    type ( RW)                    : Disk
    unpluggable ( RW)             : false
    currently-attached ( RO)      : false
    attachable ( RO)              : <expensive field>
    storage-lock ( RO)            : false
    status-code ( RO)             : 0
    status-detail ( RO)           :
    qos_algorithm_type ( RW)      :
    qos_algorithm_params (MRW)    :
    qos_supported_algorithms (SRO):
    other-config (MRW)            : owner:
    io_read_kbs ( RO)             : <expensive field>
    io_write_kbs ( RO)            : <expensive field>
    ```

    Look for the device with type ( RW): CD and take its uuid; in this case, the uuid was 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy that VBD:

    ```
    xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"
    ```

    Once this was done, the VM started without issue.
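    As a possible shortcut for next time (a sketch, assuming xe's standard list filtering behaves the same on your version): the CD drive's VBD UUID can be fetched in one call by filtering on type and printing only the uuid field.

    ```
    # Print only the UUID of the VM's CD-type VBD
    xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" type=CD params=uuid --minimal
    ```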
  • The HA doesn't work

    16
    0 Votes
    16 Posts
    349 Views
    S
    @tjkreidl Hello, didn't I get about half OK too? 28 machines were impacted: 15 were left OK, and 13 ended up with the error message.
  • Failure to Boot

    3
    0 Votes
    3 Posts
    126 Views
    D
    @Davidj-0 Zero changes. This runs on an MS-01 with 2x 2TB NVMe drives in a mirrored RAID. All I use this for is to mess around with VMs and to self-host some services. I was still learning, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means, so I can't attempt to recover what I have done.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    7
    1 Votes
    7 Posts
    353 Views
    D
    @bleader Thank you for the thorough explanation; it greatly helps in understanding how the team works to keep these systems secure and functional. From a generalist standpoint, I use publicly available tools to check for and report on any known vulnerabilities within my network (public and private), and then I address those vulnerabilities with either a patch or, more commonly, a configuration change in the given system. These could include my UPSes, switches, hypervisors, or client devices (laptops, etc.). Addressing these is a huge portion of my day-to-day work, and knowing the normal convention for asking "hey, I found this issue with a commodity vulnerability scanner, is it going to be addressed?" is useful.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    7
    0 Votes
    7 Posts
    356 Views
    bleaderB
    Running tcpdump switches the interface to promiscuous mode to allow all traffic that reaches the NIC to be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which forwarded it to the VMs, and it wasn't dropped because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise, let us know if we need to investigate further.
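    If you want to double-check this behaviour on a host yourself, a quick sketch (the interface name is an example):

    ```
    # PROMISC shows up in the interface flags while tcpdump is attached
    ip link show eth0

    # The kernel also logs the transitions in and out of promiscuous mode
    dmesg | grep -i "promiscuous mode"
    ```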