Subcategories

  • All Xen related stuff

    558 Topics
    5k Posts
    TheNorthernLight
    @AtaxyaNetwork Since I can't get it to mount on boot, I have no idea if there is something special about it (I suspect it just has some existing folder structures, but I'm not 100% sure). Also, I wish I had Dell support, but we buy all of our hardware second-hand, so no support contract here.
  • The integrated web UI to manage XCP-ng

    16 Topics
    253 Posts
    Stwarf
    @olivierlambert I did reboot the system. The fact that I can access everything fine using the external URL, but not when I try to use the local IP address of the machine, is strange. I'm at work currently, but I'm going to do a deeper dive when I get home and search through everything in my home network. I just don't know what else to check.
  • Section dedicated to migrations from VMWare, HyperV, Proxmox etc. to XCP-ng

    87 Topics
    1k Posts
    S
    Putting this first so you know what you're in for before you start reading: TL;DR: I've migrated some VMs from Hyper-V to XCP-ng and (probably via my own ineptitude) we are experiencing worse performance. How can I fix this?

    Recently, we started the process of migrating from a Hyper-V environment to XCP-ng. We only have a couple of clients on our servers, both of which are relatively simple Windows Server DC + RDS setups. One of these clients was long overdue for an OS upgrade, so we just built a new pair of VMs for them. The other, we migrated using the guide found here: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#from-hyper-v We may have made a few missteps along the way... As the client is reporting (and we are observing) that performance is slow compared to pre-migration, I'll dump all relevant info I can think of below:

    VMs
    - DC: Windows Server 2016, 4 vCPU, 8GB RAM, 250GB storage
    - Apps/RDS: Windows Server 2022, 8 vCPU, 24GB RAM, 1.8TB storage across multiple drives

    Hyper-V environment
    - HP DL380 Gen9, dual Xeon E5-2630 v4 @ 2.2GHz, 256GB RAM
    - All VM storage on SSD in RAID5 (NTFS)

    XCP-ng environment
    - HP DL380 Gen9, dual Xeon E5-2680 v4 @ 2.4GHz, 256GB RAM
    - Storage (all thin provisioned with ext4): VM OS on SSD RAID5; user profiles on SSD RAID5; frequently accessed data on SSD RAID5; old/archived data on an NFS drive over a 10Gb link
    - Daily delta backups via XO

    Process

    Export and convert: From Hyper-V, we exported the VMs and drives. At this stage, we were not clear on how to "remove all the Hyper-V tools from the VM", so no vmic services were removed prior to conversion. These services have since been disabled, but were not running anyway as there was no Hyper-V host to trigger them. We then converted the exported drives and imported them into XCP-ng.

    VM creation: From XO, we created virtual machines with the same spec but no drives attached. It appears these were created with the "Other OS" template instead of the Windows Server 2016/2022 templates. For the VM with multiple drives, the OS disk was attached first, Citrix VM tools installed, and then the other drives attached.

    Other config - network: Migration removed the network config, so both VMs were re-IP'd after the installation of the Citrix drivers. This also broke the connection between the DC and the RDS VMs, which was fixed with the PowerShell command Test-ComputerSecureChannel -Repair.

    File restoration: We pulled the "Files and Folders" from our managed backup provider (Cove), bringing the server up to date. Outlook had a fit over the OST files. Redownloading and indexing all emails was causing a major load, so we set a group policy enforcing Outlook not to use cached mode, and then removed all OST files from the VM (backed up separately with Cove's 365 integration).

    Error events: VSS/ESENT were creating error events related to Hyper-V backup management. This was resolved by removing the Hyper-V CLSID via a registry edit.

    Findings and queries - where to start?

    Secure Boot: Neither VM is presently using Secure Boot. I've seen some (AI generated) suggestions that this may impact performance - am I likely to see an improvement if I enable it?

    Templates: I have read that using the correct template should give a performance boost due to the relevant "keys" being installed. Is there a way to fix this without re-creating the VMs and re-attaching storage?

    Hyper-V integration: If the services are disabled, are they likely to cause issues? If I need to remove them pre-conversion, am I correct in assuming it will only be on the OS drive? How can I remove these tools?

    Disk usage: Monitoring resource use, I frequently see drives being read at ~100MB/s and thought this might be the cause. I traced this back to the backup manager and stopped backups temporarily to test. With backups disabled, performance did not seem to improve. As there has been no change in backup frequency compared to pre-migration, I would not expect this to be the cause.

    What else should I consider?
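    On the templates question in the post above: the Windows templates mainly differ in the platform flags they set on the VM (notably the viridian enlightenments and the PCI device_id used by the PV drivers), and those flags can be inspected and changed on an existing VM with xe. A hedged sketch, not a definitive fix; `<vm-uuid>` is a placeholder, and you should verify current values and test on a non-production VM first:

```shell
# Inspect the platform flags the VM currently carries (placeholder UUID)
xe vm-param-list uuid=<vm-uuid> | grep platform

# Flags a Windows template would normally have set; apply while the VM is shut down
xe vm-param-set uuid=<vm-uuid> platform:viridian=true
xe vm-param-set uuid=<vm-uuid> platform:device_id=0002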
  • Hardware related section

    108 Topics
    1k Posts
    olivierlambert
    @efny I thought it wouldn't but in fact since I have it (almost a year) I had 0 bug. It's a Core i7-1255U.
  • The place to discuss new additions into XCP-ng

    236 Topics
    3k Posts
    D
    @gduperrey Got it, thank you for your assistance, this is super helpful.
  • Diagnosing frequent crashes on host

    15
    0 Votes
    15 Posts
    108 Views
    T
    @olivierlambert said in Diagnosing frequent crashes on host: "Maybe there's a usage that's slightly different since when it was 'more solid', and now it's triggered more easily. Is your XCP-ng fully up to date?" No; as I said originally, I'm still on 8.2.1. I have been concerned about moving to 8.3 because it's a new installation and I don't want to screw it up, but I'm willing to accept that it's the right thing to do.
  • 8.3 Cannot boot from CD Rom

    17
    1
    0 Votes
    17 Posts
    237 Views
    olivierlambert
    All of that makes sense then, thanks a lot for your feedback! Pinging @stormi so we can triage your input
  • Script to auto mount USBs on Boot/Reboot. Monitoring Multiple UPS

    7
    0 Votes
    7 Posts
    378 Views
    olivierlambert
    Ping @stormi so we track this somewhere internally
  • Install XCP-ng in old HP ProLiant DL160 G6 (gen 6)

    3
    0 Votes
    3 Posts
    52 Views
    I
    @nick.lloyd Thank you... I'll try the latest version. Reading the forums, people say HP was problematic; that's why I was asking for help.
  • Grub looking for /dev/vda instead of /dev/xvda

    1
    0 Votes
    1 Posts
    32 Views
    No one has replied
  • Storage migration logs

    2
    0 Votes
    2 Posts
    33 Views
    olivierlambert
    Hi, Check the task view, you'll have the duration of the process visible.
  • reboot of host does it stop or kill running VM's?

    14
    0 Votes
    14 Posts
    821 Views
    N
    Could someone elaborate on the procedure to have all VMs on a host shut down properly upon XCP-ng host shutdown, please?

    I tried from the host prompt: xe host-disable then xe host-reboot. And from XOA: Host > shutdown, with the warning ("This will shutdown your host without evacuating its VMs. Do you want to continue?"), and rightly so the host has seemingly become unavailable (ping to its IP stops). But then what happens is very odd: first the VM on it still pings for a couple of minutes (yes, after the host stops answering ping), then the VM stops pinging, but as far as I can see XCP-ng is not off.

    Awkwardly, I have access to the iDRAC8 Enterprise license on the machine XCP-ng is running on, and I can't see the proper status of XCP-ng from it. AFAIK it's not pinging, but it doesn't seem off either; at least the iDRAC shows it ON, and upon power cycling and reconnecting to the VM, the logs show it hasn't been cleanly shut down.

    NB: the VM has xen-guest-agent running within a container, but from what I gathered, the agent in Linux guests has no role in VM shutdown: see https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16

    Also, I double-checked Proxmox: it does cleanly shut down VMs, either with a "shutdown -h now" command or when triggered from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice to have XCP-ng/XOA be able to do the same.
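    On scripting this from the host prompt: a minimal sketch of one way to shut resident guests down cleanly before rebooting, using only standard xe commands (the in-guest PV drivers perform the actual clean shutdown when `xe vm-shutdown` is issued). This is an illustration, not an official procedure; verify on a test host first:

```shell
# Resolve this host's UUID and disable it so no new VMs start here
HOST_UUID=$(xe host-list name-label="$(hostname)" --minimal)
xe host-disable uuid="$HOST_UUID"

# Cleanly shut down every running guest resident on this host
for vm in $(xe vm-list resident-on="$HOST_UUID" power-state=running \
            is-control-domain=false --minimal | tr ',' ' '); do
  xe vm-shutdown uuid="$vm"
done

# Only reboot once the guests are down
xe host-reboot uuid="$HOST_UUID"
```

    `--minimal` prints a comma-separated UUID list, which is why the `tr` is there to split it for the loop.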
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    5
    0 Votes
    5 Posts
    74 Views
    Forza
    @dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it's strange that it started after the latest set of updates.
  • Migrate windows from Xeon Silver to older Xeon or AMD?

    Solved
    3
    0 Votes
    3 Posts
    101 Views
    G
    @olivierlambert I was looking for a way to mark this solved, can't find it. I haven't moved things, but after migrating my big lab to my mini-lab, I'm confident that the warm migration is the way to go. It was fast and seamless as long as you have the right network adapters set up. I had to fool with one of my networks to make a VM function, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go easily if using the old servers is the option for this project.
  • Veeam and XCP-ng

    Solved
    34
    0 Votes
    34 Posts
    11k Views
    planedrop
    @MAnon This is a valid point actually, and without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    1k Views
    D
    @mickwilli Yes, sorry, I posted a new subject. The February update fixed a lot of bugs, but not the freeze. I found the solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs, it's fine. Thanks to all. All bugs were fixed in the previous version, 23H2 - it's Microsoft, the past is better than the future.
  • Unable to attach empty optical drive to VM.

    2
    1
    0 Votes
    2 Posts
    71 Views
    A
    I've managed to at least solve part of my issue. Using this article, I managed to pull together the information I needed in order to remove the optical drive from the VM. It referenced xe vbd-list. I found the manpage for that command, and noted that I could get the information I needed to remove the drive. For future me to reference - because I know I'll somehow do this again in the future.

    List all Virtual Block Devices (VBDs) associated to the VM (you can do this by vm-uuid, or vm-label):

    [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"

    Output should show the following:

    uuid ( RO) : 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
        vm-uuid ( RO): 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO): veeam01
        vdi-uuid ( RO): 7821ef6d-4778-4478-8cf4-e950577eaf4f
        vdi-name-label ( RO): SCSI 2:0:0:0
        allowed-operations (SRO): attach; eject
        current-operations (SRO):
        empty ( RO): false
        device ( RO):
        userdevice ( RW): 3
        bootable ( RW): false
        mode ( RW): RO
        type ( RW): CD
        unpluggable ( RW): false
        currently-attached ( RO): false
        attachable ( RO): <expensive field>
        storage-lock ( RO): false
        status-code ( RO): 0
        status-detail ( RO):
        qos_algorithm_type ( RW):
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW):
        io_read_kbs ( RO): <expensive field>
        io_write_kbs ( RO): <expensive field>

    uuid ( RO) : 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
        vm-uuid ( RO): 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO): veeam01
        vdi-uuid ( RO): 3f89c727-f471-4ec3-8a7c-f7b7fc478148
        vdi-name-label ( RO): [ESXI]veeam01-flat.vmdk
        allowed-operations (SRO): attach
        current-operations (SRO):
        empty ( RO): false
        device ( RO): xvda
        userdevice ( RW): 0
        bootable ( RW): false
        mode ( RW): RW
        type ( RW): Disk
        unpluggable ( RW): false
        currently-attached ( RO): false
        attachable ( RO): <expensive field>
        storage-lock ( RO): false
        status-code ( RO): 0
        status-detail ( RO):
        qos_algorithm_type ( RW):
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW): owner:
        io_read_kbs ( RO): <expensive field>
        io_write_kbs ( RO): <expensive field>

    Look for the device with type ( RW): CD and take that uuid. In this case, the uuid was 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy the VBD:

    xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"

    Once this was done, the VM started without issue.
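    A possible shortcut for next time (a sketch reusing the VM UUID from the post above): `xe vbd-list` accepts field filters, so the CD device's VBD UUID can be pulled directly with `type=CD` and `--minimal` instead of reading the full parameter dump:

```shell
# Grab just the UUID of the CD-type VBD on this VM
# (--minimal prints a bare, comma-separated UUID list)
CD_VBD=$(xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" type=CD --minimal)
xe vbd-destroy uuid="$CD_VBD"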
  • The HA doesn't work

    16
    0 Votes
    16 Posts
    290 Views
    S
    @tjkreidl Hello, didn't I get about half OK too? 28 machines were impacted: 15 came back OK, and 13 with the error msg.
  • Failure to Boot

    3
    0 Votes
    3 Posts
    85 Views
    D
    @Davidj-0 Zero changes. This is run on an MS-01 with 2x 2TB NVMe in mirrored RAID. All I use this for is to mess around with VMs and self-host some services. I was still learning stuff, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means in order to attempt to recover what I've done.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    7
    1 Votes
    7 Posts
    185 Views
    D
    @bleader Thank you for the thorough explanation; it greatly helps in understanding how the team works to keep these systems secure and functional. From a generalist standpoint, I use publicly available tools to check for and report on any known vulnerabilities within my network (public and private), and then I address those vulnerabilities with either a patch or, more commonly, a configuration change within a given system. These could include my UPSs or switches, hypervisors, and client devices (laptops, etc.). Addressing these is a huge portion of my day-to-day work, and knowing the normal convention for asking "hey, I found this issue with a commodity vulnerability scanner, is it going to be addressed?" is useful.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    7
    0 Votes
    7 Posts
    247 Views
    bleader
    Running tcpdump switches the interface to promiscuous mode to allow all traffic that reaches the NIC to be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which was forwarding it to the VMs, and it wasn't dropped because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise, let us know if we need to investigate further.
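    For anyone wanting to rule this out themselves in the future: tcpdump's `-p` flag asks it not to put the interface into promiscuous mode, and the flag state is visible from the host. A small sketch, assuming an interface named `eth0`:

```shell
# Capture without switching the NIC into promiscuous mode
tcpdump -p -i eth0 -c 100

# Check whether the interface currently has the PROMISC flag set
ip link show eth0 | grep -c PROMISC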
  • Debian VM Takes down Host

    3
    0 Votes
    3 Posts
    80 Views
    P
    @Andrew Ok, thanks I will give that a try.
  • Does XCP-NG support NVMe/TCP?

    4
    0 Votes
    4 Posts
    187 Views
    M
    @olivierlambert Thanks!
  • 0 Votes
    1 Posts
    40 Views
    No one has replied
  • DC topology info

    11
    0 Votes
    11 Posts
    239 Views
    I
    @bleader yes, Thank you.