Subcategories

  • All Xen-related stuff

    559 Topics
    5k Posts
    Let me update the procedure with my experience:
    1. Install XCP-ng 8.3.
    2. Download NVIDIA XenServer driver 16.9 (the latest that supports my M60).
    3. Unzip the driver and copy the host driver (NVIDIA-vGPU-xenserver-8-535.230.02.x86_64.rpm) to the host; I used WinSCP to copy it to the /tmp directory.
    4. Download the XenServer ISO (https://www.xenserver.com/downloads | XenServer8_2024-12-09.iso) and copy the file vgpu-7.4.16-1.xs8.x86_64.rpm from its packages directory.
    5. yum localinstall vgpu-7.4.16-1.xs8.x86_64.rpm
    6. yum localinstall NVIDIA-vGPU-xenserver-8-535.230.02.x86_64.rpm
    7. Reboot.
    8. Install the guest driver on the client VM (539.19_grid_win10_win11_server2019_server2022_dch_64bit_international.exe).
    I preferred yum localinstall so packages can be removed or updated faster. One question about vGPU: why do I have profiles with up to 4 heads? I don't have any option to add more than one display, and I don't even understand how I would use them. Thanks for the procedure; I'm a total newbie with vGPU.
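    For anyone following along, the dom0 side of the steps above condenses to something like this. It is only a sketch based on my setup: the file names are the ones listed above and will differ for other GPUs or driver branches, and it assumes both RPMs were copied to /tmp.

        # dom0, as root; file names are from my M60 setup
        cd /tmp
        yum localinstall vgpu-7.4.16-1.xs8.x86_64.rpm
        yum localinstall NVIDIA-vGPU-xenserver-8-535.230.02.x86_64.rpm
        reboot
        # after the reboot, check that the host driver sees the GPU
        nvidia-smi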
  • The integrated web UI to manage XCP-ng

    17 Topics
    259 Posts
    olivierlambert
    No. I'm saying that all VMs (HVM) are using RFB consoles, readable with VNC protocol (used by XO or a VNC client). There's no text console since it's like a screen in HVM. If you are stuck on "Loading in progress", it's not a console issue, it's a VM issue. You can use a LiveCD to check what's going on.
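    You can see this from dom0 with the standard xe CLI: a VM's console objects list their protocol and location (the example output below is illustrative, not copied from a real host):

        # list the consoles of a VM; HVM guests expose an RFB (VNC)
        # console whose location XAPI serves over HTTPS
        xe console-list vm-name-label=<vm> params=protocol,location
        #   protocol ( RO): RFB
        #   location ( RO): https://<host>/console?ref=OpaqueRef:...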
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    88 Topics
    1k Posts
    @dinhngtu It still wouldn't boot with the correct template. That said, I have been testing other VMs that were created on XCP-ng with the correct templates, and I get better performance out of them when they are migrated to a Hyper-V host. I believe this is related to SMAPIv1 limitations, which are known and being actively worked on with SMAPIv3. Unfortunately for us, at this point XCP-ng may not fit our business; hopefully one day soon it will, but until then, it is what it is.
  • Hardware related section

    110 Topics
    1k Posts
    stormi
    @2planks said in NIC not working on new HPE DL360 Gen11: "Has this issue happened before?" I don't remember such an issue on this hardware. Can you share the outputs of lspci -v, lsmod, dmesg, and dmidecode?
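    If it helps, here is one way to capture all the requested outputs in a single archive from the host (plain dom0 shell; paths are just a suggestion):

        # capture the requested diagnostics and bundle them up
        lspci -v   > /tmp/lspci.txt
        lsmod      > /tmp/lsmod.txt
        dmesg      > /tmp/dmesg.txt
        dmidecode  > /tmp/dmidecode.txt
        tar czf /tmp/hw-diag.tgz -C /tmp lspci.txt lsmod.txt dmesg.txt dmidecode.txt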
  • The place to discuss new additions into XCP-ng

    238 Topics
    3k Posts
    Forza
    Would be very interesting to see the performance on EPYC systems.
  • Install XCP-ng on an old HP ProLiant DL160 G6

    0 Votes
    9 Posts
    82 Views
    @john.c Yeah, like I said, it is a good step in the right direction. It just doesn't solve my particular storage-related problems.
  • Citrix tools after version 9.0 removed quiesced snapshots

    0 Votes
    2 Posts
    39 Views
    TeddyAstie
    @vkeven The XCP-ng 8.1 release notes say VSS and quiesced snapshot support was removed because it never worked correctly and caused more harm than good. Note that version 9 of the Windows guest tools (the default for recent versions of Windows if you install the Citrix drivers) had already removed VSS support, even for older versions of CH / XCP-ng. I am not sure whether this VSS feature is bound to the PV drivers or whether it also needs hypervisor support. Either way, staying on an old version of the guest agent is not recommended.
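    Plain (non-quiesced) snapshots still work as before; a minimal example with the xe CLI, using placeholder names:

        # regular snapshot, no VSS involved
        xe vm-snapshot vm=<vm-name> new-name-label=pre-change-snapshot
        # the removed quiesced variant was:
        #   xe vm-snapshot-with-quiesce ...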
  • Diagnosing frequent crashes on host

    0 Votes
    15 Posts
    172 Views
    @olivierlambert said in Diagnosing frequent crashes on host: "Maybe the usage is slightly different from when it was 'more solid', and now it triggers more easily. Is your XCP-ng fully up to date?" No; as I said originally, I'm still on 8.2.1. I have been concerned about moving to 8.3 because it's a new installation and I don't want to screw it up, but I'm willing to accept that it's the right thing to do.
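    For reference, staying fully patched within 8.2.1 is just the documented yum path from dom0 (a sketch; pending updates will vary per host):

        cat /etc/os-release   # confirm the current XCP-ng version
        yum update            # pull the latest 8.2.x updates
        reboot                # needed if the kernel or Xen were updated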
  • 8.3: Cannot boot from CD-ROM

    0 Votes
    17 Posts
    287 Views
    olivierlambert
    All of that makes sense then, thanks a lot for your feedback! Pinging @stormi so we can triage your input
  • Script to auto-mount USBs on boot/reboot; monitoring multiple UPSes

    0 Votes
    7 Posts
    406 Views
    olivierlambert
    Ping @stormi so we track this somewhere internally
  • Grub looking for /dev/vda instead of /dev/xvda

    0 Votes
    1 Post
    37 Views
    No one has replied
  • Storage migration logs

    0 Votes
    2 Posts
    38 Views
    olivierlambert
    Hi, check the task view; you'll see the duration of the process there.
  • Reboot of host: does it stop or kill running VMs?

    0 Votes
    14 Posts
    854 Views
    Could someone explain the procedure to have all VMs on a host shut down properly when the XCP-ng host shuts down, please? From the host prompt I tried:

        xe host-disable
        xe host-reboot

    and from XOA: Host → shutdown, with the warning ("This will shutdown your host without evacuating its VMs. Do you want to continue?"), and, rightly so, the host seemingly became unavailable (pings to its IP stopped). But then something very odd happens: first, the VM on it keeps answering pings for a couple of minutes (yes, after the host stops answering pings); then the VM stops pinging, but as far as I can see XCP-ng is not off. Awkwardly, I have access to the iDRAC8 (Enterprise license) of the machine XCP-ng runs on, and I can't see the proper status of XCP-ng from it: it's not pinging, but it doesn't seem off either. At least the iDRAC shows it ON, and after power cycling and reconnecting to the VM, the logs show it wasn't cleanly shut down. NB: the VM has xen-guest-agent running in a container, but from what I gathered, the agent in Linux guests plays no role in VM shutdown; see https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16 Also, I double-checked Proxmox: it does cleanly shut down VMs, whether triggered by "shutdown -h now" or from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice if XCP-ng/XOA could do the same.
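    What I would expect to work, and what I'll test next, is shutting the VMs down explicitly before rebooting; a sketch with the xe CLI from dom0 (the host name is a placeholder):

        # stop new VMs from starting on this host
        xe host-disable host=<hostname>
        # cleanly shut down every running guest (control domain excluded)
        for uuid in $(xe vm-list is-control-domain=false power-state=running \
                      params=uuid --minimal | tr ',' ' '); do
            xe vm-shutdown uuid=$uuid
        done
        xe host-reboot host=<hostname>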
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    0 Votes
    5 Posts
    92 Views
    Forza
    @dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it is strange that it started after the latest set of updates.
  • Migrate windows from Xeon Silver to older Xeon or AMD?

    Solved
    0 Votes
    3 Posts
    102 Views
    @olivierlambert I was looking for a way to mark this solved, but I can't find it. I haven't moved things yet, but after migrating my big lab to my mini-lab, I'm confident that warm migration is the way to go. It was fast and seamless, as long as the right network adapters are set up. I had to fiddle with one of my networks to make a VM work, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go easily if using the old servers is the option chosen for this project.
  • Veeam and XCP-ng

    Solved
    0 Votes
    34 Posts
    11k Views
    planedrop
    @MAnon This is a valid point actually, and without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    1k Views
    @mickwilli Yes, sorry, I posted a new subject. The February update fixed a lot of bugs, but not the freeze. I've found the solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs; it's fine. Thanks to all. All the bugs were fixed in the previous version, 23H2. It's Microsoft: the past is better than the future.
  • Unable to attach empty optical drive to VM.

    0 Votes
    2 Posts
    78 Views
    I've managed to at least solve part of my issue. Using this article, I pulled together the information I needed to remove the optical drive from the VM. It referenced xe vbd-list; I found the man page for that command and noted that it could give me what I needed to remove the drive. For future me to reference, because I know I'll somehow do this again:

    List all Virtual Block Devices (VBDs) associated with the VM (you can do this by vm-uuid or vm-label):

        [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"

    The output should show the following:

        uuid ( RO)                     : 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
        vm-uuid ( RO)                  : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)            : veeam01
        vdi-uuid ( RO)                 : 7821ef6d-4778-4478-8cf4-e950577eaf4f
        vdi-name-label ( RO)           : SCSI 2:0:0:0
        allowed-operations (SRO)       : attach; eject
        current-operations (SRO)       :
        empty ( RO)                    : false
        device ( RO)                   :
        userdevice ( RW)               : 3
        bootable ( RW)                 : false
        mode ( RW)                     : RO
        type ( RW)                     : CD
        unpluggable ( RW)              : false
        currently-attached ( RO)       : false
        attachable ( RO)               : <expensive field>
        storage-lock ( RO)             : false
        status-code ( RO)              : 0
        status-detail ( RO)            :
        qos_algorithm_type ( RW)       :
        qos_algorithm_params (MRW)     :
        qos_supported_algorithms (SRO) :
        other-config (MRW)             :
        io_read_kbs ( RO)              : <expensive field>
        io_write_kbs ( RO)             : <expensive field>

        uuid ( RO)                     : 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
        vm-uuid ( RO)                  : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)            : veeam01
        vdi-uuid ( RO)                 : 3f89c727-f471-4ec3-8a7c-f7b7fc478148
        vdi-name-label ( RO)           : [ESXI]veeam01-flat.vmdk
        allowed-operations (SRO)       : attach
        current-operations (SRO)       :
        empty ( RO)                    : false
        device ( RO)                   : xvda
        userdevice ( RW)               : 0
        bootable ( RW)                 : false
        mode ( RW)                     : RW
        type ( RW)                     : Disk
        unpluggable ( RW)              : false
        currently-attached ( RO)       : false
        attachable ( RO)               : <expensive field>
        storage-lock ( RO)             : false
        status-code ( RO)              : 0
        status-detail ( RO)            :
        qos_algorithm_type ( RW)       :
        qos_algorithm_params (MRW)     :
        qos_supported_algorithms (SRO) :
        other-config (MRW)             : owner:
        io_read_kbs ( RO)              : <expensive field>
        io_write_kbs ( RO)             : <expensive field>

    Look for the device with type ( RW): CD and take its uuid; in this case, 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy the VBD:

        xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"

    Once this was done, the VM started without issue.
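    Looking at it again, the same thing can probably be done in one step by filtering vbd-list on the CD type (an untested sketch with the same xe CLI):

        # grab just the CD drive's VBD uuid, then destroy it
        CD_VBD=$(xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" \
                 type=CD params=uuid --minimal)
        xe vbd-destroy uuid="$CD_VBD"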
  • The HA doesn't work

    0 Votes
    16 Posts
    298 Views
    @tjkreidl Hello. Didn't I get half of them OK too? 28 machines were impacted: 15 were left OK and 13 got the error message.
  • Failure to Boot

    0 Votes
    3 Posts
    89 Views
    @Davidj-0 Zero changes. This runs on an MS-01 with 2x 2TB NVMe drives in a mirrored RAID. All I use this for is to mess around with VMs and self-host some services. I was still learning, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means, or how to attempt to recover what I have done.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    1 Votes
    7 Posts
    245 Views
    @bleader Thank you for the thorough explanation; it greatly helps me understand how the team works to keep these systems secure and functional. From a generalist standpoint, I use publicly available tools to check for and report on any known vulnerabilities within my network (public and private), and then I address those vulnerabilities with either a patch or, more commonly, a configuration change in the given system. That covers things like my UPSes, switches, hypervisors, and client devices (laptops, etc.). Addressing these findings is a huge portion of my day-to-day work, so knowing the normal convention for asking "hey, I found this issue with a commodity vulnerability scanner; is it going to be addressed?" is useful.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    0 Votes
    7 Posts
    260 Views
    bleader
    Running tcpdump switches the interface to promiscuous mode so that all traffic reaching the NIC can be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which forwarded it to the VMs instead of dropping it, because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise, let us know if we need to investigate further.
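    Side note: if you ever need to capture without that side effect, tcpdump can be told not to enable promiscuous mode, and the flag is easy to inspect (standard Linux tooling; the interface name is a placeholder):

        # capture without switching the interface to promiscuous mode
        tcpdump -p -i eth0
        # check the interface's promiscuity counter
        ip -d link show eth0 | grep -i promiscuity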
  • Debian VM Takes down Host

    0 Votes
    3 Posts
    87 Views
    @Andrew Ok, thanks I will give that a try.
  • Does XCP-NG support NVMe/TCP?

    0 Votes
    4 Posts
    197 Views
    @olivierlambert Thanks!
  • 0 Votes
    1 Post
    40 Views
    No one has replied