Subcategories

  • All Xen-related stuff

    572 Topics
    6k Posts
    @olivierlambert Thanks for the quick response, Olivier. We will update both hosts soon and test the export/import again. We'll keep you informed.
  • The integrated web UI to manage XCP-ng

    19 Topics
    271 Posts
    lsouai-vates
    @olivierlambert can you close this thread?
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    93 Topics
    1k Posts
    @iLix The size of the VM in XCP-ng was larger than on the source
  • Hardware related section

    118 Topics
    1k Posts
    @ThierryEscande are you able to provide the beta code to @redorangefreak or me? I'm on their team and we are also seeing issues when attempting to use the RAID card at the XCP-ng level as the backing for a local storage repository. No VDIs can be created on it (workaround attempt).
  • The place to discuss new additions into XCP-ng

    240 Topics
    3k Posts
    olivierlambert
  • Install XCP-ng in old HP ProLiant DL160 G6 (gen 6)

    9
    0 Votes
    9 Posts
    378 Views
    @john.c Yeah - like I said it is a good step in the right direction. Just doesn't solve my particular storage related problems.
  • Citrix tools after version 9.0 removed quiesced snapshots

    2
    0 Votes
    2 Posts
    161 Views
    TeddyAstie
    @vkeven The XCP-ng 8.1 release notes say VSS and quiesced snapshot support was removed because it never worked correctly and caused more harm than good. Note that Windows guest tools version 9 (the default for recent versions of Windows if you install Citrix drivers) already removed VSS support, even for older versions of CH / XCP-ng. I am not sure whether this VSS feature is bound to the PV drivers or whether it also needs hypervisor support. Either way, staying on an old version of the guest agent is not recommended.
  • Diagnosing frequent crashes on host

    15
    0 Votes
    15 Posts
    537 Views
    @olivierlambert said in Diagnosing frequent crashes on host: "Maybe there's a usage that's slightly different since when it was 'more solid', and now it's triggered more easily. Is your XCP-ng fully up to date?" No; as I said originally, I'm still on 8.2.1. I have been concerned about moving to 8.3 because it's a new installation and I don't want to screw it up, but I'm willing to accept that it's the right thing to do.
  • Script to auto mount USBs on Boot/Reboot. Monitoring Multiple UPS

    7
    0 Votes
    7 Posts
    580 Views
    olivierlambert
    Ping @stormi so we track this somewhere internally
  • Grub looking for /dev/vda instead of /dev/xvda

    1
    0 Votes
    1 Posts
    98 Views
    No one has replied
  • Storage migration logs

    2
    0 Votes
    2 Posts
    92 Views
    olivierlambert
    Hi, check the task view; the duration of the process is visible there.
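    The same information can be read from the CLI; a minimal sketch, assuming the standard xe CLI on the host (the choice of params here is illustrative):

    ```shell
    # List tasks with their status, progress and creation time; the 'created'
    # timestamp plus the task's completion brackets the migration duration.
    xe task-list params=uuid,name-label,status,progress,created
    ```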
  • Reboot of host: does it stop or kill running VMs?

    14
    0 Votes
    14 Posts
    1k Views
    Could someone elaborate on the procedure to have all VMs on a host shut down properly upon XCP-ng host shutdown, please?

    I tried from the host prompt:

        xe host-disable
        xe host-shutdown

    and from XOA: Host > shutdown, with the warning "This will shutdown your host without evacuating its VMs. Do you want to continue?" Rightly so, the host seemingly became unavailable (ping to its IP stops). But then what happens is very odd: first the VM on it still pings for a couple of minutes (yes, after the host stops answering ping), then the VM stops pinging, but as far as I can see XCP-ng is not OFF.

    Awkwardly, I have access to the iDRAC8 Enterprise license on the machine XCP-ng is running on, and I can't see the proper status of XCP-ng from it. As far as I know it's not pinging, but it doesn't seem OFF either. At least the iDRAC shows it ON, and upon power cycling and reconnecting to the VM, the logs show it wasn't cleanly shut down.

    NB: the VM has xen-guest-agent running within a container, but from what I gathered, the agent in Linux guests has no role in VM shutdown. See https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16

    Also, I double-checked Proxmox: it does cleanly shut down VMs, either with a "shutdown -h now" command or when triggered from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice to have XCP-ng/XOA be able to do the same.
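    For reference, a manual clean-shutdown sequence can be sketched with the xe CLI. This is only a sketch under assumptions (the host-uuid lookup and the VM loop are mine, not an official XCP-ng procedure):

    ```shell
    #!/bin/sh
    # Sketch: cleanly shut down resident VMs before the host itself.
    # Assumes the xe CLI is available; run on the pool master.
    HOST_UUID=$(xe host-list hostname="$(hostname)" params=uuid --minimal)
    xe host-disable uuid="$HOST_UUID"   # stop the host accepting new VMs
    # Request a clean (guest-cooperating) shutdown of each running VM
    for VM in $(xe vm-list resident-on="$HOST_UUID" power-state=running \
                is-control-domain=false params=uuid --minimal | tr ',' ' '); do
        xe vm-shutdown uuid="$VM"
    done
    xe host-shutdown uuid="$HOST_UUID"
    ```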
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    5
    0 Votes
    5 Posts
    259 Views
    Forza
    @dinhngtu Yes, it looks like it. I stopped Netdata and the problem went away. But it is strange that it started after the latest set of updates.
  • Migrate windows from Xeon Silver to older Xeon or AMD?

    Solved
    3
    0 Votes
    3 Posts
    170 Views
    @olivierlambert I was looking for a way to mark this solved, can't find it. I haven't moved things, but after migrating my big lab to my mini-lab, I'm confident that the warm migration is the way to go. It was fast and seamless as long as you have the right network adapters set up. I had to fool with one of my networks to make a VM function, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go easily if using the old servers is the option for this project.
  • Veeam and XCP-ng

    Solved
    34
    0 Votes
    34 Posts
    13k Views
    planedrop
    @MAnon This is a valid point actually: without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    2k Views
    @mickwilli Yes, sorry, I posted a new subject. The February update fixed a lot of bugs, but not the freeze. I have found the solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs; it's fine. Thanks to all. All the bugs are fixed in the previous version, 23H2. It's Microsoft: the past is better than the future.
  • Unable to attach empty optical drive to VM.

    2
    0 Votes
    2 Posts
    155 Views
    I've managed to at least solve part of my issue. Using this article, I managed to pull together the information I needed in order to remove the optical drive from the VM. It referenced xe vbd-list. I found the man page for that command and noted that I could get the information I needed to remove the drive. For future me to reference, because I know I'll somehow do this again in the future:

    List all virtual block devices (VBDs) associated with the VM (you can do this by vm-uuid or vm-label):

        [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"

    The output shows the following:

        uuid ( RO)                : 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
        vm-uuid ( RO)             : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)       : veeam01
        vdi-uuid ( RO)            : 7821ef6d-4778-4478-8cf4-e950577eaf4f
        vdi-name-label ( RO)      : SCSI 2:0:0:0
        allowed-operations (SRO)  : attach; eject
        current-operations (SRO)  :
        empty ( RO)               : false
        device ( RO)              :
        userdevice ( RW)          : 3
        bootable ( RW)            : false
        mode ( RW)                : RO
        type ( RW)                : CD
        unpluggable ( RW)         : false
        currently-attached ( RO)  : false
        attachable ( RO)          : <expensive field>
        storage-lock ( RO)        : false
        status-code ( RO)         : 0
        status-detail ( RO)       :
        qos_algorithm_type ( RW)  :
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW)        :
        io_read_kbs ( RO)         : <expensive field>
        io_write_kbs ( RO)        : <expensive field>

        uuid ( RO)                : 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
        vm-uuid ( RO)             : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)       : veeam01
        vdi-uuid ( RO)            : 3f89c727-f471-4ec3-8a7c-f7b7fc478148
        vdi-name-label ( RO)      : [ESXI]veeam01-flat.vmdk
        allowed-operations (SRO)  : attach
        current-operations (SRO)  :
        empty ( RO)               : false
        device ( RO)              : xvda
        userdevice ( RW)          : 0
        bootable ( RW)            : false
        mode ( RW)                : RW
        type ( RW)                : Disk
        unpluggable ( RW)         : false
        currently-attached ( RO)  : false
        attachable ( RO)          : <expensive field>
        storage-lock ( RO)        : false
        status-code ( RO)         : 0
        status-detail ( RO)       :
        qos_algorithm_type ( RW)  :
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW)        : owner:
        io_read_kbs ( RO)         : <expensive field>
        io_write_kbs ( RO)        : <expensive field>

    Look for the device with type ( RW): CD and take its uuid; in this case, 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy the VBD:

        xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"

    Once this was done, the VM started without issue.
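    The manual lookup can also be scripted. A sketch assuming the xe CLI, using the VM uuid from the post as a placeholder (type=CD filtering and --minimal are standard xe list options):

    ```shell
    #!/bin/sh
    # Sketch: find the CD-type VBD on a VM and destroy it, as done manually above.
    VM_UUID="3eb63bb4-29d1-f3a7-44a1-37fdb3711454"   # placeholder VM uuid
    # --minimal prints only the matching uuid(s), comma-separated
    CD_VBD=$(xe vbd-list vm-uuid="$VM_UUID" type=CD params=uuid --minimal)
    if [ -n "$CD_VBD" ]; then
        xe vbd-destroy uuid="$CD_VBD"
    fi
    ```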
  • The HA doesn't work

    16
    0 Votes
    16 Posts
    485 Views
    @tjkreidl Hello, didn't I get about half OK too? 28 machines impacted: 15 left OK and 13 with the error message.
  • Failure to Boot

    3
    0 Votes
    3 Posts
    180 Views
    @Davidj-0 Zero changes. This runs on an MS-01 with 2x 2TB NVMe in mirror RAID. All I use this for is to mess around with VMs and self-host some services. I was still learning, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means, so I can't attempt to recover what I have done.
  • Security Assessments and Hardening of XCP-ng

    security assessment
    7
    1 Votes
    7 Posts
    566 Views
    @bleader Thank you for the thorough explanation; it greatly helps in understanding how the team works to keep these systems secure and functional. From a generalist standpoint, I use publicly available tools to check for and report on any known vulnerabilities within my network (public and private), and then I address those vulnerabilities with either a patch or, more commonly, a configuration change within a given system. These could include my UPSs, switches, hypervisors, or client devices (laptops, etc.). Addressing these is a huge portion of my day-to-day work, and knowing the normal convention for asking "hey, I found this issue with a commodity vulnerability scanner; is it going to be addressed?" is useful.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    7
    0 Votes
    7 Posts
    458 Views
    bleader
    Running tcpdump switches the interface to promiscuous mode so that all traffic reaching the NIC can be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which was forwarding it to the VMs, and it wasn't dropped because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise, let us know if we need to investigate further.
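    To observe this without the side effect described above, a sketch (the interface name eth0 is a placeholder):

    ```shell
    # Check whether an interface currently has the PROMISC flag set
    ip -o link show eth0 | grep -q PROMISC && echo "eth0 is promiscuous"
    # tcpdump's -p flag asks it NOT to put the interface into promiscuous mode
    tcpdump -p -n -i eth0 -c 10
    ```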
  • Debian VM Takes down Host

    3
    0 Votes
    3 Posts
    175 Views
    @Andrew Ok, thanks I will give that a try.
  • Does XCP-NG support NVMe/TCP?

    4
    0 Votes
    4 Posts
    327 Views
    @olivierlambert Thanks!
  • 0 Votes
    1 Posts
    72 Views
    No one has replied
  • DC topology info

    11
    0 Votes
    11 Posts
    415 Views
    @bleader yes, Thank you.