Subcategories

  • All Xen related stuff

    584 Topics
    6k Posts
    olivierlambert
    @edisoninfo Great! This warning is a life saver and did its job perfectly, allowing you to discover a hidden issue. Glad you found the root cause!
  • The integrated web UI to manage XCP-ng

    23 Topics
    339 Posts
    C
    @lsouai-vates Great! Thanks for addressing this
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    104 Topics
    1k Posts
    florent
    @idar21 said in Xen Orchestra 5.110 V2V not working:

    "Don't intend to bump in, but the new migration tool isn't working as per the release notes. I had similar issues; there is no warm migration. My testing against ESXi v7 resulted in: abrupt power off of the source VM on ESXi; VM disks start copying (I can see disk copy progress in tasks); the migration task fails but multiple disks of the source VM keep on copying; when all the disks are copied, there is no VM with that name available in XCP-ng; all disks are labeled orphaned under Health in XO. Where is the pause/resume function stated in the release notes? I don't think the tool has been tested properly. The only difference between the older migration tool and this one is the progress display for disk copying; otherwise nothing new. The old tool could only do cold migrations and had issues with VMs with multiple disks. The new one can also only do cold migrations and still has issues with multi-disk migrations."

    First, I would like to say again that "latest" can be fresh, and we know we ask our users to be more inventive with "latest" in exchange for faster features, even more so for users running from the sources. The documentation is still in the works and will be ready before this reaches XOA "stable". The resume part doesn't have a dedicated interface: you do a first migration without enabling "stop source", and later you launch the same migration with "stop source" enabled (or with the VM stopped); it will reuse the already transferred data if the prerequisites are validated. Debugging a migration issue is quite complex, since it involves multiple systems and we have no access to, nor control over, the VMware side; it's even harder without a tunnel. I will need you to look at your journalctl and check for errors during the migration. Also, are the failing disks sharing some specific configuration? What storage do they use? Is there anything relevant on the XCP-ng side?
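    For anyone hitting the same V2V failure and wanting to follow up on the journalctl suggestion above, here is a minimal sketch; the xo-server unit name is an assumption and depends on how Xen Orchestra was installed, so adjust it to your setup:

    ```bash
    # Assumption: Xen Orchestra (from the sources) runs as a systemd unit
    # named "xo-server"; adjust the unit name for your installation.
    sudo journalctl -u xo-server --since "2 hours ago" --no-pager | grep -iE "error|esxi|v2v"

    # On the XCP-ng host itself, look for errors logged during the same window:
    journalctl -p err --since "2 hours ago" --no-pager
    ```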
  • Hardware related section

    128 Topics
    1k Posts
    olivierlambert
    It is, if you don't use ZFS. ZFS is a memory hog.
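    If you do run ZFS anyway and want to bound its memory use, the usual knob is the ARC size limit. A minimal sketch, assuming OpenZFS on Linux; the 8 GiB figure is only an example:

    ```bash
    # Cap the ZFS ARC at 8 GiB (value in bytes); pick a limit that leaves
    # enough RAM for dom0 and the VMs. Persisted via a modprobe option.
    echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf

    # Apply at runtime too, without waiting for a reboot:
    echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
    ```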
  • The place to discuss new additions into XCP-ng

    244 Topics
    3k Posts
    DustyArmstrong
    Testing the agent out on Arch Linux (mainly due to the spotty 'support' in the AUR/generally) and it is working fine - better than what I had before (which did not report VM info properly). I've set it up as a systemd service to replace the previous one I had, also working as expected. This would be fun to contribute towards.
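    For anyone wanting to reproduce a similar setup on a distribution without a packaged unit, here is a minimal sketch of such a systemd service; the unit contents and the binary path are assumptions, not the project's official unit file:

    ```bash
    # Sketch: run xen-guest-agent as a systemd service.
    # The path /usr/bin/xen-guest-agent is an assumption; point ExecStart
    # at wherever the agent binary is actually installed.
    sudo tee /etc/systemd/system/xen-guest-agent.service > /dev/null <<'EOF'
    [Unit]
    Description=Xen guest agent
    After=network.target

    [Service]
    ExecStart=/usr/bin/xen-guest-agent
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable --now xen-guest-agent.service
    ```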
  • Reboot of host: does it stop or kill running VMs?

    14
    0 Votes
    14 Posts
    2k Views
    N
    Could someone elaborate on the procedure to have all VMs on a host shut down properly upon XCP-ng host shutdown, please?

    I tried from the host prompt: xe host-disable followed by xe host-shutdown, and from XOA: Host > Shutdown, with the warning ("This will shutdown your host without evacuating its VMs. Do you want to continue?"), and rightly so the host seemingly became unavailable (ping to its IP stops). But then what happens is very odd: first the VM on it still answers ping for a couple of minutes (yes, after the host stops answering ping), then the VM stops pinging, but as far as I can see XCP-ng is not off.

    Awkwardly, I only have access to the iDRAC 8 Enterprise license of the machine XCP-ng is running on, and I can't see the proper status of XCP-ng from it. As far as I know it's not pinging, but it doesn't seem off either. At least the iDRAC shows it ON, and upon power cycling and reconnecting to the VM, the logs show it hasn't been cleanly shut down.

    NB: the VM has xen-guest-agent running within a container, but from what I gathered, the agent in Linux guests has no role in VM shutdown. See https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16

    Also, I double-checked Proxmox: it does cleanly shut down VMs, either with a "shutdown -h now" command or when triggered from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice to have XCP-ng/XOA be able to do the same.
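    For reference, one way to do this by hand from dom0 is sketched below; it is only an illustration built from standard xe commands, not an official procedure, and it assumes the guests have working guest tools / PV drivers so a clean shutdown can be requested:

    ```bash
    # Stop new VMs from being started on or migrated to this host.
    xe host-disable host=<host-uuid>

    # Ask each running guest resident on this host for a clean shutdown.
    for vm in $(xe vm-list resident-on=<host-uuid> power-state=running \
                is-control-domain=false --minimal | tr ',' ' '); do
        xe vm-shutdown uuid="$vm"
    done

    # Only then power the host off.
    xe host-shutdown host=<host-uuid>
    ```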
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    5
    0 Votes
    5 Posts
    457 Views
    Forza
    @dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it is strange it started after the latest set of updates.
  • Migrate Windows from Xeon Silver to older Xeon or AMD?

    Solved
    3
    0 Votes
    3 Posts
    268 Views
    G
    @olivierlambert I was looking for a way to mark this solved, can't find it. I haven't moved things, but after migrating my big lab to my mini-lab, I'm confident that the warm migration is the way to go. It was fast and seamless as long as you have the right network adapters set up. I had to fool with one of my networks to make a VM function, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go easily if using the old servers is the option for this project.
  • Veeam and XCP-ng

    Solved
    34
    0 Votes
    34 Posts
    18k Views
    planedrop
    @MAnon This is a valid point actually, and without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    5k Views
    D
    @mickwilli Yes, sorry, I posted a new topic. The February update fixed a lot of bugs, but not the freeze. I have found a solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs, it works fine. Thanks to all. All the bugs are fixed in the previous version, 23H2. That's Microsoft: the past is better than the future.
  • Unable to attach empty optical drive to VM.

    2
    1
    0 Votes
    2 Posts
    248 Views
    A
    I've managed to at least solve part of my issue. Using this article, I managed to pull together the information I needed in order to remove the optical drive from the VM. It referenced xe vbd-list. I found the manpage for that command and noted that I could get the information I needed to remove the drive. For future me to reference, because I know I'll somehow do this again in the future:

    List all Virtual Block Devices (VBDs) associated with the VM (you can do this by vm-uuid or vm-name-label):

        [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"

    Output should show the following:

        uuid ( RO)                    : 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
        vm-uuid ( RO)                 : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)           : veeam01
        vdi-uuid ( RO)                : 7821ef6d-4778-4478-8cf4-e950577eaf4f
        vdi-name-label ( RO)          : SCSI 2:0:0:0
        allowed-operations (SRO)      : attach; eject
        current-operations (SRO)      :
        empty ( RO)                   : false
        device ( RO)                  :
        userdevice ( RW)              : 3
        bootable ( RW)                : false
        mode ( RW)                    : RO
        type ( RW)                    : CD
        unpluggable ( RW)             : false
        currently-attached ( RO)      : false
        attachable ( RO)              : <expensive field>
        storage-lock ( RO)            : false
        status-code ( RO)             : 0
        status-detail ( RO)           :
        qos_algorithm_type ( RW)      :
        qos_algorithm_params (MRW)    :
        qos_supported_algorithms (SRO):
        other-config (MRW)            :
        io_read_kbs ( RO)             : <expensive field>
        io_write_kbs ( RO)            : <expensive field>

        uuid ( RO)                    : 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
        vm-uuid ( RO)                 : 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO)           : veeam01
        vdi-uuid ( RO)                : 3f89c727-f471-4ec3-8a7c-f7b7fc478148
        vdi-name-label ( RO)          : [ESXI]veeam01-flat.vmdk
        allowed-operations (SRO)      : attach
        current-operations (SRO)      :
        empty ( RO)                   : false
        device ( RO)                  : xvda
        userdevice ( RW)              : 0
        bootable ( RW)                : false
        mode ( RW)                    : RW
        type ( RW)                    : Disk
        unpluggable ( RW)             : false
        currently-attached ( RO)      : false
        attachable ( RO)              : <expensive field>
        storage-lock ( RO)            : false
        status-code ( RO)             : 0
        status-detail ( RO)           :
        qos_algorithm_type ( RW)      :
        qos_algorithm_params (MRW)    :
        qos_supported_algorithms (SRO):
        other-config (MRW)            : owner:
        io_read_kbs ( RO)             : <expensive field>
        io_write_kbs ( RO)            : <expensive field>

    Look for the device with type ( RW): CD and take its uuid. In this case, the uuid was 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy the VBD:

        xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"

    Once this was done, the VM started without issue.
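    A slightly more direct variant of the same clean-up, using xe's built-in list filtering to grab only the CD-type VBD instead of reading the full record dump; a sketch reusing the UUIDs from the post above:

    ```bash
    # Get the UUID of the CD-type VBD attached to this VM, nothing else.
    CD_VBD=$(xe vbd-list vm-uuid=3eb63bb4-29d1-f3a7-44a1-37fdb3711454 type=CD params=uuid --minimal)

    # Remove the stale optical drive so the VM can start.
    xe vbd-destroy uuid="$CD_VBD"
    ```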
  • The HA doesn't work

    16
    0 Votes
    16 Posts
    869 Views
    S
    @tjkreidl Hello, didn't I get about half OK too? 28 machines were impacted: 15 were left OK and 13 ended up with the error message.
  • Failure to Boot

    3
    0 Votes
    3 Posts
    318 Views
    D
    @Davidj-0 Zero changes. This runs on an MS-01 with 2x 2 TB NVMe drives in a mirrored RAID. All I use this for is to mess around with VMs and self-host some services. I was still learning, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means, so I can't attempt to recover what I have done.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    7
    0 Votes
    7 Posts
    679 Views
    bleader
    Running tcpdump switches the interface to promiscuous mode so that all traffic reaching the NIC can be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which forwarded it to the VMs, and it wasn't dropped because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise let us know if we need to investigate further.
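    To check this yourself, you can watch whether an interface is currently in promiscuous mode; a quick sketch, with eth0 as a placeholder interface name:

    ```bash
    # PROMISC appears in the interface flags while tcpdump (or anything
    # else) holds it in promiscuous mode, and disappears once it stops.
    ip link show eth0 | grep PROMISC

    # The kernel also logs the transitions:
    dmesg | grep -i "promiscuous mode"
    ```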
  • Debian VM Takes down Host

    3
    0 Votes
    3 Posts
    281 Views
    P
    @Andrew Ok, thanks I will give that a try.
  • Does XCP-NG support NVMe/TCP?

    4
    0 Votes
    4 Posts
    499 Views
    M
    @olivierlambert Thanks!
  • 0 Votes
    1 Posts
    138 Views
    No one has replied
  • DC topology info

    11
    0 Votes
    11 Posts
    754 Views
    I
    @bleader yes, Thank you.
  • Beginner advice - coming from Debian

    8
    1 Votes
    8 Posts
    664 Views
    D
    @WillEndure said in Beginner advice - coming from Debian, replying to @DustinB's question "Why are you keen on keeping raw Xen on Debian?": Not committed to the idea; it's just what I currently have, and I invested a bit of time into setting it up and understanding it before XCP-ng was around. Time is a factor too, because you can waste a lot of it setting stuff like this up! But overall, yes, I should probably move over to XCP-ng for my host. Got it, sunk-cost fallacy.
  • Copying a VM from 8.2 to 8.3 and back

    2
    0 Votes
    2 Posts
    217 Views
    stormi
    I think this part of the doc describes your issue: https://docs.xcp-ng.org/releases/release-8-3/#a-uefi-vm-started-once-on-xcp-ng-83-cant-start-if-moved-back-to-xcp-ng-821
  • Unable to find logs in XenCenter or Xen Orchestra

    Solved
    5
    0 Votes
    5 Posts
    431 Views
    S
    @olivierlambert thanks i got it.
  • PCIe card removal and failure to boot from NVMe

    Solved
    14
    1 Votes
    14 Posts
    887 Views
    olivierlambert
    Okay, weird. At least glad to know it works now!
  • How to use a template created on another host machine?

    2
    0 Votes
    2 Posts
    111 Views
    olivierlambert
    If the machines are in the same pool, no problem. If they are not, you need to export the template and import it into the other pool.
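    A sketch of the CLI route for the cross-pool case, assuming shell access to a host in each pool; the file name is arbitrary:

    ```bash
    # On a host in the source pool: export the template to an XVA file.
    xe template-export template-uuid=<template-uuid> filename=my-template.xva

    # Copy the file to the destination pool (scp, shared storage, ...),
    # then import it on a host there.
    xe vm-import filename=my-template.xva

    # If it arrives as a halted VM rather than a template, flag it back:
    # xe vm-param-set uuid=<imported-uuid> is-a-template=true
    ```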
  • OpenStack vs XCP-ng (XO)

    3
    0 Votes
    3 Posts
    438 Views
    I
    @olivierlambert got it.
  • XCP-ng host - Power management

    11
    2
    0 Votes
    11 Posts
    1k Views
    A
    @tjkreidl We don't need performance, but we do need to test how XCP-ng pools, networking, migration, live migration, backup, import from VMware and so on work. It's just a playground where we can have relatively many XCP-ng hosts; it's not about performance but about efficiency and low requirements, because it's a place where we learn, validate how things work, and prepare the process for the final migration from VMware to XCP-ng. We originally had two R630s ready for this, then four, but having dedicated physical hypervisors would have been unnecessary given the power consumption, so in the end we decided to virtualize it all. As for ESXi, it's because XCP-ng works seamlessly there in nested virtualization.