Subcategories

  • All Xen related stuff

    606 Topics
    6k Posts
    @redakula said in Coral TPU PCI Passthrough: Frigate NVR, one of the popular uses for the Coral, does not recommend it for new installs either. Frigate updated its recommendation because of Google's decision to sunset the device and because Frigate now has alternative options for image inferencing. The Coral is still supported, though, and Frigate is not the only use case or platform that can benefit from an accelerator. At the end of the day, if you've already got the hardware and it's efficient enough to run, not using it is a waste of resources that could be allocated to other VMs.
  • The integrated web UI to manage XCP-ng

    26 Topics
    348 Posts
    olivierlambert
    It's not meant to be used like that. If you are behind a NAT, the right approach is to have your XOA behind the NAT and inside the same network as the hosts. That's because hosts will always use and return their internal IPs to connect to some resources (stats, consoles, etc.). XOA deals with that easily as the "main control point" for all hosts behind your NAT (or an XO proxy if you prefer).
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    116 Topics
    1k Posts
    florent
    @yeopil21 You can target vSphere; it should work. The issue here is that XO fails to link one of your datastores to the datacenter. Is it XO built from source, or a XOA? You should have something like "can't find datacenter for datastore" in your server logs, with the datacenter and datastore names as detected by XO. Are you using an admin account on vSphere, or is it a limited one?
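    A hedged sketch of how one might hunt for that log message on a XOA appliance; the systemd unit name and grep pattern are assumptions for illustration, not taken from the post:
        # On XOA, xo-server usually runs as a systemd service; filter its journal for the datastore message
        journalctl -u xo-server --since "1 hour ago" | grep -i "can't find datacenter"
        # For XO built from source, grep whatever file or terminal you send xo-server output to instead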
  • Hardware related section

    159 Topics
    2k Posts
    @comdirect Use this command (replace sda in the command below with the relevant device): cat /sys/block/sda/queue/scheduler - the active scheduler will be enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives, use: grep "" /sys/block/*/queue/scheduler
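    A minimal shell sketch of the check described above, assuming a device named sda; the available scheduler names depend on your kernel, so only write back a name that the first command actually lists:
        # Show the schedulers for sda; the active one is bracketed, e.g. [cfq] or [mq-deadline]
        cat /sys/block/sda/queue/scheduler
        # Same check across every block device at once
        grep "" /sys/block/*/queue/scheduler
        # Illustrative only: switch sda to another listed scheduler until the next reboot
        echo mq-deadline > /sys/block/sda/queue/scheduler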
  • The place to discuss new additions into XCP-ng

    246 Topics
    3k Posts
    yann
    @Tristis-Oris Yes, it is likely you're using a newer kernel; we likely need to rebuild the agent using a newer version of the netlink crate.
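    For context, a hedged sketch of what such a rebuild could look like from a checkout of the xen-guest-agent sources; the exact crate names and whether a plain dependency refresh is enough are assumptions, not confirmed by the post:
        # Refresh dependencies (the netlink-* crates among them) to newer compatible versions, then rebuild
        cargo update
        cargo build --release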
  • reboot of host does it stop or kill running VM's?

    14
    0 Votes
    14 Posts
    3k Views
    Could someone elaborate on the procedure to have all VMs on a host shut down properly upon XCP-ng host shutdown, please? I tried from the host prompt: xe host-disable, then xe host-shutdown; and from XOA: Host > Shutdown, with the warning ("This will shutdown your host without evacuating its VMs. Do you want to continue?"), and rightly so the host seemingly became unavailable (ping to its IP stops). But then what happens is very odd: first the VM on it still pings for a couple of minutes (yes, after the host stops answering ping), then the VM stops pinging, but as far as I can see XCP-ng is not off. Awkwardly, I only have access to the iDRAC8 Enterprise license of the server XCP-ng is running on, and I can't see the proper status of XCP-ng from it. As far as I know it's not pinging, but it doesn't seem off either; at least the iDRAC shows it ON, and upon power cycling and reconnecting to the VM, the logs show it wasn't cleanly shut down. NB: the VM has xen-guest-agent running within a container, but from what I gathered, the agent in Linux guests has no role in VM shutdown: see https://xcp-ng.org/forum/topic/10631/understanding-xe-guest-utilities/16 Also, I double-checked Proxmox: it does cleanly shut down VMs, either with a "shutdown -h now" command or when triggered from the GUI, and that's with a VM that has the Proxmox guest agent installed. In any case, it would be nice to have XCP-ng/XOA be able to do the same.
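    A hedged sketch of one way to shut the VMs down cleanly before powering off the host, run from the host prompt; it is an illustration that assumes working guest tools and a host name-label matching the hostname, not an official procedure:
        # Find this host's UUID, stop it from accepting new VMs, then cleanly shut down its resident VMs
        HOST_UUID=$(xe host-list name-label="$(hostname)" --minimal)
        xe host-disable uuid="$HOST_UUID"
        for vm in $(xe vm-list resident-on="$HOST_UUID" is-control-domain=false --minimal | tr ',' ' '); do
            xe vm-shutdown uuid="$vm"    # clean guest shutdown; only add force=true as a last resort
        done
        xe host-shutdown uuid="$HOST_UUID"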
  • ACPI Error: SMBus/IPMI/GenericSerialBus

    5
    0 Votes
    5 Posts
    884 Views
    Forza
    @dinhngtu Yes, looks like it. I stopped Netdata and the problem went away. But it is strange it started after the latest set of updates.
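    A hedged way to double-check that correlation, assuming Netdata was installed as a regular systemd service in dom0 (the unit name is an assumption):
        systemctl stop netdata            # stop the collector temporarily
        dmesg -w | grep -i "ACPI Error"   # watch whether new SMBus/IPMI errors still appear
        systemctl disable --now netdata   # make it permanent if stopping it really clears the errors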
  • Migrate windows from Xeon Silver to older Xeon or AMD?

    Solved
    3
    0 Votes
    3 Posts
    484 Views
    @olivierlambert I was looking for a way to mark this as solved but can't find it. I haven't moved things yet, but after migrating my big lab to my mini-lab, I'm confident that warm migration is the way to go. It was fast and seamless as long as you have the right network adapters set up. I had to fiddle with one of my networks to make a VM function, but that was certainly something I overlooked while setting up the mini-lab. A little testing before moving the VMs should make this go easily if using the old servers is the option for this project.
  • Veeam and XCP-ng

    Solved
    34
    0 Votes
    34 Posts
    22k Views
    planedrop
    @MAnon This is a valid point actually, and without additional work, you couldn't just restore to another hypervisor. However, check this blog post: https://xen-orchestra.com/blog/xen-orchestra-5-100/ Veeam is likely going to properly support XCP-ng. And for what it's worth, you can use agent-based Veeam backups in the VMs, and that works fine.
  • 0 Votes
    30 Posts
    8k Views
    @mickwilli Yes, sorry, I posted a new subject. The February update fixed a lot of bugs, but not the freeze. I have found the solution now: I installed a new VM with W11 Pro 23H2 and there are no bugs, it's fine. Thanks to all. All the bugs were fixed in the previous version, 23H2 - it's Microsoft, the past is better than the future.
  • Unable to attach empty optical drive to VM.

    2
    0 Votes
    2 Posts
    471 Views
    I've managed to at least solve part of my issue. Using this article, I managed to pull together the information I needed in order to remove the optical drive from the VM. It referenced xe vbd-list. I found the manpage for that command and noted that I could get the information I needed to remove the drive. For future me to reference, because I know I'll somehow do this again in the future:
    List all Virtual Block Devices (VBDs) associated with the VM (you can do this by vm-uuid or vm-label):
        [20:42 xcp-ng-1 ~]# xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" params="all"
    Output should show the following:
        uuid ( RO): 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0
        vm-uuid ( RO): 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO): veeam01
        vdi-uuid ( RO): 7821ef6d-4778-4478-8cf4-e950577eaf4f
        vdi-name-label ( RO): SCSI 2:0:0:0
        allowed-operations (SRO): attach; eject
        current-operations (SRO):
        empty ( RO): false
        device ( RO):
        userdevice ( RW): 3
        bootable ( RW): false
        mode ( RW): RO
        type ( RW): CD
        unpluggable ( RW): false
        currently-attached ( RO): false
        attachable ( RO): <expensive field>
        storage-lock ( RO): false
        status-code ( RO): 0
        status-detail ( RO):
        qos_algorithm_type ( RW):
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW):
        io_read_kbs ( RO): <expensive field>
        io_write_kbs ( RO): <expensive field>

        uuid ( RO): 4d0f16c4-9cf5-5df5-083b-ec1222f97abc
        vm-uuid ( RO): 3eb63bb4-29d1-f3a7-44a1-37fdb3711454
        vm-name-label ( RO): veeam01
        vdi-uuid ( RO): 3f89c727-f471-4ec3-8a7c-f7b7fc478148
        vdi-name-label ( RO): [ESXI]veeam01-flat.vmdk
        allowed-operations (SRO): attach
        current-operations (SRO):
        empty ( RO): false
        device ( RO): xvda
        userdevice ( RW): 0
        bootable ( RW): false
        mode ( RW): RW
        type ( RW): Disk
        unpluggable ( RW): false
        currently-attached ( RO): false
        attachable ( RO): <expensive field>
        storage-lock ( RO): false
        status-code ( RO): 0
        status-detail ( RO):
        qos_algorithm_type ( RW):
        qos_algorithm_params (MRW):
        qos_supported_algorithms (SRO):
        other-config (MRW): owner:
        io_read_kbs ( RO): <expensive field>
        io_write_kbs ( RO): <expensive field>
    Look for the device with type ( RW): CD and take its uuid; in this case, the uuid was 7443c2f0-7c04-ab88-ccfd-29f0831c1aa0. Destroy the VBD:
        xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"
    Once this was done, the VM started without issue.
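    A shorter variant of the same lookup, for reference; the field list is illustrative, while the type=CD filter and params= are standard xe list options:
        # Show just enough of each VBD to spot the CD drive of that VM
        xe vbd-list vm-uuid="3eb63bb4-29d1-f3a7-44a1-37fdb3711454" type=CD params="uuid,device,currently-attached"
        # Then remove the offending VBD by its uuid
        xe vbd-destroy uuid="7443c2f0-7c04-ab88-ccfd-29f0831c1aa0"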
  • The HA doesn't work

    16
    0 Votes
    16 Posts
    2k Views
    @tjkreidl Hello, didn't I get about half of them OK too? 28 machines were impacted: 15 were left OK and 13 had the error message.
  • Failure to Boot

    3
    0 Votes
    3 Posts
    520 Views
    @Davidj-0 Zero changes. This runs on an MS-01 with 2x 2TB NVMe drives in a mirrored RAID. All I use this for is to mess around with VMs and self-host some services. I was still learning, so I never backed anything up because I was still building it out. I don't feel like starting over, but I have no idea what this fault even means in order to attempt to recover what I have done.
  • All NICs on XCP-NG Node Running in Promiscuous Mode

    7
    0 Votes
    7 Posts
    1k Views
    bleader
    Running tcpdump switches the interface to promiscuous mode so that all traffic reaching the NIC can be dumped. So I assume the issue you had on your switches allowed traffic to reach the host, which then forwarded it to the VMs; it wasn't dropped because tcpdump had switched the VIF into promiscuous mode. If it seems resolved, that's good; otherwise, let us know if we need to investigate this further.
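    A quick way to see this in practice (the interface name is an assumption; on XCP-ng hosts they are typically ethN or xenbrN):
        # PROMISC shows up in the flags line while a capture is running
        ip link show eth0
        # Capture without switching the NIC to promiscuous mode (-p), so only traffic addressed to the host is seen
        tcpdump -p -nn -i eth0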
  • Debian VM Takes down Host

    3
    0 Votes
    3 Posts
    440 Views
    @Andrew Ok, thanks I will give that a try.
  • Does XCP-NG support NVMe/TCP?

    4
    0 Votes
    4 Posts
    876 Views
    @olivierlambert Thanks!
  • 0 Votes
    1 Post
    242 Views
    No one has replied
  • DC topology info

    11
    0 Votes
    11 Posts
    1k Views
    @bleader yes, Thank you.
  • Beginner advice - coming from Debian

    8
    1 Votes
    8 Posts
    1k Views
    @WillEndure said in Beginner advice - coming from Debian: @DustinB said in Beginner advice - coming from Debian: "Why are you keen on keeping raw Xen on Debian?" Not committed to the idea - it's just what I currently have, and I invested a bit of time into setting it up and understanding it, since before XCP-ng was around. Time is a factor too, because you can waste a lot of it setting stuff like this up! But overall yes, I should probably move over to XCP-ng for my host. Got it, sunk-cost fallacy.
  • Copying a VM from 8.2 to 8.3 and back

    2
    0 Votes
    2 Posts
    412 Views
    stormi
    I think this part of the doc describes your issue: https://docs.xcp-ng.org/releases/release-8-3/#a-uefi-vm-started-once-on-xcp-ng-83-cant-start-if-moved-back-to-xcp-ng-821
  • Unable to find logs in XenCenter or Xen Orchestra

    Solved
    5
    0 Votes
    5 Posts
    724 Views
    @olivierlambert Thanks, I got it.
  • PCIe card removal and failure to boot from NVMe

    Solved
    14
    1 Votes
    14 Posts
    2k Views
    olivierlambert
    Okay, weird. At least glad to know it works now.
  • how to use template created in another host machine?

    2
    0 Votes
    2 Posts
    174 Views
    olivierlambert
    If the machines are in the same pool, no problem. If they are not, you need to export the template and import it into the other pool.
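    A hedged CLI sketch of that export/import round trip; the parameter names follow the xe CLI reference but should be verified on your version, and the UUIDs are placeholders:
        # On the source pool: export the template to an XVA file
        xe template-export template-uuid=<template-uuid> filename=mytemplate.xva
        # On the destination pool: import it, picking a target SR
        xe vm-import filename=mytemplate.xva sr-uuid=<sr-uuid>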
  • Openstack vs xcp-ng (XO)

    3
    0 Votes
    3 Posts
    666 Views
    @olivierlambert got it.
  • XCP-ng host - Power management

    11
    0 Votes
    11 Posts
    2k Views
    @tjkreidl We don't need performance, but we do need to test how XCP-ng pools, networking, migration, live migration, backup, import from VMware, and so on work. It's just a playground where we can run relatively many XCP-ng hosts; it's not about performance, it's about efficiency and low requirements, because it's where we learn, validate how things work, and prepare the process for the final migration from VMware to XCP-ng. We originally had two R630s ready for this, then four, but given the power consumption it would have been unnecessary to use physical hypervisors, so in the end we decided to virtualize it all. And it's on ESXi because XCP-ng works seamlessly there in nested virtualization.