• Created VM from Fast Clone, Now How to Separate

    Xen Orchestra
    5
    0 Votes
    5 Posts
    116 Views
    bvitnik
    @hawkpro I believe it will be worse with continuous replication, because your replica will be in a shut-down state. When you decide to start it, you will have to shut down the original VM and start the replica, so you will have downtime during the shutdown and startup sequence. Downtime during VM migration is a necessity, so there is nothing unexpected there. All types of migration require a VM to be suspended for some time (usually seconds) during the switchover from one host or SR to the other. If your VM migrations have extended downtimes, then something is not quite right with your setup.
  • Host stuck at grub on reboot

    Hardware
    1
    0 Votes
    1 Post
    32 Views
    No one has replied
  • Intel iGPU passthrough

    Hardware
    45
    0 Votes
    45 Posts
    21k Views
    T
    I am in the same situation as @vhaelan: the same iGPU (Alder Lake) passed through, the same output for those latest commands, and the same OS (Fedora CoreOS on the latest XCP-ng stable). I have tried the avenues suggested in this thread with no progress. It seems vhaelan has settled on CoreOS under Proxmox (which works!). He also mentioned it works in Debian under XCP-ng, though I haven't tested that myself. I would appreciate additional suggestions for troubleshooting to take this further, in case anyone has other ideas.
  • 0 Votes
    4 Posts
    101 Views
    W
    @simonp I'm not sure which one, as I can see two config.toml files. The first is under "/root/.config/xo-server/" (config.toml.txt); the second is under "/opt/xo/xo-server/" (config.toml2.txt). Both config.toml files are attached. Thank you. Best regards, Azren
  • Issues with new VM after latest 8.3 updates (prior to release)

    Solved XCP-ng
    4
    1 Vote
    4 Posts
    179 Views
    olivierlambert
    No worries, it happens! Glad you found the problem
  • 0 Votes
    11 Posts
    2k Views
    DustyArmstrong
    @Greg_E Thanks, I've got another thread up and it's potentially being addressed!
  • Minimums for XOstor disk configuration?

    XOSTOR
    6
    0 Votes
    6 Posts
    195 Views
    D
    And to really round this out, the MTBF for any of these is in the millions of hours (1.2-3M); that's a use time of roughly 137 to 342 years respectively. Basically, if a drive dies, just replace it, no matter what; in the end, the reliability of these drives is meant to outlast all of us. Unless you actually need some specific function provided in some form factor or model, don't bother.
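    The back-of-the-envelope conversion above is easy to check; a minimal sketch, assuming 24 × 365 hours per year:

```python
HOURS_PER_YEAR = 24 * 365  # 8760 hours

def mtbf_to_years(mtbf_hours):
    """Convert a drive's rated MTBF (in hours) into years of continuous use."""
    return mtbf_hours / HOURS_PER_YEAR

low = mtbf_to_years(1.2e6)   # ~137 years for a 1.2M-hour MTBF
high = mtbf_to_years(3.0e6)  # ~342 years for a 3M-hour MTBF
```

    Either way, the rated lifetime dwarfs any realistic deployment window, which is the point being made above.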
  • USB-Passthrough does not survive reboot of VM

    XCP-ng
    3
    1
    0 Votes
    3 Posts
    92 Views
    C
    @DustinB doesn't it use the exact same mechanism? I'll have to find out.
  • IPMI Info Outlet Air temp missing.

    Xen Orchestra
    10
    2
    0 Votes
    10 Posts
    312 Views
    J
    @acebmxer I’m sorry to say that if those Dells are at your workplace, the wrong edition of iDRAC was purchased. With iDRAC 9 at the very least, access to the full granular IPMI sensor data was placed behind an edition paywall by Dell Technologies. Outlet temperature is just one of the feeds missing from the Enterprise or lower editions of iDRAC 9. You’ll get the temperature readings from the Dell iDRAC web-based interface, but not via IPMI with iDRAC 9 Enterprise. To obtain the full IPMI sensor data, you need the Datacenter edition.
  • Unable to connect to V5

    Xen Orchestra
    3
    0 Votes
    3 Posts
    113 Views
    J
    @olivierlambert said: I think it was fixed since, are you sure you are using an up-to-date XO? Commit 0be23. Anyway, I was directing attention to the GitHub issue that was opened. The issue has another reference to it.
  • 0 Votes
    2 Posts
    94 Views
    olivierlambert
    Good question! Ping @pdonias I assume
  • Commit 109e376 Implications

    Xen Orchestra
    5
    0 Votes
    5 Posts
    344 Views
    rzr
    The latest XO can now be tested alongside XCP-ng (with OpenSSL 3): https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing/363 Feedback welcome!
  • 0 Votes
    2 Posts
    108 Views
    florent
    @ItzJay in your journalctl -u xo-server log you should have a line like "nbdkit logs of ${diskPath} are in ${tmpDir}", where tmpDir is something like /tmp/xo-serverXXXX/stderr. Can you post that file? Alternatively, you can open a support ticket.
  • Backup and the replication - Functioning/Scale

    Backup
    20
    0 Votes
    20 Posts
    849 Views
    florent
    Thanks Andryi. We use round robin when using NBD, but to be fair, it does not change the performance much in most cases. The concurrency setting (multiple connections to the same file) helps when there is high latency between XO and the host. So, @fcgo, if you have thousands of VMs, you should enable NBD: it will consume fewer resources on the Dom0 and XO, and the load will be spread across all the possible hosts.
  • Mirror backup: Progress status and ETA

    Backup
    4
    1
    1 Vote
    4 Posts
    266 Views
    A
    @Forza Too funny. I came across this post and clicked on the URL you referenced... and that earlier question was from me! Well, nothing has changed. I'm doing mirrored backups and I'm still blind as a bat.
  • Automation of all CRUD operations

    REST API
    6
    0 Votes
    6 Posts
    169 Views
    J
    @rama said: @olivierlambert thank you, but is it possible to keep track of all the CRUD operations, like we have in Terraform? Currently the MCP has only read tasks. If some new interns in my lab don't know about this, then with this agentic framework, creating, deleting, or updating a VM could be done very quickly; it would save many hours. I hope this will be available in the future, or if you plan to do it, tell me how far along it is. The MCP Server plugin is read-only by design to keep it safe to use; having one MCP server for reading and another for writing is best practice. If you would like a separate MCP server for the write actions, feel free to suggest that in the feedback portal. You can even develop your own MCP server that calls the write side of the XO REST API. https://modelcontextprotocol.io/
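    As a rough illustration of the "write side of the XO REST API" idea, here is a hedged sketch of building authenticated requests against an XO instance. The URL, token, endpoint paths, and the clean_shutdown action are assumptions for illustration; check the XO REST API documentation for the actual routes your version exposes:

```python
import json
import urllib.request

# Hypothetical values: replace with your own XO address and auth token.
XO_URL = "https://xo.example.org"
TOKEN = "your-xo-token"

def build_xo_request(path, method="GET", body=None):
    """Build an authenticated urllib Request for an XO REST API call."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"{XO_URL}{path}",
        data=data,
        method=method,
        headers={
            "Cookie": f"authenticationToken={TOKEN}",
            "Content-Type": "application/json",
        },
    )

# Read (what the current read-only MCP server covers): list VMs.
req = build_xo_request("/rest/v0/vms")
# A write call that a separate, write-capable server might issue
# (illustrative path, not sent here):
# urllib.request.urlopen(build_xo_request(
#     "/rest/v0/vms/<vm-uuid>/actions/clean_shutdown", method="POST"))
```

    Keeping the write-capable client separate from the read-only one mirrors the "one MCP for reading, one for writing" split suggested above.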
  • Issues joining pool with less pif on the newest host

    XCP-ng
    13
    0 Votes
    13 Posts
    507 Views
    A
    @semarie This pool is still on 8.2.1; we are trying to add this host in order to upgrade with little to no downtime.
  • Backups routinely get stuck on the health check

    Unsolved Backup
    10
    1
    0 Votes
    10 Posts
    775 Views
    D
    Hello @Austin.Payne, I wanted to share my experience. I had similar issues through multiple XO versions. However, after learning that health checks rely on the management port, I did some more digging. TL;DR: it was a network configuration problem, not an XO or XCP-ng problem. If you have a multi-NIC setup and run XO as a VM on your XCP-ng host, I would recommend that whatever network you use for management is on the same NIC.
    Setup: XO is a VM on the XCP-ng host (the only host in the pool).
    Network setup:
    eth0 = 1GB NIC = management interface for the XCP-ng host (192.168.0.0/24 network)
    eth1 = 10GB DAC = NIC for the 192.168.0.0/24 network passed through to VMs (XO uses this NIC)
    eth1.200 = 10GB DAC = VLAN 200 on the eth1 NIC for the storage network (10.10.200.0/28). Both the XCP-ng host and VMs (including the XO VM) use this.
    IP setup:
    XCP-ng host = 192.168.0.201 on eth0; 10.10.200.1 on eth1.200
    XO VM = 192.168.0.202 on eth1; 10.10.200.2 on eth1.200
    Remote NAS for backups = a different computer on the 10.10.200.0/28 network
    In this setup, backups would always finish, but health checks would hang indefinitely. However, after changing the XCP-ng host to use eth1 for the management interface instead of eth0, health checks started passing flawlessly. I am not sure if the problem was the XCP-ng host connecting to the same network with two different NICs, or if eth1 was simply the better NIC and thus more reliable during the health check (which could also explain why backups always succeeded). It could also have been switch related: eth0 was connected to a Cisco switch, and eth1/eth1.200 to a MikroTik switch. Again, I'm not sure what actually solved it, but consolidating everything onto a single NIC (and physically unplugging eth0 after the eth1 consolidation) fixed the issue for me. Hopefully sharing my experience helps solve this issue for you.
  • 0 Votes
    4 Posts
    152 Views
    P
    @kent you would have to roll back to an early December 2025 XO/XOA (from before December 10), which is quite a long way. I'm just waiting for the devs to eventually fix it, as we have another way to manage VDIs (API calls).
  • 4 Votes
    4 Posts
    445 Views
    dalem
    Version 1.4.0 is released: https://codeberg.org/NiXOA/system/releases/tag/v1.4.0 It includes significant changes and improvements, including: a dedicated getting-started section, migration to Valkey, only needing to clone System, and helper scripts. The xen-orchestra-ce nixpkg now references the libvhdi nixpkg, and the Core flake now references and pulls from the xen-orchestra-ce repo as an overlay. System (the user-input flake) now uses the Core repo as an overlay, reducing the need to clone both locally and allowing System to pull new updates and releases of Core, XO, and libvhdi as needed. The next goals are: make an xsconsole-like TUI; automate package updates for libvhdi and xen-orchestra-ce using CI/CD pipelines; submit libvhdi and xen-orchestra-ce as official nixpkgs.