XCP-ng Forum: Popular topics (all categories, all time)
    • Marc.pezin

      Xen Orchestra Lite

      XO Lite
      4 Votes
      71 Posts
      53k Views
      Marc.pezin
      @tcorp8310 Yes, yum update should do the trick.
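      A minimal sketch of that update flow, assuming console or SSH access to the XCP-ng host as root (the comment about what gets pulled is an assumption, not an exhaustive list):

        yum update    # fetch and install the latest host packages (XO Lite ships with the host) from the configured repos
        # then reload the XO Lite page in the browser to pick up the new version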
    • Tristis Oris

      VDI_IO_ERROR Continuous Replication on clean install.

      Solved Xen Orchestra
      0 Votes
      66 Posts
      17k Views
      olivierlambert
      Good to know it works now. Thanks for the feedback!
    • cbaguzman

      VDI_IO_ERROR(Device I/O errors) when you run scheduled backup

      Xen Orchestra
      0 Votes
      66 Posts
      24k Views
      R
      @rauly94 Hello everyone. Can anyone help me with this issue? It now happens on 2 VMs instead of 1, during the backup replication.
      Error: VDI_IO_ERROR(Device I/O errors). This is a XenServer/XCP-ng error. Start: Jul 5, 2023, 09:03:18 AM. End: Jul 5, 2023, 09:44:46 AM. Duration: 41 minutes.
      Error: VDI_IO_ERROR(Device I/O errors). This is a XenServer/XCP-ng error. Start: Jul 5, 2023, 09:03:09 AM. End: Jul 5, 2023, 09:45:12 AM. Duration: 42 minutes.
      Error: VDI_IO_ERROR(Device I/O errors). This is a XenServer/XCP-ng error. Type: delta.
    • stormi

      XCP-ng 8.0.0 Release Candidate

      News
      5 Votes
      66 Posts
      43k Views
      borzel
      Anyone can request a change at https://github.com/xcp-ng/xenadmin/issues
    • olivierlambert

      Citrix Hypervisor 8.0 landed

      News
      5 Votes
      65 Posts
      47k Views
      C
      @stormi dd stands for disk dump and does exactly that: it copies a stream of data. Fio, however, can be configured for precise workloads, read/write mixes, parallel workloads, etc. So the first will only give you streaming benchmarks, which almost nobody cares about. The second can simulate real-world (VM/database...) workloads, where (controller) caches and non-magnetic storage (Flash, Optane, MRAM...) make the real difference. Also use a big amount of data, since caches can heavily skew results for small ones. Don't get me wrong: we need them and they can make huge differences, but as long as your benchmark fully fits into them, it gives you nonsense/fake results. Also, (consumer) SSDs start throttling after some tens to a very few hundred GB of data written: their caches fill up and they 'overheat'. You can spend days on benchmarks and how to do what.
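      A minimal fio sketch of the kind of mixed, real-world-ish workload described above, assuming fio is installed and the target path and sizes are placeholders:

        fio --name=vm-like --filename=/mnt/sr-test/fio.bin --size=20G \
            --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
            --iodepth=32 --numjobs=4 --runtime=120 --time_based --group_reporting
        # 70/30 random read/write in 4k blocks with direct I/O to bypass the page cache;
        # a 20G working set keeps the test larger than most controller caches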
    • D

      XSA-468: multiple Windows PV driver vulnerabilities - update now!

      News
      3 Votes
      65 Posts
      9k Views
      G
      @TrapoSAMA All of mine are 2022, but I saw this in previous driver versions with 2025. Low priority on this, so I haven't fixed it yet.
    • olivierlambert

      First SMAPIv3 driver is available in preview

      Development
      5 Votes
      64 Posts
      27k Views
      nikade
      @cg Those HDDs will take their fair time to rebuild. Always stressful watching how far along it is while crossing your fingers that another drive won't pop during the process.
    • J

      Import from VMware fails after upgrade to XOA 5.91

      Xen Orchestra
      0 Votes
      64 Posts
      19k Views
      olivierlambert
      Great news and also great feedback! I think we'll add this to our guide.
    • S

      Cannot export OVAs

      Solved Xen Orchestra
      0 Votes
      64 Posts
      15k Views
      olivierlambert
      Thanks for your feedback lads!
    • A

      XO Community Edition backups don't work as of build 6b263

      Backup
      0 Votes
      64 Posts
      8k Views
      R
      @florent Great work, thanks for the fixes
    • okynnor

      nVidia Tesla P4 for vgpu and Plex encoding

      Solved Compute vgpu
      0 Votes
      63 Posts
      28k Views
      M
      @high-voltages Just wanted to thank you again, it's working now with the commands I ran in the screenshot.
    • stormi

      XCP-ng 8.1.0 beta now available!

      News
      5 Votes
      63 Posts
      43k Views
      stormi
      This thread is now dead, long live the 8.1 RC thread!
    • C

      Switching to XCP-NG, want to hear your problems

      Migrate to XCP-ng
      0 Votes
      62 Posts
      22k Views
      nikade
      @flakpyro said in Switching to XCP-NG, want to hear your problems:

        @CodeMercenary I am using V4 on the XO server to our backup remotes and it seems to work just fine. However, using V4 as a storage SR was nothing but problems; as @nikade mentioned, we had tons of "NFS server not responding" issues which would lock up hosts and VMs, causing downtime. Since moving to v3 that hasn't happened. Checking a host's NFS retransmission stats after 9 days of uptime, I see we have had some retransmissions, but they have not caused any downtime or even any timeout messages in dmesg on the host.

          [xcpng-prd-02 ~]# nfsstat -rc
          Client rpc stats:
          calls        retrans      authrefrsh
          268513028    169          268537542

        From what I gather from this Red Hat blog post (https://www.redhat.com/sysadmin/using-nfsstat-nfsiostat), that amount of retransmissions is VERY low and not an issue.

      That's fine, we've got a lot more and I haven't seen any "nfs server not responding" in dmesg yet. We've been using NFS v3 for both SR and backups for a couple of years now and it's been great. I think I had issues once or twice in 5-6 years on the backup SR where the vhd file got locked by dom0; Vates helped out there as always and it was resolved quickly.
    • A

      Unable to export OVA

      Xen Orchestra
      0 Votes
      62 Posts
      13k Views
      Danp
      @florent Nothing large. Either 10 or 20GB. I'm thinking it's due to the same issue as here.
    • stormi

      New guest tools ISO for Linux and FreeBSD. Can you help with the tests?

      Development
      2 Votes
      62 Posts
      40k Views
      A
      @Pierre-Briec @stormi I had a look at getting xe-guest-utilities working on IPFire v2 (core 173, the latest version). Using a new /usr/sbin/xe-linux-distribution script, as suggested here, allows it to detect IPFire. I then manually copied the binaries and scripts from the Linux tar file into the right folders on IPFire, since the install script did not seem to handle IPFire properly.

      When starting the daemon using /etc/init.d/xe-linux-distribution, the next problem was that the "action" function does not exist in the /etc/init.d/functions file on IPFire. So I edited the script, replacing the "else" with an "fi" in the test where the functions file is sourced, so that the locally defined action function is used. Then the agent started fine.

      I also saw the issue of the IP address not being reported. In my setup, there are two reasons for this. One is that IPFire uses "red0", "green0", "blue0" etc. as interface names, which xe-guest-utilities will not consider. The other is that I do PCI passthrough of 3 network cards to the IPFire VM, so it does not use the "vif" interface/network that XCP-ng makes available to it, although "green0" is really on the same network as the "vif" in my setup. This was using the 7.30 tar file from the XCP-ng ISO, I think. I then cloned the 7.33 / master version of xe-guest-utilities from GitHub and used that thereafter. I manually changed and rebuilt xe-guest-utilities, adding "red", "green" and "blue" to the list of interface prefixes that get considered, but it did not help. I suspect the reason is that these interfaces do not have a /sys/class/net/<iface>/device/nodename entry, which contains a reference to the "vif" that XCP-ng knows about, as I understand it. So /sys/class/net/eth0/device/nodename exists, but eth0 is not assigned any IP address since it is not used by IPFire, while there is no /sys/class/net/green0/device/nodename entry. I am not sure who "creates" this "nodename" entry, but I suspect it is Xen, and I suspect it is missing because green0 has no real relationship with dom0. See the sketch after this post for the checks involved.

      But then I also have more questions about what is actually meant to be displayed as "network" info in the XOA web UI. Is it only the network between dom0 and domU, or ideally all networks defined on the domU (i.e. red0, blue0 and orange0)? I also think I spotted a bug on the "Stats" page of XOA: under "Disk throughput", it seems like "xvda" and "xvdd" are always displayed, even if the host only has one disk, "xvda". But I should report that as a bug if I do not find it already reported/known.

      While playing with this, I also noticed that the management agent version was not displayed properly, i.e. not at all. This seems to be caused by the version constants not being replaced while building the daemon. I am not a Go build expert, so I'll investigate it a bit more. But it seems I'm not the only one with that issue, because the same problem seems to exist with the xe-guest-utilities that are part of the Alpine Linux 3.17 distribution.

      I do not think that many people run IPFire on XCP-ng/Xen. I've been briefly involved in some pull requests against IPFire, so I might look at making one for getting xe-guest-utilities into IPFire itself, but since usage is low, I doubt it makes much sense. Thanks for a great tool in XCP-ng, I enjoy using it in my home setup. Regards, Alf Høgemark
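      A minimal sketch of the checks described above, assuming a shell on the IPFire guest (interface names follow that setup; the paths are standard sysfs locations for Xen netfront devices):

        # which interfaces expose the xenstore "nodename" link the guest agent relies on?
        ls /sys/class/net/*/device/nodename 2>/dev/null
        cat /sys/class/net/eth0/device/nodename 2>/dev/null    # exists, but eth0 carries no IP in this setup
        cat /sys/class/net/green0/device/nodename 2>/dev/null  # absent: green0 is a passed-through NIC
        # start the agent by hand to watch it come up
        /etc/init.d/xe-linux-distribution start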
    • L

      Issue with SR and coalesce

      Unsolved Backup
      0 Votes
      62 Posts
      7k Views
      nikade
      @tjkreidl said in Issue with SR and coalesce:

        @nikade Am wondering still if one of the hosts isn't connected to that SR properly. Re-creating the SR from scratch would do the trick, but it's a lot of work shuffling all the VMs to different SR storage. Might be worth it, of course, if it fixes the issue.

      Yeah, maybe, but I think there would be some kind of indication in XO if the SR wasn't properly mounted on one of the hosts. Let's see what happens; it's weird indeed that it's not shown.
    • BenjiReis

      IPv6 support in XCP-ng for the management interface - feedback wanted

      News
      6 Votes
      61 Posts
      30k Views
      jivanpal
      @BenjiReis I've finally taken the time to review this again now that I've updated to 8.3-rc1 via yum update, so here's some follow-up on the points I brought up previously:

      - "There is no way to configure IPv6 on the management interface via xsconsole, such as if one wants to switch between static configuration, autoconf via RAs, or DHCPv6."
        Reply: "True, but we'll soon release a new version of xsconsole adapted for IPv6, allowing IPv6 to be configured for the management interface."

      - "There is apparently no support for IPv6 DNS servers, only IPv4. For example, if I try to add an IPv6 address like fd00::1 or [fd00::1] as a DNS server via xsconsole, there is apparently no change to the configuration. Editing /etc/resolv.conf works to achieve this (e.g. adding the line nameserver fd00::1), but this is known not to persist across reboots."
        Reply: "Should be solved by the future xsconsole release as well."
        Follow-up: Still not seeing any enhancements/changes in behaviour as of xsconsole 11.0.6-1.1.xcpng8.3.

      - "There is apparently no support for RDNSS (advertisement of DNS servers in RAs rather than via DHCPv6)."
        Reply: "DHCPv6 is one of the major blind spots for now indeed. I'm working on it, but I don't have much knowledge on this, so any hints are welcome if you spot something missing somewhere."
        Follow-up: Just to clarify, this isn't related to DHCPv6, but to RAs (Router Advertisement packets). I personally don't have a DHCPv6 server on my network at all. RDNSS is described in RFC 8106. Others may want to advertise DNS servers using DHCPv6, though, so that should still be tested as well.

      - "The 'autoconf' option (available during installation, after choosing IPv6-only or dual-stack, and then being asked which mode to use to configure IPv6 addresses) appears to only be used at installation time to determine values such as the gateway's link-local address and the available address prefixes, and to perform SLAAC and DAD, but the resulting values are then hard-coded and don't change according to changes in the environment, such as an upstream change in network prefix. (I will need to do some more testing to really confirm this, but this seems to be the case in my experience.) Compare this to when IPv4 is configured to use DHCP(v4), in which case the management interface may have a different IPv4 address at different times, namely if it's assigned a different address by the DHCP server when it attempts to get or renew a lease."
        Reply: "I'm not aware of this issue, I'll try to reproduce it in our env."
        Follow-up: I haven't been able to reproduce this either, and my prefix has changed a couple of times since I said this was an issue. Perhaps I just imagined it, hit a weird edge case, or didn't wait for the valid lifetime of the old prefix to expire; my router doesn't reliably advertise the fact that an old prefix is no longer valid.

      - "Some repos are unreachable in IPv6-only environments, which I'm aware is already known, and I can get around this by using NAT64 (either with CLAT to perform 464XLAT, or with DNS64), but this fact is currently a blocker for me to move to being IPv6-only."
        Reply: "We contacted the mirrors many times, still trying to have them all advertise IPv4 and v6, and also trying to find a solution that could 'smartly' redirect towards a compatible mirror." @stormi said in IPv6 support in XCP-ng for the management interface - feedback wanted: "FYI, I have finally reviewed all mirrors that provide updates for XCP-ng and disabled the remaining 6 which didn't support IPv6 (and notified their owners. I'll enable them again if they enable IPv6). So, if you experience any issues installing updates via IPv6, tell us so that we can investigate faulty mirrors."
        Follow-up: I personally haven't had any issues reaching repos since then, but I haven't explicitly tested this or looked through the mirrorlist. I also don't think this is much of an issue in practice, since 464XLAT can be used; this is no longer a blocker for me, as I've reviewed the way I'm deploying IPv6-only. It's very nice to see you motivate / put pressure on mirror maintainers to make their sites accessible over IPv6, though, especially indirectly by simply removing such sites from the mirrorlist.

      Speaking of NAT64, this is just a question, I haven't tested or looked into this myself: does XCP-ng include a CLAT daemon and support for auto-configuring 464XLAT using either the "PREF64" RA option (RFC 8781) or resolution of ipv4only.arpa via a DNS64 server (RFC 7050)?
        Reply: "Haven't tested either for now, feel free to do so and report if you get there before me."
        Follow-up: I've got this working pretty easily by manually installing clatd from GitHub and its dependencies from EPEL and the other RHEL repos. It works, but isn't native. That being said, I don't know of any other Linux distros that natively support this yet; to my knowledge, there is ongoing work to implement it directly in systemd. Clatd supports RFC 7050 but doesn't support PREF64/RFC 8781, as it's not particularly feasible for it to do so, but hopefully systemd will be able to if/when it implements a CLAT. This also isn't reliable across reboots / DHCP lease renewals, because I have no simple way to disable IPv4 on the management interface, and I haven't tried this with an installation where I selected "IPv6-only" in the installer. One practical issue I've experienced when using 464XLAT in this way is that XO Lite tries to contact the pool servers in the frontend / client / web browser using JS fetch calls for URLs falling under https://localhost/, whereas these would usually fall under https://<pool server IPv4 address>/. These are the addresses for which XO Lite will prompt the user to ensure that the browser trusts the TLS certificates if they are self-signed and no known CA has issued/signed them. As such, these calls don't work, since "localhost" from the XO Lite user's perspective isn't the same machine as the "localhost" that XO Lite is running on. If XO Lite supported making these calls using any of the pool servers' routable IPv6 addresses (e.g. ULAs or GUAs, but not LLAs), this would work just fine. I may find some time to test these things on an "IPv6-only" installation, but I expect that will be after 8.3 has reached general release.
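      A minimal sketch of the /etc/resolv.conf workaround mentioned above, assuming root access in dom0 (fd00::1 is just the example address from the post, and the change does not survive a reboot):

        echo "nameserver fd00::1" >> /etc/resolv.conf    # add an IPv6 resolver by hand
        ping6 -c1 xcp-ng.org                             # quick check that name resolution and v6 reachability work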
    • D

      Very scary host reboot issue

      XCP-ng
      0 Votes
      60 Posts
      24k Views
      M
      @olivierlambert said in Very scary host reboot issue:

        I am very very busy, so I don't have time to do a search myself, but maybe someone else around with a few minutes could point you to the blog post talking about this. Edit: found it in a few seconds, luckily: https://xcp-ng.org/blog/2024/01/26/january-2024-security-update/

      Thanks. I'll check this out.
    • akurzawa

      backblaze b2 / amazon s3 as remote in xoa

      Xen Orchestra
      0 Votes
      59 Posts
      21k Views
      S
    • X

      Logs Partition Full

      Xen Orchestra
      0 Votes
      59 Posts
      32k Views
      X
      How can I determine which VM is causing this problem?

        Nov 1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830066:xen_platform_log xen platform: XENVIF|__AllocatePages: fail1 (c0000017)
        Nov 1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830198:xen_platform_log xen platform: XENVIF|ReceiverPacketCtor: fail1 (c0000017)
        Nov 1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830311:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail2
        Nov 1 13:05:00 df-c01-node04 qemu-dm-25[29890]: 29890@1730462700.830458:xen_platform_log xen platform: XENBUS|CacheCreateObject: fail1 (c0000017)
        (the same four XENVIF/XENBUS messages repeat many times per second in the excerpt, with timestamps up to 1730462700.932906)
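      A minimal sketch of how one might map that qemu process back to a VM, assuming dom0 shell access; the domain ID 25 is read off the "qemu-dm-25" process name in the log:

        xl list                                        # running domains with their numeric IDs
        xe vm-list dom-id=25 params=name-label,uuid    # resolve domain ID 25 to a VM name and UUID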