Subcategories

  • All Xen related stuff

    540 Topics
    5k Posts
    N
    @krul8761 I'm not certain (maybe someone at Vates / @olivierlambert can correct any misgivings or misunderstandings I have on this), but after a lot of deep diving, research and experimentation, it appears that certain motherboard/CPU combinations are capable of "NV" - Nested Virtualization - with varying degrees of effort involved to get it to work. I tried everything I could think of, everything I stumbled upon to "try" throughout the XCP-ng/XenServer and Proxmox forums, VBox, even virsh/VMM/libvirt, and I never found a solution that worked on the Dell PowerEdge platform (R710 and R730 models, which have Intel CPUs/chipsets), nor on any Asus motherboard combos (using AMD CPUs/APUs - tested nearly a dozen with XCP-ng), nor on any other hardware.

    It's worth mentioning that I have mostly tested "old and End-of-Life" hardware that I am attempting to keep alive and useful for various purposes and "learning tools" (e.g. DNS servers, VPN servers, "apt/yum/dnf cachers", "Virtual Machine Appliances" like security/firewall/networking VMs, K8s nodes, and the like). My expectations were fairly low for getting XCP-ng to do "NV" on most of these systems after I went through the pain of trying various kernels, "manually compiling", tweaking/hacking sys configs, customizing with modprobe, all the things (within my limited scope of understanding, as I am no Linux kernel developer) - nothing worked.

    So I tried to re-think everything and weigh the options. I will get real wordy and opinionated here, but maybe this will save others from spending a lot of time "getting stuck" or going down dead-end paths, as I certainly have, in thinking about "why Nested Virtualization is so important to me" (and how that thinking evolved with reality):

    My first thought: "Buy more stuff" - explore a known-good / user-reported / well documented hardware platform where Nested Virtualization works well with XCP-ng. I decided against this, as I could "feel" that even if "something finally worked!", its days would likely be numbered and I couldn't rely on NV-ed VMs for key services: at any moment there might be some breaking change from a yum update, and if I attempted to "hold back" certain packages to maintain the frail functionality I had worked so hard to accomplish, there would be security concerns (2024 was wild, and 2025 looks to be even more so, in terms of security).

    Then I shifted to what I felt was a more sane option: use a different hypervisor that has known-good and stable Nested Virtualization capabilities. (Obvious choice, right? Welllll...) What I've found is that there really are NOT very many decent options for Nested Virtualization. There ARE working solutions for MOST edge cases, but no real "clear best fit".

    How "sane" is it to switch to VMware ESXi or "Workstation Pro", since that product line has (arguably) "the most simple and functional NV"? There used to be ESXi, but Broadcom wrecked that... though they did give us "VMware Workstation Pro" as consolation. But for how long? And what's with the weird licensing? How secure is it REALLY, with ALL those CVEs and newsworthy security flaws exploited by extremely sophisticated attackers? It seems like ESXi and other VMware products got rolled out as "polished turds" before the "sale" to Broadcom, in terms of stability and security. It's not just a problem with VMware/Broadcom, either.
    I want to be clear that I'm not solely bashing VMware/Broadcom, per se; these are issues with any "integral platform" and/or hypervisor, but in the case of (older?) ESXi and VMware products, such issues are somewhat exacerbated by the lack of transparency that comes with Closed Source.

    How sane is Hyper-V? Hyper-V has Nested Virtualization on most modern PC platforms and is backed by LOTS of "supporting communities and documentation". Hyper-V is actually not a terrible option, once you learn how to navigate the confusing GUI options and the weird "secret knowledge of PowerShell commands" that ACTUALLY "unlock" the feature you THOUGHT you enabled in the GUI menus, after rebooting 32 times to finally get the output of an often seemingly unrelated feature/tool/status that you interpret as "I finished step 12 of 47!" (the usual PowerShell step is sketched below). But what comes next, once you DO get Nested Virtualization working in Hyper-V? Pain.

    I haven't tried the more modern iterations of "Windows Server 202X", but the earlier versions and the "starter version" on Windows 10/11 Pro DO have some interesting use cases where it's the path of least resistance. For example, Dell's (insanely resource hungry) "OMSA" / OpenManage Enterprise Appliance (or whatever confusing name it's better known by in your particular workplace and homelab circles) has a ready-to-go "Hyper-V Appliance" that is... nice... but... you probably need at least 3+ Dell "Enterprise Products" to justify the CPU and RAM requirements for the more useful features. So systems with 32GB or less RAM aren't going to be able to run "always on appliances" like these (again, Dell isn't the only one that does this - these are "Enterprise Grade Solutions", so there is an inherent "infinite money" expectation when using "Enterprise Vendor Closed Source tools" - they NEED more spend to keep making hardware/software synergies).

    Hyper-V is TOTALLY worth mentioning, and I realize I've been harsh on it, as it does have its place and use cases that will most likely fit what you are trying to accomplish, @krul8761. But for a homelab or "experimental dev environment"? Hyper-V as a platform will take forever to "spin up", you will learn WAY more than you ever wanted to about strange MS Windows features, security measures (and flaws) and other various quirks, and the timesuck of trying to weed through which change happened in which release or "KB" for Windows is very real. Hyper-V (and basically all Windows / Microsoft products) has one of the most extensively confusing blends of old, new, stale and "in development" support forums and documentation. Microsoft does a Disturbingly good job (capital D) of "covering its tracks": you will notice that 50+% of the time you search for something Windows/MS specific with a non-EVIL search engine, the link redirects you to the generic "search our forums!" page - HP is really getting good at this sort of "scrubbing", too. Some of what remains is good, but most of the details I search for tend to go "missing", and I find plenty of "just turn it off and on again" advice on issues that are 5+ years old. All that said, is THAT the hypervisor you want to trust? I don't. BUT, again, Hyper-V DOES have its use cases.
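    For reference, here is a minimal sketch of the PowerShell step alluded to above, assuming a Hyper-V host with a VM named "NestedLab" (the VM name is a placeholder, and the VM must be powered off first):

        Set-VMProcessor -VMName "NestedLab" -ExposeVirtualizationExtensions $true
        # MAC address spoofing is typically also needed so guests nested inside that VM get network access
        Set-VMNetworkAdapter -VMName "NestedLab" -MacAddressSpoofing On

    Whether the nested guests then perform acceptably is a separate question, as described above.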
    Like its integration with WSL and Docker Desktop, and there are some interesting developments happening with "shared / partitioned GPU configurations" (that's a whole other rabbit hole, if you're into gaming or cheaping out on GPU costs for simple video streaming/transcoding). But GOOD LUCK sorting all that out with the "documentation" - you often have better luck with the handful of people who slap together 15+ step guides, with the accompanying files and lists of commands needed to get it working, that end up in a GitHub repo completely unrelated to MS (case in point: https://github.com/DanielChrobak/Hyper-V-GPU-Partition-Guide ). These "techniques" DO in fact work, but there are SO MANY caveats and "gotchas" that it becomes unrealistic to maintain them for very long. So "Yes, Hyper-V can and does work!" - but it's a HOT MESS trying to debug and maintain such complicated, un-intuitive configurations that are part GUI, part PowerShell, part Windows Settings, part "custom scripts you download and run" (IMO). Again, if the newer versions of Windows Server 202X have a better toolset for Hyper-V, I'm not aware of it (nor do I have much interest), but I'm not bashing Hyper-V just because it isn't OSS. It's because it's a trainwreck. If one simple feature takes 2 days to try to understand, then track down all the "pieces", then test... only to find out "it doesn't work on my device/motherboard/etc", with nearly non-existent "feedback" to confirm or deny it's even possible... there's no way it's going to be my go-to or my recommendation. But maybe whatever cool-but-poorly-implemented feature you want will work for your specific blend of hardware. It has VERY rarely worked out for me, and even more rarely been "worth the effort involved" (specifically the "GPU sharing" and "PCIe pass-through" implementations - https://woshub.com/passthrough-gpu-to-hyperv-vm/ - and this thread is worth reading too - https://gist.github.com/Ruffo324/1044ceea67d6dbc43d35cae8cb250212#file-hyper-v-pci-passthroug-ps1 ). "Pick your poison" applies here.

    What about virsh/libvirt/VMM for Nested Virtualization? The short answer is "yes you can", but it's only slightly less convoluted than with Hyper-V (again, just IMO). This is an excellent article spelling it out with a basic Ubuntu setup using nested Ubuntu and Windows installs - https://just.graphica.com.au/tips/nested-kvm/ - BUT... while nesting / passing-through and generally "enabling all the coolest features" with libvirt/virsh/KVM/QEMU (what is the actual "common name" for this hypervisor, anyway?), you will probably have a BRUTAL journey trying to match configurations to your specific "base OS" (if not Ubuntu, or perhaps Rocky/Fedora). A quick sanity check for whether nesting is even switched on is sketched below.

    The JANKY way you have to configure "bridges" for your network adapters and the super awkward UI (if you're using VMM and not just pure CLI) turn me off pretty hard. Just my personal experience: at one point I had everything working "perfectly" from a single NIC - WakeOnLAN, shared IP / CIDR from "host to guest", "vNICs" for each of my VMs - great! ...but then on reboot? It fubar-ed everything, and I gave up fighting with it. I finally used a 2nd USB NIC and that "worked", but then there were odd quirks with all the other overly-complicated networking for libvirt/virsh/VMM, too (too many and too complicated to remember, let alone list out). So this fairly great "Type 1.5 hypervisor" (it's an odd one to nail down the "Type" of, which reflects its power and capabilities) comes with strings attached.
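    Before fighting any of that, a minimal sketch of checking whether nesting is even enabled on a libvirt/KVM host (assuming an Intel CPU; swap kvm_intel for kvm_amd on AMD - the paths are the standard ones, but verify on your distro):

        cat /sys/module/kvm_intel/parameters/nested              # 'Y' or '1' means nested is already on
        echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
        sudo modprobe -r kvm_intel && sudo modprobe kvm_intel    # reload the module (shut down VMs first)
        virt-host-validate                                       # libvirt's own host capability check

    The guest also generally needs its CPU mode set to host-passthrough in its libvirt XML before a hypervisor inside it will see VMX/SVM.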
    But given all of its issues and challenges, hard-to-find configurations, updates, and odd-but-not-awful "XML-based" feature enablement, it has a place for certain use cases but is no "joy" to work with, either (the permissions issues alone are probably where most people give up on this one).

    TrueNAS, though? I'll throw an honorable mention out to TrueNAS, too. But using TrueNAS as a virtualization platform is, again in my opinion, similar to putting a NOS boost on a semi truck. Cool idea, but... TrueNAS is an EXCELLENT tool for managing Network Attached Storage ("N.A.S." - imagine that, right?). It can take a "pile of hard drives" and transform them into an awesome, resilient, sometimes-high-performing-but-better-used-for-reliability configuration. As a virtualization platform? It's more of a "bell and whistle". If all you WANT is a (very) solid storage solution for your home or office that has a "side perk" of running something like a few Docker containers or low-resource VMs - private DNS servers, an internal website, a local database, etc. - then it's a great choice (and this is likely true for MOST people, even MOST small businesses). Last I checked, "Nested Virtualization is an experimental feature" in TrueNAS, just like with XCP-ng (likely for all the same reasons, too). You can even do Kubernetes on TrueNAS ( https://www.truenas.com/community/threads/kubernetes-cables-the-challenges.109901/ ). But building full-scale applications, or trying to do something with it that warrants a "need" for Nested Virtualization? You're probably barking up the wrong tree (yes, even with TrueNAS CORE instead of SCALE). You might find ways to "make it work", but you're also spending a lot of time and energy fighting your platform, rather than building your app/idea/business/personal relationships. That said? I would call it "a good problem" if you are getting to the point where you have started to outgrow TrueNAS in an office/SOHO setting, and "leveling up" if your home lab and a desire for something more "production grade" or "cloud like" is where your learning journey is pulling you.

    VirtualBox, maybe? VirtualBox performance tends to be fairly awful. There are "things you can do" to help it out, sure, but I look back at all the years I used it and see how much of a "crutch" it really was (for a very long time). You can "sometimes" get Nested Virtualization to work with it, but it's tedious and painful (to my memory - I admit it's been a while) to "enable NV correctly" (the one-liner is sketched below), and the performance is already bad enough in "un-nested" VMs, so it gets exponentially worse as you have less and less system resources to throw at it to keep it "usable". So, again, if you just need something "simple" like what I call an "inverted WSL", VBox is a solid choice (max 5 VMs total, never running more than 2-3 at a time) - such as if you are running Ubuntu/Debian (or whatever) Linux on bare metal but still have a few tools you need/prefer in Windows (e.g. "proper" MS Office, MS SQL, HeidiSQL, mRemoteNG, XCP-ng Center, WinSCP, and the other long list of "Windows only" Enterprise tools you might need for your day job). VirtualBox has a WAY better layout / GUI / toolset design than Hyper-V (IMO), but Hyper-V DOES have better performance in my experience (assuming the same disk and vCPU/RAM allocations) - neither is usually "great" (one just sucks less than the other).
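    For completeness, the VirtualBox toggle I'm referring to is a single command against a powered-off VM (the VM name is a placeholder; recent VirtualBox versions also expose this as an "Enable Nested VT-x/AMD-V" checkbox in the VM's System settings):

        VBoxManage modifyvm "NestedLab" --nested-hw-virt on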
    The worst part about using VirtualBox with Windows 11, aside from the "expensive" resource-cost-to-performance, is that it can/will cause issues with other virtualization tools like WSL, Docker Desktop, Hyper-V and/or VMware products. So, once again, there are nuances, pros and cons, and no clear "one size fits all" solution if you need to use any of those tools together on Windows. But for Linux or macOS "host machines" running VirtualBox? Maybe that's a great fit, but it's probably <1% of people that actually do that, so you're likely going to be struggling in the wilderness of forums and random blogs for answers to issues that arise, too.

    Summary of Some Rando on the Internet - what I've come to conclude at the present moment (2025Q1):

    If you really, truly, absolutely need Nested Virtualization on a Type 1 hypervisor for "home lab activities": either concede to using Proxmox or VMware/Broadcom Workstation Pro (pros and cons to both).

    If you just have a few edge cases where you need Nested Virtualization for a handful or less of VMs, and no need for PCIe pass-through (for example, a few VMs with their own Docker Desktop instances, or some kind of testing):
    --- BM Windows: Hyper-V, if you want WSL and/or Docker Desktop and have the "sweet spot hardware" that does something "fancy" with PCIe (GPU/HBA/RAID passthrough); VMware WS Pro if you don't.
    --- BM Linux: VMware WS Pro or VirtualBox (or anything, really; maybe "just KVM" would do the trick, too).

    If you want a desktop environment with many VMs capable of nesting (or not) but with PCIe pass-through "opportunities":
    --- BM Windows: MAYBE Hyper-V will work for your PCIe PT requirements, but if it doesn't, you're basically SOL on Windows.
    --- BM Linux: libvirt/VMM/KVM is a viable option, assuming you have more than one NIC and considerable Linux skills.

    If you want/need a Type 1 hypervisor with Nested Virtualization and passthrough: ESXi is going to be the winner here, warts and all. But it's not exactly realistic to think most people are going to fork out that kind of money for a full license to use in a home lab. Proxmox is a "close second", though. It's not technically "Production Ready" in its "Community Version", but "close enough".

    If you want a feature-rich, fully functional, production-grade Type 1 hypervisor: I still prefer XCP-ng. It's far from perfect, but its tooling and design are flexible, and the performance tends to be more than marginally better than other Type 1s and MILES better than Type 2s. The backup features, ease of service integrations (iSCSI, NFS, SMB, etc.) and broad support for management systems (Xen API, XOA, XCP-ng Center, now XO Lite) make a BIG difference. That's not even touching on the REALLY powerful stuff like "instant" VM cloning and Resource Pools.

    What about nesting to combine "the best of"? This is what I would likely recommend most home labbers do. For those with the skills, experience, grit and confidence: find a solid Linux distro that you really like for your workflows and install libvirt/virsh/VMM. Once that is all in place, you can create multiple virtualized Type 1 hypervisors and/or add in Type 2s for whatever edge cases you might have (a rough example of spinning one up is sketched below). That way the sky is the limit and all possibilities are open; it's just a matter of deciding where to allocate the resources. This suggestion assumes high-end desktops with 64 to 256GB of RAM and 6 to 12 core CPUs with "plenty" of hard drives (not just capacity, but actual "count" of drives, for ZFS/RAID, with bonus points for NVMe/L2ARC).
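    To make that concrete, here is a rough, hypothetical sketch of creating one such nested-hypervisor guest on a libvirt/KVM host with virt-install (every name, size and path below is a placeholder, and it assumes nested support and a bridge are already configured as discussed earlier):

        virt-install \
          --name nested-hv-lab \
          --memory 16384 --vcpus 8 \
          --cpu host-passthrough \
          --disk size=100 \
          --cdrom /var/lib/libvirt/images/hypervisor-install.iso \
          --osinfo generic \
          --network bridge=br0

    The important part for nesting is --cpu host-passthrough, so the guest sees the VMX/SVM flags; the rest is ordinary VM creation.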
    If you have a full bare-metal system you want to dedicate to being a "proper server", then you could go with Proxmox and virtualize XCP-ng, TrueNAS and even Windows Server. You will need a "daily driver" machine to access it "remotely", though. That can be a mini PC or a laptop (theoretically, a smartphone could work too... but I'd never recommend it for more than an "in a pinch" type of situation). If you are just starting out on your Linux journey and still want to stay on Windows for a while longer, then you could also try using XCP-ng as a VirtualBox or Hyper-V guest VM, but you will have a "smoother" time with VMware WS Pro. It's worth noting that I have seen, and am hereby acknowledging, the great work done on getting "nested Hyper-V" working within Windows guest VMs on XCP-ng/Xen - but it still "feels" like it's a long way off from being a truly viable solution rather than an experiment.

    That's my learning from a lot of failures and successes working with "legit" hypervisors. There are others, sure, but most of them are awful enough not to have made the list. Hopefully this helps anyone else who stumbles upon these issues and breaks down enough to reach out and ask the communities for help in "solving the unsolvable". For the record, I am very much still looking forward to the day when Nested Virtualization is production ready on XCP-ng!
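    For anyone who still wants to poke at the experimental XCP-ng support mentioned throughout this post, the commonly cited toggle (from the XCP-ng docs, applied to a halted VM; the UUID is a placeholder, and results vary wildly by CPU/board, as described above) is:

        xe vm-param-set uuid=<VM_UUID> platform:exp-nested-hvm=true

    Treat it strictly as an experiment, not something to build key services on.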
  • The integrated web UI to manage XCP-ng

    12 Topics
    229 Posts
    E
    @DustinB Hi Dustin, It's amazing what one can miss when setting something up, and I surely did miss that part. I now see much more in the way of management items. My project for the weekend will be XO/XOCE from source for my home rack. Thanks so much for seeing what I didn't. Take care, Eric
  • Section dedicated to migrations from VMWare, HyperV, Proxmox etc. to XCP-ng

    78 Topics
    974 Posts
    olivierlambertO
    Thanks @afk ! Your knowledge on the "other side" might be very helpful for us!
  • Hardware related section

    95 Topics
    967 Posts
    M
    @Andrew thanks for all your hard work. I guess time will tell if the Vates team has to add those defaults to their official package or maybe I just got "lucky"
  • The place to discuss new additions into XCP-ng

    231 Topics
    3k Posts
    olivierlambertO
    No specific plans as this kind of feature isn't really useful in Datacenters. However, if there's a way that's relatively universal, it's doable with the help of the community (ie contributions are very welcome and we'll review it seriously)
  • Change Default Grub Option In 8.3 To Serial Console

    2
    0 Votes
    2 Posts
    90 Views
    E
    @elvintmp75 I should have searched the forum better lol. Because it's UEFI, the file is in a different location: https://xcp-ng.org/forum/post/11939
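    For anyone landing here later, a hedged sketch of the difference (paths as commonly cited for XCP-ng hosts; verify on your own system before editing anything):

        # BIOS hosts keep the GRUB config at:   /boot/grub/grub.cfg
        # UEFI hosts keep it at:                /boot/efi/EFI/xenserver/grub.cfg
        grep -n "menuentry" /boot/efi/EFI/xenserver/grub.cfg   # list entries, find the serial-console one
        # then adjust the "default" setting near the top of that same file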
  • Live Migrate fails with `Failed_to_suspend` error

    8
    0 Votes
    8 Posts
    353 Views
    R
    @randyrue Confirmed that after cold booting a VM to a new host I can then migrate it live to another new host.
  • XCP-ng 8.2.1 Guest UEFI Secure Boot

    12
    0 Votes
    12 Posts
    388 Views
    MathieuRAM
    I see that the bug is actually already fixed on the latest version (5.98.1).
  • Guest running kernel 6.8 hangs after a while

    Solved
    17
    0 Votes
    17 Posts
    1k Views
    T
    I believe Proxmox Backup Server kernel Linux pbs 6.8.4-2-pve also has the same issue. Updating to Linux pbs 6.8.12-1-pve solves it.
  • Protectli now available preinstalled with XCP-NG

    6
    1 Vote
    6 Posts
    307 Views
    olivierlambertO
    @darkbounce Short answer: it seems that Intel hybrid architecture is finding a way to make something that would result in Xen running the VM with the least "featured" CPU, meaning your VM will never use the P-core instructions that aren't on the E-cores, meaning... it will work without disabling one or the other. Hard to tell how much is lost in terms of performance vs the best Intel hybrid scheduler (which is on... Windows), but probably not that much on a machine with a reduced number of P-cores like this one.
  • Mirror moved Permanently

    8
    0 Votes
    8 Posts
    406 Views
    olivierlambertO
    @yammy I hope you are not talking of installing Docker in the Dom0, because this is really a bad idea.
  • xo vm-export / vm-import issue with latest XCP-Ng 8.3

    10
    0 Votes
    10 Posts
    297 Views
    P
    @Pix Looks like the different HW or pool is an issue. I'll make more tests and report here if it's OK.
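    As a cross-check against the XO export/import path, a hedged sketch of the plain xe equivalent (the VM reference, file path and SR UUID are placeholders):

        xe vm-export vm=<VM_UUID> filename=/mnt/backup/test-vm.xva
        xe vm-import filename=/mnt/backup/test-vm.xva sr-uuid=<TARGET_SR_UUID>

    If the xe round-trip behaves the same way across the different hardware/pool, that points at the VM image rather than XO.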
  • Disable DHCP option 60 when PXE booting

    10
    0 Votes
    10 Posts
    232 Views
    P
    @PontusB I don't even know if disabling option 66 solves the issue, but as soon as we add option 66 in our DHCP server, it boots fine from XenServer (but without HA between the many PVS servers).
  • Issue after latest host update

    57
    0 Votes
    57 Posts
    8k Views
    M
    @stormi Thanks, looking forward.
  • Vates please work acquiring vmware in the future

    3
    0 Votes
    3 Posts
    258 Views
    T
    @olivierlambert There you go hehe
  • yum update, no more free space?

    8
    0 Votes
    8 Posts
    459 Views
    I
    @bloodyskullz If you still see the old ISO SR, the easiest way to migrate is simply to create a new one and move the ISOs to it through XO. As for deleting the old SR, you need to check whether it really is mapped to another drive, or whether the mapping wasn't working and it filled up the host; in the latter case you might not be able to delete it.
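    A hedged sketch of the "create a new one" step for a local ISO SR (the label and path are placeholders; for an NFS ISO library you would point device-config:location at server:/export instead):

        xe sr-create name-label="ISO-new" type=iso content-type=iso \
          device-config:location=/var/opt/iso device-config:legacy_mode=true

    After that, copy the ISOs over, re-scan the SR, and only remove the old SR once nothing references it.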
  • Guide to Replace Tianocore UEFI Logo

    1
    1 Vote
    1 Post
    335 Views
    No one has replied
  • update via yum or via xoa?

    5
    0 Votes
    5 Posts
    275 Views
    robytR
    @bleader said in update via yum or via xoa?: "Yes, you're basically doing an RPU manually. But it is indeed odd that the process is stuck at 0%, it should be fairly fast to install the patches. No errors in the logs?" I've another "install all patches" and install all. Now I'll use rpm update and see if the speed is the same or not.
  • Pool Tasks Don't Complete? Major Issues...

    6
    0 Votes
    6 Posts
    510 Views
    O
    @Danp Yes to both. I've probably restarted the toolstack at least a dozen times, mostly to clear hung tasks. I did notice some weird issues with a secondary SR being disconnected on xcp02 (one of 10 hosts, 9 after I ejected and forgot xcp01), but there are no disks on it. It wasn't being used for anything at all (yet), and it's fine on all the rest. That does lead me to think maybe it was a power bump that rebooted a switch or something, though. Maybe it caused some kind of hangup with xcp01 and xcp02, and since xcp01 was the pool master, it cascaded to the other issues I've seen? Could that cause the VMs that were originally running on xcp02 to die and not be easily recoverable?
  • Ubuntu 24.04 VMs not reporting IP addresses to XCP-NG 8.2.1

    6
    0 Votes
    6 Posts
    1k Views
    J
    I just tried to install Ubuntu 24.04 to test it out, and I experienced the same problem with it not recognizing the IP address. I was first using the Ubuntu-provided package (xe-guest-utilities=7.20.2-0ubuntu1), which was failing. I then tried the package I had been using with my Ubuntu 22.04 servers that used to be part of the XCP-ng guest-tools.iso (xe-guest-utilities_7.20.0-9_amd64.deb) and had the same results. I mounted my current guest-tools.iso, which now has xe-guest-utilities_7.30.0-11_amd64.deb, and installed it. Now it was retrieving the IP address correctly. I'm not sure why the OP was still having trouble with that version (I'm using UEFI instead of BIOS, but I wouldn't think that would matter). I went ahead and tried out the Rust-based tools mentioned (xen-guest-agent_0.4.0_amd64.deb), and it was properly getting the IP address as well. I'm guessing there's some incompatibility (probably with the 6.x kernel) that was fixed between 7.20 and 7.30 (intentionally or accidentally). Given how much the Linux tools have changed over the years and the fact that they're not used for PV drivers anymore, is there a particular reason to use one over the other (legacy vs Rust)? What features do they really provide now? Is it just CPU/memory/disk/network status?
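    For anyone following the same path, a rough sketch of the "mount the guest-tools.iso and install the newer deb" step (the device node, mount point and exact deb filename may differ on your system):

        sudo mount /dev/cdrom /mnt
        sudo dpkg -i /mnt/Linux/xe-guest-utilities_7.30.0-11_amd64.deb
        sudo umount /mnt

    The ISO also ships an install.sh under Linux/ that picks the appropriate package for your distro.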
  • Other 2 hosts reboot when 1 host in HA enabled pool is powered off

    8
    0 Votes
    8 Posts
    217 Views
    H
    @ha_tu_su Here are the details, albeit a little later than I had promised. Mondays are... not that great.

    The commands below were executed on all 3 hosts after installation of XCP-ng:

    yum update
    wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/052b3dfff9c06b1765e51d8de72c90f2f90f475b/gistfile1.txt -O install && chmod +x install
    ./install --disks /dev/sdd --thin --force

    Then XO was installed on one of the hosts and a pool was created consisting of the 3 hosts. Before that, NIC renumbering was done on VW1 to match the NIC numbers of the other 2 hosts. Then the XOSTOR SR was created by executing the following on the master host:

    xe sr-create type=linstor name-label=XOSTOR host-uuid=<MASTER_UUID> device-config:group-name=linstor_group/thin_device device-config:redundancy=2 shared=true device-config:provisioning=thin

    Then, on the host which is the linstor controller, the commands below were executed. Each of the networks has a /30 subnet.

    linstor node interface create xcp-ng-vh1 strg1 192.168.255.1
    linstor node interface create xcp-ng-vh1 strg2 192.168.255.10
    linstor node interface create xcp-ng-vh2 strg1 192.168.255.5
    linstor node interface create xcp-ng-vh2 strg2 192.168.255.2
    linstor node interface create xcp-ng-vw1 strg1 192.168.255.9
    linstor node interface create xcp-ng-vw1 strg2 192.168.255.6
    linstor node-connection path create xcp-ng-vh1 xcp-ng-vh2 strg_path strg1 strg2
    linstor node-connection path create xcp-ng-vh2 xcp-ng-vw1 strg_path strg1 strg2
    linstor node-connection path create xcp-ng-vw1 xcp-ng-vh1 strg_path strg1 strg2

    After this, HA was enabled on the pool by executing the commands below on the master host:

    xe pool-ha-enable heartbeat-sr-uuids=<XOSTOR_SR_UUID>
    xe pool-param-set ha-host-failures-to-tolerate=2 uuid=<POOL_UUID>

    After this, some test VMs were created as mentioned in the original post. The host-failure case works as expected for the VH1 and VH2 hosts. For VW1, when it is switched off, VH1 and VH2 also reboot. Let me know if any other information is required. Thanks.
  • A task keeps poping up every second or so

    7
    0 Votes
    7 Posts
    168 Views
    olivierlambertO
    In any case, you can ignore it.
  • VM migration is blocked during backup whereas no backup in progress

    5
    0 Votes
    5 Posts
    238 Views
    H
    @Danp Okay, you're a genius. I upgraded this morning after my backup, so that could explain my issue. The mentioned thread is exactly my issue, but I didn't find it when I was searching. Thanks for everything!
  • can't start vm after host disconnect

    29
    0 Votes
    29 Posts
    2k Views
    olivierlambertO
    No, from the XCP-ng point of view, the VM is still running without any interruption.
  • XCP-ng Documentation - Roadmap

    Solved
    6
    0 Votes
    6 Posts
    448 Views
    J
    @olivierlambert @Marc-pezin Thank you very much for sorting this out!