Subcategories

  • All Xen related stuff

    540 Topics
    5k Posts
    N
    @krul8761 I'm not certain (maybe someone at Vates / @olivierlambert can correct any misunderstandings I have on this), but after a lot of deep diving, research and experimentation, it appears that certain motherboard/CPU combinations are capable of "NV" - Nested Virtualization - with varying degrees of effort needed to get it working. I tried everything I could think of, and everything I stumbled upon across the XCP-ng/XenServer and Proxmox forums, VBox, even virsh/VMM/libvirt, and I never found a working solution on the Dell PowerEdge platform (R710 and R730 models, which have Intel CPUs/chipsets), nor on any Asus motherboard combos (AMD CPUs/APUs - tested nearly a dozen with XCP-ng), nor on any other hardware.

    It's worth mentioning that I have mostly tested "old and End-of-Life" hardware that I am trying to keep alive and useful for various purposes and "learning tools" (e.g. DNS servers, VPN servers, apt/yum/dnf cachers, "Virtual Machine Appliances" like security / firewall / networking VMs, K8s nodes, and the like). My expectations for getting XCP-ng to do NV on most of these systems were fairly low after I went through the pain of trying various kernels, manual compiling, tweaking/hacking system configs, modprobe customizations, allllll the things (within my limited scope of understanding, as I am no Linux kernel developer) - nothing worked. So I tried to re-think everything and weigh the options. I'll get real wordy and opinionated here, but maybe this will save others from "getting stuck" or going down the dead-end paths I certainly went down while working out why Nested Virtualization felt so important to me (and how that thinking evolved to meet reality).

    My first thought: "Buy more stuff" - explore a known-good, user-reported, well-documented hardware platform where Nested Virtualization works well with XCP-ng. I decided against this: I could "feel" that even if "something finally worked!", its days would likely be numbered, and I couldn't rely on nested VMs for key services, because at any moment some breaking change might arrive with a yum update - and if I tried to hold back certain packages to preserve the frail functionality I'd worked so hard for, there would be security concerns (2024 was wild, and 2025 looks to be even more so, in terms of security).

    A "More Sane" Option?: Use a different hypervisor with known-good, stable Nested Virtualization. (Obvious choice, right? Welllll...) What I've found is that there really are NOT very many decent options for Nested Virtualization. There ARE working solutions for MOST edge cases, but no clear best fit.

    How "sane" is it to switch to VMware ESXi or "Workstation Pro", since that product line has (arguably) the most simple and functional NV? There used to be ESXi, but Broadcom wrecked that... though they did give us "VMware Workstation Pro" in consolation. But for how long? And what's with the weird licensing? How secure is it REALLY, with ALL those CVEs and newsworthy security flaws exploited by extremely sophisticated attackers? It seems like ESXi and other VMware products were rolled out as "polished turds" before the sale to Broadcom, in terms of stability and security. It's not just a problem with VMware/Broadcom, either.
    To be clear, I'm not solely bashing VMware/Broadcom, per se - these are issues with any "integral platform" and/or hypervisor - but in the case of (older?) ESXi and VMware products, such issues are somewhat exacerbated by the lack of transparency that comes with closed source.

    How sane is Hyper-V? Hyper-V has Nested Virtualization for most modern PC platforms and is backed by LOTS of supporting communities and documentation. Hyper-V is actually not a terrible option, once you learn how to navigate the confusing GUI options and the weird "secret knowledge of PowerShell commands" that ACTUALLY unlock the feature you THOUGHT you enabled in the GUI menus - the one you rebooted 32 times for, until some often seemingly unrelated feature/tool/status finally produced output you could interpret as "I finished step 12 of 47!" But what comes next, once you DO get Nested Virtualization working in Hyper-V? Pain.

    I haven't tried the more modern iterations of "Windows Server 202X", but the earlier versions and the "starter version" on Windows 10/11 Pro DO have some interesting use cases where it's the path of least resistance. For example, Dell's (insanely resource-hungry) "OMSA" / OpenManage Enterprise Appliance (or whatever confusing name it's better known by in your particular workplace and homelab circles) has a ready-to-go "Hyper-V Appliance" that is... nice... but... you probably need at least 3+ Dell "Enterprise Products" to justify the CPU and RAM requirements for the more useful features. So systems with 32GB or less RAM aren't going to be able to run "always-on appliances" like these (and Dell isn't the only one that does this - these are "Enterprise Grade Solutions", so there is an inherent "infinite money" expectation when using enterprise-vendor closed-source tools; they NEED more spend to keep making hardware/software synergies).

    Hyper-V is TOTALLY worth mentioning, and I realize I've been harsh on it, as it does have its place and use cases that may well fit what you are trying to accomplish, @krul8761. But for a homelab or "experimental dev environment"? Hyper-V as a platform will take forever to "spin up", you will learn WAY more than you ever wanted to about strange MS Windows features, security measures (and flaws) and other quirks, and the timesuck of trying to weed through which change happened in which release or "KB" for Windows is very real. Hyper-V (and basically all Windows / Microsoft products) has one of the most extensively confusing blends of old, new, stale and "in development" support forums and documentation. Microsoft does a Disturbingly good job (capital D) of covering its tracks: you'll notice that 50+% of the time you search for something Windows/MS-specific with a non-EVIL search engine, the link redirects you to the generic "search our forums!" page - HP is getting really good at this sort of "scrubbing", too. Some of what remains is good, but most of the details I search for tend to go "missing", and I find plenty of "just turn it on and off again" type advice on issues that are 5+ years old. All that said, is THAT the hypervisor you want to trust? I don't. BUT, again, Hyper-V DOES have use cases.
    Like its integration with WSL and Docker Desktop. There are also some interesting developments happening with shared/partitioned GPU configurations (that's a whole other rabbit hole, if you're into gaming or into cheaping out on GPU costs for simple video streaming/transcoding), but GOOD LUCK sorting all of that out with the "documentation". You often have better luck with the handful of people who slap together 15+ step guides, plus the accompanying files and command lists needed to get it working, that end up in a GitHub repo completely unrelated to MS (case in point: https://github.com/DanielChrobak/Hyper-V-GPU-Partition-Guide ). These techniques DO in fact work, but there are SO MANY caveats and gotchas that it becomes unrealistic to maintain them for very long. So "Yes, Hyper-V can and does work!" - but it's a HOT MESS trying to debug and maintain such complicated, unintuitive configurations that are part GUI, part PowerShell, part Windows Settings, part "custom scripts you download and run" (IMO). If the newer versions of Windows Server 202X have a better toolset for Hyper-V, I'm not aware of it (nor very interested), but I'm not bashing Hyper-V just because it isn't OSS. It's because it's a trainwreck. If one simple feature takes 2 days to understand, then track down all the "pieces", then test... only to find out "it doesn't work on my device/motherboard/etc", with nearly non-existent feedback to confirm or deny it's even possible... there's no way it's going to be my go-to or my recommendation. Maybe whatever cool-but-poorly-implemented feature you want will work for your specific blend of hardware; it has VERY rarely worked out for me, and even more rarely been worth the effort involved (specifically the GPU sharing and PCIe pass-through implementations - https://woshub.com/passthrough-gpu-to-hyperv-vm/ - and this thread is worth reading too - https://gist.github.com/Ruffo324/1044ceea67d6dbc43d35cae8cb250212#file-hyper-v-pci-passthroug-ps1 ). "Pick your poison" applies here.

    What about virsh/libvirt/VMM for Nested Virtualization? The short answer is "Yes you can", but it's only slightly less convoluted than with Hyper-V (again, just IMO). This is an excellent article spelling it out with a basic Ubuntu setup and nested Ubuntu and Windows installs - https://just.graphica.com.au/tips/nested-kvm/ - and there's a small sketch of the host-side knobs just below. BUT... while nesting, passing things through and generally enabling all the coolest features with libvirt/virsh/KVM/QEMU (what IS the actual "common name" for this hypervisor, anyway?), you will probably have a BRUTAL journey trying to match configurations to your specific base OS (if it's not Ubuntu, or perhaps Rocky/Fedora). The JANKY way you have to configure bridges for your network adapters, and the super awkward UI (if you're using VMM and not just pure CLI), turn me off pretty hard. Just my personal experience: at one point I had everything working "perfectly" from a single NIC - WakeOnLan, shared IP/CIDR from host to guest, vNICs for each of my VMs, great! ...but then on reboot? It fubar-ed everything, and I gave up fighting with it. I finally used a second USB NIC and that "worked", but then there were odd quirks with all the other overly-complicated networking for libvirt/virsh/VMM, too (too many and too complicated to remember, let alone list out).
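    For anyone going down that nested-KVM route, here is a minimal sketch of the host-side knobs, assuming an Intel box (swap kvm_intel for kvm_amd on AMD) and an already-working libvirt install; the modprobe.d filename is just my choice:

    # Is nested KVM already on? "Y" or "1" means yes
    cat /sys/module/kvm_intel/parameters/nested

    # Make it persistent (filename is arbitrary), then reload the module with no VMs running
    echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
    sudo modprobe -r kvm_intel && sudo modprobe kvm_intel

    # The guest also needs the host CPU exposed, e.g. <cpu mode='host-passthrough'/> in its libvirt XML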
    So, if you want to use this fairly great "Type 1.5" hypervisor (an odd one to nail the "Type" down on, which reflects its power and capabilities): given all of its issues and challenges, hard-to-find configurations, updates, and odd-but-not-awful XML-based feature enablement, it has a place for certain use cases, but it's no joy to work with either (the permissions issues alone are probably where most people give up on this one).

    TrueNAS, though? I'll throw out an honorable mention to TrueNAS, too. But using TrueNAS as a virtualization platform is, again in my opinion, similar to putting a NOS boost on a semi truck. Cool idea, but... TrueNAS is an EXCELLENT tool for managing Network Attached Storage ("N.A.S." - imagine that, right?). It can take a pile of hard drives and transform them into an awesome, resilient, sometimes-high-performing-but-better-used-for-reliability configuration. As a virtualization platform? It's more of a bell and whistle. If all you WANT is a (very) solid storage solution for your home or office with a side perk of running a few Docker containers or low-resource VMs - private DNS servers, an internal website, a local database, etc. - then it's a great choice (and this is likely true for MOST people, even MOST small businesses). Last I checked, Nested Virtualization is an experimental feature in TrueNAS, just as it is with XCP-ng (likely for all the same reasons, too). You can even do Kubernetes on TrueNAS ( https://www.truenas.com/community/threads/kubernetes-cables-the-challenges.109901/ ). But building full-scale applications, or trying to do something with it that genuinely needs Nested Virtualization? You're probably barking up the wrong tree (yes, even with TrueNAS CORE instead of SCALE). You might find ways to make it work, but you'll also spend a lot of time and energy fighting your platform rather than building your app/idea/business/personal relationships. That said? I'd call it "a good problem" if you're getting to the point where you've started to outgrow TrueNAS in an office/SOHO setting, and "leveling up" if your home lab and a desire for something more "production grade" or "cloud-like" is where your learning journey is pulling you.

    VirtualBox, maybe? VirtualBox performance tends to be fairly awful. There are things you can do to help it out, sure, but I look back at all the years I used it and see what a crutch it really was (for a very long time). You can sometimes get Nested Virtualization to work with it, but it's tedious and painful (to my memory - I admit it's been a while) to enable NV correctly (the actual toggle is a one-liner, sketched below), and performance is already bad enough in the un-nested VMs, so it gets exponentially worse as you have fewer and fewer system resources left to throw at it to keep it usable. So, again, if you just need something simple - what I call an "inverted WSL" - VBox is a solid choice (max 5 VMs total, never running more than 2-3 at a time). For example, if you run Ubuntu/Debian (or whatever) Linux on bare metal but still have a few tools you need/prefer in Windows (e.g. "proper" MS Office, MS SQL, HeidiSQL, mRemoteNG, XCP-ng Center, WinSCP, and the long list of other "Enterprise Tools" you might need for your day job that are Windows-only). VirtualBox has a WAY better layout / GUI / toolset design than Hyper-V (IMO), but Hyper-V DOES have better performance (same disk and vCPU/RAM allocations) in my experience; neither is usually "great" (one just sucks less than the other).
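    For completeness, the VirtualBox toggle I'm referring to is a one-liner these days (from VirtualBox 6.x, if I recall correctly); "MyGuest" is a placeholder name and the VM has to be powered off first:

    VBoxManage modifyvm "MyGuest" --nested-hw-virt on
    VBoxManage showvminfo "MyGuest" | grep -i nested   # should show the nested flag as enabled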
    The worst part about using VirtualBox with Windows 11, aside from the "expensive" resource cost-to-performance, is that it can and will cause issues with other virtualization tools like WSL, Docker Desktop, Hyper-V and/or VMware products. So, once again, there are nuances, pros and cons, and no clear one-size-fits-all solution if you need to use any of those tools together on Windows. For Linux or macOS host machines running VirtualBox? Maybe that's a great fit, but it's probably <1% of people who actually do that, so you're likely going to be struggling in the wilderness of forums and random blogs for answers to the issues that arise, too.

    Summary of Some Rando on the Internet - what I've come to conclude at the present moment (2025Q1):

    If you really, truly, absolutely need Nested Virtualization on a Type 1 hypervisor for "home lab activities": either concede to using Proxmox or VMware/Broadcom Workstation Pro (pros and cons to both).

    If you just have a few edge cases where you need Nested Virtualization for a handful or fewer VMs and no need for PCIe passthrough (for example, a few VMs with their own Docker Desktop instances, or some kind of testing):
    --- BM Windows: Hyper-V, if you want WSL and/or Docker Desktop and have the "sweet spot hardware" that does something "fancy" with PCIe (GPU/HBA/RAID passthrough); VMware WS Pro if you don't.
    --- BM Linux: VMware WS Pro or VirtualBox (or anything, really; maybe "just KVM" would do the trick, too).

    If you want a desktop environment with many VMs capable of nesting (or not) but with PCIe passthrough "opportunities":
    --- BM Windows: MAYBE Hyper-V will work for your PCIe passthrough requirements, but if it doesn't, you're basically SOL on Windows.
    --- BM Linux: libvirt/VMM/KVM is a viable option, assuming you have more than one NIC and considerable Linux skills.

    If you want/need a Type 1 hypervisor with Nested Virtualization and passthrough: ESXi is going to be the winner here, warts and all. But it's not exactly realistic to think most people are going to fork out that kind of money for a full license to use in a home lab. Proxmox is a close second, though. It's not technically "Production Ready" in its community version, but close enough.

    If you want a feature-rich, fully functional, production-grade Type 1 hypervisor: I still prefer XCP-ng. It's far from perfect, but its tooling and design are flexible, and the performance tends to be more than marginally better than other Type 1s and MILES better than Type 2s. The backup features, ease of service integrations (iSCSI, NFS, SMB, etc.) and broad support for management systems (Xen API, XOA, XCP-ng Center, now XO Lite) make a BIG difference. That's not even touching on the REALLY powerful stuff like "instant" VM cloning and Resource Pools.

    What About Nesting to Combine "The Best Of"? This is what I would likely recommend most home labbers do. For those with the skills, experience, grit and confidence, I recommend finding a solid Linux distro that you really like for your workflows and installing libvirt/virsh/VMM. Once that is in place, you can create multiple virtualized Type 1 hypervisors and/or add Type 2s for whatever edge cases you might have (a rough sketch of spinning one up follows just below). That way the sky is the limit and all possibilities are open; it's just a matter of deciding where to allocate the resources. This suggestion assumes a high-end desktop with 64 to 256GB of RAM, a 6 to 12 core CPU and "plenty" of hard drives (not just capacity but actual drive count, for ZFS/RAID, with a bonus for NVMe/L2ARC).
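    To make that "nest a Type 1 under libvirt" recommendation concrete, here is roughly what carving out such a guest looks like with virt-install; every name, size and path below is a placeholder, and it assumes nested KVM is already enabled as sketched earlier:

    virt-install \
      --name xcpng-nested \
      --memory 16384 --vcpus 6 \
      --cpu host-passthrough \
      --cdrom /var/lib/libvirt/images/xcp-ng-8.3.iso \
      --disk size=100 \
      --network bridge=br0 \
      --os-variant generic      # just quiets the OS-detection warning on my setup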
    If you have a full bare-metal system you want to dedicate to being a "proper server", then you could go with Proxmox and virtualize XCP-ng, TrueNAS and even Windows Server. You will need a "daily driver" machine to access it remotely, though. That can be a mini PC or a laptop (theoretically a smartphone could work too... but I'd never recommend it for more than an "in a pinch" situation). If you are just starting out on your Linux journey and still want to stay on Windows for a while longer, you could also try running XCP-ng as a VirtualBox or Hyper-V guest VM, but you will have a smoother time with VMware WS Pro. It's worth noting that I have seen, and am hereby acknowledging, the great work done on getting "nested Hyper-V" working within Windows guest VMs on XCP-ng/Xen - but it still feels a long way off from being a truly viable solution rather than an experiment.

    Those are my learnings from a lot of failures and successes working with "legit" hypervisors. There are others, sure, but most of them are awful enough not to have made the list. Hopefully this helps anyone else who stumbles into these issues and breaks down enough to reach out and ask the communities for help in "solving the unsolvable". For the record, I am very much still looking forward to the day when Nested Virtualization is production ready on XCP-ng! (The experimental knob I kept poking at is sketched below, for anyone curious.)
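    For reference, this is what the XCP-ng-side experiment looks like as far as I can tell from the docs and forum threads - not production advice, and the platform flag name may differ between 8.2 and 8.3 (the VM UUID is a placeholder):

    # Check the host CPU even exposes VT-x/AMD-V before blaming the hypervisor
    grep -cE 'vmx|svm' /proc/cpuinfo

    # Experimental nested-HVM flag referenced in the XCP-ng docs/forums (8.2-era name)
    xe vm-param-set uuid=<VM-UUID> platform:exp-nested-hvm=true

    # Inside the guest afterwards: if this prints 0, nesting never actually reached the VM
    grep -cE 'vmx|svm' /proc/cpuinfo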
  • The integrated web UI to manage XCP-ng

    12 Topics
    229 Posts
    E
    @DustinB Hi Dustin, It's amazing what one can miss when setting something up, and I surely did miss that part. I now see much more in the way of management items. My project for the weekend will be XO/XOCE from source for my home rack. Thanks so much for seeing what I didn't. Take care, Eric
  • Section dedicated to migrations from VMware, Hyper-V, Proxmox, etc. to XCP-ng

    78 Topics
    974 Posts
    olivierlambertO
    Thanks @afk ! Your knowledge on the "other side" might be very helpful for us!
  • Hardware related section

    95 Topics
    967 Posts
    M
    @Andrew thanks for all your hard work. I guess time will tell if the Vates team has to add those defaults to their official package or maybe I just got "lucky"
  • The place to discuss new additions into XCP-ng

    231 Topics
    3k Posts
    olivierlambertO
    No specific plans as this kind of feature isn't really useful in Datacenters. However, if there's a way that's relatively universal, it's doable with the help of the community (ie contributions are very welcome and we'll review it seriously)
  • cluster slave no connection to pool

    1
    0 Votes
    1 Posts
    108 Views
    No one has replied
  • Guest tools in nested XCP-ng

    Solved
    2
    0 Votes
    2 Posts
    136 Views
    olivierlambertO
    Hi, It's not possible.
  • XCP/Vates support hours

    2
    0 Votes
    2 Posts
    196 Views
    olivierlambertO
    Hi, No.
  • Hosts fencing after latest 8.2.1 update

    4
    0 Votes
    4 Posts
    289 Views
    J
    Right, so yeah, I did just that - disabled HA and have been keeping an eye on the logs as well as general performance in the env. Glad to know my actions align with your recommendations, at least. There are a couple of hosts that get a lot of these sorts of messages in kern.log:

    Apr 25 08:15:33 oryx kernel: [276757.645457] vif vif-21-3 vif21.3: Guest Rx ready
    Apr 25 08:15:54 oryx kernel: [276778.735509] vif vif-22-3 vif22.3: Guest Rx stalled
    Apr 25 08:15:54 oryx kernel: [276778.735522] vif vif-21-3 vif21.3: Guest Rx stalled
    Apr 25 08:16:04 oryx kernel: [276788.780828] vif vif-21-3 vif21.3: Guest Rx ready
    Apr 25 08:16:04 oryx kernel: [276788.780836] vif vif-22-3 vif22.3: Guest Rx ready

    Am I wrong to attribute this to issues within specific VMs (i.e. not a hypervisor performance issue)? I know one of the VMs that causes these is a very old CentOS 5 testing VM one of my devs uses, and the messages stop when it's powered down. Is there any way to easily associate those vifs with the actual VMs they are attached to? My google-fu failed me on that. Other than that, I noticed my NIC firmware is a bit old on the X710-DA2s I use, so I'm going through and upgrading those, with no noticeable changes. I'm fairly hesitant to re-enable HA without tracking down the root cause.
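    On the vif naming question: as far as I understand it, vifX.Y encodes the running domain ID (X) and the VIF device number (Y), so something like this should map vif21.3 back to a VM (a sketch, not gospel):

    xl list                               # domain ID <-> VM name for running guests
    xe vm-list params=name-label,dom-id   # same mapping through xapi
    xe vif-list vm-name-label=<that VM> params=device,MAC,network-name-label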
  • Stuck in maintenance mode after joining pool.

    2
    0 Votes
    2 Posts
    351 Views
    DanpD
    @DwightHat said in Stuck in maintenance mode after joining pool.: When I try to look at the host in Xen Orchestra it just says "An error has occurred". Check the browser's Dev Tools console when this happens. It will likely contain some additional details. You likely need to check the logs to find out why you are encountering this issue. Many times the "stuck in maintenance mode" issue is related to an unmountable storage repository. https://docs.xcp-ng.org/troubleshooting/log-files/
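    A hedged example of where to start looking, based on the log-files doc linked above (paths are the stock XCP-ng 8.x locations on the affected host):

    grep -i error /var/log/xensource.log | tail -n 50      # xapi's own log
    grep -iE 'error|fail' /var/log/SMlog | tail -n 50      # storage/SR plugging problems usually land here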
  • Watchdog support for Linux and Windows guests

    4
    0 Votes
    4 Posts
    285 Views
    olivierlambertO
    This post specifically: https://xcp-ng.org/forum/post/57441 The person had trouble writing in English but wanted to report that this configuration worked. You can use the xen_wdt backend to rely on a Xen operation to force-restart the VM. I have no idea how it works on Windows.
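    For the Linux side, a minimal sketch of what "use the xen_wdt backend" can look like inside a guest, assuming systemd is what feeds the watchdog (the 30s value is arbitrary):

    modprobe xen_wdt
    echo xen_wdt > /etc/modules-load.d/xen_wdt.conf    # load the Xen watchdog driver at boot
    # then in /etc/systemd/system.conf (or a drop-in):
    #   RuntimeWatchdogSec=30s
    # so PID 1 keeps petting /dev/watchdog; if the guest hangs, Xen force-restarts it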
  • 0 Votes
    12 Posts
    887 Views
    A
    @lawrencesystems I think this is quite common when you need to test certain scenarios with multiple hypervisors (backup, migrations, etc.). You only need a couple of HVs with a few tiny running VMs. We have done this setup with nested esxi many times for testing purposes. And since e.g. Ubuntu and Windows work this way, the problem is probably specific to Debian (and maybe others?).
  • PXE Boot a VM and use HTTP for kernel/initrd download

    4
    0 Votes
    4 Posts
    530 Views
    olivierlambertO
    Question for @gduperrey or @stormi
  • USB pass-through device with wrong product and vendor identifiers on 8.2

    Unsolved
    3
    0 Votes
    3 Posts
    209 Views
    I
    @infodavid In the end, I followed an existing topic and configured nut-server on the hypervisor to access the UPS via USB. I know that Olivier is not fully aligned with the host being modified, but IMO it is an acceptable change on my XCP-ng host.
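    For anyone wanting to replicate that, a minimal NUT sketch of the kind of dom0 change being described (the section name "myups" is arbitrary, and it assumes a USB UPS that the usbhid-ups driver supports):

    # /etc/ups/ups.conf
    [myups]
      driver = usbhid-ups
      port = auto

    # then start the driver and poll the UPS
    upsdrvctl start
    upsc myups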
  • Veeam and XCP-ng

    Solved
    32
    0 Votes
    32 Posts
    9k Views
    jordanJ
    @jasonnix XOA does the primary things Veeam does: backups, backup copies, full/incremental replication, and file-level restores. The only things I've run into that XOA can't do are restoring Active Directory objects directly to a domain controller, or native SQL Server-level restores without having to restore the whole VM. Those are the only instances where I'd consider running the Veeam agent directly. XOA works perfectly otherwise.
  • VMs are abruptly getting shutdown

    14
    0 Votes
    14 Posts
    744 Views
    J
    @lritinfra Something else to consider: HPE Intelligent Provisioning is the main way, outside of HPE iLO, HPE SUM or HPE SPP, to update the server's hardware firmware, if you aren't using individual RPMs or SCEXE files for the task. HPE Intelligent Provisioning and HPE SPP can update both firmware and BIOS, and not all firmware updates come in a format compatible with HPE iLO. I'm not sure if it has changed, but an Administrator Password set in the BIOS (at minimum) also locks out (disables) access to the Erase option in HPE Intelligent Provisioning. At least it does on my only HPE server running an up-to-date BIOS, iLO and Intelligent Provisioning. So a disabled HPE Intelligent Provisioning doesn't help with staying up to date enough to fix vulnerabilities and bugs at the hardware or firmware level.
  • 1 Votes
    6 Posts
    815 Views
    R
    @rjt Note to self about creating and managing appliances at the xe CLI:

    xe help --all | egrep -i '(appliance)'   # find the xe appliance-related commands

    appliance-assert-can-be-recovered, appliance-create, appliance-destroy, appliance-list, appliance-param-clear, appliance-param-get, appliance-param-list, appliance-param-set, appliance-recover, appliance-shutdown, appliance-start
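    Continuing the note-to-self, a hedged example of tying those commands together; attaching VMs by setting their appliance param is how I understand it to work, and the UUIDs are placeholders:

    xe appliance-create name-label=my-vapp
    xe vm-param-set uuid=<VM-UUID> appliance=<APPLIANCE-UUID>   # repeat per VM in the vApp
    xe appliance-start uuid=<APPLIANCE-UUID>
    xe appliance-shutdown uuid=<APPLIANCE-UUID>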
  • XCP-ng on a laptop - turning off the monitor

    5
    0 Votes
    5 Posts
    503 Views
    D
    @abudef I ran into the same thing on a Dell Latitude 5411, but I eventually found a workaround. In the BIOS, under System Configuration, there is an option called Unobtrusive Mode; when enabled, a specific key combination turns off the display and speakers, independent of the OS. The same key combination turns them back on when needed. While an automatic feature would be preferable, the manual option meets my needs since it's rare that I need to use the laptop screen. The one downside for me is that if power is lost long enough for the battery to drain, the screen will need manual intervention after power is restored - the BIOS will turn the machine back on when it receives power, but won't put the screen back in its previous state. I put a sticker on the laptop with the key combination so I will remember it when needed.
  • XAPI service failed

    8
    0 Votes
    8 Posts
    460 Views
    DanpD
    @sushant-diwakar I think you misunderstood @olivierlambert as he was referring to the support ticket that you opened with Vates, which is still open and awaiting a response from you. I restored the forum thread that you deleted because it contains information that is pertinent to this discussion. I'm leaving this topic separate for now, but IMO there really was no valid reason to start this new thread instead of continuing the prior one.
  • Pool Master unreachable

    8
    0 Votes
    8 Posts
    554 Views
    S
    @sushant-diwakar
  • Issues with PCIE Passthrough

    20
    0 Votes
    20 Posts
    2k Views
    J
    @ImThatFluffy said in Issues with PCIE Passthrough: @john-c Yeah, I'm not sure - it was either an issue with the way I had Debian set up or a compatibility thing; I booted up Ubuntu 22.04 LTS with the HWE kernel and it worked perfectly. Well, if you are using Ubuntu Linux 22.04.1 LTS or one of the later point releases, then it would be using Linux kernel version 6.1 or later when it's an HWE kernel. So any bugs from earlier kernel versions would have been fixed, and the Intel Arc graphics hardware would have been released during one of the point releases. On the Debian Linux front, a distribution version earlier than 12.0 would have been unlikely to have complete, properly functioning support, since that release was the first with Linux kernel version 6.1 or later.
  • Imbedded Docker

    12
    0 Votes
    12 Posts
    3k Views
    S
    @DustinB said in Imbedded Docker: Has anyone else done this, and can provide benefits or faults in doing so, besides the obvious that this isn't officially supported? I am actually going through the process of trying this right now and am having significant difficulties with the xscontainer-prepare-vm piece - it doesn't work. So far I have built a Docker VM, made sure all the prerequisites are in there, and then run the script. It does insert an ssh-rsa key into my user's authorized_keys file, but the public key it inserts doesn't actually work: the host is not able to ssh into the VM because the key doesn't match, so it falls back to asking for a password, which doesn't work because the script can't pass the VM check. Has anyone else seen this behaviour before?
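    Not a fix, but a hedged way to narrow down whether it's really the injected key being rejected (this assumes the script's key is the last line of authorized_keys; the user and IP are placeholders):

    # In the guest: fingerprint the key xscontainer-prepare-vm added
    ssh-keygen -lf <(tail -n 1 ~/.ssh/authorized_keys)

    # From dom0: watch which key is actually offered and why it is refused
    ssh -v <user>@<vm-ip> 2>&1 | grep -iE 'offering|denied|publickey'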
  • Issues with Windows 11 VM

    5
    0 Votes
    5 Posts
    1k Views
    planedropP
    I've got passthrough to work a number of times without issue; the only thing I had to make sure of was that all devices related to the GPU were passed through completely. Are you following the docs step by step? I have an Ubuntu VM running with a 2060 passed through right now; it works flawlessly and even survived a power loss on the host.
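    For reference, "all devices related to the GPU" usually means every PCI function the card exposes (video plus HDMI audio, sometimes a USB-C controller). A hedged sketch of the XCP-ng-side steps as I understand them from the PCI passthrough docs - the bus IDs below are examples only, and a dom0 reboot is needed after hiding:

    lspci | grep -iE 'vga|audio|nvidia'     # find the GPU's functions (often 01:00.0 and 01:00.1)
    /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)"
    xe vm-param-set uuid=<VM-UUID> other-config:pci=0/0000:01:00.0,0/0000:01:00.1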
  • How do I/should I disable the local webserver

    Solved
    11
    0 Votes
    11 Posts
    895 Views
    J
    @olivierlambert Thank you. We'll be looking into it when we upgrade the hardware.