The current priority of our storage dev (@ronan-a) is to finish the LINSTOR implementation on SMAPIv1, so SMAPIv3 work is paused until that's done. I'd be happy to have another storage developer, but we are still a small team.
Initial brief test seems ok 👍
Will see if I can do more of the tests later...
Updated from 8.1 via yum, which caused Windows 10 & Windows Server 2019 to hang on the Tiano logo.
Interestingly, a Debian 10 UEFI VM worked fine...
After the update to uefistored, both Windows VMs started in recovery and did whatever it is Windows does besides spinning dots on your screen 🤔
After a reboot, both Windows 10 2004 and Windows Server 2019 booted just fine 👍
I have had good experience with WibuKey dongles and a "Matrix USB-Key" (also a license dongle), but couldn't get an Aladdin HASP working.
I can pass it through (it's visible and attached), but the VM doesn't show it in Device Manager (Windows 10 1909). I gave up for now and will probably use a network USB device server from SEH - I already have 2 of their devices running in different environments and they work flawlessly.
Yeah, Coffee Lake support wasn't added until about a year ago, and it was added separately from most of the other supported iGPUs. I believe it's kernel 5.1 or newer that adds native Coffee Lake iGPU support.
For the tests we needed a week ago, it's now fine; however, I'll probably update this thread once we have an ISO image that users with such hardware can test. Note: it's not about using the GPU itself, but simply about making sure that the hypervisor works well with our change: we replaced the gpumon tool, which is not built by us and whose absence would make XCP-ng 8.1 unbootable (as we sadly found out after the release), with a dummy one built by us.
Now it's time for the tests I was talking about earlier. XCP-ng 8.2 beta is now available with our dummy gpumon, and we need users who have NVIDIA GPUs to test it and give us feedback. There may be situations we haven't tested where our dummy gpumon is not enough to keep XAPI happy, even though we don't support NVIDIA vGPUs (proprietary software from Citrix is required for that feature).
Hi Dan2462, I would suggest the following steps. To start, just select the virtual machine and choose File > Export to OVF. Then enter a name for the OVF file and choose the directory you want to save it in. Here you can specify whether to export the virtual machine as an OVF (a folder with individual files) or as an OVA (a single-file archive). Finally, press Export to start the OVF export. This step takes some time, and a status bar shows the progress of the export process. Apart from that, https://appuals.com/export-import-vm-oracle/ would be really helpful for learning more about it. I hope that by following these steps you will be able to resolve this issue.
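If you'd rather script the export instead of using the GUI, the same thing can be done with VirtualBox's `VBoxManage` CLI (a sketch; "MyVM" and the output paths are placeholders, not names from the original post):

```shell
# List registered VMs to find the exact name to export.
VBoxManage list vms

# Export to a single-file OVA archive ("MyVM" is a placeholder name).
VBoxManage export "MyVM" --output /backups/MyVM.ova

# Or export to an OVF descriptor plus individual files instead:
VBoxManage export "MyVM" --output /backups/MyVM.ovf
```

The extension of the `--output` file decides the container: `.ova` produces one archive, `.ovf` produces a folder of individual files, matching the two options in the GUI dialog.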
We are aware of it 🙂 Our "bandwidth" right now is not enough to investigate further.
edit: it's more a generic library than a finished product, so it's nice, but it would require a fair amount of work to turn it into something "turnkey". However, we are definitely keeping an eye on it 🙂
If I understand correctly, the "ISO" you are asking for is for building drivers. I don't intend to create a specific ISO just for that. Everything needed to build drivers is available in the yum repositories (usually you just need make, gcc and kernel-devel), and there's also a Docker image at https://github.com/xcp-ng/xcp-ng-build-env
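For reference, building an out-of-tree driver on the host usually boils down to something like this (a sketch, assuming the driver source ships a standard kernel-module Makefile; `my-driver-src` is a placeholder directory, not a real package):

```shell
# Install the toolchain and kernel headers from the yum repositories.
yum install -y make gcc kernel-devel

# Build the module against the running kernel's build tree.
# "my-driver-src" is a placeholder for the actual driver source directory.
cd my-driver-src
make -C /lib/modules/$(uname -r)/build M=$PWD modules
```

The same steps work inside the xcp-ng-build-env Docker image if you'd rather not install build tools on the host itself.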
I do prefer Olivier's suggestion, though: going through support, because drivers built for one user through support are made available to other users through our repositories.