(contribute to speed up the process), we don't have the skills, time, or resources to do it ourselves, and besides, we wouldn't know where to start coding this.
Again, money. Just because we are a business doesn't mean we have money to spend on things like this. It's not so much that we won't spend money; it's more that there are other priorities, especially considering that we have only just come out of a very severe and long drought, plus COVID-19, both of which have severely impacted our business. So on one hand, having this feature would be a big help, as we could cut corners a little on GPUs, but on the other hand, sorting this feature out ourselves would take resources we simply don't have spare.
In our case, we have plenty of redundant systems, to the point that I could easily and comfortably move everything over to VMware to get this feature. Like others, we simply have to take the path of least resistance.
Personally, I'm not sure if we will move platforms yet, as we can get by on some M4000 cards for now.
So I have been working on this repo for about a month now, and it is in a far worse state than I ever imagined. I showed the codebase to some Go pros and they pretty much roasted every aspect of the original source code. It uses multiple deprecated packages, breaks several practices that the Go community considers fundamental to portability and future-proofing, completely lacks documentation on a variety of esoteric calls, and does not compile with GOOS=freebsd.
I was hoping I could just make some quick changes to clean up this repo for the XCP-ng community, but getting it into a good place for the future is a huge endeavor. I would very much like to continue work on this, but it has become clear to me that a small team of developers will be required to finish it in a timely fashion. I would suggest creating a new repo called XCP-guest-tools, as a properly upgraded version would rewrite 40–60% of the original source code to bring it up to a reasonable standard. At that point it would also be extremely unlikely that Citrix would ever merge the changes, so splitting off into a newly named package would go a long way toward avoiding confusion about versions and source.
The current priority of our storage dev (@ronan-a) is to finish the LINSTOR implementation on SMAPIv1, so SMAPIv3 work is paused until that is done. I'd be happy to have another storage developer, but we are still a small team.
Initial brief tests seem OK.
I will see if I can do more of the tests later...
Updated from 8.1 via yum, which caused Windows 10 and Windows Server 2019 to hang on the TianoCore logo.
Interestingly, a Debian 10 UEFI VM worked fine...
After the update to uefistored, both Windows VMs started in recovery and did whatever it is Windows does besides spinning dots on your screen.
After a reboot, both Windows 10 2004 and Windows Server 2019 booted just fine.
I have good experience with WiBu Key dongles and a "Matrix USB-Key" (also a license dongle), but couldn't get an Aladdin HASP working.
I can pass it through (it's visible and attached), but the VM doesn't show it in Device Manager (Windows 10 1909). I gave up for now and will probably use a network USB thingie from SEH - I already have 2 of their devices running in different environments and they work flawlessly.
Yeah, Coffee Lake support wasn't added until about a year ago, and it was added separately from most of the other supported iGPUs. I believe it's kernel 5.1 or newer that adds native Coffee Lake iGPU support.
For the tests we needed a week ago, it's now fine; however, I'll probably update this thread once we have an ISO image that users with such hardware can test. Note: it's not about using the GPU itself, but simply about making sure that the hypervisor works well with the change we made: replacing the not-built-by-us gpumon tool (whose absence would make XCP-ng 8.1 unbootable, as we sadly found out after the release) with a dummy one built by us.
Now it is time for the tests I was talking about earlier. XCP-ng 8.2 beta is now available with our dummy gpumon, and we need users who have NVIDIA GPUs to test it and give us feedback. There may be situations we have not tested where our dummy gpumon is not enough to keep the XAPI happy, even though we don't support NVIDIA vGPUs (proprietary software from Citrix is required for that feature).
Hi Dan2462, I would suggest the following steps. To start, select the virtual machine and choose File > Export to OVF. Enter a name for the OVF file and specify the directory where you want to save it. Then choose whether to export the virtual machine as an OVF (a folder with individual files) or as an OVA (a single-file archive). Finally, press Export to begin the export; this takes some time, and a status bar shows the progress. Apart from that, this website would be helpful for learning more about it: https://appuals.com/export-import-vm-oracle/ I hope that by following these steps you will be able to resolve this issue.
We are aware of it. Our "bandwidth" right now is not enough to investigate further.
edit: it's more a generic library than a real product, so it's nice, but it would require a reasonable amount of work to turn it into something "turnkey". However, we are definitely keeping an eye on it.