XCP-ng

    Intel iGPU passthrough

    Hardware
    39 Posts 10 Posters 9.1k Views 11 Watching
    • xerxist @bullerwins

      @bullerwins

      Tried Docker/Kubernetes and Manjaro (just to switch to a later kernel).

      Changed it to run as root.
      Changed the permissions on /dev/dri to 777.
      It's just weird, as it clearly sees the device.
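      As an aside, the usual alternative to chmod 777 is group membership. A sketch of the group-based check, assuming a Debian-style "render" group and a hypothetical "plex" service user:

      ```shell
      # List the DRM nodes and their owning groups; on most distros
      # /dev/dri/renderD128 belongs to the "render" (or "video") group.
      ls -l /dev/dri

      # Show which groups the current user is in.
      id

      # Hypothetical: give a "plex" service user access via group
      # membership instead of world-writable permissions.
      sudo usermod -aG render plex
      ```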

      4a999a7d-6686-415a-b940-5460c15326ea-image.png

      Actually got it to respond with FFmpeg:

      ef83fdf3-1c6c-4f6a-96fb-e298e6ee504e-image.png

      It gives an error and the GPU stays in that state, so it hangs, which is probably what happens to Plex too.
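      For reference, a minimal FFmpeg smoke test that exercises the iGPU outside of Plex, assuming a VAAPI-enabled ffmpeg build and a render node at /dev/dri/renderD128. It encodes a synthetic test pattern on the GPU, so a hang here points at the passthrough itself rather than Plex:

      ```shell
      # Encode 5 seconds of a generated test pattern with the VAAPI
      # H.264 encoder; output is discarded. A hang or error here means
      # the iGPU passthrough is the problem, not the application.
      ffmpeg -vaapi_device /dev/dri/renderD128 \
             -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
             -vf 'format=nv12,hwupload' \
             -c:v h264_vaapi -f null -
      ```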

      • tjkreidl Ambassador @bullerwins

        @bullerwins Why would the device node have to be world-writable (permissions 777)? That seems like a security risk unless the /dev directory itself isn't writable. Either way, a world-writable device area seems very strange.

        • xerxist @tjkreidl

          @tjkreidl

          It was to rule out any permission problems.
          But it seems to go wrong more on the IOMMU side of things.
          I've just tried the same on Proxmox and it works right away. (Well, not right away; you need to make some GRUB adjustments and load a few modules.) Not sure 🤔 what I can do to fix it. Does the XCP-ng kernel honor these IOMMU GRUB settings too?

          GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

          and these modules

          vfio
          vfio_iommu_type1
          vfio_pci
          vfio_virqfd
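          On XCP-ng the answer is probably no: GRUB_CMDLINE_LINUX_DEFAULT only feeds the dom0 Linux kernel, while the IOMMU is handled by the Xen hypervisor's own command line. A sketch for checking the Xen side on the host, using the standard xl tooling:

          ```shell
          # Ask the hypervisor whether VT-d / the IOMMU was enabled at boot.
          xl dmesg | grep -i -e iommu -e vt-d

          # Show the options Xen itself was booted with.
          xl info | grep xen_commandline
          ```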

          • tjkreidl Ambassador @xerxist

            @xerxist I honestly do not know, but it seems it cannot hurt to try.

            • xerxist @tjkreidl

              Giving this another try 🙂

              @tjkreidl

              I couldn't find those modules, so this is probably not something in XCP-ng.

              @bullerwins

              Is your VM running in EFI or BIOS?

              • xerxist @xerxist

                Got it working !!!!! 🙂

                Changed the VM from UEFI to BIOS and it started working.

                55120676-0c12-46cb-bdb3-7a986113f5c1-image.png

                Still, though, why would it need to be BIOS instead of UEFI?
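                For anyone wanting to flip an existing VM the same way, a sketch using the xe CLI; I'm assuming the HVM-boot-params:firmware key here, so verify it against your XCP-ng version, and note the VM must be halted first:

                ```shell
                # Switch a halted VM from UEFI to BIOS boot (replace
                # <vm-uuid> with the UUID from `xe vm-list`).
                xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=bios

                # And back to UEFI:
                xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=uefi
                ```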

                Need to do some more testing, as I also disabled ASPM in the BIOS of the NUC11, so I'll bring up another VM with UEFI and test it again.

                • tjkreidl Ambassador @xerxist

                  @xerxist Good news! Some setups will boot one way or the other, or in some cases either one works.
                  It's also important to keep the BIOS up-to-date.

                  • xerxist @tjkreidl

                    @tjkreidl

                    Strange, it works on UEFI too now.
                    The only thing that changed is ASPM being turned off in the BIOS of the NUC.
                    Need to try that on the NUC 13 too.
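                    If the firmware toggle turns out to be the fix, a possibly equivalent software-side knob is the kernel's pcie_aspm=off parameter. Whether it reaches the same state as the NUC's BIOS setting is untested, so treat this as a sketch:

                    ```shell
                    # /etc/default/grub: append pcie_aspm=off to the
                    # existing kernel options shown earlier.
                    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_aspm=off"
                    ```

                    After editing, regenerate the GRUB config (update-grub on Debian/Ubuntu) and reboot.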

                    • xerxist @xerxist

                      The NUC13 is still a no-go with ASPM turned off.

                      Not sure if the kernel needs to recognize it; it doesn't give me the device type like it does on the NUC11.

                      4e3231e4-61e9-4101-abbb-ae2f90fdbd2f-image.png

                      But the NUC11 is confirmed working fine, BIOS or UEFI.

                      • bullerwins @xerxist

                        @xerxist In BIOS mode; I would say it was the default for my Ubuntu VM.

                        • xerxist @bullerwins

                          @bullerwins

                          Seems you would need a kernel newer than 5.15 for this to work on the NUC 12/13.
                          Not sure what got implemented/fixed there, but it would need to be backported for this to work.

                          • bullerwins @xerxist

                            @xerxist My Ubuntu 22.04 install came with kernel 5.15. I update it regularly, but that doesn't seem to update the kernel. Newer fresh installs of Ubuntu 22.04 do get a newer kernel, though. I'll check whether the kernel needs to be updated manually.
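                            That matches Ubuntu's kernel policy: original 22.04 installs stay on the GA 5.15 kernel, while later point-release images ship the HWE (hardware enablement) stack with a newer one. To move an existing install onto the HWE kernel, a sketch:

                            ```shell
                            # Install the 22.04 HWE kernel stack, which
                            # tracks newer kernels, then reboot into it.
                            sudo apt update
                            sudo apt install --install-recommends linux-generic-hwe-22.04
                            sudo reboot
                            # After reboot, `uname -r` should show the newer kernel.
                            ```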

                            • xerxist @bullerwins

                              @bullerwins

                              Not in the VM itself.
                              I even went to kernel 6.6 and tried it there; all versions give the same issue.

                              I meant something on the hypervisor side, in its kernel. That one is 4.x-something with a lot of backports.

                              I'll probably just wait a while before moving fully to XCP-ng.
                              But it's a very nice system 👍

                              • bullerwins @xerxist

                                @xerxist Have you tried the 8.3 beta of XCP-ng? I believe it may have a newer kernel?

                                • xerxist @bullerwins

                                  @bullerwins

                                  Yes, I'm running the 8.3 beta.

                                  • hawkpro

                                    @bullerwins @xerxist

                                    Is this just mediated (GVT-g?) device passthrough, where the XCP-ng side/server keeps video output but a VM can make use of the GPU resources as well?

                                    I am very interested in this (Plex, Frigate type use) as a stepping stone away from Proxmox.

                                    Thanks

                                    • flakpyro @hawkpro

                                      In my testing of this, iGPU passthrough works fine in Linux, but in Windows the device shows an error in Device Manager. Disabling and re-enabling the device in Device Manager will allow it to work, until the next reboot.

                                      • CJ

                                        @bullerwins @xerxist @flakpyro

                                        What are you using for display output on the host since you're passing the iGPU to the VM?

                                        • flakpyro @CJ

                                          @CJ I'm running server-grade hardware that has remote lights-out management with iKVM support. Otherwise, yes, you would lose access to the display output.

                                          • xerxist @CJ

                                            @CJ

                                            No output; I just need Intel Quick Sync.
