XCP-ng
    • XCP-ng 8.3 betas and RCs feedback 🚀 (News · 5 votes · 792 posts · 2m views · started by stormi)
      Latest post by stormi:
      This is the end for this nice and useful thread, as XCP-ng 8.3 is no longer a beta or an RC: it's a supported release now. However, we still need your feedback, as we publish update candidates ahead of their official release for users to test them. Right now, there's a security update candidate to be tested. I strongly invite everyone currently subscribed to this thread to subscribe to the new, dedicated thread, XCP-ng 8.3 updates announcements and testing, and to verify that their settings allow notification e-mails and/or other forms of notification.
    • XCP-ng 8.2 updates announcements and testing (News · 2 votes · 717 posts · 2m views · started by stormi)
      Latest post by gduperrey:
      XCP-ng 8.2 has just reached its end of life, but the adventure continues with XCP-ng 8.3 (and other versions to come). You can read the announcement on our blog: https://xcp-ng.org/blog/2025/09/16/xcp-ng-8-2-lts-reached-its-end-of-life/ To keep benefiting from updates and developments, we invite you, if you haven't already done so, to upgrade your systems to XCP-ng 8.3. A dedicated thread has existed for quite some time if you want to participate in early testing of the updates: https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing/
    • XOSTOR hyperconvergence preview (XOSTOR · 6 votes · 458 posts · 749k views · started by olivierlambert)
      Latest post by J:
      I have amazing news! After the upgrade to XCP-ng 8.3, I retested the Velero backup, and it all just works.

      Completed backup:

      jonathon@jonathon-framework:~$ velero --kubeconfig k8s_configs/production.yaml backup describe grafana-test
      Name:         grafana-test
      Namespace:    velero
      Labels:       objectset.rio.cattle.io/hash=c2b5f500ab5d9b8ffe14f2c70bf3742291df565c
                    velero.io/storage-location=default
      Annotations:  objectset.rio.cattle.io/applied=H4sIAAAAAAAA/4SSQW/bPgzFvwvPtv9OajeJj/8N22HdBqxFL0MPlEQlWmTRkOhgQ5HvPsixE2yH7iji8ffIJ74CDu6ZYnIcoIMTeYpcOf7vtIICji4Y6OB/1MdxgAJ6EjQoCN0rYAgsKI5Dyk9WP0hLIqmi40qjiKfMcRlAq7pBY+py26qmbEi15a5p78vtaqe0oqbVVsO5AI+K/Ju4A6YDdKDXqrVtXaNqzU5traVVY9d6Uyt7t2nW693K2Pa+naABe4IO9hEtBiyFksClmgbUdN06a9NAOtvr5B4DDunA8uR64lGgg7u6rxMUYMji6OWZ/dhTeuIPaQ6os+gTFUA/tR8NmXd+TELxUfNA5hslHqOmBN13OF16ZwvNQShIqpZClYQj7qk6blPlGF5uzC/L3P+kvok7MB9z0OcCXPiLPLHmuLLWCfVfB4rTZ9/iaA5zHovNZz7R++k6JI50q89BXcuXYR5YT0DolkChABEPHWzW9cK+rPQx8jgsH/KQj+QT/frzXCdduc/Ca9u1Y7aaFvMu5Ang5Xz+HQAA//8X7Fu+/QIAAA
                    objectset.rio.cattle.io/id=e104add0-85b4-4eb5-9456-819bcbe45cfc
                    velero.io/resource-timeout=10m0s
                    velero.io/source-cluster-k8s-gitversion=v1.33.4+rke2r1
                    velero.io/source-cluster-k8s-major-version=1
                    velero.io/source-cluster-k8s-minor-version=33

      Phase:  Completed

      Namespaces:
        Included:  grafana
        Excluded:  <none>

      Resources:
        Included cluster-scoped:    <none>
        Excluded cluster-scoped:    volumesnapshotcontents.snapshot.storage.k8s.io
        Included namespace-scoped:  *
        Excluded namespace-scoped:  volumesnapshots.snapshot.storage.k8s.io

      Label selector:  <none>
      Or label selector:  <none>

      Storage Location:  default

      Velero-Native Snapshot PVs:  true
      Snapshot Move Data:          true
      Data Mover:                  velero

      TTL:  720h0m0s

      CSISnapshotTimeout:    30m0s
      ItemOperationTimeout:  4h0m0s

      Hooks:  <none>

      Backup Format Version:  1.1.0

      Started:    2025-10-15 15:29:52 -0700 PDT
      Completed:  2025-10-15 15:31:25 -0700 PDT

      Expiration:  2025-11-14 14:29:52 -0800 PST

      Total items to be backed up:  35
      Items backed up:              35

      Backup Item Operations:  1 of 1 completed successfully, 0 failed (specify --details for more information)
      Backup Volumes:
        Velero-Native Snapshots: <none included>
        CSI Snapshots:
          grafana/central-grafana:
            Data Movement: included, specify --details for more information
        Pod Volume Backups: <none included>

      HooksAttempted:  0
      HooksFailed:     0

      Completed restore:

      jonathon@jonathon-framework:~$ velero --kubeconfig k8s_configs/production.yaml restore describe restore-grafana-test --details
      Name:         restore-grafana-test
      Namespace:    velero
      Labels:       objectset.rio.cattle.io/hash=252addb3ed156c52d9fa9b8c045b47a55d66c0af
      Annotations:  objectset.rio.cattle.io/applied=H4sIAAAAAAAA/3yRTW7zIBBA7zJrO5/j35gzfE2rtsomymIM45jGBgTjbKLcvaKJm6qL7kDwnt7ABdDpHfmgrQEBZxrJ25W2/85rSOCkjQIBrxTYeoIEJmJUyAjiAmiMZWRtTYhb232Q5EC88tquJDKPFEU6GlpUG5UVZdpUdZ6WZZ+niOtNWtR1SypvqC8buCYwYkfjn7oBwwAC8ipHpbqC1LqqZZWrtse228isrLqywapSdS0z7KPU4EQgwN+mSI8eezSYMgWG22lwKOl7/MgERzJmdChPs9veDL9IGfSbQRcGy+96IjszCCiyCRLQRo6zIrVd5AHEfuHhkIBmmp4d+a/3e9Dl8LPoCZ3T5hg7FvQRcR8nxt6XL7sAgv1MCZztOE+01P23cvmnPYzaxNtwuF4/AwAA//8k6OwC/QEAAA
                    objectset.rio.cattle.io/id=9ad8d034-7562-44f2-aa18-3669ed27ef47

      Phase:  Completed

      Total items to be restored:  33
      Items restored:              33

      Started:    2025-10-15 15:35:26 -0700 PDT
      Completed:  2025-10-15 15:36:34 -0700 PDT

      Warnings:
        Velero:   <none>
        Cluster:  <none>
        Namespaces:
          grafana-restore:  could not restore, ConfigMap:elasticsearch-es-transport-ca-internal already exists. Warning: the in-cluster version is different than the backed-up version
                            could not restore, ConfigMap:kube-root-ca.crt already exists. Warning: the in-cluster version is different than the backed-up version

      Backup:  grafana-test

      Namespaces:
        Included:  grafana
        Excluded:  <none>

      Resources:
        Included:        *
        Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io
        Cluster-scoped:  auto

      Namespace mappings:  grafana=grafana-restore

      Label selector:  <none>
      Or label selector:  <none>

      Restore PVs:  true
      CSI Snapshot Restores:
        grafana-restore/central-grafana:
          Data Movement:
            Operation ID:   dd-ffa56e1c-9fd0-44b4-a8bb-8163f40a49e9.330b82fc-ca6a-423217ee5
            Data Mover:     velero
            Uploader Type:  kopia

      Existing Resource Policy:  <none>
      ItemOperationTimeout:      4h0m0s

      Preserve Service NodePorts:  auto

      Restore Item Operations:
        Operation for persistentvolumeclaims grafana-restore/central-grafana:
          Restore Item Action Plugin:  velero.io/csi-pvc-restorer
          Operation ID:                dd-ffa56e1c-9fd0-44b4-a8bb-8163f40a49e9.330b82fc-ca6a-423217ee5
          Phase:                       Completed
          Progress:                    856284762 of 856284762 complete (Bytes)
          Progress description:        Completed
          Created:                     2025-10-15 15:35:28 -0700 PDT
          Started:                     2025-10-15 15:36:06 -0700 PDT
          Updated:                     2025-10-15 15:36:26 -0700 PDT

      HooksAttempted:  0
      HooksFailed:     0

      Resource List:
        apps/v1/Deployment:
          - grafana-restore/central-grafana(created)
          - grafana-restore/grafana-debug(created)
        apps/v1/ReplicaSet:
          - grafana-restore/central-grafana-5448b9f65(created)
          - grafana-restore/central-grafana-56887c6cb6(created)
          - grafana-restore/central-grafana-56ddd4f497(created)
          - grafana-restore/central-grafana-5f4757844b(created)
          - grafana-restore/central-grafana-5f69f86c85(created)
          - grafana-restore/central-grafana-64545dcdc(created)
          - grafana-restore/central-grafana-69c66c54d9(created)
          - grafana-restore/central-grafana-6c8d6f65b8(created)
          - grafana-restore/central-grafana-7b479f79ff(created)
          - grafana-restore/central-grafana-bc7d96cdd(created)
          - grafana-restore/central-grafana-cb88bd49c(created)
          - grafana-restore/grafana-debug-556845ff7b(created)
          - grafana-restore/grafana-debug-6fb594cb5f(created)
          - grafana-restore/grafana-debug-8f66bfbf6(created)
        discovery.k8s.io/v1/EndpointSlice:
          - grafana-restore/central-grafana-hkgd5(created)
        networking.k8s.io/v1/Ingress:
          - grafana-restore/central-grafana(created)
        rbac.authorization.k8s.io/v1/Role:
          - grafana-restore/central-grafana(created)
        rbac.authorization.k8s.io/v1/RoleBinding:
          - grafana-restore/central-grafana(created)
        v1/ConfigMap:
          - grafana-restore/central-grafana(created)
          - grafana-restore/elasticsearch-es-transport-ca-internal(failed)
          - grafana-restore/kube-root-ca.crt(failed)
        v1/Endpoints:
          - grafana-restore/central-grafana(created)
        v1/PersistentVolume:
          - pvc-e3f6578f-08b2-4e79-85f0-76bbf8985b55(skipped)
        v1/PersistentVolumeClaim:
          - grafana-restore/central-grafana(created)
        v1/Pod:
          - grafana-restore/central-grafana-cb88bd49c-fc5br(created)
        v1/Secret:
          - grafana-restore/fpinfra-net-cf-cert(created)
          - grafana-restore/grafana(created)
        v1/Service:
          - grafana-restore/central-grafana(created)
        v1/ServiceAccount:
          - grafana-restore/central-grafana(created)
          - grafana-restore/default(skipped)
        velero.io/v2alpha1/DataUpload:
          - velero/grafana-test-nw7zj(skipped)

      The restored pod is working, with the correct data in the PV.

      Velero was installed from the Helm chart at https://vmware-tanzu.github.io/helm-charts, chart version velero:11.1.0, with the following values:

      ---
      image:
        repository: velero/velero
        tag: v1.17.0
      # Whether to deploy the restic daemonset.
      deployNodeAgent: true
      initContainers:
        - name: velero-plugin-for-aws
          image: velero/velero-plugin-for-aws:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /target
              name: plugins
      configuration:
        defaultItemOperationTimeout: 2h
        features: EnableCSI
        defaultSnapshotMoveData: true
        backupStorageLocation:
          - name: default
            provider: aws
            bucket: velero
            config:
              region: us-east-1
              s3ForcePathStyle: true
              s3Url: https://s3.location
        # Destination VSL points to the LINSTOR snapshot class
        volumeSnapshotLocation:
          - name: linstor
            provider: velero.io/csi
            config:
              snapshotClass: linstor-vsc
      credentials:
        useSecret: true
        existingSecret: velero-user
      metrics:
        enabled: true
        serviceMonitor:
          enabled: true
        prometheusRule:
          enabled: true
          # Additional labels to add to deployed PrometheusRule
          additionalLabels: {}
          # PrometheusRule namespace. Defaults to Velero namespace.
          # namespace: ""
          # Rules to be deployed
          spec:
            - alert: VeleroBackupPartialFailures
              annotations:
                message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} partially failed backups.
              expr: |-
                velero_backup_partial_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
              for: 15m
              labels:
                severity: warning
            - alert: VeleroBackupFailures
              annotations:
                message: Velero backup {{ $labels.schedule }} has {{ $value | humanizePercentage }} failed backups.
              expr: |-
                velero_backup_failure_total{schedule!=""} / velero_backup_attempt_total{schedule!=""} > 0.25
              for: 15m
              labels:
                severity: warning

      Also create the following VolumeSnapshotClass:

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotClass
      metadata:
        name: linstor-vsc
        labels:
          velero.io/csi-volumesnapshot-class: "true"
      driver: linstor.csi.linbit.com
      deletionPolicy: Delete

      We are using the Piraeus operator (https://github.com/piraeusdatastore/piraeus-operator.git, version v2.9.1) to use XOSTOR in Kubernetes, with these values:

      ---
      operator:
        resources:
          requests:
            cpu: 250m
            memory: 500Mi
          limits:
            memory: 1Gi
      installCRDs: true
      imageConfigOverride:
        - base: quay.io/piraeusdatastore
          components:
            linstor-satellite:
              image: piraeus-server
              tag: v1.29.0
      tls:
        certManagerIssuerRef:
          name: step-issuer
          kind: StepClusterIssuer
          group: certmanager.step.sm

      Then we just connect to the XOSTOR cluster like an external LINSTOR controller.
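      To sketch that last step: in piraeus-operator v2, pointing the in-cluster satellites at an existing external LINSTOR controller is typically done through the LinstorCluster resource. The URL below is a placeholder, not the poster's actual controller address:

      ```yaml
      # Hypothetical sketch: replace the URL with your XOSTOR/LINSTOR controller endpoint.
      apiVersion: piraeus.io/v1
      kind: LinstorCluster
      metadata:
        name: linstorcluster
      spec:
        # Use an existing controller instead of deploying one in-cluster.
        externalController:
          url: http://xostor-controller.example.com:3370
      ```

      With this in place, the operator only runs satellites and the CSI driver, while the XOSTOR cluster's controller manages the resources.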
    • CBT: the thread to centralize your feedback (Backup · 1 vote · 455 posts · 585k views · started by olivierlambert)
      Latest post by olivierlambert:
      Okay, I thought the autoscan was only for like 10 minutes or so, but hey I'm not deep down in the stack anymore
    • VMware migration tool: we need your feedback! (Migrate to XCP-ng · 5 votes · 318 posts · 339k views · started by olivierlambert)
      Latest post by R:
      On VMware you would also need vCenter for this kind of feature. And as you can easily deploy an empty XOA, why would this be an issue?
    • XCP-ng 8.3 updates announcements and testing (News · 1 vote · 296 posts · 92k views · started by stormi)
      Latest post by D:
      @acebmxer All looks good from the output. There might be some VMs that need fixes for Windows dbx updates to work properly, I'll try to add that in the official version.
    • XCP-ng 8.3 public alpha 🚀 (News · 7 votes · 264 posts · 300k views · started by stormi)
      Latest post by stormi:
      We just released XCP-ng 8.3 beta 1! I opened a new thread for us to discuss it and for you to provide feedback: https://xcp-ng.org/forum/topic/7464/xcp-ng-8-3-beta Thanks for all the feedback already provided here, and see you on the new thread! In order not to miss anything (and, let's be honest, for me to be sure that messages on the new thread reach you all), the best course of action is: open the new thread right now and use the "watch" button. And let's answer this common and legitimate question: how to upgrade from alpha to beta? Well, there's nothing to do, just update as usual. In fact, you might already be in the beta state. However, as indicated in the blog post, we need a lot of testing of the installer, so it's also an option to start from the installation ISO again.
    • Epyc VM to VM networking slow (Compute · 0 votes · 260 posts · 180k views · started by N)
      Latest post by Forza:
      @dinhngtu said in Epyc VM to VM networking slow: @Forza There's a new script here that will help you check the VM's status wrt. the Fix 1.

      Thank you. It does indeed look like the EPYC fix is active in XOA:

      [07:25 22] xoa:~$ python3 epyc-fix-check.py
      'xen-platform-pci' PCI IO mem address is 0xFB000000
      Grant table cacheability fix is ACTIVE.

      Has Vates checked whether a newer kernel would help the network performance with XOA? The current kernel is:

      linux-image-amd64/oldstable,now 6.1.148-1 amd64 [installed]

      When trying to install any of the newer kernels (6.12.43-*), it immediately fails the dependency check:

      [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-
      linux-image-6.12.43+deb12-amd64            linux-image-6.12.43+deb12-amd64-dbg
      linux-image-6.12.43+deb12-amd64-unsigned   linux-image-6.12.43+deb12-cloud-amd64
      linux-image-6.12.43+deb12-cloud-amd64-dbg  linux-image-6.12.43+deb12-cloud-amd64-unsigned
      linux-image-6.12.43+deb12-rt-amd64         linux-image-6.12.43+deb12-rt-amd64-dbg
      linux-image-6.12.43+deb12-rt-amd64-unsigned

      [07:30 22] xoa:~$ apt install linux-image-6.12.43+deb12-amd64
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
       linux-image-6.12.43+deb12-amd64 : PreDepends: linux-base (>= 4.12~) but 4.9 is to be installed
      E: Unable to correct problems, you have held broken packages.
    • Alert: Control Domain Memory Usage (Solved · Compute · 0 votes · 194 posts · 270k views · started by dave)
      Latest post by F:
      It's not solving it, but you can run echo 3 > /proc/sys/vm/drop_caches to release some of the cache again, without interfering with running processes.

      [root@host2 ~]# free -m
                    total        used        free      shared  buff/cache   available
      Mem:          15958        3308         158           8       12491        2355
      Swap:          1023         177         846
      [root@host2 ~]# echo 3 > /proc/sys/vm/drop_caches
      [root@host2 ~]# free -m
                    total        used        free      shared  buff/cache   available
      Mem:          15958        3308        2598          10       10051        2751
      Swap:          1023         177         846
    • Introduce yourself! (Off topic · 2 votes · 179 posts · 192k views · started by olivierlambert)
      Latest post by nikade:
      @john-c said in Introduce yourself!:

      @TS79 said in Introduce yourself!: Hi. I'm a cloud solutions architect, with around 25 years of working experience in servers, storage and networking (your typical infrastructure stuff) and about 20 years of virtualisation. I started up a homelab many years ago, and through (too) many evolutions, I've ended up with Lenovo M710q mini PCs running XCP-ng, with another mini PC providing NFS storage (with backup and replication to cater for problems and failures). Absolutely love XCP-ng and am promoting it wherever I can. I've architected and kicked off a project at my employer to replace VMware with XCP-ng, so I'm keen to use the forum to read other people's real-world experiences with storage and host specs, hurdles to avoid, and any tips & tricks. Looking forward to interacting with the community more and more.

      When checking out Xen Orchestra, make sure to look at both the Host Maintenance Mode and the SR Maintenance Mode. I came up with the idea for the SR Maintenance Mode during the Covid-19 lockdown in the UK; the Vates staff developed and implemented it, and I pitched it as a useful tool for large infrastructures. The reason is that pools (especially large ones) can have multiple shared storages implemented as SRs. The maintenance mode for SRs permits some of the SRs to be put in maintenance mode when the separate bare-metal hardware backing them is under maintenance, while keeping the others active. So you're less likely to need to put the host in maintenance mode, which helps pools using HA and increases uptime further. The VMs aren't affected once they have been migrated to another SR, which helps reduce their downtime.

      This is a great feature, but I haven't used it. How does it work? Is it something like:

      1. You put the SR in maintenance mode
      2. VMs are migrated to another shared SR
      3. Notification about SR maintenance completed?
    • New Rust Xen guest tools (Development · 3 votes · 158 posts · 107k views · started by olivierlambert)
      Latest post by DustyArmstrong:
      Testing the agent out on Arch Linux (mainly due to the spotty 'support' in the AUR/generally) and it is working fine - better than what I had before (which did not report VM info properly). I've set it up as a systemd service to replace the previous one I had, also working as expected. This would be fun to contribute towards.
    • Netdata package is now available in XCP-ng (News · 4 votes · 131 posts · 143k views · started by olivierlambert)
      Latest post by Forza:
      @grapesmc At one point I had the idea to set up an XCP-ng build environment, build Netdata in there, and then simply copy it over to the XCP-ng hosts. Unfortunately I haven't been able to dedicate the time to this so far.
    • Our future backup code: test it! (Backup · 5 votes · 128 posts · 35k views · started by olivierlambert)
      Latest post by Tristis Oris:
      I created a new CR job for another VM and it worked. However, it didn't work with the XO VM. So maybe the root cause of the problem is that the old CR copies have disappeared. Maybe they still exist, but I can't find them?
    • XCP-ng 8.0.0 Beta now available! (News · 10 votes · 123 posts · 129k views · started by stormi)
      Latest post by olivierlambert:
      When you boot in UEFI mode, press "e" to edit the boot command line; there is a line where you can change the memory allocated to dom0.
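      As a sketch of what that edit looks like (the values here are illustrative, not from the thread): in the GRUB editor, dom0's memory is controlled by the dom0_mem parameter on the Xen command line, e.g.:

      ```
      # Line to edit in the GRUB boot entry (4096M is an example value; other
      # Xen options on the line stay as they are):
      multiboot2 /boot/xen.gz dom0_mem=4096M,max:4096M
      ```

      Pressing Ctrl-X (or F10) then boots with the edited line for that session only; a permanent change goes in the bootloader configuration instead.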
    • Non-server CPU compatibility - Ryzen and Intel (Compute · 0 votes · 116 posts · 86k views · started by N)
      Latest post by L:
      Hi, I don't know if my question belongs in this thread; I apologize in advance. I bought a new system based on an AMD Ryzen 9 9950X installed on an ASUS PRIME B850-PLUS-CSM. My goal is to install two VMs: a Linux one based on Ubuntu, and Windows 11. In the future, I would like to install a graphics card for Windows 11 CAD applications. Can XCP-ng run this environment? Thank you, Claudio
    • Nested Virtualization of Windows Hyper-V on XCP-ng (Compute · 0 votes · 111 posts · 102k views · started by AlexanderK)
      Latest post by G:
      @stormi said in Nested Virtualization of Windows Hyper-V on XCP-ng: Actually, Xen never officially supported nested virtualization. It was experimental, and broke when other needed changes were made to Xen. Now there's work to be done to make it fully supported, and this won't happen before the final release of XCP-ng 8.3. This will be documented in the release notes. This is also an issue for us internally, as we create a lot of virtual pools for our tests.

      I read through a lot of the earlier posts and finally started scrolling to find this, which is the answer I was looking for. Why do I care? There is a Microsoft evaluation learning lab for things like Intune that runs in Hyper-V: basically a bunch of VHD(X) files that get spawned as needed, for applications I need to teach myself. I'm running XCP-ng 8.3 with current updates for this lab. If it doesn't happen, then I'll just need to throw an eval version of Windows Server on something else like an HP T740 to run these labs; not the biggest issue for me. Link for the labs if anyone is curious (free with an email registration like all the evals): https://www.microsoft.com/en-us/evalcenter/evaluate-mem-evaluation-lab-kit

      I'd think direct Docker support would be a higher priority than nested virtualization with a focus on Hyper-V. But that's just me.
    • XOA Error when installing (Solved · Xen Orchestra · 0 votes · 107 posts · 37k views · started by Z)
      Latest post by G:
      I did the manual installation via the XVA. It worked for me.
    • Nvidia Quadro P400 not working on Ubuntu server via GPU/PCIe passthrough (Compute · 0 votes · 106 posts · 58k views · started by P)
      Latest post by B:
      I'm having a similar issue with an A400 on XCP-ng 8.3. The proprietary driver fails with the following message when running nvidia-smi:

      NVRM: GPU 0000:00:05.0: RmInitAdapter failed! (0x24:0x72:1568)
      [   44.619030] NVRM: GPU 0000:00:05.0: rm_init_adapter failed, device minor number 0
      [   45.095040] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
      [   45.144703] nvidia-uvm: Loaded the UVM driver, major device number 241.

      The system is actually loading the driver:

      [    6.026970] xen: --> pirq=88 -> irq=36 (gsi=36)
      [    6.027485] nvidia 0000:00:05.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
      [    6.029010] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 550.144.03 Mon Dec 30 17:44:08 UTC 2024
      [    6.063945] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 550.144.03 Mon Dec 30 17:10:10 UTC 2024
      [    6.118261] [drm] [nvidia-drm] [GPU ID 0x00000005] Loading driver
      [    6.118265] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:05.0 on minor 1

      xl pci-assignable-list gives:

      0000:43:00.0
      0000:43:00.1

      and the GPU is assigned as passthrough, but when listing the test VM I get an empty list of devices:

      [23:06 epycrep ~]# xl pci-list Avideo-nvidia
      [23:35 epycrep ~]#

      Not sure if I want to try more before switching the GPU to something else. Any hint where to look? The server is a Gigabyte G292-Z20 with an EPYC 7402P and a single GPU for testing. IOMMU is enabled.
    • High Fan Speed Issue on Lenovo ThinkSystem Servers (Hardware · 1 vote · 101 posts · 69k views · started by RIX_IT)
      Latest post by A:
      @gduperrey (unrelated to this fan issue) I loaded the new standard 8.2 testing kernel on my NUC11 and it seems to boot a little faster and also no longer complains about some APIC devices.
    • Gpu passthrough on Asrock rack B650D4U3-2L2Q will not work (Hardware · 0 votes · 99 posts · 32k views · started by S)
      Latest post by R:
      @steff22 said in Gpu passthrough on Asrock rack B650D4U3-2L2Q will not work: @ravenet OK, I think the pro GPUs work better, without so many bugs. I switched to an Nvidia RTX 4070, and with the Nvidia GPU this works as it should and gets an image on the screen on the first try. So there must be something wrong with the AMD drivers, plus that error with the AMD reset bug.

      I've set up a test system with a Ryzen 7700X and a regular Radeon 7600 XT, a non-Pro GPU. Will let you know the results. It shouldn't be an issue: it's the same hardware, and even the drivers work across them, though they do have different BIOSes on the cards.